Lenses
Lenses are reusable AI prompts that extract specific insights from your meetings. Think of them as specialized filters you can apply to any meeting, or across multiple meetings at once, to pull out exactly the information you need.
What Are Lenses?
A Lens is a structured AI prompt that you run against one or more meeting transcripts. Unlike a standard summary, which gives you a general overview, a Lens is purpose-built to answer a specific question or extract a specific type of insight.
For example, a "Competitor Mentions" Lens scans your meetings for any time a competitor is discussed, extracts the context, and produces a structured report. A "Budget & Pricing" Lens pulls out every financial figure, pricing discussion, and budget constraint mentioned in a call.
Karnyx ships with 25 built-in system Lenses covering the most common use cases, and you can create unlimited custom Lenses tailored to your exact workflow.
System Lenses Library
Karnyx includes 25 pre-built system Lenses organized into five categories. System Lenses cannot be edited, but you can duplicate any of them as a starting point for a custom Lens.
Sales & Revenue
| Lens | Description |
|---|---|
| Deal Qualification | Extracts BANT (Budget, Authority, Need, Timeline) signals from sales calls and scores lead qualification |
| Objection Tracker | Identifies every objection raised by the prospect, the response given, and whether it was resolved |
| Budget & Pricing | Pulls out all financial figures, pricing discussions, budget constraints, and discount requests |
| Competitor Mentions | Finds every mention of a competitor, the context of the discussion, and sentiment (positive, negative, neutral) |
| Next Steps & Follow-ups | Extracts all commitments, promised deliverables, and follow-up timelines with assigned owners |
Product & Engineering
| Lens | Description |
|---|---|
| Feature Requests | Extracts every feature request or product suggestion mentioned, with requester context and priority signals |
| Bug Reports | Identifies reported issues, steps to reproduce (when mentioned), severity assessments, and affected users |
| Technical Decisions | Catalogs every architectural or technical decision, alternatives considered, and the reasoning behind each choice |
| Sprint Review | Summarizes demo feedback, completed stories, carry-overs, and sprint velocity observations |
| Incident Retro | Produces a structured retrospective: timeline, root cause, what went well, what went wrong, and remediation items |
Customer Success
| Lens | Description |
|---|---|
| Health Score | Analyzes customer sentiment, engagement level, and risk indicators to produce a 1-10 account health score |
| Churn Signals | Detects language patterns indicating dissatisfaction, frustration, or intent to leave |
| Expansion Signals | Identifies upsell and cross-sell opportunities based on mentioned needs, growing teams, and new use cases |
| Onboarding Progress | Tracks onboarding milestone completion, blockers, and time-to-value metrics from customer calls |
| Support Escalation | Flags issues that need escalation to engineering or management based on severity and customer impact |
Leadership & Strategy
| Lens | Description |
|---|---|
| Decision Log | Catalogs every decision made, who proposed it, who approved it, and the rationale |
| Risk Register | Identifies risks discussed, their likelihood, potential impact, and proposed mitigations |
| OKR Progress | Maps discussion topics to OKRs and extracts progress updates, blockers, and confidence levels |
| Stakeholder Map | Identifies key stakeholders, their positions, influence levels, and relationships mentioned in the meeting |
| Meeting Effectiveness | Evaluates whether the meeting had a clear agenda, reached its objectives, and used time efficiently |
Collaboration & People
| Lens | Description |
|---|---|
| Talk Time Analysis | Breaks down speaking time per participant with balance metrics and dominance indicators |
| Sentiment Analysis | Analyzes emotional tone throughout the meeting, tracks sentiment shifts, and flags tense moments |
| Question Tracker | Extracts all questions asked, who asked them, whether they were answered, and the responses given |
| Commitment Tracker | Identifies all verbal commitments and promises made by each participant with timestamps |
| Key Quotes | Extracts the most impactful, quotable, or noteworthy statements from the meeting with speaker attribution |
Creating Custom Lenses
Custom Lenses let you define your own AI prompts for any use case not covered by the system library. Navigate to Settings > Lenses > Create Lens, or use the `Cmd+K` command palette and type "New Lens".
- Name your Lens: Choose a descriptive name like "Compliance Check" or "Interview Scorecard".
- Write the prompt: Define exactly what you want the AI to extract or analyze. Use template variables (see below) to inject meeting data.
- Set the category: Organize your Lens under a category for easy discovery in the command menu.
- Choose the output format: Select between structured (JSON), markdown, or plain text output.
- Preview against a real meeting: Test your Lens against any previously captured meeting before saving.
```jsonc
// Example custom Lens: Interview Scorecard
{
  "name": "Interview Scorecard",
  "category": "Recruiting",
  "output_format": "markdown",
  "prompt": "You are an interview evaluation assistant. Analyze this interview transcript and produce a structured scorecard.

MEETING: {{meeting_title}}
DATE: {{date}}
INTERVIEWER(S): {{participants}}
CANDIDATE: Identify from context

TRANSCRIPT:
{{transcript}}

INTERVIEWER NOTES:
{{notes}}

Produce the following:
1. **Candidate Overview** - Name, role applied for, interview stage
2. **Technical Assessment** (1-5) - Evaluate problem-solving and technical depth
3. **Communication** (1-5) - Clarity, conciseness, and articulation
4. **Culture Fit** (1-5) - Values alignment and team compatibility
5. **Key Strengths** - Top 3 standout qualities with examples
6. **Areas of Concern** - Any yellow or red flags observed
7. **Questions Asked by Candidate** - List and quality assessment
8. **Recommendation** - Strong Yes / Yes / Maybe / No with justification"
}
```

Tip: Start from a system Lens. You can duplicate any system Lens and edit the copy as the starting point for your own.
Lens Template Variables
Lenses use double-brace syntax to inject meeting data into your prompt. Karnyx replaces these variables with actual data when the Lens runs.
| Variable | Type | Description |
|---|---|---|
| `{{transcript}}` | string | Full meeting transcript with speaker labels and timestamps |
| `{{participants}}` | string[] | List of all meeting participants with names and roles |
| `{{notes}}` | string | Your notepad content written during the meeting |
| `{{highlights}}` | string[] | User-highlighted portions of notes |
| `{{meeting_title}}` | string | Calendar event title |
| `{{date}}` | string | ISO 8601 date of the meeting |
| `{{duration_minutes}}` | number | Meeting length in minutes |
| `{{speaker_map}}` | object | Mapping of speaker IDs to display names and talk-time percentages |
| `{{summary}}` | string | The AI-generated summary (if already produced) |
| `{{action_items}}` | object[] | List of extracted action items with assignees and due dates |
| `{{company}}` | object | Company information for the external participants (name, domain, past meetings) |
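Under the hood, variable injection is straightforward string templating. The following is a minimal sketch of how double-brace substitution could work; the `render_lens_prompt` helper and the flattening of list-valued variables into comma-separated strings are illustrative assumptions, not Karnyx's actual implementation:

```python
import re

def render_lens_prompt(prompt: str, context: dict) -> str:
    """Replace each {{variable}} in the prompt with its value from context.

    Hypothetical sketch: unknown variables render as empty strings, and
    list values (e.g. participants) are joined into a readable string.
    """
    def substitute(match: re.Match) -> str:
        value = context.get(match.group(1), "")
        if isinstance(value, list):
            return ", ".join(str(v) for v in value)
        return str(value)

    return re.sub(r"\{\{(\w+)\}\}", substitute, prompt)

# Render two variables from the table above
rendered = render_lens_prompt(
    "MEETING: {{meeting_title}}\nPARTICIPANTS: {{participants}}",
    {"meeting_title": "Q3 Pricing Review", "participants": ["Ada", "Grace"]},
)
# rendered == "MEETING: Q3 Pricing Review\nPARTICIPANTS: Ada, Grace"
```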
Cross-meeting variables
`{{transcript}}` becomes a concatenation of all selected meeting transcripts, separated by meeting headers.

Running Lenses
You can run a Lens against a single meeting or across multiple meetings at once.
Single Meeting
- Open a meeting from the dashboard.
- Click the "Run Lens" button in the toolbar, or press `/` to open the command menu.
- Select a Lens from the list. System Lenses appear first, followed by your custom Lenses.
- The Lens runs in 5-15 seconds and the output appears in a new panel alongside the meeting.
- Lens outputs are saved automatically and accessible from the meeting detail page under the Lenses tab.
Cross-Meeting (Multi-Select)
- From the dashboard, select multiple meetings using `Cmd+Click` or the selection checkboxes.
- Click "Run Lens" in the bulk action bar.
- Select a Lens. Karnyx concatenates the meeting data and runs the prompt against all selected meetings simultaneously.
- The output provides a unified analysis spanning all the selected meetings.
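The concatenation step can be pictured as joining transcripts with a per-meeting header. A hypothetical sketch; the `merge_transcripts` helper and the `===` header format are illustrative assumptions, not the product's actual separator:

```python
def merge_transcripts(meetings: list[dict]) -> str:
    """Join multiple meeting transcripts, prefixing each with a header
    carrying the meeting title and date (format is an assumption)."""
    parts = []
    for m in meetings:
        parts.append(f"=== {m['title']} ({m['date']}) ===\n{m['transcript']}")
    return "\n\n".join(parts)

combined = merge_transcripts([
    {"title": "Discovery Call", "date": "2024-05-01", "transcript": "Alice: Hi..."},
    {"title": "Demo", "date": "2024-05-08", "transcript": "Bob: Thanks..."},
])
```

The merged string then stands in for `{{transcript}}` when the Lens prompt is rendered.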
The "/" Command Menu
The fastest way to run a Lens is through the command menu. When viewing a meeting, press / to open the Lens command menu. Start typing to filter Lenses by name or category.
```
// Command menu examples
/            → Opens full Lens list
/deal        → Filters to "Deal Qualification"
/competitor  → Filters to "Competitor Mentions"
/interview   → Filters to your custom "Interview Scorecard" Lens
/health      → Filters to "Health Score"

// You can also use Cmd+K and type "Run Lens" for the same menu
```
Keyboard shortcut
`Cmd+K` → type "Run Lens". If a meeting is currently open, the Lens runs against that meeting. Otherwise, you will be prompted to select one.

Sharing Lenses with Your Workspace
Custom Lenses are private by default, visible only to you. You can share them with your entire workspace so that teammates can use them too.
- Open Settings > Lenses and find the Lens you want to share.
- Click the three-dot menu and select "Share with Workspace".
- Choose the workspace(s) you want to share with. You must be a member of the workspace.
- Shared Lenses appear for all workspace members in both the Lens list and the `/` command menu.
Tips for Writing Effective Lens Prompts
The quality of a Lens depends entirely on the quality of the prompt. Here are best practices for getting the most out of your custom Lenses.
- Be specific about the output format. Tell the AI exactly what sections, headings, or structure you want. Vague prompts produce vague results. Use numbered lists, heading names, and explicit field labels in your prompt.
- Give the AI a role. Start with a persona like "You are a sales coach analyzing a discovery call" or "You are a compliance officer reviewing a financial advisory meeting." This sets the right context and tone.
- Include scoring criteria when applicable. If you want ratings (1-5, pass/fail, red/yellow/green), define the criteria for each level. Without clear criteria, ratings are inconsistent between runs.
- Use all relevant variables. Include `{{notes}}` and `{{highlights}}` alongside `{{transcript}}` to give the AI your perspective, not just the raw conversation.
- Test iteratively. Write a first draft, preview it against 2-3 different meetings, and refine. Different meeting types may expose edge cases in your prompt.
- Keep prompts focused. A Lens that tries to do everything at once will produce mediocre results for each part. Create separate Lenses for separate analyses and run them independently.
- Use conditional instructions. Add lines like "If no competitor was mentioned, state that explicitly rather than guessing" to handle edge cases gracefully.
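Applied together, these practices might produce a prompt like the following hypothetical "Renewal Risk" Lens. The Lens name, rating criteria, and section labels are illustrative, not part of the system library:

```
You are a customer success analyst reviewing a renewal call.

TRANSCRIPT:
{{transcript}}

MY NOTES:
{{notes}}

Produce:
1. **Renewal Risk** (red / yellow / green)
   - red: explicit dissatisfaction or stated intent to leave
   - yellow: unresolved complaints or stalled engagement
   - green: positive sentiment and clear next steps
2. **Evidence** - Quote the statements that support the rating
If no risk signals were mentioned, state that explicitly rather than guessing.
```

Note how it assigns a role, defines criteria for every rating level, pulls in `{{notes}}` as well as `{{transcript}}`, stays focused on a single analysis, and handles the empty case with a conditional instruction.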