How Tests Are Scheduled
Search tracking tests run on automated schedules managed by the DevTune system. Tests are triggered automatically at regular intervals to ensure consistent, ongoing monitoring of your AI search presence. You do not need to manually start tests or configure test frequency. The system handles scheduling to provide regular data points for trend analysis.

Note: Manual test execution is not available to customers. Regular users rely on automated scheduled runs.
What Happens During a Test
Each test run follows a consistent process:

1. Query Submission

DevTune sends each of your configured prompts to every enabled AI search platform. Each prompt-platform combination is a single query (see the sketch after step 2's list).

2. Response Collection
The full AI response from each platform is captured, including:

- The main response body text
- Any cited URLs and source references
- Inline links and recommendations
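
To make steps 1 and 2 concrete, here is a minimal sketch of the query fan-out and a captured-response record. Every name here (`CapturedResponse`, `fan_out`, the sample prompts and platform labels) is illustrative, not part of any DevTune API.

```python
from dataclasses import dataclass, field
from itertools import product

@dataclass
class CapturedResponse:
    """One prompt-platform query result (hypothetical record shape)."""
    prompt: str
    platform: str
    body: str                                              # main response body text
    cited_urls: list[str] = field(default_factory=list)    # cited URLs / source references
    inline_links: list[str] = field(default_factory=list)  # inline links and recommendations

def fan_out(prompts: list[str], platforms: list[str]) -> list[tuple[str, str]]:
    # Each prompt-platform combination is a single query.
    return list(product(prompts, platforms))

# Example: 3 prompts x 2 enabled platforms = 6 queries in one test run.
queries = fan_out(
    ["best ci tools", "how to profile python", "devtune alternatives"],
    ["platform-a", "platform-b"],
)
assert len(queries) == 6
```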
3. Analysis
Each response is analyzed against your configuration (a sketch of the first two checks follows this list):

- Citation source matching - Are any of your tracked domains cited?
- Brand term detection - Is your brand mentioned by name?
- Competitor analysis - Are competitor citation sources or brand terms present?
- Sentiment analysis - What is the tone of mentions about your brand?
- Position analysis - Where in the response does your brand appear?
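
A minimal sketch of citation source matching, brand term detection, and a simplified position check; the function names and matching rules are assumptions for illustration, not DevTune's actual implementation.

```python
from urllib.parse import urlparse

def domain_cited(cited_urls: list[str], tracked_domains: set[str]) -> bool:
    """Citation source matching: is any tracked domain among the cited URLs?"""
    for url in cited_urls:
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in tracked_domains):
            return True
    return False

def brand_mentioned(body: str, brand_terms: set[str]) -> bool:
    """Brand term detection: case-insensitive substring match (simplified)."""
    text = body.lower()
    return any(term.lower() in text for term in brand_terms)

def first_mention_offset(body: str, brand_terms: set[str]) -> int | None:
    """Position analysis (simplified): earliest character offset of any brand
    term, or None if the brand does not appear in the response."""
    text = body.lower()
    hits = [i for i in (text.find(t.lower()) for t in brand_terms) if i != -1]
    return min(hits) if hits else None
```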
4. Scoring
Metrics are calculated from the analysis (a sketch of the first two follows this list):

- Overall Presence Rate across all prompts and platforms
- Share of Voice relative to competitor brands
- Sentiment Score for your brand mentions
- Per-platform and per-prompt breakdowns
- Secondary metrics (Docs Presence, Blog Presence, Brand Mentions, Top of Answer, Avg Citation Rank, Primary Citation Share)
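
As a sketch of the two headline metrics, assuming Presence Rate is the share of queries where your brand is cited or mentioned and Share of Voice is your mentions relative to all tracked brands' mentions; both definitions are assumptions for illustration.

```python
def presence_rate(results: list[bool]) -> float:
    """Overall Presence Rate: fraction of queries where your brand was cited
    or mentioned, across all prompts and platforms (assumed definition)."""
    return sum(results) / len(results) if results else 0.0

def share_of_voice(your_mentions: int, competitor_mentions: int) -> float:
    """Share of Voice: your mentions relative to yours plus competitors'
    (assumed definition)."""
    total = your_mentions + competitor_mentions
    return your_mentions / total if total else 0.0

# Example: present in 18 of 50 queries -> 36% presence rate;
# 18 of 30 total tracked-brand mentions -> 60% share of voice.
print(presence_rate([True] * 18 + [False] * 32))  # 0.36
print(share_of_voice(18, 12))                     # 0.6
```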
5. Insight Generation
After scoring, the insights engine analyzes results to identify (a content-gap sketch follows this list):

- Content gaps where competitors appear but you do not
- Untapped opportunities with low competition
- Positions to defend where competitors are gaining
- Competitive weaknesses to exploit
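
As an illustration of the first insight type, this sketch flags prompts where a competitor appears but you do not; the record shape and rule are assumptions, not the insights engine's actual logic.

```python
from dataclasses import dataclass

@dataclass
class PromptResult:
    prompt: str
    you_present: bool
    competitor_present: bool

def content_gaps(results: list[PromptResult]) -> list[str]:
    """Content gaps: prompts where competitors appear but you do not."""
    return [r.prompt for r in results if r.competitor_present and not r.you_present]

results = [
    PromptResult("best ci tools", you_present=False, competitor_present=True),
    PromptResult("how to profile python", you_present=True, competitor_present=True),
]
print(content_gaps(results))  # ['best ci tools']
```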
Test Duration
Test duration depends on (a rough estimate is sketched after this list):

- Number of prompts - More prompts means more queries to process
- Number of enabled platforms - Each platform adds queries
- Platform response times - Some platforms respond faster than others
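
For a back-of-the-envelope estimate only, assuming queries run with some fixed concurrency and an average per-platform response time; neither number is published by DevTune, so both defaults below are arbitrary.

```python
def estimated_duration_s(prompts: int, platforms: int,
                         avg_response_s: float = 10.0,
                         concurrency: int = 5) -> float:
    """Rough test duration: total queries divided by concurrency, times the
    average platform response time (illustrative assumptions throughout)."""
    total_queries = prompts * platforms
    return (total_queries / concurrency) * avg_response_s

# Example: 20 prompts x 4 platforms = 80 queries; at 5 concurrent queries and
# ~10 s per response, a run takes on the order of 160 s.
print(estimated_duration_s(20, 4))  # 160.0
```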
Credit and Quota Consumption
Each test run consumes credits from your monthly allocation. Credit usage is based on:

- The number of prompts tested
- The number of platforms queried
- Your subscription tier
Credit Usage Estimation
The number of credits per test run scales with the number of prompts multiplied by the number of enabled platforms, since each prompt-platform combination is a single query; the cost per query can vary by subscription tier.
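
A sketch of that estimate, with the per-query cost of 1.0 included purely as an assumption:

```python
def estimated_credits(prompts: int, platforms: int,
                      credits_per_query: float = 1.0) -> float:
    """Credits per run ~= prompts x platforms x per-query cost. The per-query
    cost of 1.0 (and any tier-based variation) is an assumption."""
    return prompts * platforms * credits_per_query

# Example: 10 prompts x 4 platforms at 1 credit per query -> 40 credits per run.
print(estimated_credits(10, 4))  # 40.0
```

Viewing Test Results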
Real-Time Progress
While tests are running, you can see progress on the Search Tracking Overview dashboard:

- Progress percentage
- Number of completed queries
- Platforms currently being queried
After Completion
When a test finishes:

- The dashboard updates with the latest data
- Trend charts incorporate the new data point
- Insights are generated or updated
- KPI cards reflect the current state
Historical Results
All test results are stored and available for historical analysis. Use the trend chart time period selector to view results across different time windows.

Understanding Results
Baseline Period
Your first few test runs establish a baseline. During this period:

- Focus on understanding your current state rather than expecting trends
- Compare across platforms to see where you are strongest
- Identify the biggest gaps between your presence and competitor presence
Interpreting Trends
After multiple test runs, trend data becomes meaningful.

Improving trend (upward):

- Content improvements are working
- AI platforms are recognizing your brand more frequently
- Maintain and expand what is working

Declining trend (downward):

- Competitors may be gaining ground
- AI platform behavior may have changed
- Review insights for recommended actions

Stable trend (flat):

- Consistent presence across runs
- Monitor for changes and focus on growth opportunities
Variability Between Runs
AI responses are not fully deterministic. Some variation between test runs is normal. Reliable conclusions come from trends across multiple data points, not from comparing two individual runs; one simple way to smooth run-to-run noise is sketched below.
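
For example, a trailing rolling mean over recent runs makes the underlying trend easier to read than any single run; the window size of 5 is arbitrary.

```python
def rolling_mean(values: list[float], window: int = 5) -> list[float]:
    """Smooth per-run presence rates with a trailing moving average so that
    normal run-to-run variation does not read as a trend."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Example: noisy per-run presence rates vs. their smoothed trend.
runs = [0.30, 0.42, 0.35, 0.44, 0.40, 0.47, 0.45]
print([round(v, 3) for v in rolling_mean(runs)])
```

Troubleshooting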
No Results After a Test
If a test completes but shows no results:

- Verify your citation sources and brand terms are configured correctly on the Config page
- Check that at least one platform is enabled
- Confirm that prompts are configured for your project
Unexpected Low Presence
If presence is lower than expected:

- Check that all relevant domains are added as citation sources
- Verify brand terms include all common variations of your brand name
- Review whether your content is accessible and indexable by AI platforms
Tests Seem Delayed
If you are not seeing regular test results:

- Check your credit balance to ensure credits are available
- Verify your subscription is active
- Note that test scheduling is managed by the system and may not run at a fixed time each day
Next Steps
- Understanding Metrics - Deep dive into what each metric means
- Platform Comparison - Analyze results by platform
- Search Configuration - Review and adjust your tracking configuration