Our Methodology

How Marlvel.ai provides independent mobile app intelligence reports for the US market. We continuously improve our analysis, accuracy, and coverage.

Last updated: April 4, 2026

Our Mission

Marlvel.ai's mission is to help mobile builders improve their existing apps and create new ones, so they can unleash their creativity. We provide intelligence reports across all 44 App Store categories, covering 2,400+ top apps in the US market. Our reports offer an objective, data-driven view of the mobile app landscape. No app publisher pays for coverage, influences our analysis, or reviews reports before publication. Our goal is simple: give builders the insights they need to make better decisions.

Data Sources

Our analysis is built on publicly available signals from multiple independent sources. We cross-reference data points to reduce bias and increase confidence in our findings.

App Store Listings

Metadata, descriptions, version history, screenshots, and pricing from iOS App Store and Google Play.

User Reviews

Ratings and written reviews from both platforms, analyzed for sentiment patterns, recurring themes, and evolving user perception.

Developer Websites

Official websites, about pages, press releases, and public documentation from app publishers.

Public Market Signals

App store rankings, chart positions, category trends, and competitive landscape data from public sources.

Community Signals

Public discussions and user-generated content that reflect real-world user experience and market perception.

Store Metadata

Technical metadata including bundle identifiers, platform availability, content ratings, and update frequency.

Analysis Pipeline

Each intelligence report is produced through a multi-stage analysis pipeline that combines AI-powered processing with structured analytical frameworks.

1. Signal Collection & Normalization

We aggregate data from all available public sources for a given app. Raw signals are cleaned, deduplicated, and normalized into a structured dataset that can be analyzed consistently across thousands of apps.

2. Feature & Market Positioning Analysis

Our AI identifies the app's core features, monetization model, target audience, and competitive positioning. Each feature is classified as either a market standard or a differentiator based on category benchmarks.

3. User Sentiment Analysis

We analyze user reviews across platforms using a 5-level sentiment taxonomy: Thrilled (81-100), Excited (61-80), Mixed (41-60), Frustrated (21-40), Upset (0-20). The score combines star ratings and volume with AI analysis of review text (theme extraction, evidence quoting). This captures both the rating and the reasoning behind it.
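The 5-level taxonomy above is a straightforward score-to-band lookup. A minimal sketch (the band boundaries come from this section; the function name is ours):

```python
def sentiment_band(score: int) -> str:
    """Map a 0-100 sentiment score to the 5-level taxonomy:
    Thrilled (81-100), Excited (61-80), Mixed (41-60),
    Frustrated (21-40), Upset (0-20)."""
    if not 0 <= score <= 100:
        raise ValueError("score must be in 0-100")
    if score >= 81:
        return "Thrilled"
    if score >= 61:
        return "Excited"
    if score >= 41:
        return "Mixed"
    if score >= 21:
        return "Frustrated"
    return "Upset"
```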

4. Competitive Landscape Analysis

Each app's competitive environment is mapped using a 4-tier taxonomy: Nemesis (closest rival), Contenders (strong competitors with different angles), Same Space (broader ecosystem peers), and New Kids on the Block (emerging threats). We prioritize same sub-genre over broad category. A surf game gets compared to other surf games first, not all sports games.
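The sub-genre-first rule can be illustrated with a small ordering sketch (the `App` fields and function name are hypothetical, not the production schema):

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    category: str   # e.g. "Games / Sports" (illustrative value)
    sub_genre: str  # e.g. "surfing" (illustrative value)

def rank_competitors(target: App, candidates: list[App]) -> list[App]:
    """Order candidates so same-sub-genre peers come before broader
    same-category peers -- the sub-genre-first rule described above."""
    same_sub = [c for c in candidates if c.sub_genre == target.sub_genre]
    same_cat = [c for c in candidates
                if c.category == target.category
                and c.sub_genre != target.sub_genre]
    return same_sub + same_cat
```

With a surf game as the target, another surf game outranks a basketball game even though both share the sports category.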

5. Intelligence Synthesis

All collected signals are cross-referenced and synthesized into a structured intelligence report. Each app is compared against category peers and direct competitors to identify competitive advantages and gaps. The output includes SWOT analysis, market outlook, strengths and weaknesses derived from real user feedback, and actionable insights.

Quality Assurance & Expert Review

Our team of experienced mobile industry professionals, with over 15 years of expertise in app development, product management, and mobile market analysis, continuously reviews the generated content to ensure quality.

Reviewers check reports for factual accuracy, analytical coherence, and relevance. They flag and correct incoherencies, outdated information, and misleading conclusions that automated analysis may produce. This ongoing human-in-the-loop approach ensures that our reports meet the standard of quality that mobile builders rely on.

Confidence Scoring

Every report includes a transparent confidence score (0.0 to 1.0) that reflects how much data was available to produce the analysis. We believe in being honest about what we know and what we don't.

Level | Score | What it means
High | 0.7 to 1.0 | 100+ reviews, diverse data sources, strong sentiment signal
Medium | 0.4 to 0.69 | 20-99 reviews, limited source diversity, moderate signal
Low | 0.0 to 0.39 | Fewer than 20 reviews, limited data, or very recent launch

Confidence is calculated from review volume, website availability, about page content, sentiment data quality, and feature documentation depth. Additionally, our team evaluates reports and may downgrade the confidence score if the generated information appears inaccurate or inconsistent.
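The published score-to-level buckets can be sketched as a simple threshold function (thresholds are taken from the table above; the function name is ours):

```python
def confidence_level(score: float) -> str:
    """Bucket a 0.0-1.0 confidence score into the three
    published levels: High, Medium, or Low."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in 0.0-1.0")
    if score >= 0.7:
        return "High"
    if score >= 0.4:
        return "Medium"
    return "Low"
```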

Update Frequency

Reports are updated on a continuous basis, with refreshes running every hour. Our target is that no report should be older than 15 days. Each refresh re-collects all public signals, re-runs the analysis pipeline, and regenerates the report when new reviews, version updates, or ranking changes are detected. Market pulse data (rankings, top movers) is refreshed daily. All reports display their last audit date prominently.

Independence & Ethics

Our commitment to independence is non-negotiable. No publisher pays for coverage, influences our analysis, or reviews reports before publication.

Corrections Policy

Every intelligence report on Marlvel.ai carries a Report an issue link. If you spot a factual error — wrong rating, wrong publisher, misattributed feature, outdated pricing, incorrect version history — submit it via that link or the support page with a short description and, when possible, the source that contradicts what we show.

We review reported issues within a few hours during business days. When a correction is warranted, we update the underlying data and regenerate the affected report from scratch so the intelligence reflects the corrected inputs end-to-end. We do not silently patch individual sentences: if the source changes, the analysis is rerun.

If a report is withdrawn pending correction, we return a noindex placeholder rather than a stale version, and we publish the corrected report once the inputs are verified. We do not maintain a public changelog of individual corrections, but the dateModified field in the report's metadata reflects the most recent meaningful update.

AI Usage & Editorial Policy

Marlvel.ai intelligence reports are produced by a pipeline that combines publicly available data (App Store and Google Play metadata, user reviews, developer websites, release notes, chart rankings) with large language models that structure, synthesize, and write the report sections. The AI does not invent data: it only organizes and summarizes what the input signals contain.

To keep outputs grounded, every generation prompt enforces an explicit discipline: no LLM-jargon filler (a blacklist of filler words is injected into every prompt), no speculation when the input does not support a claim, and no verbatim quoting of user reviews (we paraphrase recurring patterns to comply with store terms of service and copyright). When a section does not have enough input data, we display a transparent fallback saying so rather than padding with generic prose.

A separate AI judge scores every report along ten axes (factual grounding, analytical depth, editorial clarity, originality, actionability, and structural coverage of overview, features, sentiment, competitive position, and outlook). Reports that score poorly on grounding or originality are flagged for regeneration before they are published. The Reliability Index exposes a public-facing composite score (0-100) summarising the trust we place in each report based on data solidity, freshness, and completeness.

AI generation does not replace judgement. Our team reviews prompts, taxonomy, and scoring calibration on a rolling basis, and we retire or rewrite any prompt that produces consistently low-quality output. We explicitly do not make medical, financial, legal, or political recommendations — those categories receive hedged phrasing and skip the FAQ module entirely.

Content Compliance & Takedown

Before publishing, content is filtered with awareness of the laws and regulations applicable to the subjects we cover, to the best of our knowledge, so that what we publish stays in line with legal norms and platform policies. Sensitive categories — health, finance, politics, minors, hate, weapons, gambling, adult content — are treated with additional guardrails (hedged phrasing, disclaimers, exclusion from AI retrieval for kids content).

If a concern is raised — factual error, privacy issue, sensitive topic, rights complaint, or suspected non-compliance — we review it at our discretion and, where warranted, remove or amend the content within a few hours. Use the support page or the “Report an issue” link on any report.

We operate under a best-effort obligation (obligation de moyens), not a guarantee of result. Reports are strictly informational: they aggregate publicly available information, we are not accountable for the underlying public sources themselves, and we do not provide professional advice (medical, financial, legal, or otherwise). Always verify critical information independently.

See It in Action

Want to see what our methodology produces? Check out the Candy Crush Saga intelligence report as an example of a high-confidence report, including user sentiment analysis with paraphrased review themes, key feature breakdown with competitive positioning, store rankings history, pros and cons, market outlook, pricing analysis, and related apps comparison.

Known Limitations

Disclaimer

All intelligence reports published on Marlvel.ai are provided strictly for informational purposes, to help mobile builders improve their apps and make more informed decisions. They do not constitute guaranteed advice, recommendations, or endorsements. Marlvel.ai declines all responsibility for any decisions made based on the information contained in our reports. Use at your own discretion.

Machine-Readable Access

All intelligence reports are available in machine-readable formats for AI systems, researchers, and developers:

Endpoint | Format | Description
/llms.txt | Text | Index of top apps per category with links to reports
/llms-full-{category}.txt | Markdown | Complete reports for a single category
/api/llm/apps/{cat}/{slug} | Markdown | Individual app report with YAML frontmatter
/api/llm/categories | Markdown | Dynamic index of all categories
/api/llm/pulse | Markdown | Live US App Store rankings
/.well-known/ai.json | JSON | AI discovery manifest
/about/methodology.md | Markdown | This page in markdown
/ai-policy.md | Markdown | AI policy in markdown
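As a usage sketch, the per-app endpoint path can be assembled from a category and slug (the `/api/llm/apps/{cat}/{slug}` path comes from the table above; the base domain and function name are assumptions for illustration):

```python
def report_url(category: str, slug: str,
               base: str = "https://marlvel.ai") -> str:
    """Build the machine-readable report URL for one app.
    NOTE: the base domain is an assumed example; only the
    /api/llm/apps/{cat}/{slug} path is documented above."""
    return f"{base}/api/llm/apps/{category}/{slug}"
```

Fetching the resulting URL returns the individual app report as Markdown with YAML frontmatter.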

All data is freely accessible for fair use under the CC BY-NC 4.0 license.