Bug Bounty · high · 2026-02-13 · 12 min read

Building a Bug Bounty Program in 2026: From Zero to Paying Researchers Without Ruining Your Week

Running a bug bounty program is not just launching on HackerOne and hoping for the best. We have seen programs burn through $2M in the first year because the scope was too broad and the triage process did not exist. Here is the 2026 playbook for launching a program that finds real bugs without destroying engineering velocity.

Tre Trebucchi · Founder, Valtik Studios. Penetration Tester

Founder of Valtik Studios. Pentester. Based in Connecticut, serving US mid-market.

# Building a Bug Bounty Program in 2026: From Zero to Paying Researchers Without Ruining Your Week

A well-run bug bounty program finds bugs before attackers do, gives security an external signal your internal team can't provide, and establishes your company as a serious target for talented researchers. A badly run one is a firehose of duplicate submissions, theoretical clickjacking reports, and self-XSS from people who read a Medium article on bug bounties last week.

The difference is preparation. This post covers what we tell clients when they want to launch a program, what we configure on HackerOne, Bugcrowd, Intigriti, or self-hosted platforms, and the 2026 considerations (AI-assisted research, MCP agent hunters, LLM scope abuse, and the regulatory push toward VDPs from CISA and the SEC).

Vulnerability Disclosure Program vs Bug Bounty Program

What we actually see in the field diverges from what the vendors describe. Here's the unvarnished version.

Different things. Pick the right one first.

Vulnerability Disclosure Program (VDP). Anyone can report a bug through a defined channel. You have an SLA for triage and response. No money changes hands. This is table stakes in 2026. CISA Binding Operational Directive 20-01 required federal civilian agencies to publish a VDP. State governments and enterprises adopted similar policies. If you don't have a VDP, you're telling researchers "email legal@ourcompany.com and hope." That doesn't work.

Bug Bounty Program (BBP). VDP + monetary rewards. Researchers get paid per validated bug, scaled by severity. Attracts more effort, raises the quality of findings, and introduces operational complexity.

Private Bug Bounty Program. Invite-only subset of researchers. Lower volume, higher signal. Most mature companies start here.

Public Bug Bounty Program. Open to anyone who signs up to the platform. Higher volume, more noise, higher chance of finding obscure bugs.

The typical maturity arc:

  1. Start with a VDP
  2. Run private BBP for 6-12 months
  3. Go public when the triage process is battle-tested

Before you launch: the pre-flight checklist

Do you have a security team who can triage?

If your answer is "well, we have one senior SRE who also does security," you're not ready. A bug bounty program means someone is reading every submission, reproducing issues, making severity calls, and coordinating with engineering on fixes. Minimum for a company with any real product surface: one dedicated security engineer; two to three is better. If you don't have this, hire a managed triage service (HackerOne Triage, Bugcrowd Managed, Intigriti Triage).

Do you have a way to fix things fast?

Engineering needs to ship fixes. If your release cadence is quarterly and your on-call rotation doesn't include security, a bug bounty report will sit in the backlog for months and researchers will bounce. They'll post about your slow program on Twitter. Your reputation with researchers will crater.

Minimum infrastructure:

  • Security tickets in the same tracker as engineering (Jira, Linear, GitHub Issues); a filing sketch follows this list
  • Defined SLAs for triage, communication, and fix (by severity)
  • On-call rotation that includes security-sensitive issues
  • Ability to ship emergency fixes (hotfix release path, feature flags, rollback)
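
As a concrete illustration of the tracker integration above, here is a minimal sketch that files a GitHub issue from a triaged bounty report. The report fields, the repo name, and the GITHUB_TOKEN handling are hypothetical placeholders, not a bounty-platform schema; swap in your own tracker's API.

```python
import os

import requests

# Hypothetical triaged report; field names are illustrative, not a platform schema.
report = {
    "id": "REPORT-1234",
    "title": "IDOR on /api/v1/invoices exposes other tenants' data",
    "severity": "high",
    "asset": "api.valtikstudios.com",
    "reporter": "researcher_handle",
}

def file_tracker_ticket(report: dict, repo: str = "valtik/product-api") -> str:
    """Create a GitHub issue for a triaged bounty report and return its URL."""
    resp = requests.post(
        f"https://api.github.com/repos/{repo}/issues",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "title": f"[bounty][{report['severity']}] {report['title']}",
            "body": (
                f"Bounty report {report['id']} from {report['reporter']}.\n"
                f"Asset: {report['asset']}\n"
                "Full details stay in the bounty platform; keep PoCs out of shared trackers."
            ),
            "labels": ["security", f"severity:{report['severity']}"],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["html_url"]
```

The same shape works for Jira or Linear; the point is that the ticket lands in the queue engineering already watches, not in a separate security silo.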

Do you know your attack surface?

An asset inventory is non-negotiable. When a researcher reports an issue, you need to know:

  • Is this domain ours?
  • Is it in scope?
  • Who owns this service?
  • Is this a third party that looks like us?

Tools for attack surface discovery:

  • Amass (OWASP Amass). Open-source subdomain enumeration and asset mapping
  • Shodan. Internet-wide scan data
  • Censys. Internet-wide scan data, similar coverage to Shodan
  • Certificate Transparency logs (crt.sh). Find certs issued for your domains; a query sketch follows this list
  • DNSdumpster, SecurityTrails. DNS enumeration
  • Commercial ASM (Attack Surface Management): CrowdStrike Falcon Surface, Microsoft Defender EASM, Cyberint, CyCognito
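
Certificate Transparency is usually the fastest free starting point. A minimal sketch, assuming crt.sh's public JSON output (it rate-limits and can time out on large domains) and using our own domain as the example:

```python
import requests

def ct_subdomains(domain: str) -> set[str]:
    """Collect hostnames seen in Certificate Transparency logs via crt.sh."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=60,
    )
    resp.raise_for_status()
    names: set[str] = set()
    for entry in resp.json():
        # name_value can hold several newline-separated hostnames per certificate.
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.").lower()
            if name.endswith(domain):
                names.add(name)
    return names

if __name__ == "__main__":
    for host in sorted(ct_subdomains("valtikstudios.com")):
        print(host)
```

Feed the output into your asset inventory rather than straight into scope; CT logs also surface decommissioned and third-party-hosted names.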

Do you have a disclosure policy?

A published disclosure policy (discoverable via security.txt, disclose.io-style) that tells researchers:

  • What's in scope
  • What's out of scope
  • What they can and can't do
  • What constitutes good faith research
  • What the company commits to (safe harbor from legal action, SLAs, reward structure)

DOJ updated its CFAA policy in May 2022 to explicitly protect "good faith security research" from prosecution. Your safe harbor should mirror that language. The disclose.io project has standardized templates. Use them.

The 2026 policy template we use

The skeleton we configure for client programs. Full examples at disclose.io.

Scope

In scope:

  • *.valtikstudios.com web applications
  • Valtik iOS app (latest 2 versions)
  • Valtik Android app (latest 2 versions)
  • Valtik API (api.valtikstudios.com)

Out of scope:

  • Third-party services (our auth provider, CDN, etc.); report those to the vendor
  • Social engineering of employees
  • Physical attacks
  • DoS / stress testing (we respect the effort, but availability testing is not something this program pays for)
  • Reports from automated tools without exploitation (we don't pay for Nessus output)
  • Self-XSS, UI bugs without security impact, clickjacking on pages without sensitive actions, best-practice recommendations without a concrete vulnerability
  • Missing security headers without a specific exploit (CSP missing doesn't equal RCE)
  • Missing or weak rate limiting unless tied to a specific attack
  • Email configuration issues (SPF, DMARC) without a demonstrable attack
  • Vulnerabilities in software we don't maintain (libraries without a working exploit against our deployment)
  • CSRF on endpoints that don't change state or where anti-CSRF is demonstrably in place
  • Reports about our use of known-vulnerable third-party code without a working exploit
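
To make the first triage question ("is this even in scope?") fast, encode the scope as data. A minimal sketch using the example scope above; the out-of-scope entry and the hardcoded lists are illustrative, and a real program should pull these from the platform's structured scope instead:

```python
from fnmatch import fnmatch

# Mirrors the example policy scope above; keep this in sync with the published policy.
IN_SCOPE = ["*.valtikstudios.com", "api.valtikstudios.com"]
OUT_OF_SCOPE = ["status.valtikstudios.com"]  # hypothetical third-party-hosted subdomain

def scope_decision(asset: str) -> str:
    """Return 'out_of_scope', 'in_scope', or 'unknown' for a reported hostname."""
    host = asset.strip().lower().rstrip(".")
    if any(fnmatch(host, pattern) for pattern in OUT_OF_SCOPE):
        return "out_of_scope"
    if any(fnmatch(host, pattern) for pattern in IN_SCOPE):
        return "in_scope"
    return "unknown"  # escalate: a forgotten asset or a look-alike domain, both worth a human look

assert scope_decision("app.valtikstudios.com") == "in_scope"
assert scope_decision("valtik-studios-login.com") == "unknown"
```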

Qualifying vulnerabilities (examples)

  • Remote code execution
  • SQL injection / NoSQL injection
  • Server-side request forgery (SSRF) hitting internal resources
  • Authentication bypass
  • Authorization flaws / IDOR / privilege escalation
  • Stored XSS, reflected XSS with concrete impact
  • CSRF on sensitive state-changing operations
  • XXE / SSTI with impact
  • Deserialization flaws
  • Path traversal / LFI / RFI
  • Significant information disclosure (PII, credentials, internal data)
  • Business logic flaws causing financial or data integrity impact
  • Subdomain takeovers
  • Exposed secrets in public repos

Rules of engagement

  • Use test accounts we provide. Don't test against real user data
  • Don't exfiltrate more data than needed to demonstrate the issue
  • Stop testing once you can demonstrate impact. Don't escalate (don't dump the whole database)
  • Don't share vulnerabilities with third parties until we've fixed and you've received disclosure permission
  • Report promptly. We ask within 24 hours of discovery for critical findings
  • Don't attempt to access accounts or data you don't own

Safe harbor

We won't take legal action against researchers who:

  • Act in good faith per this policy
  • Avoid privacy violations and data destruction
  • Report issues promptly and don't disclose until we've fixed
  • Don't exploit beyond what's necessary to demonstrate the issue

Platform selection: HackerOne vs Bugcrowd vs Intigriti vs self-hosted

HackerOne

Largest market share. Strongest researcher community. Integrations with Jira, ServiceNow, Slack, GitHub, every SIEM you can name. Strong managed triage (HackerOne Triage) for teams without internal capacity.

Pricing: custom, tiered by volume and managed services. Small programs start around $2-3K/month for the platform. Add triage services ($8K+/month) and total program cost including rewards can run $200K-$5M+/year depending on scope and severity.

Best for: mid-to-large enterprises, companies with global scope, organizations that want maximum researcher attention.

Bugcrowd

Second largest. Strong in private programs. "Crowdmatch" researcher targeting feature. Good managed triage.

Pricing: similar to HackerOne, negotiable. Government/defense programs often routed through Bugcrowd.

Best for: similar profile to HackerOne. Federal and enterprise heavy.

Intigriti

Strongest in Europe and well suited to EU regulatory environments. Growing US presence. Competitive pricing.

Best for: European-headquartered companies, GDPR-sensitive programs, startups looking for HackerOne/Bugcrowd alternative.

YesWeHack

European platform, strongest in France and neighboring markets. Runs large government programs.

Self-hosted / direct program

Some companies run programs on their own infrastructure (Apple Security Bounty, Meta Bug Bounty Program, Microsoft Bounty Programs, Google Vulnerability Reward Program). That requires an internal security team, legal resources, and operational infrastructure to handle researcher communication, payments, and disclosure.

Pros: no platform fee, direct relationships with researchers

Cons: you're building the platform features yourself

Not recommended for companies below $1B revenue unless you already have a security team with bandwidth for it.

The 2026 reward structure

Bug bounty payouts have inflated significantly since 2020. The median payout for a critical across mature programs sits around $5,000-$15,000 in 2026. Top-tier programs (Apple, Meta, Google, crypto protocols) routinely pay $50,000-$1,000,000 for critical findings.

Baseline structure for a new program

| Severity | CVSS range | Payout range |
|---|---|---|
| Critical | 9.0-10.0 | $5,000 - $15,000 |
| High | 7.0-8.9 | $1,500 - $5,000 |
| Medium | 4.0-6.9 | $500 - $1,500 |
| Low | 0.1-3.9 | $100 - $500 |
| Informational | 0.0 | $0 (or token reward) |
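
A minimal sketch of turning a CVSS base score into the baseline bands above (the figures are this table's, not platform defaults):

```python
# Baseline bands from the table above; tune per asset class and business impact.
PAYOUT_BANDS = [
    (9.0, "critical",      (5_000, 15_000)),
    (7.0, "high",          (1_500, 5_000)),
    (4.0, "medium",        (500, 1_500)),
    (0.1, "low",           (100, 500)),
    (0.0, "informational", (0, 0)),
]

def payout_band(cvss_score: float) -> tuple[str, tuple[int, int]]:
    """Map a CVSS base score to (severity label, (min payout, max payout))."""
    for floor, label, band in PAYOUT_BANDS:
        if cvss_score >= floor:
            return label, band
    return "informational", (0, 0)

assert payout_band(9.8) == ("critical", (5_000, 15_000))
assert payout_band(6.5) == ("medium", (500, 1_500))
```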

Critical considerations

  • Consumer/enterprise SaaS companies typically pay the baseline above
  • Financial services and crypto protocols pay 2-10x higher (Immunefi leaderboard shows $10M+ single bounties for DeFi critical bugs)
  • Mobile app bugs tend to pay less unless they demonstrate account takeover or data exfiltration
  • API-only bugs often underpay. If your API handles sensitive data, treat API bugs at the same severity as web app bugs
  • Duplicate policies must be clear. First valid report wins. Subsequent reports get marked duplicate with no payout

Escalators

  • Chained exploits (low + low = high impact). Pay for the chain, not the individual bugs
  • High-impact business logic bugs (not CVSS-mapped). Evaluate individually
  • Critical infrastructure bugs (authentication, authorization, deployment). Top of band
  • Novel exploit techniques with wide applicability. Pay premium, consider CVE coordination

The triage process that keeps programs functional

Initial response SLA: 24 hours

Researcher submits a report. Within 24 hours someone on your team:

  1. Reads the report
  2. Verifies it's not a duplicate of something already reported
  3. Sets severity (tentative)
  4. Requests additional info if the repro steps are incomplete
  5. Responds to the researcher with "we've received this, we're investigating"

Triage SLA: 3-5 business days

Within 5 business days:

  1. Attempt to reproduce in a test environment
  2. Confirm severity with a CVSS 3.1 or 4.0 calculation (a scoring sketch follows this list)
  3. File an internal ticket with engineering
  4. Communicate final triage decision to researcher (valid, duplicate, out-of-scope, informational)
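
If you want the scoring math explicit rather than a calculator link, here is a minimal CVSS 3.1 base-score sketch, limited to the Scope:Unchanged case for brevity and using the metric weights from the FIRST specification:

```python
import math

# CVSS 3.1 metric weights (FIRST specification). Scope:Unchanged only, for brevity.
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required (Scope:Unchanged)
UI  = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality / Integrity / Availability

def roundup(x: float) -> float:
    """CVSS 3.1 Roundup: smallest one-decimal value >= x (spec Appendix A definition)."""
    scaled = round(x * 100_000)
    return scaled / 100_000 if scaled % 10_000 == 0 else (math.floor(scaled / 10_000) + 1) / 10

def cvss31_base(av: str, ac: str, pr: str, ui: str, c: str, i: str, a: str) -> float:
    """Base score for a Scope:Unchanged CVSS 3.1 vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H scores 9.8 (a typical unauthenticated SQLi/RCE profile)
assert cvss31_base("N", "L", "N", "N", "H", "H", "H") == 9.8
```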

Fix SLA (by severity)

  • Critical: 7 days
  • High: 30 days
  • Medium: 60 days
  • Low: 90 days

Reward SLA

  • Critical: reward paid within 7 days of triage (don't wait for fix)
  • High/Medium/Low: within 14-30 days

Paying fast is reputation-critical. Researchers talk. A program that pays in 90 days when it paid in 14 days last year loses researcher attention.

Disclosure SLA

  • Coordinate with researcher on public disclosure once fixed
  • Default to disclosure within 60-90 days after fix ships
  • Researchers retain the right to disclose after a reasonable timeline even if not fixed. Don't try to bury findings
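
Pulling the SLAs above into one place, a minimal sketch of the clocks a triager works against. It uses calendar days and starts every clock at receipt for simplicity, which is slightly stricter than the policy above (the reward clock formally runs from triage):

```python
from datetime import datetime, timedelta, timezone

# Windows from the SLAs above, in calendar days (business-day handling omitted for brevity).
SLA_DAYS = {
    "critical": {"triage": 5, "fix": 7,  "reward": 7},
    "high":     {"triage": 5, "fix": 30, "reward": 30},
    "medium":   {"triage": 5, "fix": 60, "reward": 30},
    "low":      {"triage": 5, "fix": 90, "reward": 30},
}
FIRST_RESPONSE_HOURS = 24

def sla_deadlines(severity: str, received_at: datetime) -> dict[str, datetime]:
    """Return the deadlines a report of this severity must hit."""
    days = SLA_DAYS[severity]
    return {
        "first_response": received_at + timedelta(hours=FIRST_RESPONSE_HOURS),
        "triage":         received_at + timedelta(days=days["triage"]),
        "fix":            received_at + timedelta(days=days["fix"]),
        "reward":         received_at + timedelta(days=days["reward"]),
    }

deadlines = sla_deadlines("critical", datetime.now(timezone.utc))
print({name: due.date().isoformat() for name, due in deadlines.items()})
```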

The 2026 complications

AI-generated reports

LLMs generate plausible-looking but unverified reports. Program managers in 2026 see a flood of submissions that claim to be vulnerabilities but fail reproduction because they're hallucinated scenarios.

Defense:

  • Require a PoC in every report (an intake-check sketch follows this list)
  • Reject reports without working reproduction
  • Ban submitters who repeatedly submit LLM-generated noise
  • HackerOne and Bugcrowd added automated detection in 2024-2025. Use it
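
One cheap layer of that defense is bouncing submissions at intake when they contain no reproduction material at all. A minimal heuristic sketch; the field names and thresholds are hypothetical, and platforms expose their own structured report fields:

```python
import re

# Hypothetical intake fields; real platforms expose their own structured report fields.
REPRO_FIELDS = ("steps_to_reproduce", "proof_of_concept")

def intake_problems(report: dict) -> list[str]:
    """Return reasons to bounce a submission back to the researcher before triage."""
    problems = []
    if not (report.get("affected_asset") or "").strip():
        problems.append("no affected asset named")
    for field in REPRO_FIELDS:
        value = (report.get(field) or "").strip()
        if len(value) < 80:  # arbitrary floor: a real repro is rarely one sentence
            problems.append(f"{field} is missing or too thin")
    repro = " ".join((report.get(f) or "") for f in REPRO_FIELDS)
    # A repro with no URL, HTTP method, or payload anywhere is a strong noise signal.
    if not re.search(r"https?://|curl |GET /|POST /|payload", repro, re.IGNORECASE):
        problems.append("no concrete request, URL, or payload included")
    return problems
```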

MCP and agent-based hunters

Research agents that chain tool calls (recon → scan → exploit → report) are finding bugs autonomously. Mostly still below the bar of skilled human researchers, but the volume is real. Anthropic's Claude-based agents, OpenAI's agent platforms, and open-source agent frameworks are all being pointed at bounty scope.

Neutral-to-positive for programs. More researchers, more bugs found. The automation raises noise but also surfaces legitimate findings. Program policies should require human verification of agent findings before submission (which is the current norm).

Regulatory pressure for VDP

  • SEC cyber incident disclosure rule (effective Dec 2023) creates pressure to have a structured vulnerability intake
  • CISA Binding Operational Directive 20-01. Federal VDP requirement
  • NIS2 (EU). Coordinated vulnerability disclosure requirements
  • FDA cybersecurity requirements for medical devices. VDP encouraged
  • PCI DSS 4.0 Requirement 6.5.1. Secure software development with vulnerability identification processes

A VDP is no longer optional for regulated industries.

Researcher taxes and payments

Bug bounty income is taxable. Platforms issue 1099-NEC forms (US) or local equivalents. International researchers face currency conversion costs, banking limitations (some countries are blocked by sanctions/OFAC), and tax treaty complications.

Practical considerations:

  • Platforms handle most of this but communicate with researchers clearly
  • Some researchers in US-sanctioned countries can't receive US-based bounty payments at all
  • Cryptocurrency payment options (available on some platforms, standard on crypto protocol bounties) help with cross-border complexity

Safe harbor edge cases

  • A researcher pivots and finds bugs in a vendor's system. Is that in scope?
  • A researcher accesses customer data to demonstrate impact. Where's the line?
  • A researcher reports an issue, you don't fix it within 90 days, and they go public. Is that within policy?

Your policy needs to address these. Generic "don't do anything bad" language isn't enough. Specific rules prevent specific disputes.

Common program mistakes

Scope too broad, too fast. A scope of "everything in our org" with no caveats leads to 500 duplicate reports about CORS misconfigurations on a marketing subdomain you forgot about. Start narrow.

No duplicate policy. Researchers submit the same bug; the first valid report wins. If your triage is slow, researchers get "duplicate" decisions on three-week-old reports, which kills morale.

Silent programs. The researcher submits, you triage, you fix, you pay, but the researcher never hears what you fixed or when. Transparency retains researchers.

Paying too low. $50 for a critical makes your program a joke. Researchers deprioritize you. Set baseline payouts that signal respect for the work.

Using the bug bounty as QA. "We don't need internal security, we have a bug bounty." No. Bug bounties find bugs that slip past internal security. They're not a replacement for secure development, code review, pentesting, or a dedicated security team. Programs that treat the bounty as their primary security control produce terrible outcomes.

Fighting researchers on severity. Standard CVSS 3.1 / 4.0 exists. If you want to downgrade a clear high-severity bug to medium because the fix would be expensive, that's a reputation bomb. Pay what the impact warrants.

Unclear communication about duplicate threshold. "we found this internally two weeks ago" without proof or tracking data creates disputes. Maintain internal finding timestamps that can be shared with researchers if needed.

Legal team kills the safe harbor. Legal reviews the program and adds "we reserve the right to pursue legal action for any unauthorized access." That kills researcher participation instantly. Safe harbor needs to be safe harbor, cleared with legal in advance.

What we do for bug bounty program engagements

Typical engagement:

  • Week 1: program design workshop (scope, policy, rewards, SLAs)
  • Week 2: platform selection and configuration (HackerOne, Bugcrowd, Intigriti setup)
  • Week 3: internal triage workflow integration (Jira, Linear, on-call)
  • Week 4: soft launch with initial invited researchers (private beta)
  • Ongoing: monthly program reviews, payout calibration, scope expansion as organization matures

We also run "adversarial" pentests against new program scope before public launch: find the easy wins internally so researchers don't collect them on day one at your expense, and shake out operational readiness before you expose the program to the full research community.

Resources

  • disclose.io. Policy templates, safe harbor language: https://disclose.io/
  • HackerOne Top 10: https://hackerone.com/top-ten-vulnerabilities
  • FIRST CVSS Calculator: https://www.first.org/cvss/calculator/
  • CISA BOD 20-01: https://www.cisa.gov/news-events/directives/bod-20-01-develop-and-publish-vulnerability-disclosure-policy
  • DOJ CFAA good faith policy (2022): https://www.justice.gov/opa/pr/department-justice-announces-new-policy-charging-cases-under-computer-fraud-and-abuse-act
  • OWASP Vulnerability Disclosure Cheat Sheet
  • Bugcrowd VRT (Vulnerability Rating Taxonomy): https://bugcrowd.com/vulnerability-rating-taxonomy
  • Immunefi (crypto/web3 bug bounty): https://immunefi.com/
  • Bugcrowd University. Free researcher training (useful for program managers too)

Hire Valtik Studios

We advise on bug bounty program launches, manage triage for private programs, and run "program readiness" engagements that pressure-test your scope before you expose it publicly. If you've had a bug bounty program for more than two years and the volume of duplicate/informational reports is drowning your team, we also run program tuning engagements.

Reach us at valtikstudios.com.

bug bounty · vulnerability disclosure · VDP · HackerOne · Bugcrowd · Intigriti
