Weekend Experiment: Selling a Non-Existent Product as a Hypothesis Validation Method

Konstantin Potapov
15 min read

You're selling the promise of a product that doesn't exist. It works, but it carries legal, ethical, and methodological risks. This is an analysis of the method for those who will try it anyway.

Key Takeaway

A weekend experiment selling a non-existent product is NOT business-hypothesis validation. It's a test of marketing messaging and of the maximum willingness to pay in the "hottest" market segment. It doesn't replace customer development, provides no unit-economics insights, and carries high reputational and legal risks. In 95% of cases, in-depth interviews and a Concierge MVP are sufficient for idea validation.

Use this method only to quickly filter out obviously failing hypotheses that couldn't be killed by previous research stages. It's not for proving success, but for preventing bigger losses.


Warning

This article is about a method I do NOT recommend as a first validation step. This is a high-risk tool that only works in a narrow range of situations and with strict ethical compliance.

The essence of the method: you sell a product that doesn't exist. You take real money for a promise to create a solution. After 48-72 hours, you decide: build the product or refund the money.

Why it's dangerous:

  • In B2C, it can be classified as fraud, even with full transparency
  • You risk your reputation — one failed experiment can close the market for you
  • The method doesn't test the business model, only maximum willingness to pay for a promise
  • You get a signal about the first reaction, but learn nothing about retention, LTV, real CAC

Why this article exists: because people do it anyway. And they do it poorly. The purpose of this text is to show the risks and provide instructions for those who will try it anyway.


Should You Do This? (Eligibility Checklist)

Before reading further, answer these questions. If at least one answer is "no" — do NOT use this method.

Mandatory conditions:

1. Have you already conducted customer development?

  • Conducted at least 10 in-depth interviews with target audience representatives
  • Heard the same pain point from different people in their own words
  • Understand why existing solutions don't work
  • Know who makes the purchasing decision and what their budget is

If no — start with interviews. Selling a promise without understanding the pain is not validation, it's a lottery.


2. Are you ready for legal consequences?

  • Understand consumer-protection law risks: in Russia, failing to deliver a service on time carries a penalty of 3% of the price per day overdue
  • Know that a clause like "if the experiment confirms..." may be deemed invalid by a court
  • Ready for fraud accusations (Russian Criminal Code Art. 159) if you can't prove good-faith intentions
  • Ready to refund ALL money immediately upon first request
  • Have a legally competent offer/contract (not a template from the internet)
  • Ready for complaints, chargebacks, negative reviews

If no — use alternatives: pre-orders without charging, email list collection, MVP in a week.


3. Do you understand the method's limitations?

  • Realize that 2-3 payments are NOT business validation
  • Understand you're only testing maximum willingness to pay, not LTV/CAC/retention
  • Ready for first customers to potentially be the most toxic
  • Won't make strategic decisions based on one weekend

If no — you're deceiving yourself. The method gives a weak signal, not proof.


4. Do you have an alternative to your current income?

  • Can afford to spend 2-3 days + $300-1000 without guaranteed results
  • Not risking your only income source
  • Have time for refunds and handling complaints

If no — this is gambling, not a validation tool.


When the method DOESN'T work (stop-list):

  • Complex B2B products with long deal cycles (Enterprise, government contracts)
  • Regulated industries (fintech, medicine, education with licenses)
  • Retention-based products (subscriptions, SaaS, marketplaces with network effects)
  • Hardware products (impossible to create in 2-4 weeks after test)
  • You're unsure about the segment (haven't conducted interviews, "shooting at everyone")

Conclusion: If you passed every checklist, read on. If not, save yourself the time: do proper customer development.


What the Method Actually Tests (Without Illusions)

Let's be honest. Here's what you'll learn:

  • Whether at least a few people are willing to pay now for the solution
  • What objections kill the deal

What you WON'T learn (and this is critical):

  • Whether customers will return for repeat purchases
  • Real acquisition cost at scale
  • Operational problems of the actual product
  • Business model sustainability

If after the weekend you decide to continue — it only means one thing: the idea deserves deeper investigation, not that "the business took off".

What the Weekend Experiment WON'T Show (Mandatory Reading)

Before starting, understand the method's limitations. Otherwise, you'll make a bad decision based on good data.

You won't learn about retention

2-3 payments from strangers are not business-model validation; they validate a first reaction. You won't learn:

  • Whether they'll return for repeat purchases
  • Whether they'll recommend to friends
  • What the real LTV (lifetime value) is

Why it's dangerous: products with low retention look successful in the first days but die in a month.

You won't learn real CAC

Ad spend over a weekend is not your CAC. True acquisition cost will only be known after:

  • Creative optimization (weeks of testing)
  • Burning through first audiences
  • Increased competition for attention

Why it's dangerous: unit economics that work in weekend mode can collapse at scale.

You won't learn operational problems

You sold a promise, not a product. The reality of development, support, and delivery may be 10x more complex and expensive than it seemed.

Why it's dangerous: history is full of startups that proved demand but drowned in operational costs.

First customer quality ≠ market quality

People who buy quickly (over a weekend) are often the most:

  • Impatient
  • Demanding
  • Ready to leave as quickly as they came

If your entire market consists of such customers, the business will be chaos.

Why it's dangerous: you might accidentally attract the most toxic audience segment and burn out trying to serve them.

Where the Method Works (and Where It Doesn't)

Suitable for:

  • Testing pitch and price for a known audience
  • Simple digital products: bots, services, content, consultations
  • B2B pilot agreements (not expecting payment in 48 hours)
  • Short decision cycle (client can say "yes" in one conversation)

Not suitable for:

  • Complex technologies (can't understand value without MVP)
  • Regulated industries (fintech, medicine)
  • Models based on repeat purchases/virality (weekend only shows first response)

Safer Alternatives to This Method

Before selling a non-existent product, seriously consider these options. They provide the same or better insights with lower risks.

1. In-depth customer development (15-20 interviews)

What it gives:

  • Same insights about pains, objections, purchase triggers
  • Understanding of existing solutions and why they don't work
  • Zero legal risk

Drawback: doesn't test willingness to pay in the moment (you get stated preference, not revealed preference)

When to use: always as the first step, before any other methods


2. Pre-orders without charging money

How: collect email + promise to buy at a specific price, but DON'T charge money

What it gives:

  • Signal of interest with less commitment
  • List of potential first customers
  • No legal refund risks

Drawback: people easily promise but don't buy (stated vs. revealed preference)


3. Concierge MVP (manual value creation)

How: manually do the product's work for 3-5 clients over 1-2 weeks

What it gives:

  • Test real value, not a promise
  • Understand operational complexities
  • See if customers return

Drawback: labor-intensive (but legally safer)


4. No-code MVP in a week

How: build a working prototype on Bubble, Tilda, Airtable, Zapier in 3-7 days

What it gives:

  • Test product, not promise
  • Customers get real value
  • Fewer ethical questions

Drawback: requires assembly time (but modern no-code tools have sped this up significantly)


Conclusion: Selling a promise is a last resort. Only use it if:

  1. Passed all the eligibility checklists at the beginning of the article
  2. Confident that alternatives won't give the needed signal
  3. Ready for legal and reputational risks


INSTRUCTIONS FOR THOSE WHO PASSED ALL CHECKS AND UNDERSTOOD THE RISKS

If you've reached this section, you've passed all the eligibility checklists, considered the alternatives, and still decided to use the promise-selling method.

Everything above was theory and warnings. Everything below is practical instructions for those who made a conscious decision to act.

Below is detailed methodology on how to minimize risks and extract maximum value from the experiment.



Preparation (Monday-Thursday)

1. Customer development interviews

Minimum: 3-5 conversations with target audience representatives.

Reality: finding 5 strangers in 4 days is unrealistic for most niches. Use existing contacts as a starting point, but ask them the same questions you would ask cold contacts:

Interview script (15-20 minutes):

  1. Tell me about the last time [problem] frustrated you (5 min)

    • Listen. Don't suggest solutions. Record formulations.
  2. What are you doing now to solve this? (3 min)

    • Important: what tools, processes, workarounds.
    • How much does it cost (money/time)?
  3. If a solution appeared [describe hypothesis in 2 sentences], how much would you pay? (2 min)

    • Don't accept abstract answers. Insist on a number.
  4. What needs to happen for you to buy this tomorrow? (5 min)

    • This is a question about purchase triggers and barriers.

Red flags in responses:

  • "Interesting, but not relevant now" = no pain
  • "Need to think/consult" = you didn't find the decision maker
  • Can't name a price = either don't understand value, or problem doesn't hurt

If 3 out of 5 interviews raised red flags, don't go into the weekend. Reformulate the hypothesis.

2. Competitive analysis

In 2-3 hours:

  • Find 3-5 direct/indirect competitors
  • Study their prices, positioning, traffic channels
  • Read negative reviews (this is your improvement roadmap)

If there are no competitors, one of two explanations is likely:

  1. You're a pioneer (unlikely)
  2. The idea was already tested and rejected (likely)

Find out which by talking to people who tried and failed.

3. Hypothesis formulation

[Segment] will pay [Price] for [What], because [Pain].
Success = [N] payments with budget ≤ [X]$ and time ≤ [Y] hours/client.

If you can't fill this in — don't start.
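For illustration, here is the template filled in with the case from the end of this article (the budget and payment thresholds here are illustrative, not prescriptive):

CTOs of early-stage startups will pay $30 for a week-long FAQ-automation pilot, because support eats engineering hours.
Success = 3 payments with budget ≤ $300 and time ≤ 1.5 hours/client.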

4. Technical preparation

Minimum stack:

  • Landing page (Tilda/Carrd/simple HTML page)
  • Application form → Google Sheets (Zapier/Make)
  • Payment system (Stripe/PayPal) — only if selling product, not consultations
  • Event tracker (at minimum, UTM tags in links; see the example below)

Time: 4-6 hours for setup. If it takes longer, you're over-engineering.
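A minimal illustration of the UTM tagging mentioned above (the domain and campaign names are placeholders): give each channel its own link, so the source column in your table fills itself:

```
https://yourlanding.example/?utm_source=facebook&utm_medium=cpc&utm_campaign=weekend_test
https://yourlanding.example/?utm_source=telegram&utm_medium=post&utm_campaign=weekend_test
https://yourlanding.example/?utm_source=linkedin&utm_medium=dm&utm_campaign=weekend_test
```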

Risks that will kill the experiment:

  • Warm traffic (friends will buy out of politeness)
  • Manual labor >1.5 hours/client
  • Legal issues (you can't take money without a proper offer/contract)

5. Experiment rules

Do not change until Monday:

  • Value proposition
  • Price
  • Target segment

You may change:

  • Formulations (headlines, descriptions)
  • Channels (if one doesn't work)

If you want to change the hypothesis — it's a failure. Acknowledge it.

Weekend Mechanics: How to Collect Applications Without Data Loss

The most common mistake: applications come in from everywhere (form, Telegram, email, Instagram DM), and by Sunday evening you can't remember who came from where or what they wanted. Chaos kills analysis.

Data funnel: from click to money

I track a simple chain:

Page view → Started form → Submitted → Started payment → Paid → Scheduled call → Had call

For each step I record:

  • Source (ad/post/referral)
  • Time
  • Message they saw
  • Offer price

Everything goes into one Google Sheet. One table = one truth.
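As a sketch of what "one table = one truth" can look like in practice, here is one possible row schema (TypeScript; the field names are mine, not a prescribed format):

```typescript
// One row per funnel event; every column the analysis steps below rely on is here.
type FunnelEvent = {
  timestamp: string;   // ISO time of the step
  contact: string;     // email or messenger handle
  source: string;      // utm_source: ad / post / referral
  message: string;     // which headline or creative they saw
  priceShown: number;  // offer price at that moment
  step:
    | "page_view"
    | "form_started"
    | "form_submitted"
    | "payment_started"
    | "paid"
    | "call_scheduled"
    | "call_done";
  notes?: string;      // objections and verbatim customer quotes
};
```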

Technical warning: Forms that write directly to Google Sheets can choke on concurrent requests (as my case below shows). For reliability, set up "form → Zapier/Make → add row to sheet" instead of a direct write, or use Airtable, which is more stable under load.
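If you'd rather go without Zapier/Make, here is a minimal sketch of the same "form → your endpoint → append row" pattern using the official Google Sheets API (this assumes a Google service account with edit access to the sheet; SHEET_ID and the range are placeholders):

```typescript
// Minimal sketch: every form submission passes through one endpoint that
// appends a row, instead of the form tool writing to the sheet directly.
import { google } from "googleapis";

export async function recordApplication(form: {
  contact: string;
  source: string;
  message: string;
  price: number;
}) {
  // Credentials are read from GOOGLE_APPLICATION_CREDENTIALS (service account).
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/spreadsheets"],
  });
  const sheets = google.sheets({ version: "v4", auth });

  await sheets.spreadsheets.values.append({
    spreadsheetId: process.env.SHEET_ID!,
    range: "Applications!A:E",
    valueInputOption: "USER_ENTERED",
    requestBody: {
      values: [[
        new Date().toISOString(),
        form.contact,
        form.source,
        form.message,
        form.price,
      ]],
    },
  });
}
```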

Payments: no DIY

Only proven services: Stripe, PayPal, Square, Paddle. No "transfer to my card and I'll send a receipt later". That isn't a startup; that's a shadow business.

If you take prepayment, record in the table:

  • Transaction number
  • Amount
  • Client contacts
  • Data processing consent

Security and trust

  • Don't store card data (the payment provider does this)
  • Don't ask for excess data (for a test, name and email/phone are enough)
  • Add a data-processing policy (get a template from a lawyer; it's an hour of work)
  • Transfer all chats and calls into notes; the team should see the context

Plan B: if the experiment takes off

Monday morning (if there are payments):

  1. Set up proper analytics (Amplitude/Mixpanel)
  2. Move the forms from the site builder to code (Next.js + Stripe API, for example)
  3. Automate welcome emails and notifications
  4. Remove manual steps that don't scale

Why does this matter? Because if 10x the applications arrive on Monday (and this can happen), your duct-tape system will collapse. Allocate time for tech debt before it becomes a fire.
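For step 2, a minimal Stripe Checkout sketch, assuming a Node/Next.js backend (the product name, amount, and URLs are placeholders, not the setup from this article):

```typescript
// Minimal sketch: create a Stripe Checkout session and redirect the buyer to it.
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

export async function createCheckoutUrl(): Promise<string> {
  const session = await stripe.checkout.sessions.create({
    mode: "payment",
    line_items: [
      {
        price_data: {
          currency: "usd",
          product_data: { name: "Pilot access (prepaid, refundable)" },
          unit_amount: 3000, // $30.00, in cents
        },
        quantity: 1,
      },
    ],
    success_url: "https://yourlanding.example/thanks?session_id={CHECKOUT_SESSION_ID}",
    cancel_url: "https://yourlanding.example/",
  });
  return session.url!; // send the customer here
}
```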

Success and Failure Criteria (Without Self-Deception)

The key skill in using this method is the ability to analyze the data coldly and make decisions free of emotional attachment to the idea.

What DOESN'T count as success

  • ❌ "Lots of clicks" (clicks don't pay rent)
  • ❌ "Great comments" (likes don't convert to money)
  • ❌ "Friend bought" (friends buy out of politeness — not validation)
  • ❌ "Spent a lot of time, hate to quit" (this is sunk cost fallacy, not an argument)

What COUNTS as success

  • 2-3 payments from cold/warm leads (not from personal circle)
  • Acquisition costs + your time < revenue per client (first sale profitability)
  • Time to value ≤ 1.5 hours per client (if more — doesn't scale)
  • Clear phrases: "ready to pay X, but need Y" (this is a roadmap)

Stop signals (failure indicators)

  • 300-500 clicks → 20-30 conversations → zero payments
  • Applications only from acquaintances
  • Silence or "will think about it" (no clear objections)
  • 2+ hours manual work per client

Rule: hypothesis (segment, offer, price) doesn't change until Monday. Can change formulations and channels, but not the essence. If you want to change the hypothesis — it's a failure. Acknowledge it.

Conducting the Experiment (Friday-Sunday)

Friday evening: launch

Before launch:

  • A test application has gone through the entire funnel
  • A $1 test payment works (see the note below)
  • Auto-responses are configured
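One way to verify the payment step, if you're on Stripe (an assumption; any provider has an equivalent): run the flow in test mode with the standard test card 4242 4242 4242 4242, then make a single live $1 payment and refund it.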

Launch: Turn on all channels simultaneously. Stay online for the first 2 hours and respond quickly. Record first reactions in the table.

Saturday-Sunday: data collection

Main work:

  • Conduct conversations using a single script (problem → alternative → price → offer → commit)
  • Log each contact in the table: source, message, objection, result
  • Don't change the price or the offer (even if asked)

Call script (15 min):

  1. What hurts? (listen 5 minutes)
  2. What are you doing about it now? (2 minutes)
  3. How much does the problem cost? (2 minutes)
  4. Offer (5 minutes)
  5. Next step (1 minute): payment or refusal with reasoning

By Sunday evening you have:

  • Funnel numbers (clicks → applications → payments)
  • List of objections
  • Response patterns

Success Criteria (Hypothesis Generation, Not Confirmation)

Forget about "confirmed hypothesis". Over a weekend you can't confirm a business model. You can only collect data for the next research step.

The experiment's success is NOT the number of payments. Success is the quality of the data you collected.

The experiment counts as successful if you got:

1. At least 5 in-depth conversations with people demonstrating purchasing power

  • Doesn't matter if they paid or not
  • Important: they're from target segment, have budget, can make decisions
  • They talked for at least 15 minutes, you asked open questions

2. Catalog of real objections (recorded verbatim)

  • Not your interpretations, but their exact formulations
  • At least 3 objections that occurred with different people
  • Understanding which objections are fixable, which are fatal

3. Understanding purchase triggers

  • Who makes the final decision (title, role)
  • What budget and approval cycle
  • What needs to happen for them to pay tomorrow (not abstractly, but specifically)

4. List of specific pains with context

  • In their words (not "productivity issues", but "losing 15% of orders due to slow responses")
  • With examples: when they last faced it, what they did, how much it cost
  • With numbers: how much they spend now, how much they're willing to pay for solution

Payments are a side effect, not the goal. If 3 people paid but you don't understand why, the experiment failed. If no one paid but you heard 5 identical objections and understood how to reformulate the hypothesis, the experiment succeeded.

Profile of the "right" first customer

Three mandatory characteristics:

  1. Matches ICP (Ideal Customer Profile)

    • From target segment (not "anyone who paid")
    • Has budget for full product (not looking for "cheapest")
    • Decision maker or close to one
  2. Reasonable expectations

    • Understands what they're buying (doesn't expect magic)
    • Ready for iterations (doesn't demand perfection in first version)
    • Reasonable, not toxic
  3. Bought problem, not you

    • Justified purchase with pain, not "I like the author"
    • Could buy from competitor with same offer

Green flags (strengthen signal):

  • Specifies pain without prompts: "Losing 15% of orders due to slow support responses"
  • Names budget numbers: "Currently spending $2K/month on solution X, willing to pay up to $800 for better"
  • Asks about limitations: "What's max requests it can handle?" instead of "Can you add Y?"
  • Offers references: "My colleague has same problem, can introduce"
  • Ready for imperfection: "Understand it's beta. Main thing — solve problem X"
  • Quick decision: from "interesting" to payment < 48 hours

Red flags (weaken signal):

  • ❌ Only friends/followers pay
  • ❌ "Will try at this price" (looking for cheapness)
  • ❌ "I like you" (bought you, not product)
  • ❌ 3-page requirements list before payment
  • ❌ "Will think about it" + week of silence
  • ❌ Endless haggling: "What if I bring a friend?"

Funnel benchmarks

B2C (services, content, SaaS):

  • 200-400 clicks → 6-12 applications → 2-4 payments
  • Price covers ads + minimum 1 hour work

B2B:

  • 30-50 contacts → 15-20 responses → 5-7 conversations → 2-3 pilots
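For reference, the B2C numbers above imply roughly a 3% click-to-application conversion (6-12 out of 200-400) and about one payment per three applications; the B2B funnel assumes roughly half of contacts respond and a bit under half of conversations become pilots.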

Time metric (critical):

Calculate how much you would have earned in these hours at your main job or freelancing. Compare with experiment revenue.

Alternative earnings for the experiment's hours vs. actual revenue

If the difference is significantly negative, that's a strong signal that the hypothesis is not viable in its current form: your time costs more than the experiment earns.
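A worked example with the numbers from the case at the end of this article: ~18 hours at a $100/hour target rate is $1,800 of foregone earnings against $150 of actual revenue. That gap is exactly the "significantly negative" difference this check is designed to catch.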

What to Record Besides Numbers

Mandatory to write down:

  1. Pain formulations (in customer words, not yours)
  2. Two main objections (they'll indicate next step)
  3. Purchase triggers: who decides, what budget, what prevents buying now

If you only have numbers without this — data is useless.

About Friends and Warm Traffic

Friends buy trust in you, not the product. Count them separately.

Base the "continue" decision only on payments from cold/warm leads.

Test: if another provider replaced you, would they still buy? If no, you're selling yourself, not the product.

Ethics of Selling a Non-Existent Product

You're selling a promise. The product doesn't exist yet. This works, but only with full transparency.

Honesty rule

What you must tell each client:

"I'm testing demand. The product doesn't exist yet. If the experiment confirms demand, I'll start development, and you'll get [what] in [timeframe: 2-4 weeks]. If not — I'll immediately refund money, no questions asked."

What this gives

  1. Filters audience: Those who need "right now" will leave. Those ready to wait for problem solution will stay.
  2. Builds trust: Clients value honesty. Many will say "yes" precisely because you don't hide incompleteness.
  3. Protects reputation: If you close the direction and refund money — people will remember you as honest, not a scammer.

B2C (individuals):

  • Always contract/offer + refund policy
  • Specify product creation timeline and refund conditions
  • Remember: clause "if experiment doesn't confirm" may be deemed invalid
  • Prepare to work under consumer-protection law (in Russia, a 3%/day penalty for delays)
  • One lawsuit or complaint to a consumer-protection agency over an untimely refund can cost you tens of thousands in fines and compensation, not counting the time spent and the reputational damage. The cost of an error far exceeds the potential revenue of a weekend experiment.

B2B (legal entities):

  • Contract with clear pilot description: scope, timeline, success criteria
  • Refund conditions if criteria not met
  • Specify that this is a test project, not a finished solution

If you're not ready for these requirements, don't run a weekend experiment with prepayments. Use pre-orders (without charging) or simply collect emails.

Monday: Interpretation and Next Steps

The weekend is over. You have data. But this isn't the "moment of truth"; it's the beginning of the real investigation.

Three scenarios (not binary "yes/no")

Scenario 1: Phase 2 — Deep investigation (hypothesis confirmed)

Signs:

  • 2-3+ payments from people matching ICP
  • Unit economics preliminarily works
  • Objections are specific and fixable ("need integration with X")

Next step: 10-15 in-depth interviews with paying customers. The goal is to understand LTV, repeat-purchase triggers, and competitor references. In parallel, build a minimal MVP for 3-5 beta testers. Timeline: 2-4 weeks.


Scenario 2: Phase 2 — Hypothesis refinement (needs work)

Signs:

  • 0-1 payment, but lots of interest ("let's meet Monday", "show demo")
  • All objections reduce to one theme (price, feature X, distrust of new player)
  • Pattern: "Will buy if there's Y instead of Z"

Next step: Reformulate the hypothesis based on the objections. Conduct 5 interviews to test the changed hypothesis. Repeat the weekend test in 1-2 weeks. Don't build the product before the retest.


Scenario 3: Stop. Hypothesis disproven.

Signs:

  • Zero payments with 300-500 clicks and 20+ conversations
  • Silence, "will think", "not relevant" (no specific objections)
  • The economics don't work even in the ideal scenario

Next step: Document the conclusions. Close the direction. Take the next idea from your list, or take a strategic pause to rethink.


Rule: Make the decision on Monday, without delay. Postponement = rationalization (looking for excuses instead of analyzing the data).

If you decided to continue: tech-debt checklist

Priority 1 (do in the first week):

  1. Move analytics from Google Sheets to Amplitude/Mixpanel
  2. Move forms and payment from the site builder to code (Next.js + Stripe API, for example)
  3. Automate welcome emails and customer notifications
  4. Remove all manual steps taking >30 minutes per client

Priority 2 (within a month):

  • Replace Google Sheets with a database (PostgreSQL/Supabase)
  • Automate payouts and receipt generation
  • Monitoring and alerts on critical metrics

The cost of success: every 3 confirmed hypotheses ≈ 1-2 weeks of engineering time on infrastructure. If you don't budget for this, you'll drown during growth.

Stress test: what breaks at 10x growth?

Imagine: tomorrow 10x more applications arrive. What breaks first?

  • Payments fail (payment gateway limits)?
  • Forms crash (the hosting can't handle the load)?
  • You physically can't respond in time?
  • Google Sheets breaks at a thousand rows?

Remove these bottlenecks in the first week: before the fire, not during it.

Economics: final check

Recalculate with cold numbers:

  • CAC (Customer Acquisition Cost): ads + your time on acquisition
  • Service time × your hourly rate
  • LTV (Lifetime Value): how much a client will bring over their lifetime

Survival formula: CAC + cost of service time < 30-50% of LTV

If the math doesn't work, either raise the price, reduce costs, or close the direction. A business without margin is a hobby that eats your savings.
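A minimal sketch of that survival check (the numbers in the example are illustrative, not from the case below):

```typescript
// Survival check: acquisition cost plus service cost must stay under 30-50% of LTV.
function survives(cac: number, serviceCost: number, ltv: number): boolean {
  return cac + serviceCost < 0.5 * ltv; // use 0.3 for the stricter bound
}

// Example: $40 of ads + $60 of founder time against a $150 LTV
// fails even the loose bound ($100 > $75).
console.log(survives(40, 60, 150)); // false
```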

Case: When Poor Preparation Kills Even 6 Payments

This case is not an example of "weekend method failure". It's an example of what happens when you skip real customer development and go straight to selling a promise.

Main mistake: perfunctory interviews

I conducted 5 interviews before the weekend, but they were superficial. I didn't dig deep. I didn't ask:

  • "Show me how you currently solve this problem" (observe the process)
  • "How many hours are YOU willing to invest in implementation?"
  • "What tools have you already tried and why did you quit?"

As a result, I sold a "universal FAQ automation bot" — a fantasy disconnected from real cases.

The weekend test exposed the hypothesis's failure in 48 hours. Without it, I would have spent 2-3 months developing a universal solution and learned the same thing, only losing much more.


Experiment timeline

Context: 2023, an idea for a support-automation bot for startups.

Hypothesis: CTOs will pay $30 for a week-long pilot that automates their FAQ.

What I missed in preparation:

  • Didn't check client stack (Jira, Slack, Zendesk — all different)
  • Didn't ask about existing integrations
  • Sold a replacement for their processes, not an addition to them

Preparation:

  • 5 conversations: 3 from personal network, 2 from LinkedIn
  • Tilda landing + Stripe
  • Launch Friday, 6:00 PM

What happened:

Friday: First payment at 7:30 PM. The client writes: "Can we start tomorrow?" I promised Monday, but he insisted, so I ran onboarding that evening (1.5 hours).

Saturday morning: Second payment, another "urgent" one. Google Sheets crashed (too many concurrent writes from the form, a Zapier bug). I lost 2 applications and only found them in the evening (the Stripe notifications had landed in the spam folder).

Saturday evening: 6 payments total, 4 of them from friends or followers; only 2 cold. One cold client asked: "Can you connect it to Jira?" I don't know the Jira API. I promised to figure it out, spent 3 hours on Sunday, couldn't make it work, and the client requested a refund.

Sunday: Conducted 4 onboarding calls, each 2+ hours, because everyone's FAQ structure is different. I got tired and forgot to record two clients in the table; I had to reconstruct them from email.

Result by Monday:

  • 5 active clients (one left)
  • $150 net revenue
  • ~18 hours work → $8.33/hour
  • Target rate: $100/hour
  • One client was already writing: "The bot is slow on complex questions"

Verdict: Failure.

What I missed:

  • Didn't ask in interviews: "How many hours are YOU willing to invest in implementation?"
  • Didn't check the stack (Jira, Slack, etc.) and sold a universal solution for custom cases
  • Didn't build onboarding automation, so each client meant manual setup
  • Sold not a product but a freelance service in a SaaS wrapper

Decision on Monday:

Closed the direction. Refunded the money to 4 clients with an explanation and apologies. All four said "thanks for the honesty"; one even asked to be kept updated on future ideas.

Data analysis for new hypothesis:

I reviewed the conversation recordings. The pattern became obvious:

  • All 6 clients asked about integrations (Jira, Slack, Notion, Intercom)
  • 5 of 6 were e-commerce (Shopify/WooCommerce)
  • Main objection: "We already have Zendesk/Intercom, why another tool?"

Key insight: I wasn't selling "another tool"; I was trying to replace existing ones. That framing failed. People don't want to change workflows; they want to improve what's already there.

Second insight: 5 of 6 were e-commerce on Shopify/WooCommerce. That's not a coincidence; that's a niche. And they have identical pains: order status, returns, shipping. 80% of the questions are the same.

New hypothesis: Not a universal bot, but a ready-made Shopify add-on: a plugin with pre-built answers to the 20 typical e-commerce questions. OAuth installation in 3 clicks, no manual setup. Not a Zendesk competitor but an addition to it (the bot answers the typical questions; complex ones go to Zendesk).

Price: $99/month (instead of $30/week for manual setup).

Next step: 5 interviews with Shopify store owners to validate the new hypothesis. If it's confirmed, build an MVP in 2 weeks and repeat the weekend test.

Lesson: Failure isn't the end; it's data for the next, more precise hypothesis. The main thing is to record the patterns, not just note that "it didn't work".


Working Tools for Recording Results

Two simple templates you can create in Google Sheets in 2 minutes.

Tool 1: Experiment metrics table

| Metric | Plan | Fact | Notes |
| --- | --- | --- | --- |
| Hypothesis | [Segment] will buy [product] for [$$$] | | |
| Traffic: | | | |
| - Clicks | | | |
| - Applications | | | |
| - Conversations | | | |
| Results: | | | |
| - Payments (cold) | | | |
| - Payments (warm) | | | |
| Economics: | | | |
| - Revenue, $ | | | |
| - Expenses (ads), $ | | | |
| - Hours worked | | | |
| - $/hour | | | |
| Main insights | 1. ... 2. ... 3. ... | | |
| Decision | Scenario 1/2/3 | | |
| New hypothesis | (if needed) | | |

How to use:

  1. "Plan" column — target numbers (fill Thursday)
  2. "Fact" column — real results (fill as you go)
  3. "Notes" — key moments and customer quotes

Tool 2: Customer quality checklist

Evaluate each paying customer:

| Customer | Matches ICP | Reasonable expectations | Bought problem (not you) | Rating |
| --- | --- | --- | --- | --- |
| Customer 1 | ✅/❌ | ✅/❌ | ✅/❌ | 👍/👎 |
| Customer 2 | ✅/❌ | ✅/❌ | ✅/❌ | 👍/👎 |
| Customer 3 | ✅/❌ | ✅/❌ | ✅/❌ | 👍/👎 |

Criteria:

  • ICP: From target segment, has budget, decision maker
  • Expectations: Doesn't demand magic, ready for iterations, reasonable
  • Bought problem: Justified with pain, not sympathy for you

If more than half of your customers got a 👎, reconsider the segment or the offer.


Conclusion

A weekend experiment selling a non-existent product is not a universal methodology. It's a high-risk tool of last resort.

Only use it if:

  1. Conducted at least 10 in-depth customer development interviews
  2. Understand legal risks and have competent offer/contract
  3. Ready to refund all money immediately and without questions
  4. Realize you're only testing first reaction, not business model
  5. Considered alternatives (CustDev, pre-orders, Concierge MVP, No-code prototype)

What the method DOESN'T give:

❌ Business model confirmation
❌ Understanding retention and LTV
❌ Real CAC at scale
❌ Operational insights

What the method gives (when used correctly):

✅ Maximum willingness to pay for a promise (revealed preference)
✅ Catalog of real objections from paying customers
✅ Quick validation of failing hypotheses (2-3 days instead of 2-3 months)

Final warning: 99% of this article's readers should NOT use this method. If you have doubts — start with proper customer development. Selling a promise is a last resort, not standard practice.

If you need help building an investigation process without high risks — contact me.