<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Gamers Forem</title>
    <description>The most recent home feed on Gamers Forem.</description>
    <link>https://gg.forem.com</link>
    <atom:link rel="self" type="application/rss+xml" href="https://gg.forem.com/feed"/>
    <language>en</language>
    <item>
      <title>How to Prepare a Legacy Codebase for AI-Assisted Refactoring</title>
      <dc:creator>137Foundry</dc:creator>
      <pubDate>Sat, 09 May 2026 11:09:29 +0000</pubDate>
      <link>https://gg.forem.com/137foundry/how-to-prepare-a-legacy-codebase-for-ai-assisted-refactoring-18k5</link>
      <guid>https://gg.forem.com/137foundry/how-to-prepare-a-legacy-codebase-for-ai-assisted-refactoring-18k5</guid>
      <description>&lt;p&gt;Jumping into a legacy codebase with an AI coding assistant and no preparation produces predictably mixed results. The AI generates plausible-looking refactors that miss critical business logic embedded in unexpected places. You spend more time verifying output than the AI saved you in generation time. And the refactored code, while cleaner-looking, may have subtle behavioral changes that surface in production six weeks later.&lt;/p&gt;

&lt;p&gt;The difference between this outcome and a productive AI-assisted modernization session is preparation. Specifically: giving the AI the context it needs to reason correctly about your specific codebase rather than reasoning from generic patterns.&lt;/p&gt;

&lt;p&gt;This guide covers the preparation steps that make AI-assisted legacy refactoring significantly safer and more productive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Establish Scope and Document It
&lt;/h2&gt;

&lt;p&gt;Before any AI interaction, define the boundary of what you are working on. Legacy codebases have a way of expanding scope because everything touches everything. Resist this.&lt;/p&gt;

&lt;p&gt;Choose a specific module, class, or set of related functions as your working scope. Write a plain-language description of what that scope is responsible for:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Scope: the discount calculation module (discount.py, approximately 400 lines)
This module is responsible for: calculating the final price a customer pays
after applying applicable discounts, promotions, and loyalty tier benefits.

It is NOT responsible for: fetching customer tier data (done by customer_service.py),
validating promo codes (done by promo_validator.py), or applying tax (done post-discount
by tax_calculator.py).

The most important business constraint: discounts do not stack additively.
A customer with a 20% loyalty discount and a 15% promo code gets 20% off, 
not 35% off. This is intentional and must be preserved in any refactoring.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This description becomes the context header you paste before every AI prompt related to this module. It costs you twenty minutes to write; it saves you from explaining the same context to the AI repeatedly, and from chasing down errors that stem from the AI not knowing the "discounts don't stack" rule.&lt;/p&gt;
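&lt;p&gt;The non-stacking constraint is exactly the kind of rule worth pinning down in code as well as prose. A minimal sketch, assuming a discount.py-style module (every name here is illustrative, not the real codebase):&lt;/p&gt;

```python
# Illustrative sketch of the "discounts do not stack" rule; function and
# parameter names are hypothetical, not taken from the real discount.py.
def best_discount(loyalty_rate, promo_rate):
    """Apply the single largest rate, never the sum."""
    return max(loyalty_rate, promo_rate)

def final_price(base_price, loyalty_rate=0.0, promo_rate=0.0):
    return round(base_price * (1 - best_discount(loyalty_rate, promo_rate)), 2)

# A 20% loyalty discount plus a 15% promo code yields 20% off, not 35%:
print(final_price(100.0, loyalty_rate=0.20, promo_rate=0.15))  # 80.0
```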

&lt;h2&gt;
  
  
  Step 2: Audit Dependencies Before Touching Anything
&lt;/h2&gt;

&lt;p&gt;AI coding assistants will generate refactored code that changes function signatures, return types, or module interfaces without knowing what depends on them. Before you start refactoring, you need a dependency map.&lt;/p&gt;

&lt;p&gt;For Python codebases, tools like &lt;a href="https://python.org" rel="noopener noreferrer"&gt;Python's built-in ast module&lt;/a&gt; and import analysis scripts can generate call graphs. For JavaScript, &lt;a href="https://eslint.org" rel="noopener noreferrer"&gt;ESLint&lt;/a&gt; and module analysis tools serve a similar purpose. &lt;a href="https://github.com" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; advanced search can help you find all internal references to a specific function across a large repository.&lt;/p&gt;
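&lt;p&gt;As a starting point, the stdlib ast module can enumerate direct call sites in a single file. This is a minimal sketch: it only matches calls made by bare name, and a real audit would walk the whole package and resolve imports:&lt;/p&gt;

```python
import ast

def find_call_sites(source, func_name):
    """Return (line, positional_arg_count, keyword_arg_count) per direct call."""
    sites = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            callee = node.func
            if isinstance(callee, ast.Name) and callee.id == func_name:
                sites.append((node.lineno, len(node.args), len(node.keywords)))
    return sites

src = "total = apply_discount(price, tier)\nother = apply_discount(price, tier=tier)\n"
print(find_call_sites(src, "apply_discount"))  # [(1, 2, 0), (2, 1, 1)]
```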

&lt;p&gt;The AI can help with this phase, but its output should be treated as a starting point:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Identify all the places this function is called in the following files.
For each call site, note:
1. The file and line number
2. How the return value is used (stored, compared, iterated over, etc.)
3. Whether the caller passes keyword arguments or positional arguments

[target function] [relevant surrounding files]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Review the AI's output carefully. Dynamic call patterns (calling functions stored in dictionaries, factory patterns, monkey-patching) will not appear in AI dependency analysis. These need manual identification.&lt;/p&gt;
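&lt;p&gt;A concrete example of the kind of dynamic pattern that slips through (names are illustrative):&lt;/p&gt;

```python
# The function is looked up from a dict, so no call site mentions it by name:
# neither grep for "apply_percent(" nor an AST walk for direct calls finds it.
def apply_percent(total):
    return total * 0.9

def apply_fixed(total):
    return total - 5.0

STRATEGIES = {"percent": apply_percent, "fixed": apply_fixed}

def checkout(total, strategy):
    # the call target is only known at runtime
    return STRATEGIES[strategy](total)

print(checkout(100.0, "percent"))  # 90.0
```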

&lt;p&gt;The dependency map serves a critical purpose: before you change a function signature or return type, you know what you need to update. Without it, you are refactoring blind.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Create a Test Baseline
&lt;/h2&gt;

&lt;p&gt;Legacy code with no tests is the most dangerous to refactor because you have no automated way to verify that behavior is preserved. Before any refactoring, use AI to generate an initial test suite for the module you are working on.&lt;/p&gt;

&lt;p&gt;This is one of the highest-value uses of AI assistance in legacy modernization. Even imperfect AI-generated tests are faster to produce than writing them from scratch, and they provide a safety net that makes subsequent refactoring significantly lower-risk.&lt;/p&gt;

&lt;p&gt;Important: AI-generated tests tend to cover the happy path and obvious error cases well, and miss edge cases that emerged from production incidents. After getting the AI-generated test suite, review your issue tracker, &lt;a href="https://git-scm.com" rel="noopener noreferrer"&gt;Git&lt;/a&gt; blame history, and incident reports for the module. Add tests for any bugs that were fixed in the module's history - those are the edge cases most likely to be reintroduced by refactoring.&lt;/p&gt;
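&lt;p&gt;In pytest style, a history-derived test might look like this sketch. The function under test is a stand-in (the real discount.py API is assumed, not known here); the point is that the non-stacking case comes from documented history, not from what a generator would produce unprompted:&lt;/p&gt;

```python
# Stand-in for the legacy entry point; replace with the real import.
def final_price(base, loyalty=0.0, promo=0.0):
    return round(base * (1 - max(loyalty, promo)), 2)

def test_single_discount():
    # Happy path: AI-generated suites usually cover this well.
    assert final_price(100.0, loyalty=0.10) == 90.0

def test_discounts_do_not_stack():
    # Regression for a past incident: rates must never be summed.
    assert final_price(100.0, loyalty=0.20, promo=0.15) == 80.0

def test_rounds_to_cents():
    # Edge case from bug history: totals are rounded to two decimals.
    assert final_price(10.99, promo=0.15) == 9.34
```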

&lt;p&gt;Once your test baseline is in place, configure your CI pipeline to run these tests on every commit. This gives you immediate feedback when a refactoring breaks behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Identify and Document the Critical Paths
&lt;/h2&gt;

&lt;p&gt;Not all code in a legacy system is equally risky to modify. The critical paths are the execution flows that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handle money or anything irreversible (payments, emails sent, database deletes)&lt;/li&gt;
&lt;li&gt;Run under high load or in performance-sensitive paths&lt;/li&gt;
&lt;li&gt;Have known security relevance (authentication, authorization, input validation)&lt;/li&gt;
&lt;li&gt;Have produced incidents or bugs in the past&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the paths where AI-generated refactors need the most careful human review. Document them explicitly before starting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Critical paths in discount.py:
1. Lines 145-190: Final discount application to cart total - this writes to the order record
2. Lines 210-230: Promo code validation bypass for internal employee accounts - security-relevant
3. Lines 280-310: Bulk discount calculation - runs for every item in large orders, performance-sensitive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When AI-generated refactors touch lines in this list, they get extra review. When they do not, you can move faster. This simple classification reduces the time you spend being careful about everything and focuses attention where it matters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdas4v3zf1lc0tfiudxie.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdas4v3zf1lc0tfiudxie.jpeg" alt="A chalkboard with handwritten formulas and diagrams being worked through" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Bernice Chan on &lt;a href="https://www.pexels.com" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Set Up a Safe Experimentation Environment
&lt;/h2&gt;

&lt;p&gt;Before merging any AI-assisted refactoring, you need a way to run the original and refactored code side-by-side and compare behavior. The ideal setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A feature branch where AI-assisted changes are isolated&lt;/li&gt;
&lt;li&gt;Your test baseline running against both the original and the refactored code&lt;/li&gt;
&lt;li&gt;If the module has external side effects (database writes, external API calls), a way to stub those out for comparison testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.martinfowler.com" rel="noopener noreferrer"&gt;Martin Fowler's&lt;/a&gt; branch-by-abstraction pattern is useful for large-scale refactoring: introduce a seam that lets you run old and new implementations in parallel and compare results before fully switching.&lt;/p&gt;

&lt;p&gt;For simpler modules, a straightforward A/B test in a staging environment - routing a portion of traffic to the refactored implementation - gives you confidence before full deployment.&lt;/p&gt;
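&lt;p&gt;Routing can be kept deterministic by bucketing on a stable key, so each user consistently hits the same implementation for the duration of the comparison. A sketch (the key choice and percentage scheme are assumptions, not any framework's API):&lt;/p&gt;

```python
import zlib

def use_new_impl(user_id, rollout_percent):
    """Deterministically assign user_id to one of 100 buckets."""
    bucket = zlib.crc32(user_id.encode("utf-8")) % 100
    # True when the bucket falls below the rollout threshold, i.e. for
    # roughly rollout_percent of users.
    return bucket in range(rollout_percent)
```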

&lt;h2&gt;
  
  
  Putting It Together
&lt;/h2&gt;

&lt;p&gt;The preparation sequence - scope definition, dependency audit, test baseline, critical path identification, safe environment setup - takes time. On a module of moderate complexity, expect to spend a day on preparation before writing a line of refactored code.&lt;/p&gt;

&lt;p&gt;That investment pays back quickly. With context documents, a test baseline, and a dependency map in hand, each AI-assisted refactoring session produces output that is faster to review, safer to merge, and less likely to produce production incidents.&lt;/p&gt;

&lt;p&gt;For the full framework on running these sessions - including prompting patterns for the refactoring phase itself - the guide on &lt;a href="https://137foundry.com/articles/ai-coding-assistants-legacy-code-modernization" rel="noopener noreferrer"&gt;using AI coding assistants for legacy code modernization&lt;/a&gt; covers the end-to-end process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;137Foundry&lt;/a&gt; works with engineering teams on legacy modernization assessments and implementation. The &lt;a href="https://137foundry.com/services/ai-automation" rel="noopener noreferrer"&gt;137Foundry AI automation services&lt;/a&gt; include preparation consulting for teams starting this process for the first time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://prettier.io" rel="noopener noreferrer"&gt;Prettier&lt;/a&gt; and &lt;a href="https://eslint.org" rel="noopener noreferrer"&gt;ESLint&lt;/a&gt; are useful tools for establishing consistent code style as a baseline before starting structural refactoring - style differences in a diff make behavioral changes harder to spot. &lt;a href="https://owasp.org" rel="noopener noreferrer"&gt;OWASP&lt;/a&gt; provides useful checklists for security-critical code review that apply directly to the critical path review step.&lt;/p&gt;

&lt;p&gt;Legacy modernization done well is not fast. But with the right preparation, AI assistance makes it substantially less expensive than it used to be.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Stop Paying for Invoice Software. This Free Tool Runs Right in Your Browser.</title>
      <dc:creator>Tharindu Dulshan Fernando</dc:creator>
      <pubDate>Sat, 09 May 2026 11:07:50 +0000</pubDate>
      <link>https://gg.forem.com/tharindufdo/stop-paying-for-invoice-software-this-free-tool-runs-right-in-your-browser-1am2</link>
      <guid>https://gg.forem.com/tharindufdo/stop-paying-for-invoice-software-this-free-tool-runs-right-in-your-browser-1am2</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7d4b547qepsdoj4m5gpr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7d4b547qepsdoj4m5gpr.png" alt=" " width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No subscriptions. No accounts. No data sent to anyone. Just open it and start invoicing. Link: &lt;a href="https://invoicegeny.com/" rel="noopener noreferrer"&gt;https://invoicegeny.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftol4i74uucdhz1ytn8wp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftol4i74uucdhz1ytn8wp.png" alt=" " width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you run a small business, freelance, or do any kind of client work, invoicing is one of those necessary evils. You need professional-looking invoices, but the tools that create them either cost money every month, require you to create an account, or lock your data behind a login.&lt;/p&gt;

&lt;p&gt;Invoicegeny is a free alternative that works differently. It runs entirely in your web browser. No sign-up, no subscription, no data ever leaving your device. Open it, set it up once, and start sending invoices in minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your Data Stays on Your Device
&lt;/h2&gt;

&lt;p&gt;This is the biggest difference from every other invoicing tool.&lt;/p&gt;

&lt;p&gt;Apps like FreshBooks, Wave, or QuickBooks store your business data on their servers. That means your client list, your pricing, your revenue, all of it lives somewhere you don’t fully control. If they change their pricing, get acquired, or shut down, you have a problem.&lt;/p&gt;

&lt;p&gt;Invoicegeny stores everything in your browser’s local storage, the same place your browser saves your preferences and history. Nothing is sent to any server. Nothing is stored in the cloud. Your data is yours, period.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set It Up Once, Use It Forever
&lt;/h2&gt;

&lt;p&gt;The first thing you do is fill in your Seller Profile. This is your business info: name, logo, address, email, phone, tax ID, default tax rate, service charge, and bank account details. You fill this out once. Every invoice you create pulls from this profile automatically.&lt;/p&gt;

&lt;p&gt;You can save multiple bank accounts and pick the right one per invoice with a single click. Currency is fully configurable — USD, EUR, GBP, LKR, and dozens more are supported out of the box.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5yipyonnftn2jpf9y9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5yipyonnftn2jpf9y9r.png" alt=" " width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Customers and Products - Managed, Not Re-Typed
&lt;/h2&gt;

&lt;p&gt;One of the most tedious parts of manual invoicing is re-entering the same customer details and prices over and over.&lt;/p&gt;

&lt;p&gt;Invoicegeny has a proper customer list and product catalogue. When you create an invoice, you search for the customer by name, phone, or email, and it finds them instantly. Same with products: search, click, and the item appears on the invoice with the right price already filled in.&lt;/p&gt;

&lt;p&gt;Adding the same product twice? It just bumps the quantity. No duplicate lines. If a customer or product doesn’t exist yet, you can add them right from the invoice creation screen, with no need to go to a separate page first.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an Invoice in Under a Minute
&lt;/h2&gt;

&lt;p&gt;Once your profile, customers, and products are set up, creating an invoice is straightforward: pick a customer, search and add products, choose a payment method (Cash, Card, or Bank Transfer), set a due date, add any notes, and click “Save &amp;amp; Download PDF”.&lt;/p&gt;

&lt;p&gt;A professionally formatted PDF downloads to your computer instantly, with your logo, your business details, the itemised list, tax, service charge, and total. Ready to send to your client.&lt;/p&gt;

&lt;p&gt;Invoice numbers are assigned automatically in sequence (INV-0001, INV-0002…) so you never have to think about it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkevqfgebaztux2qyfuc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkevqfgebaztux2qyfuc.png" alt=" " width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The PDF Looks Professional
&lt;/h2&gt;

&lt;p&gt;A lot of free invoice tools produce PDFs that look like they were made in 2003. This one doesn’t.&lt;/p&gt;

&lt;p&gt;The generated PDF includes your logo in the top left corner, your business name and contact details next to it, “INVOICE” prominently on the right with the invoice number and dates, a clean items table with alternating row shading, an itemised totals section showing subtotal, tax, service charge, and the final amount, payment details at the bottom, and a footer.&lt;/p&gt;

&lt;p&gt;It’s the kind of invoice that makes you look established and professional, even if you’re a team of one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Track the Status of Every Invoice
&lt;/h2&gt;

&lt;p&gt;Your invoices list shows every invoice you’ve created with its current status: Draft, Sent, Paid, or Overdue. You can update the status with one click. It’s a simple but effective way to know at a glance what’s been paid and what you still need to chase.&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1hi3csiwkuxi8ftq74k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1hi3csiwkuxi8ftq74k.png" alt=" " width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Is This For?
&lt;/h2&gt;

&lt;p&gt;This tool is a great fit if you are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A freelancer (designer, developer, photographer, writer, consultant) who invoices clients directly&lt;/li&gt;
&lt;li&gt;A sole trader or small business&lt;/li&gt;
&lt;li&gt;Someone who values privacy and doesn’t want their business data on third-party servers&lt;/li&gt;
&lt;li&gt;Someone tired of paying a monthly fee for a feature they only use occasionally&lt;/li&gt;
&lt;li&gt;Operating in any country — multi-currency support is built in&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Completely Free. Forever.
&lt;/h2&gt;

&lt;p&gt;There’s no free tier with limits. No premium plan. No credit card required. The tool has no server costs because it has no server. It’s free to use because there’s genuinely nothing to charge for.&lt;/p&gt;

&lt;p&gt;Open the app, fill in your seller profile, add a couple of customers and products, create your first invoice, and download the PDF. The whole setup takes about five minutes. After that, invoicing takes under a minute per client.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try it here → &lt;a href="https://invoicegeny.com/" rel="noopener noreferrer"&gt;https://invoicegeny.com/&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>invoice</category>
      <category>invoicegenerator</category>
      <category>productivity</category>
      <category>invoicegeny</category>
    </item>
    <item>
      <title>The AI Hype Crisis</title>
      <dc:creator>Tim Green</dc:creator>
      <pubDate>Sat, 09 May 2026 11:00:00 +0000</pubDate>
      <link>https://gg.forem.com/rawveg/the-ai-hype-crisis-3d9b</link>
      <guid>https://gg.forem.com/rawveg/the-ai-hype-crisis-3d9b</guid>
      <description>&lt;p&gt;In June 2024, Goldman Sachs published a research note that rattled Silicon Valley's most cherished assumptions. The report posed what it called the “$600 billion question”: would the staggering investment in artificial intelligence infrastructure ever generate proportional returns? The note featured analysis from MIT economist Daron Acemoglu, who had recently calculated that AI would produce no more than a 0.93 to 1.16 percent increase in US GDP over the next decade, a figure dramatically lower than the techno-utopian projections circulating through investor presentations and conference keynotes. “Much of what we hear from the industry now is exaggeration,” Acemoglu stated plainly. Two months later, he was awarded the 2024 Nobel Memorial Prize in Economic Sciences, alongside his MIT colleague Simon Johnson and University of Chicago economist James Robinson, for research on the relationship between political institutions and economic growth.&lt;/p&gt;

&lt;p&gt;That gap between what AI is promised to deliver and what it actually does is no longer an abstract concern for economists and technologists. It is reshaping public attitudes toward technology at a speed that should alarm anyone who cares about the long-term relationship between innovation and democratic society. When governments deploy algorithmic systems to deny healthcare coverage or detect welfare fraud, when corporations invest billions in tools that fail 95 percent of the time, and when the public is told repeatedly that superintelligence is just around the corner while chatbots still fabricate legal citations, something fundamental breaks in the social contract around technological progress.&lt;/p&gt;

&lt;p&gt;The question is not whether AI is useful. It plainly is, in specific, well-defined applications. The question is what happens when an entire civilisation makes strategic decisions based on capabilities that do not yet exist and may never materialise in the form being sold.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Great Correction Arrives
&lt;/h2&gt;

&lt;p&gt;By late 2025, the AI industry had entered what Gartner's analysts formally classified as the “Trough of Disillusionment.” Generative AI, which had been perched at the Peak of Inflated Expectations just one year earlier, had slid into the territory where early adopters report performance issues, low return on investment, and a growing sense that the technology's capabilities had been systematically overstated. The positioning reflected difficulties organisations face when attempting to move generative AI from pilot projects to production systems. Integration with existing infrastructure presented technical obstacles, while concerns about data security caused some companies to limit deployment entirely.&lt;/p&gt;

&lt;p&gt;The numbers told a damning story. According to MIT's “The GenAI Divide: State of AI in Business 2025” report, published in July 2025 and based on 52 executive interviews, surveys of 153 leaders, and analysis of 300 public AI deployments, 95 percent of generative AI pilot projects delivered no measurable profit-and-loss impact. American enterprises had spent an estimated $40 billion on artificial intelligence systems in 2024, yet the vast majority saw zero measurable bottom-line returns. Only five percent of integrated systems created significant value.&lt;/p&gt;

&lt;p&gt;The study's authors, from MIT's NANDA initiative, identified what they termed the “GenAI Divide”: a widening split between high adoption and low transformation. Companies were enthusiastically purchasing and deploying AI tools, but almost none were achieving the business results that had been promised. “The 95% failure rate for enterprise AI solutions represents the clearest manifestation of the GenAI Divide,” the report stated. The core barrier, the authors concluded, was not infrastructure, regulation, or talent. It was that most generative AI systems “do not retain feedback, adapt to context, or improve over time,” making them fundamentally ill-suited for the enterprise environments into which they were being thrust.&lt;/p&gt;

&lt;p&gt;This was not an outlier finding. A 2024 NTT DATA analysis concluded that between 70 and 85 percent of generative AI deployment efforts were failing to meet their desired return on investment. The Autodesk State of Design &amp;amp; Make 2025 report found that sentiment toward AI had dropped significantly year over year, with just 69 percent of business leaders saying AI would enhance their industry, representing a 12 percent decline from the previous year. Only 40 percent of leaders said they were approaching or had achieved their AI goals, a 16-point decrease that represented a 29 percent drop. S&amp;amp;P Global data revealed that 42 percent of companies scrapped most of their AI initiatives in 2025, up sharply from 17 percent the year before.&lt;/p&gt;

&lt;p&gt;The infrastructure spending, meanwhile, continued accelerating even as returns failed to materialise. Meta, Microsoft, Amazon, and Google collectively committed over $250 billion to AI infrastructure during 2025. Amazon alone planned $125 billion in capital expenditure, up from $77 billion in 2024, a 62 percent increase. Goldman Sachs CEO David Solomon publicly acknowledged that he expected “a lot of capital that was deployed that doesn't deliver returns.” Amazon founder Jeff Bezos called the environment “kind of an industrial bubble.” Even OpenAI CEO Sam Altman conceded that “people will overinvest and lose money.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Trust in Freefall
&lt;/h2&gt;

&lt;p&gt;The gap between AI's promises and its performance is not occurring in a vacuum. It is landing on a public already growing sceptical of the technology industry's claims, and it is accelerating a decline in trust that carries profound implications for democratic governance.&lt;/p&gt;

&lt;p&gt;The 2025 Edelman Trust Barometer, based on 30-minute online interviews conducted between October and November 2024, revealed a stark picture. Globally, only 49 percent of respondents trusted artificial intelligence as a technology. In the United States, that figure dropped to just 32 percent. Three times as many Americans rejected the growing use of AI (49 percent) as embraced it (17 percent). In the United Kingdom, trust stood at just 36 percent. In Germany, 39 percent. The Chinese public, by contrast, reported 72 percent trust in AI, a 40-point gap that reflects not just different regulatory environments but fundamentally different cultural relationships with technology and state authority.&lt;/p&gt;

&lt;p&gt;These figures represent a significant deterioration. A decade ago, 73 percent of Americans trusted technology companies. By 2025, that number had fallen to 63 percent. Technology, the most trusted sector eight years ago in 90 percent of the countries Edelman studies, now held that position in only half. The barometer also found that 59 percent of global employees feared job displacement due to automation, and nearly one in two were sceptical of business use of artificial intelligence.&lt;/p&gt;

&lt;p&gt;The Pew Research Center's findings painted an even more granular picture of public anxiety. In an April 2025 report examining how the US public and AI experts view artificial intelligence, Pew found that 50 percent of American adults said they were more concerned than excited about the increased use of AI in daily life, up from 37 percent in 2021. More than half (57 percent) rated the societal risks of AI as high, compared with only 25 percent who said the benefits were high. Over half of US adults (53 percent) believed AI did more harm than good in protecting personal privacy, and 53 percent said AI would worsen people's ability to think creatively.&lt;/p&gt;

&lt;p&gt;Perhaps most revealing was the chasm between expert optimism and public unease. While 56 percent of AI experts believed AI would have a positive effect on the United States over the next 20 years, only 17 percent of the general public agreed. While 47 percent of experts said they were more excited than concerned, only 11 percent of ordinary citizens felt the same. And despite their divergent levels of optimism, both groups shared a common scepticism about institutional competence: roughly 60 percent of both experts and the public said they lacked confidence that US companies would develop AI responsibly.&lt;/p&gt;

&lt;p&gt;The Stanford HAI AI Index 2025 Report reinforced these trends globally. Across 26 nations surveyed by Ipsos, confidence that AI companies protect personal data fell from 50 percent in 2023 to 47 percent in 2024. Fewer people believed AI systems were unbiased and free from discrimination compared to the previous year. While 18 of 26 nations saw an increase in the proportion of people who believed AI products offered more benefits than drawbacks, the optimism was concentrated in countries like China (83 percent), Indonesia (80 percent), and Thailand (77 percent), while the United States (39 percent), Canada (40 percent), and the Netherlands (36 percent) remained deeply sceptical.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Algorithms Replace Judgement
&lt;/h2&gt;

&lt;p&gt;The erosion of public trust in AI would be concerning enough if it were merely a matter of consumer sentiment. But the stakes become existential when governments and corporations use overestimated AI capabilities to make decisions that fundamentally alter people's lives, and when those decisions carry consequences that cannot be undone.&lt;/p&gt;

&lt;p&gt;Consider healthcare. In November 2023, a class action lawsuit was filed against UnitedHealth Group and its subsidiary, alleging that the company illegally used an AI algorithm called nH Predict to deny rehabilitation care to seriously ill elderly patients enrolled in Medicare Advantage plans. The algorithm, developed by a company called Senior Metrics and later acquired by UnitedHealth's Optum subsidiary in 2020, was designed to predict how long patients would need post-acute care. According to the lawsuit, UnitedHealth deployed the algorithm knowing it had a 90 percent error rate on appeals, meaning that nine out of ten times a human reviewed the AI's denial, they overturned it. UnitedHealth also allegedly knew that only 0.2 percent of denied patients would file appeals, making the error rate commercially inconsequential for the insurer despite being medically devastating for patients.&lt;/p&gt;

&lt;p&gt;The human cost was documented in court filings. Gene Lokken, a 91-year-old Wisconsin resident named in the lawsuit, fractured his leg and ankle in May 2022. After his doctor approved physical therapy, UnitedHealth paid for only 19 days before the algorithm determined he was safe to go home. His doctors appealed, noting his muscles were “paralysed and weak,” but the insurer denied further coverage. His family paid approximately $150,000 over the following year until he died in July 2023. In February 2025, a federal court allowed the case to proceed, denying UnitedHealth's attempt to dismiss the claims and waiving the exhaustion of administrative remedies requirement, noting that patients faced irreparable harm.&lt;/p&gt;

&lt;p&gt;The STAT investigative series “Denied by AI,” which broke the UnitedHealth story, was a 2024 Pulitzer Prize finalist in investigative reporting. A US Senate report released in October 2024 found that UnitedHealthcare's prior authorisation denial rate for post-acute care had more than doubled, from 10.9 percent in 2020 to 22.7 percent in 2022. The healthcare AI problem extends far beyond a single insurer. ECRI, a patient safety organisation, ranked insufficient governance of artificial intelligence as the number two patient safety threat in 2025, warning that medical errors generated by AI could compromise patient safety through misdiagnoses and inappropriate treatment decisions. Yet only about 16 percent of hospital executives surveyed said they had a systemwide governance policy for AI use and data access.&lt;/p&gt;

&lt;p&gt;The pattern repeats across domains where algorithmic systems are deployed to process vulnerable populations. In the Netherlands, the childcare benefits scandal stands as perhaps the most devastating example of what happens when governments trust flawed algorithms with life-altering decisions. The Dutch Tax and Customs Administration deployed a machine learning model to detect welfare fraud that illegally used dual nationality as a risk characteristic. The system falsely accused over 20,000 parents of fraud, resulting in benefits termination and forced repayments. Families were driven into bankruptcy. Children were removed from their homes. Mental health crises proliferated. Seventy percent of those affected had a migration background, and fifty percent were single-person households, mostly mothers. In January 2021, the Dutch government was forced to resign after a parliamentary investigation concluded that the government had violated the foundational principles of the rule of law.&lt;/p&gt;

&lt;p&gt;The related SyRI (System Risk Indication) system, which cross-referenced citizens' employment, benefits, and tax data to flag “unlikely citizen profiles,” was deployed exclusively in neighbourhoods with high numbers of low-income households and disproportionately many residents from immigrant backgrounds. In February 2020, the Hague court ordered SyRI's immediate halt, ruling it violated Article 8 of the European Convention on Human Rights. Amnesty International described the system's targeting criteria as “xenophobic machines.” Yet investigations by Lighthouse Reports later confirmed that similar algorithmic surveillance practices continued under slightly adapted systems, even after the ban, with the government having “silently continued to deploy a slightly adapted SyRI in some of the country's most vulnerable neighbourhoods.”&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stochastic Parrot Problem
&lt;/h2&gt;

&lt;p&gt;Understanding why AI hype is so dangerous requires understanding what these systems actually do, as opposed to what their makers claim they do.&lt;/p&gt;

&lt;p&gt;Emily Bender, a linguistics professor at the University of Washington who was included in the inaugural TIME100 AI list of most influential people in artificial intelligence in 2023, co-authored a now-famous paper arguing that large language models are fundamentally “stochastic parrots.” They do not understand language in any meaningful sense. They draw on training data to predict which sequence of tokens is most likely to follow a given prompt. The result is an illusion of comprehension, a pattern-matching exercise that produces outputs resembling intelligent thought without any of the underlying cognition.&lt;/p&gt;

&lt;p&gt;In 2025, Bender and sociologist Alex Hanna, director of research at the Distributed AI Research Institute and a former Google employee, published “The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.” The book argues that AI hype serves as a mask for Big Tech's drive for profit, with the breathless promotion of AI capabilities benefiting technology companies far more than users or society. “Who benefits from this technology, who is harmed, and what recourse do they have?” Bender and Hanna ask, framing these as the essential questions that the hype deliberately obscures. Library Journal called the book “a thorough, witty, and accessible argument against AI that meets the moment.”&lt;/p&gt;

&lt;p&gt;The stochastic parrot problem has real-world consequences that compound the trust deficit. When AI systems fabricate information with perfect confidence, they undermine the epistemic foundations that societies rely on for decision-making. Legal scholar Damien Charlotin, who tracks AI hallucinations in court filings through his database, had documented at least 206 instances of lawyers submitting AI-generated fabricated case citations by mid-2025. Stanford University's RegLab found that even premium legal AI tools hallucinated at alarming rates: Westlaw's AI-Assisted Research produced hallucinated or incorrect information 33 percent of the time, providing accurate responses to only 42 percent of queries. LexisNexis's Lexis+ AI hallucinated 17 percent of the time. A 2025 study published in Nature Machine Intelligence found that large language models cannot reliably distinguish between belief and knowledge, or between opinions and facts, noting that “failure to make such distinctions can mislead diagnoses, distort judicial judgements and amplify misinformation.”&lt;/p&gt;

&lt;p&gt;If the tools marketed as the most reliable in their field fabricate information roughly one-fifth to one-third of the time, what does this mean for the countless lower-stakes applications where AI outputs are accepted without verification?&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI Washing Economy
&lt;/h2&gt;

&lt;p&gt;The gap between marketing claims and actual capabilities has grown so pronounced that regulators have begun treating AI exaggeration as a form of securities fraud.&lt;/p&gt;

&lt;p&gt;In March 2024, the US Securities and Exchange Commission brought its first “AI washing” enforcement actions, simultaneously charging two investment advisory firms, Delphia and Global Predictions, with making false and misleading statements about their use of AI. Delphia paid $225,000 and Global Predictions paid $175,000 in civil penalties. Neither firm was entirely without AI capabilities, but both had overstated what their systems could do, crossing the line from marketing enthusiasm into regulatory violation.&lt;/p&gt;

&lt;p&gt;The enforcement actions escalated rapidly. In January 2025, the SEC charged Presto Automation, a formerly Nasdaq-listed company, in the first AI washing action against a public company. Presto had claimed its AI voice system eliminated the need for human drive-through order-taking at fast food restaurants, but the SEC alleged the vast majority of orders still required human intervention and that the AI speech recognition technology was owned and operated by a third party. In April 2025, the SEC and Department of Justice charged the founder of Nate Inc. with fraudulently raising over $42 million by claiming the company's shopping app used AI to process transactions, when in reality manual workers completed the purchases. The claimed automation rate was above 90 percent; the actual rate was essentially zero.&lt;/p&gt;

&lt;p&gt;Securities class actions targeting alleged AI misrepresentations increased by 100 percent between 2023 and 2024. In February 2025, the SEC announced the creation of a dedicated Cyber and Emerging Technologies Unit, tasked with combating technology-related misconduct, and flagged AI washing as a top examination priority.&lt;/p&gt;

&lt;p&gt;The pattern is instructive. When a technology is overhyped, the incentive to exaggerate capabilities becomes irresistible. Companies that accurately describe their modest AI implementations risk being punished by investors who have been conditioned to expect transformative breakthroughs. The honest actors are penalised while the exaggerators attract capital, creating a market dynamic that systematically rewards deception.&lt;/p&gt;

&lt;h2&gt;
  
  
  Echoes of Previous Bubbles
&lt;/h2&gt;

&lt;p&gt;The AI hype cycle is not without historical precedent, and the parallels offer both warnings and qualified reassurance.&lt;/p&gt;

&lt;p&gt;During the dot-com era, telecommunications companies laid more than 80 million miles of fibre optic cables across the United States, driven by wildly inflated claims about internet traffic growth. Companies like Global Crossing, Level 3, and Qwest raced to build massive networks. The result was catastrophic overcapacity: even four years after the bubble burst, 85 to 95 percent of the fibre laid remained unused, earning the nickname “dark fibre.” The Nasdaq composite rose nearly 400 percent between 1995 and March 2000, then crashed 78 percent by October 2002.&lt;/p&gt;

&lt;p&gt;The parallels to today's AI infrastructure buildout are unmistakable. Meta CEO Mark Zuckerberg announced plans for an AI data centre “so large it could cover a significant part of Manhattan.” The Stargate Project aims to develop a $500 billion nationwide network of AI data centres. Goldman Sachs analysts found that hyperscaler companies had taken on $121 billion in debt over the past year, representing a more than 300 percent increase from typical industry debt levels. AI-related stocks had accounted for 75 percent of S&amp;amp;P 500 returns, 80 percent of earnings growth, and 90 percent of capital spending growth since ChatGPT launched in November 2022.&lt;/p&gt;

&lt;p&gt;Yet there are important differences. Unlike many dot-com companies that had no revenue, major AI players are generating substantial income. Microsoft's Azure cloud service grew 39 percent year over year to an $86 billion run rate. OpenAI projects $20 billion in annualised revenue. The Nasdaq's forward price-to-earnings ratio was approximately 26 times in November 2023, compared to approximately 60 times at the dot-com peak.&lt;/p&gt;

&lt;p&gt;The more useful lesson from the dot-com era is not about whether the bubble will burst, but about what happens to public trust and institutional decision-making in the aftermath. The internet survived the dot-com crash and eventually fulfilled many of its early promises. But the crash destroyed trillions in wealth, wiped out retirement savings, and created a lasting scepticism toward technology claims that took years to overcome. The institutions and individuals who made decisions based on dot-com hype, from pension funds that invested in companies with no path to profitability to governments that restructured services around technologies that did not yet work, bore costs that were never fully recovered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Algorithmic Bias and the Feedback Loop of Injustice
&lt;/h2&gt;

&lt;p&gt;Perhaps the most consequential long-term risk of the AI hype gap is its intersection with systemic inequality. When policymakers deploy AI systems in criminal justice, welfare administration, and public services based on inflated claims of accuracy and objectivity, the consequences fall disproportionately on communities that are already marginalised.&lt;/p&gt;

&lt;p&gt;Predictive policing offers a stark illustration. The Chicago Police Department's “Strategic Subject List,” implemented in 2012 to identify individuals at higher risk of gun violence, disproportionately targeted young Black and Latino men, leading to intensified surveillance and police interactions in those communities. The system created a feedback loop: more police dispatched to certain neighbourhoods resulted in more recorded crime, which the algorithm interpreted as confirmation that those neighbourhoods were indeed high-risk, which led to even more policing. The NAACP has called on state legislators to evaluate and regulate the use of predictive policing, noting mounting evidence that these tools increase racial biases and citing the lack of transparency inherent in proprietary algorithms that do not allow for public scrutiny.&lt;/p&gt;

&lt;p&gt;The COMPAS recidivism prediction tool, widely used in US criminal justice and trained on historical data saturated with racial bias, was found to produce biased predictions against Black defendants compared with white defendants. An audit by the LAPD inspector general found “significant inconsistencies” in how officers entered data into a predictive policing programme, further fuelling biased predictions. These are not edge cases or implementation failures. They are the predictable consequences of deploying pattern-recognition systems trained on data that reflects centuries of structural discrimination.&lt;/p&gt;

&lt;p&gt;In welfare administration, the pattern is equally troubling. The Dutch childcare benefits scandal demonstrated how algorithmic systems can automate inequality at scale. The municipality of Rotterdam used a discriminatory algorithm to profile residents and “predict” social welfare fraud for three years, disproportionately targeting young single mothers with limited knowledge of Dutch. In the United Kingdom, the Department for Work and Pensions admitted, in documents released under the Freedom of Information Act, to finding bias in an AI tool used to detect fraud in universal credit claims. The tool's initial iteration correctly matched conditions only 35 percent of the time, and by the DWP's own admission, “chronic fatigue was translated into chronic renal failure” and “partially amputation of foot was translated into partially sighted.”&lt;/p&gt;

&lt;p&gt;These failures share a common thread. The AI systems were deployed based on claims of objectivity and accuracy that did not withstand scrutiny. Policymakers, influenced by industry hype about AI's capabilities, trusted algorithmic outputs over human judgement, and the people who paid the price were those least equipped to challenge the decisions being made about their lives.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Sustained Disillusionment Means for Innovation
&lt;/h2&gt;

&lt;p&gt;The long-term consequences of the AI hype gap extend beyond immediate harms to individual victims. They threaten to reshape the relationship between society and technological innovation in ways that could prove difficult to reverse.&lt;/p&gt;

&lt;p&gt;First, there is the problem of misallocated resources. The MIT study found that more than half of generative AI budgets were devoted to sales and marketing tools, despite evidence that the best returns came from back-office automation, eliminating business process outsourcing, cutting external agency costs, and streamlining operations. When organisations chase the use cases that sound most impressive rather than those most likely to deliver value, they waste capital that could have funded genuinely productive innovation. The study also revealed a striking shadow economy: while only 40 percent of companies had official large language model subscriptions, 90 percent of workers surveyed reported daily use of personal AI tools for job tasks, suggesting that the gap between corporate AI strategy and actual AI utility is even wider than the headline figures indicate.&lt;/p&gt;

&lt;p&gt;Second, the trust deficit creates regulatory feedback loops that can stifle beneficial applications. As public concern about AI grows, so does political pressure for restrictive regulation. The 2025 Stanford HAI report found that references to AI in draft legislation across 75 countries increased by 21.3 percent, continuing a ninefold increase since 2016. In the United States, 73.7 percent of local policymakers agreed that AI should be regulated, up from 55.7 percent in 2022. This regulatory momentum is a direct response to the trust deficit, and while some regulation is necessary and overdue, poorly designed rules driven by public fear rather than technical understanding risk constraining beneficial applications alongside harmful ones. Colorado became the first US state to enact legislation addressing algorithmic bias in 2024, with California and New York following with their own targeted measures.&lt;/p&gt;

&lt;p&gt;Third, the hype cycle creates a talent and attention problem. When AI is presented as a solution to every conceivable challenge, researchers and engineers are pulled toward fashionable applications rather than areas of genuine need. Acemoglu has argued that “we currently have the wrong direction for AI. We're using it too much for automation and not enough for providing expertise and information to workers.” The hype incentivises building systems that replace human judgement rather than augmenting it, directing talent and investment away from applications that could produce the greatest social benefit.&lt;/p&gt;

&lt;p&gt;Finally, and perhaps most critically, the erosion of public trust in AI threatens to become self-reinforcing. Each failed deployment, each exaggerated claim exposed, each algorithmic system found to be biased or inaccurate further deepens public scepticism. Meredith Whittaker, president of Signal, has warned about the security and privacy risks of granting AI agents extensive access to sensitive data, describing a future where the “magic genie bot” becomes a nightmare if security and privacy are not prioritised. When public trust in AI erodes, even beneficial and well-designed systems face adoption resistance, creating a vicious cycle where good technology is tainted by association with bad marketing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rebuilding on Honest Foundations
&lt;/h2&gt;

&lt;p&gt;The AI hype gap is not merely a marketing problem or an investment risk. It is a structural challenge to the relationship between technological innovation and public trust that has been building for years and is now reaching a critical inflection point.&lt;/p&gt;

&lt;p&gt;The 2025 Edelman Trust Barometer found that the most powerful drivers of AI enthusiasm are trust and information, with hesitation rooted more in unfamiliarity than negative experiences. This finding suggests a path that does not require abandoning AI, but demands abandoning the hype. As people use AI more and experience its ability to help them learn, work, and solve problems, their confidence rises. The obstacle is not the technology itself but the inflated expectations that set users up for disappointment.&lt;/p&gt;

&lt;p&gt;Gartner's placement of generative AI in the Trough of Disillusionment is, paradoxically, encouraging. As the firm's analysts note, the trough does not represent failure. It represents the transition from wild experimentation to rigorous engineering, from breathless promises to honest assessment of what works and what does not. The companies and institutions that emerge successfully from this phase will be those that measured their claims against reality rather than against their competitors' marketing materials.&lt;/p&gt;

&lt;p&gt;The lesson from previous technology cycles is clear but routinely ignored. The dot-com bubble popped, but the internet did not disappear. What disappeared were the companies and institutions that confused hype with strategy. The same pattern will likely repeat with AI. The technology will mature, find its genuine applications, and deliver real value. But the path from here to there runs through a period of reckoning that demands honesty about what AI can and cannot do, transparency about the limitations of algorithmic decision-making, and accountability for the real harms caused by deploying immature systems in high-stakes contexts.&lt;/p&gt;

&lt;p&gt;As Bender and Hanna urge, the starting point must be asking basic but important questions: who benefits, who is harmed, and what recourse do they have? As Acemoglu wrote in his analysis for “Economic Policy” in 2024, “Generative AI has the potential to fundamentally change the process of scientific discovery, research and development, innovation, new product and material testing.” The potential is real. But potential is not performance, and treating it as such has consequences that a $600 billion question only begins to capture.&lt;/p&gt;




&lt;h2&gt;
  
  
  References and Sources
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Acemoglu, D. (2024). “The Simple Macroeconomics of AI.” &lt;em&gt;Economic Policy&lt;/em&gt;. Massachusetts Institute of Technology. &lt;a href="https://economics.mit.edu/sites/default/files/2024-04/The%20Simple%20Macroeconomics%20of%20AI.pdf" rel="noopener noreferrer"&gt;https://economics.mit.edu/sites/default/files/2024-04/The%20Simple%20Macroeconomics%20of%20AI.pdf&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amnesty International. (2021). “Xenophobic Machines: Dutch Child Benefit Scandal.” Retrieved from &lt;a href="https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/" rel="noopener noreferrer"&gt;https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bender, E. M. &amp;amp; Hanna, A. (2025). &lt;em&gt;The AI Con: How to Fight Big Tech's Hype and Create the Future We Want&lt;/em&gt;. Penguin/HarperCollins.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CBS News. (2023). “UnitedHealth uses faulty AI to deny elderly patients medically necessary coverage, lawsuit claims.” Retrieved from &lt;a href="https://www.cbsnews.com/news/unitedhealth-lawsuit-ai-deny-claims-medicare-advantage-health-insurance-denials/" rel="noopener noreferrer"&gt;https://www.cbsnews.com/news/unitedhealth-lawsuit-ai-deny-claims-medicare-advantage-health-insurance-denials/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Challapally, A., Pease, C., Raskar, R. &amp;amp; Chari, P. (2025). “The GenAI Divide: State of AI in Business 2025.” MIT NANDA Initiative. As reported by Fortune, 18 August 2025. &lt;a href="https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/" rel="noopener noreferrer"&gt;https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edelman. (2025). “2025 Edelman Trust Barometer.” Retrieved from &lt;a href="https://www.edelman.com/trust/2025/trust-barometer" rel="noopener noreferrer"&gt;https://www.edelman.com/trust/2025/trust-barometer&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edelman. (2025). “Flash Poll: Trust and Artificial Intelligence at a Crossroads.” Retrieved from &lt;a href="https://www.edelman.com/trust/2025/trust-barometer/flash-poll-trust-artifical-intelligence" rel="noopener noreferrer"&gt;https://www.edelman.com/trust/2025/trust-barometer/flash-poll-trust-artifical-intelligence&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edelman. (2025). “The AI Trust Imperative: Navigating the Future with Confidence.” Retrieved from &lt;a href="https://www.edelman.com/trust/2025/trust-barometer/report-tech-sector" rel="noopener noreferrer"&gt;https://www.edelman.com/trust/2025/trust-barometer/report-tech-sector&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Gartner. (2025). “Hype Cycle for Artificial Intelligence, 2025.” Retrieved from &lt;a href="https://www.gartner.com/en/articles/hype-cycle-for-artificial-intelligence" rel="noopener noreferrer"&gt;https://www.gartner.com/en/articles/hype-cycle-for-artificial-intelligence&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Goldman Sachs. (2024). “Top of Mind: AI: in a bubble?” Goldman Sachs Research. Retrieved from &lt;a href="https://www.goldmansachs.com/insights/top-of-mind/ai-in-a-bubble" rel="noopener noreferrer"&gt;https://www.goldmansachs.com/insights/top-of-mind/ai-in-a-bubble&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Healthcare Finance News. (2025). “Class action lawsuit against UnitedHealth's AI claim denials advances.” Retrieved from &lt;a href="https://www.healthcarefinancenews.com/news/class-action-lawsuit-against-unitedhealths-ai-claim-denials-advances" rel="noopener noreferrer"&gt;https://www.healthcarefinancenews.com/news/class-action-lawsuit-against-unitedhealths-ai-claim-denials-advances&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lighthouse Reports. (2023). “The Algorithm Addiction.” Retrieved from &lt;a href="https://www.lighthousereports.com/investigation/the-algorithm-addiction/" rel="noopener noreferrer"&gt;https://www.lighthousereports.com/investigation/the-algorithm-addiction/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Magesh, V., Surani, F., Dahl, M., Suzgun, M., Manning, C. D. &amp;amp; Ho, D. E. (2025). “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools.” &lt;em&gt;Journal of Empirical Legal Studies&lt;/em&gt;, 0:1-27. &lt;a href="https://doi.org/10.1111/jels.12413" rel="noopener noreferrer"&gt;https://doi.org/10.1111/jels.12413&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;MIT Technology Review. (2025). “The great AI hype correction of 2025.” Retrieved from &lt;a href="https://www.technologyreview.com/2025/12/15/1129174/the-great-ai-hype-correction-of-2025/" rel="noopener noreferrer"&gt;https://www.technologyreview.com/2025/12/15/1129174/the-great-ai-hype-correction-of-2025/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NAACP. (2024). “Artificial Intelligence in Predictive Policing Issue Brief.” Retrieved from &lt;a href="https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief" rel="noopener noreferrer"&gt;https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Nature Machine Intelligence. (2025). “Language models cannot reliably distinguish belief from knowledge and fact.” &lt;a href="https://doi.org/10.1038/s42256-025-01113-8" rel="noopener noreferrer"&gt;https://doi.org/10.1038/s42256-025-01113-8&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Novara Media. (2025). “How Labour Is Using Biased AI to Determine Benefit Claims.” Retrieved from &lt;a href="https://novaramedia.com/2025/04/15/how-the-labour-party-is-using-biased-ai-to-determine-benefit-claims/" rel="noopener noreferrer"&gt;https://novaramedia.com/2025/04/15/how-the-labour-party-is-using-biased-ai-to-determine-benefit-claims/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NTT DATA. (2024). “Between 70-85% of GenAI deployment efforts are failing to meet their desired ROI.” Retrieved from &lt;a href="https://www.nttdata.com/global/en/insights/focus/2024/between-70-85p-of-genai-deployment-efforts-are-failing" rel="noopener noreferrer"&gt;https://www.nttdata.com/global/en/insights/focus/2024/between-70-85p-of-genai-deployment-efforts-are-failing&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pew Research Center. (2025). “How the US Public and AI Experts View Artificial Intelligence.” Retrieved from &lt;a href="https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/" rel="noopener noreferrer"&gt;https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Radiologybusiness.com. (2025). “'Insufficient governance of AI' is the No. 2 patient safety threat in 2025.” Retrieved from &lt;a href="https://radiologybusiness.com/topics/artificial-intelligence/insufficient-governance-ai-no-2-patient-safety-threat-2025" rel="noopener noreferrer"&gt;https://radiologybusiness.com/topics/artificial-intelligence/insufficient-governance-ai-no-2-patient-safety-threat-2025&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SEC. (2024). “SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence.” Press Release 2024-36. Retrieved from &lt;a href="https://www.sec.gov/newsroom/press-releases/2024-36" rel="noopener noreferrer"&gt;https://www.sec.gov/newsroom/press-releases/2024-36&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stanford HAI. (2025). “The 2025 AI Index Report.” Stanford University Human-Centered Artificial Intelligence. Retrieved from &lt;a href="https://hai.stanford.edu/ai-index/2025-ai-index-report" rel="noopener noreferrer"&gt;https://hai.stanford.edu/ai-index/2025-ai-index-report&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;STAT News. (2023). “UnitedHealth faces class action lawsuit over algorithmic care denials in Medicare Advantage plans.” Retrieved from &lt;a href="https://www.statnews.com/2023/11/14/unitedhealth-class-action-lawsuit-algorithm-medicare-advantage/" rel="noopener noreferrer"&gt;https://www.statnews.com/2023/11/14/unitedhealth-class-action-lawsuit-algorithm-medicare-advantage/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Dutch Childcare Benefits Scandal. Wikipedia. Retrieved from &lt;a href="https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Washington Post. (2024). “Big Tech is spending billions on AI. Some on Wall Street see a bubble.” Retrieved from &lt;a href="https://www.washingtonpost.com/technology/2024/07/24/ai-bubble-big-tech-stocks-goldman-sachs/" rel="noopener noreferrer"&gt;https://www.washingtonpost.com/technology/2024/07/24/ai-bubble-big-tech-stocks-goldman-sachs/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fos7pdncawa0mgqcin0gf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fos7pdncawa0mgqcin0gf.png" alt="Tim Green" width="100" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tim Green&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;UK-based Systems Theorist &amp;amp; Independent Technology Writer&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at &lt;a href="https://smarterarticles.co.uk" rel="noopener noreferrer"&gt;smarterarticles.co.uk&lt;/a&gt;, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.&lt;/p&gt;

&lt;p&gt;His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ORCID:&lt;/strong&gt; &lt;a href="https://orcid.org/0009-0002-0156-9795" rel="noopener noreferrer"&gt;0009-0002-0156-9795&lt;/a&gt; &lt;br&gt;
&lt;strong&gt;Email:&lt;/strong&gt; &lt;a href="mailto:tim@smarterarticles.co.uk"&gt;tim@smarterarticles.co.uk&lt;/a&gt;&lt;/p&gt;

</description>
      <category>humanintheloop</category>
      <category>aihypecrisis</category>
      <category>publictrusterosion</category>
      <category>algorithmicharms</category>
    </item>
    <item>
      <title>Orbis: Turn Any GitHub Repository Into an Interactive 3D Dependency Graph</title>
      <dc:creator>Nilofer 🚀</dc:creator>
      <pubDate>Sat, 09 May 2026 10:58:10 +0000</pubDate>
      <link>https://gg.forem.com/nilofer_tweets/orbis-turn-any-github-repository-into-an-interactive-3d-dependency-graph-3eei</link>
      <guid>https://gg.forem.com/nilofer_tweets/orbis-turn-any-github-repository-into-an-interactive-3d-dependency-graph-3eei</guid>
      <description>&lt;p&gt;Understanding a large codebase is hard. You clone it, start reading files, and quickly lose track of how everything connects. Which modules are most depended on? Where are the circular dependencies? What would break if you refactored this file?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Orbis&lt;/strong&gt; answers these questions visually. Paste a GitHub repository URL, and Orbis clones it, parses source files into ASTs across Python, JavaScript, TypeScript, Go, Rust, and Java, detects architectural patterns, and renders the entire codebase as a navigable 3D force-directed graph. Click any module to inspect its dependencies, metrics, and exported symbols. Ask the built-in AI assistant questions like "which module should I refactor first?" and get answers grounded in the actual code structure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;3D force-directed graph&lt;/strong&gt; - Nodes sized by lines of code, colored by type, with animated directional particles on edges.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-language AST parsing&lt;/strong&gt; - Python, JavaScript/TypeScript, Go, Rust, and Java via tree-sitter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI chat assistant&lt;/strong&gt; - Ask Claude questions about the analyzed codebase. Questions like "Which modules have circular dependencies?" or "Where should I add feature X?" are answered with full architectural context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architectural insights&lt;/strong&gt; - Auto-detected issues including god modules, high coupling, and circular dependencies, each with severity ratings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Focus Mode&lt;/strong&gt; - Dim unconnected nodes to trace dependency paths clearly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shareable URLs&lt;/strong&gt; - &lt;code&gt;?repo=https://github.com/...&lt;/code&gt; auto-triggers analysis on load, making it easy to share a specific codebase view.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recent history&lt;/strong&gt; - Last 5 repos stored locally for quick re-analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Demo mode&lt;/strong&gt; - Load a pre-analyzed snapshot without a GitHub clone.&lt;/p&gt;
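&lt;p&gt;The shareable-URL behavior is plain query-string encoding; here is a rough Python sketch of building and reading back such a link (the localhost base URL is an assumption, and this is not Orbis's actual frontend code):&lt;/p&gt;

```python
# Hypothetical sketch: encode a repo address into the ?repo= parameter,
# then decode it the way a frontend would on page load.
from urllib.parse import urlencode, urlsplit, parse_qs

def share_url(repo, base="http://localhost:8001/"):
    """Return a link that auto-triggers analysis of `repo` on load."""
    return base + "?" + urlencode({"repo": repo})

link = share_url("https://github.com/expressjs/express")
# Reading the parameter back out:
repo = parse_qs(urlsplit(link).query)["repo"][0]
print(repo)
```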

&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Backend: FastAPI + Server-Sent Events (SSE)&lt;/li&gt;
&lt;li&gt;AST Parsing: tree-sitter (Python, JS/TS, Go, Rust, Java)&lt;/li&gt;
&lt;li&gt;AI Integration: Claude Opus 4.6 via Anthropic API&lt;/li&gt;
&lt;li&gt;3D Rendering: 3d-force-graph + Three.js&lt;/li&gt;
&lt;li&gt;Frontend: Vanilla JS SPA - no build step&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Quick Start
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Clone and install&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;orbis
python &lt;span class="nt"&gt;-m&lt;/span&gt; venv venv
&lt;span class="nb"&gt;source &lt;/span&gt;venv/bin/activate   &lt;span class="c"&gt;# Windows: venv\Scripts\activate&lt;/span&gt;
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Set up environment&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; .env.example .env
&lt;span class="c"&gt;# Edit .env and add your ANTHROPIC_API_KEY for the AI chat feature&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get an API key at console.anthropic.com. The AI chat feature requires &lt;code&gt;ANTHROPIC_API_KEY&lt;/code&gt; in your environment. It degrades gracefully: if the key is missing, the chat panel shows an error message rather than breaking the rest of the app.&lt;/p&gt;
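&lt;p&gt;One way that graceful degradation can be pictured (a minimal sketch, not Orbis's actual implementation - the key's mere presence gates the chat feature while everything else keeps working):&lt;/p&gt;

```python
import os

def chat_enabled(env=os.environ):
    """The AI chat panel is only wired up when an API key is present."""
    return bool(env.get("ANTHROPIC_API_KEY"))

# Degrading gracefully: graph analysis works in either case.
if chat_enabled({"ANTHROPIC_API_KEY": "sk-ant-example"}):
    print("chat panel active")
if not chat_enabled({}):
    print("chat panel shows an error; analysis still works")
```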

&lt;p&gt;&lt;strong&gt;3. Run&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvicorn main:app &lt;span class="nt"&gt;--host&lt;/span&gt; 0.0.0.0 &lt;span class="nt"&gt;--port&lt;/span&gt; 8001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;code&gt;http://localhost:8001&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; orbis &lt;span class="nb"&gt;.&lt;/span&gt;
docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8001:8001 &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sk-ant-... orbis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Usage
&lt;/h2&gt;

&lt;p&gt;Once running, the workflow is straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enter a public GitHub repository URL - for example &lt;code&gt;https://github.com/expressjs/express&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Optionally specify a branch&lt;/li&gt;
&lt;li&gt;Click Analyze - Orbis clones the repo, parses ASTs, and builds the graph in roughly 5–30 seconds&lt;/li&gt;
&lt;li&gt;Explore the 3D graph - click a node to open its detail drawer, scroll to zoom, drag to rotate&lt;/li&gt;
&lt;li&gt;Use Focus Mode to highlight a node's direct connections&lt;/li&gt;
&lt;li&gt;Use layer filter chips to show or hide architectural layers&lt;/li&gt;
&lt;li&gt;Ask the AI assistant questions about the codebase in the chat panel&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Keyboard Shortcuts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;R: Reset camera&lt;/li&gt;
&lt;li&gt;P: Pause/resume rotation&lt;/li&gt;
&lt;li&gt;F: Toggle Focus Mode&lt;/li&gt;
&lt;li&gt;/: Focus search box&lt;/li&gt;
&lt;li&gt;Esc: Close detail drawer / exit Focus Mode&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;The project has four files at its core - a FastAPI backend, a single-file AST parser, a vanilla JS frontend with no build step, and a demo-data utility:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;main.py           FastAPI backend — SSE streaming for /analyze, /chat
neo_parser.py     Multi-language AST parser (tree-sitter)
static/
  index.html      Single-page frontend (3d-force-graph + Three.js)
save_analysis.py  Utility: pre-generate demo data from a repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The backend streams analysis progress to the frontend via Server-Sent Events while cloning and analyzing the repo.&lt;/p&gt;
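&lt;p&gt;The SSE wire format itself is tiny; a sketch of the event framing (the field names come from the SSE spec, while the event names below are illustrative rather than Orbis's actual ones):&lt;/p&gt;

```python
def sse_event(event, data):
    """Format one Server-Sent Events message as it travels over the wire."""
    return f"event: {event}\ndata: {data}\n\n"

# A progress stream like the one /analyze emits might look like:
for step in ("cloning", "parsing", "building graph"):
    print(sse_event("progress", step), end="")
print(sse_event("complete", '{"nodes": []}'), end="")
```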

&lt;h2&gt;
  
  
  API Endpoints
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuiky04nqoykgwfknmlsm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuiky04nqoykgwfknmlsm.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Output Schema
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;/analyze&lt;/code&gt; emits SSE events and completes with a &lt;code&gt;complete&lt;/code&gt; event containing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"schema_version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"architecture_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MVC"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"languages"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"python"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"summary"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Codebase contains 42 modules..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"nodes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"requests/auth"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"auth.py"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"utility"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"language"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"python"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"lines_of_code"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;315&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"complexity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"exported_symbols"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"AuthBase"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"HTTPBasicAuth"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"internal_dependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"requests/compat"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"external_dependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"metrics"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"functions_total"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"classes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"edges"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"from"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"requests/api"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"to"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"requests/auth"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"import"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"insights"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"high_coupling"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"high"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"High fan-in on requests/models"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"14 modules import this file directly."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"affected_nodes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"requests/models"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"recommendation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Consider splitting into smaller focused modules."&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each node carries its lines of code, complexity rating, exported symbols, and both internal and external dependencies. The insights block automatically surfaces architectural issues - high coupling, circular dependencies, and god modules - each with a severity rating and a specific recommendation.&lt;/p&gt;
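&lt;p&gt;The &lt;code&gt;edges&lt;/code&gt; array alone is enough to recompute the fan-in behind that high-coupling insight; a quick sketch against a toy payload in the same shape (values are made up for illustration):&lt;/p&gt;

```python
from collections import Counter

# Toy graph in the /analyze output shape; edge values are illustrative.
result = {
    "edges": [
        {"from": "requests/api", "to": "requests/models", "type": "import"},
        {"from": "requests/sessions", "to": "requests/models", "type": "import"},
        {"from": "requests/api", "to": "requests/auth", "type": "import"},
    ],
}

# Fan-in per module: how many edges point at it directly.
fan_in = Counter(edge["to"] for edge in result["edges"])
hotspot, count = fan_in.most_common(1)[0]
print(f"{hotspot} is imported by {count} modules")
```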

&lt;h2&gt;
  
  
  Supported Languages
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python - &lt;code&gt;.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;JavaScript/TypeScript - &lt;code&gt;.js&lt;/code&gt;, &lt;code&gt;.mjs&lt;/code&gt;, &lt;code&gt;.cjs&lt;/code&gt;, &lt;code&gt;.jsx&lt;/code&gt;, &lt;code&gt;.ts&lt;/code&gt;, &lt;code&gt;.tsx&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Go - &lt;code&gt;.go&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Rust - &lt;code&gt;.rs&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Java - &lt;code&gt;.java&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AI Chat
&lt;/h2&gt;

&lt;p&gt;The chat assistant uses Claude Opus 4.6 and receives the full architectural graph as context - node list, dependencies, insights, and summary. It can answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"What does the auth module depend on?"&lt;/li&gt;
&lt;li&gt;"Why are there circular dependencies between X and Y?"&lt;/li&gt;
&lt;li&gt;"Which module should I refactor first?"&lt;/li&gt;
&lt;li&gt;"Where would I add a caching layer?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The assistant's answers are grounded in the actual parsed structure of the codebase - not generic advice. Requires &lt;code&gt;ANTHROPIC_API_KEY&lt;/code&gt; in your environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Development
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Run with auto-reload&lt;/span&gt;
uvicorn main:app &lt;span class="nt"&gt;--reload&lt;/span&gt; &lt;span class="nt"&gt;--port&lt;/span&gt; 8001

&lt;span class="c"&gt;# Re-generate demo data&lt;/span&gt;
python save_analysis.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How I Built This Using NEO
&lt;/h2&gt;

&lt;p&gt;This project was built using &lt;a href="https://heyneo.com/" rel="noopener noreferrer"&gt;NEO&lt;/a&gt;. NEO is a fully autonomous AI engineering agent that can write code and build solutions for AI/ML tasks, including AI model evals, prompt optimization, and end-to-end AI pipeline development.&lt;/p&gt;

&lt;p&gt;The idea was a tool that turns any GitHub repository into an interactive 3D graph, something a developer could paste a URL into and immediately understand the architecture without reading a single file. The requirements included multi-language AST parsing, automatic architectural issue detection, an AI assistant grounded in the actual code structure, and a frontend that required no build step.&lt;/p&gt;

&lt;p&gt;NEO built the full stack from that description:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the FastAPI backend with SSE streaming for real-time analysis progress&lt;/li&gt;
&lt;li&gt;the multi-language AST parser in &lt;code&gt;neo_parser.py&lt;/code&gt; covering Python, JavaScript, TypeScript, Go, Rust, and Java via tree-sitter&lt;/li&gt;
&lt;li&gt;the 3D force-directed graph frontend in vanilla JS&lt;/li&gt;
&lt;li&gt;the Claude Opus 4.6 chat assistant with full architectural context&lt;/li&gt;
&lt;li&gt;the insights engine detecting god modules, high coupling, and circular dependencies with severity ratings&lt;/li&gt;
&lt;li&gt;the demo mode with pre-generated analysis data&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How You Can Use and Extend This With NEO
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use it to onboard onto an unfamiliar codebase.&lt;/strong&gt;&lt;br&gt;
Instead of spending hours reading files to understand how a project is structured, paste the repo URL into Orbis and get an immediate visual map of every module, its dependencies, and the architectural issues that already exist. The AI assistant can then answer specific questions about the structure without you having to trace imports manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use it during code review to understand structural impact.&lt;/strong&gt;&lt;br&gt;
When reviewing a large pull request, run Orbis on the repo and use the insights panel to see whether high coupling, circular dependencies, or god modules exist in the areas being changed. The AI assistant can answer specific questions about how the affected modules connect to the rest of the codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use it to plan a refactor.&lt;/strong&gt;&lt;br&gt;
Ask the AI assistant "which module should I refactor first?" or "where would I add a caching layer?" and get answers grounded in the actual dependency graph. Focus Mode lets you isolate a specific module and trace exactly what depends on it before touching anything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Extend it with additional language parsers.&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;neo_parser.py&lt;/code&gt; already handles five languages via tree-sitter. Adding a new language - Ruby, C++, Swift - follows the same parser pattern and surfaces automatically in the language filter chips and the supported languages list without touching the frontend or the API.&lt;/p&gt;
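&lt;p&gt;That "same parser pattern" can be pictured as a file-extension registry; a hypothetical sketch (the real &lt;code&gt;neo_parser.py&lt;/code&gt; wires up tree-sitter grammars rather than these stubs):&lt;/p&gt;

```python
# Hypothetical registry sketch: map file extensions to parser functions
# so a new language is one registration, with no frontend or API changes.
PARSERS = {}

def register(*extensions):
    def wrap(fn):
        for ext in extensions:
            PARSERS[ext] = fn
        return fn
    return wrap

@register(".py")
def parse_python(source):
    return {"language": "python"}

# Adding Ruby support is one more registration; nothing else changes:
@register(".rb")
def parse_ruby(source):
    return {"language": "ruby"}

print(sorted(PARSERS))
```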

&lt;h2&gt;
  
  
  Final Notes
&lt;/h2&gt;

&lt;p&gt;Orbis makes codebase architecture something you can see and navigate rather than something you have to reconstruct in your head. A 3D dependency graph, multi-language AST parsing, automatic architectural issue detection, and an AI assistant that knows the actual structure - all from a single repo URL.&lt;/p&gt;

&lt;p&gt;The code is at &lt;a href="https://github.com/dakshjain-1616/Orbit-dependency-visualised" rel="noopener noreferrer"&gt;https://github.com/dakshjain-1616/Orbit-dependency-visualised&lt;/a&gt;&lt;br&gt;
You can also build with NEO in your IDE using the &lt;a href="https://marketplace.visualstudio.com/items?itemName=NeoResearchInc.heyneo" rel="noopener noreferrer"&gt;VS Code extension&lt;/a&gt; or &lt;a href="https://open-vsx.org/extension/NeoResearchInc/heyneo" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt;.&lt;br&gt;
You can use NEO MCP with Claude Code: &lt;a href="https://heyneo.com/claude-code" rel="noopener noreferrer"&gt;https://heyneo.com/claude-code&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devtools</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>opensource</category>
    </item>
    <item>
      <title>🚀 Building AgentForge: AMD AI Agent Platform on Bolt.new</title>
      <dc:creator>Michael G. Inso</dc:creator>
      <pubDate>Sat, 09 May 2026 10:55:21 +0000</pubDate>
      <link>https://gg.forem.com/michaelinzo/building-agentforge-amd-ai-agent-platform-on-boltnew-dbb</link>
      <guid>https://gg.forem.com/michaelinzo/building-agentforge-amd-ai-agent-platform-on-boltnew-dbb</guid>
      <description>&lt;h3&gt;
  
  
  🔧 What Was Built
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Core Platform&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bolt Database schema with agents, workflows, tasks, and workflow runs
&lt;/li&gt;
&lt;li&gt;Row‑level security policies for data safety
&lt;/li&gt;
&lt;li&gt;Real‑time subscriptions support
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Frontend (React + Vite + TypeScript)&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Landing Page&lt;/strong&gt; with AMD branding and feature highlights
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dashboard&lt;/strong&gt; showing GPU metrics (utilization, VRAM, temperature, power)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agents Page&lt;/strong&gt; to manage models (Llama 3.2, DeepSeek Coder, Qwen 2.5/VL, Mistral 7B)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflows Page&lt;/strong&gt; for multi‑agent orchestration pipelines
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task Monitor&lt;/strong&gt; with real‑time execution feeds and performance tracking
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Design&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AMD‑inspired dark theme with red accents (#e8001d)
&lt;/li&gt;
&lt;li&gt;GPU‑centric UI reflecting AMD Instinct MI300X compute
&lt;/li&gt;
&lt;li&gt;Smooth animations, glassmorphism effects, responsive layout
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlzrb5zv1z7lqmi5u95a.jpeg" alt=" " width="800" height="393"&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ✨ Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CRUD operations for agents and workflows
&lt;/li&gt;
&lt;li&gt;Real‑time task execution monitoring
&lt;/li&gt;
&lt;li&gt;GPU performance metrics display
&lt;/li&gt;
&lt;li&gt;Search and filtering across pages
&lt;/li&gt;
&lt;li&gt;Modal workflows for creating/editing entities
&lt;/li&gt;
&lt;li&gt;Live status indicators and animations
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fluhhcp3vaahx963dgqjt.jpeg" alt=" " width="800" height="393"&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🚢 Enhancements
&lt;/h3&gt;

&lt;p&gt;We added:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Edge Functions for AI inference
&lt;/li&gt;
&lt;li&gt;Drag‑and‑drop workflow builder UI
&lt;/li&gt;
&lt;li&gt;Advanced filtering and sorting
&lt;/li&gt;
&lt;li&gt;Export/reporting capabilities
&lt;/li&gt;
&lt;li&gt;User authentication with Bolt Database Auth
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  🌍 Why It Matters
&lt;/h3&gt;

&lt;p&gt;AgentForge is more than a demo — it’s a &lt;strong&gt;blueprint for the future of AI agents&lt;/strong&gt;. By combining modular workflows, real‑time monitoring, and AMD’s compute power, we’re enabling the next generation of high‑performance applications.  &lt;/p&gt;




&lt;h3&gt;
  
  
  🔗 Live Demo
&lt;/h3&gt;

&lt;p&gt;👉 Check out the published app here: &lt;a href="https://amd-ai-agent-and-app-yfzc.bolt.host" rel="noopener noreferrer"&gt;AgentForge on Bolt.new&lt;/a&gt;  &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1rebv5rmhi9mq0zymj6.jpeg" alt=" " width="800" height="393"&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Closing
&lt;/h3&gt;

&lt;p&gt;This project embodies the hackathon spirit: rapid iteration, collaboration, and building something that’s both functional and inspiring. Excited to see how the community pushes agentic workflows forward!  &lt;/p&gt;




</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>A Self-Monetizing API in 20 Lines of Code</title>
      <dc:creator>MPP TestKit</dc:creator>
      <pubDate>Sat, 09 May 2026 10:53:15 +0000</pubDate>
      <link>https://gg.forem.com/mpptestkit/a-self-monetizing-api-in-20-lines-of-code-81f</link>
      <guid>https://gg.forem.com/mpptestkit/a-self-monetizing-api-in-20-lines-of-code-81f</guid>
      <description>&lt;p&gt;This is a hands-on tutorial. By the end you'll have a running pay-per-request API server, a client that pays automatically, and a test suite that covers the full payment flow - all on devnet, completely free.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem With How We Monetize APIs Today
&lt;/h2&gt;

&lt;p&gt;If you've ever tried to sell access to an API you built, you know the drill:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign up for Stripe&lt;/li&gt;
&lt;li&gt;Build a subscription checkout flow&lt;/li&gt;
&lt;li&gt;Issue API keys on payment confirmation&lt;/li&gt;
&lt;li&gt;Store keys in a database&lt;/li&gt;
&lt;li&gt;Validate keys on every request&lt;/li&gt;
&lt;li&gt;Build a usage dashboard&lt;/li&gt;
&lt;li&gt;Handle expired cards, failed payments, refunds&lt;/li&gt;
&lt;li&gt;Write the billing docs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By the time you've done all that, you've built a billing product. That wasn't the thing you wanted to build.&lt;/p&gt;

&lt;p&gt;The worst part: none of this scales to small amounts. Charging $0.001 per API call with Stripe isn't viable - the processing fee alone exceeds the charge. So you're forced into subscriptions, bundles, and credit packs. Your pricing model becomes a product decision instead of just... pricing.&lt;/p&gt;

&lt;p&gt;There's a cleaner way to do this. It's been in the HTTP spec since 1999. It just never had the infrastructure to work.&lt;/p&gt;




&lt;h2&gt;
  
  
  HTTP 402: The Status Code That Was Waiting for Blockchains
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;402 Payment Required
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This status code has been reserved in HTTP since the original 1.1 spec. The idea: server tells the client "pay first, then retry." The client pays, retries with proof, gets the resource.&lt;/p&gt;

&lt;p&gt;The problem was always &lt;em&gt;how&lt;/em&gt;. How does the server specify the amount? In what form? How does the client pay programmatically? How does the server verify payment without a central authority?&lt;/p&gt;

&lt;p&gt;Blockchains answer all of those questions. Solana specifically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Specifies amount in SOL (or any token)&lt;/li&gt;
&lt;li&gt;Accepts payment via a signed transaction&lt;/li&gt;
&lt;li&gt;Provides on-chain verification with no intermediary&lt;/li&gt;
&lt;li&gt;Confirms transactions in ~2 seconds&lt;/li&gt;
&lt;li&gt;Charges fractions of a cent in fees&lt;/li&gt;
&lt;/ul&gt;
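&lt;p&gt;The pay-then-retry handshake is small enough to simulate in pure Python - no network or chain involved, and the names below (&lt;code&gt;server&lt;/code&gt;, &lt;code&gt;pay&lt;/code&gt;, &lt;code&gt;LEDGER&lt;/code&gt;) are illustrative stand-ins, not part of any real SDK:&lt;/p&gt;

```python
# Pure-Python simulation of the HTTP 402 flow: server demands payment,
# client pays, retries with proof, and gets the resource.
PRICE = "0.001"
LEDGER = set()  # stands in for on-chain verification

def server(path, proof=None):
    if path == "/api/weather" and proof not in LEDGER:
        return 402, {"amount": PRICE}       # pay first, then retry
    return 200, {"temp": 62, "paid": True}

def pay(amount):
    proof = f"tx-{amount}"                  # stands in for a signed transaction
    LEDGER.add(proof)
    return proof

status, body = server("/api/weather")
if status == 402:
    proof = pay(body["amount"])
    status, body = server("/api/weather", proof)
print(status, body["paid"])
```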

&lt;p&gt;&lt;a href="https://mpptestkit.com" rel="noopener noreferrer"&gt;MPP Testkit&lt;/a&gt; is the SDK that wires this up. Let's build something with it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What We're Building
&lt;/h2&gt;

&lt;p&gt;A Node.js API server with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/api/ping&lt;/code&gt; - free endpoint, no payment&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/api/weather&lt;/code&gt; - costs 0.001 SOL per call&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/api/forecast&lt;/code&gt; - costs 0.005 SOL per call (premium tier)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And a client script that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hits the paid endpoints automatically&lt;/li&gt;
&lt;li&gt;Handles the wallet, airdrop, and payment without any manual steps&lt;/li&gt;
&lt;li&gt;Logs every step of the flow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Total code: about 20 lines for the server, 10 for the client.&lt;/p&gt;




&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;my-paid-api &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;my-paid-api
npm init &lt;span class="nt"&gt;-y&lt;/span&gt;
npm &lt;span class="nb"&gt;install &lt;/span&gt;express mpp-test-sdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create two files: &lt;code&gt;server.js&lt;/code&gt; and &lt;code&gt;client.js&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Server
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// server.js&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;createTestServer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mpp-test-sdk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mpp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createTestServer&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// Free - no payment needed&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/api/ping&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ok&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// 0.001 SOL per call&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/api/weather&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;mpp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;charge&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0.001&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;city&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;San Francisco&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;temp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;62&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Partly cloudy&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;paid&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// 0.005 SOL per call - premium tier&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/api/forecast&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;mpp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;charge&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0.005&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;city&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;San Francisco&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;forecast&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;day&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Mon&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;high&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;low&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;54&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;day&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Tue&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;high&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;68&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;low&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;57&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;day&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Wed&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;high&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;61&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;low&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;52&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;v2-premium&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;paid&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3001&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Server running on :3001&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Payment recipient:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;mpp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;recipientAddress&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node server.js
&lt;span class="c"&gt;# Server running on :3001&lt;/span&gt;
&lt;span class="c"&gt;# Payment recipient: 7xKmPq2rNbMd...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's the whole server. &lt;code&gt;mpp.charge()&lt;/code&gt; is ordinary Express middleware: it returns a 402 when no payment receipt is present, verifies the receipt on-chain when one is, and calls &lt;code&gt;next()&lt;/code&gt; only if everything checks out.&lt;/p&gt;
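&lt;p&gt;For intuition, here's a rough, framework-agnostic sketch of what a &lt;code&gt;charge()&lt;/code&gt;-style middleware factory could look like. This is not the SDK's actual implementation - &lt;code&gt;verifyOnChain&lt;/code&gt; is a hypothetical stand-in for its real receipt check - but it shows the 402 / verify / &lt;code&gt;next()&lt;/code&gt; decision described above:&lt;br&gt;
&lt;/p&gt;

```javascript
// Hypothetical sketch only: verifyOnChain stands in for the SDK's
// real on-chain receipt verification.
function charge({ amount, recipient, verifyOnChain }) {
  return async (req, res, next) => {
    const receipt = req.headers["payment-receipt"];
    if (!receipt) {
      // No receipt yet: advertise the price and stop here with a 402.
      res.set(
        "Payment-Request",
        `solana; amount="${amount}"; recipient="${recipient}"`
      );
      return res.status(402).json({ error: "Payment Required" });
    }
    if (!(await verifyOnChain(receipt, { amount, recipient }))) {
      // Receipt present but it doesn't check out: reject with a 403.
      return res.status(403).json({ error: "Invalid payment receipt" });
    }
    next(); // Paid and verified: hand off to the route handler.
  };
}
```

&lt;p&gt;A production implementation would also need to guard against replayed receipts, which this sketch ignores.&lt;/p&gt;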




&lt;h2&gt;
  
  
  The Client
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// client.js&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;createTestClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mpp-test-sdk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;createTestClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;devnet&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;onStep&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`  [&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;] &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;- Hitting /api/ping (free)&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ping&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:3001/api/ping&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;ping&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;- Hitting /api/weather (0.001 SOL)&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;weather&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:3001/api/weather&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;weather&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;- Hitting /api/forecast (0.005 SOL)&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;forecast&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:3001/api/forecast&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;forecast&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;node client.js

- Hitting /api/ping (free)
{ status: 'ok', ts: 1715123456789 }

- Hitting /api/weather (0.001 SOL)
  [wallet-created] Keypair generated: 3xMn9...Wr4k
  [funded] Airdropped 2 SOL on devnet
  [request] GET http://localhost:3001/api/weather
  [payment] 402 received · paying 0.001 SOL
  [payment] tx confirmed: 5xKm7...Pq2r
  [retry] Retrying with Payment-Receipt header
  [success] 200 OK
{ city: 'San Francisco', temp: 62, condition: 'Partly cloudy', paid: true }

- Hitting /api/forecast (0.005 SOL)
  [request] GET http://localhost:3001/api/forecast
  [payment] 402 received · paying 0.005 SOL
  [payment] tx confirmed: 8xNp3...Qr1s
  [retry] Retrying with Payment-Receipt header
  [success] 200 OK
{ city: 'San Francisco', forecast: [...], model: 'v2-premium', paid: true }
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice: the wallet and airdrop only happen once. The second paid call reuses the same wallet - no second airdrop. The client is stateful, the calls are not.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Just Happened Under the Hood
&lt;/h2&gt;

&lt;p&gt;When the client hit &lt;code&gt;/api/weather&lt;/code&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Client sent&lt;/strong&gt; &lt;code&gt;GET /api/weather&lt;/code&gt; - no special headers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server returned&lt;/strong&gt; &lt;code&gt;402 Payment Required&lt;/code&gt; with header:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;   &lt;span class="py"&gt;Payment-Request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;solana; amount="0.001"; recipient="7xKmPq2r..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;SDK parsed&lt;/strong&gt; the header, built a Solana transaction for 0.001 SOL to &lt;code&gt;7xKmPq2r...&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SDK signed and submitted&lt;/strong&gt; the transaction to devnet&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SDK waited&lt;/strong&gt; for confirmation (~2 seconds)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client retried&lt;/strong&gt; with header:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;   &lt;span class="py"&gt;Payment-Receipt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;solana; signature="5xKm7...Pq2r"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="7"&gt;
&lt;li&gt;
&lt;strong&gt;Server verified&lt;/strong&gt; on-chain: transaction exists, recipient matches, amount ≥ 0.001 SOL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server called&lt;/strong&gt; &lt;code&gt;next()&lt;/code&gt;, handler returned 200&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Your handler code saw none of this. It just received a request and returned JSON.&lt;/p&gt;
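&lt;p&gt;The same eight steps can be sketched from the client's side as a single 402-retry loop. Again, this is a simplified illustration, not the SDK's code - &lt;code&gt;pay&lt;/code&gt; here is a hypothetical stand-in for the sign, submit, and confirm step:&lt;br&gt;
&lt;/p&gt;

```javascript
// Illustrative sketch of the client-side flow; `pay` is a
// hypothetical stand-in for the SDK's sign/submit/confirm step.
async function fetchWithPayment(url, pay, fetchImpl = fetch) {
  const first = await fetchImpl(url);
  if (first.status !== 402) return first; // Free endpoint: nothing to do.

  // Parse the price and recipient out of the Payment-Request header.
  const header = first.headers.get("Payment-Request") || "";
  const amount = (header.match(/amount="([^"]+)"/) || [])[1];
  const recipient = (header.match(/recipient="([^"]+)"/) || [])[1];

  // Pay on-chain, then retry with the receipt attached.
  const signature = await pay({ amount, recipient });
  return fetchImpl(url, {
    headers: { "Payment-Receipt": `solana; signature="${signature}"` },
  });
}
```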




&lt;h2&gt;
  
  
  Writing Tests for Your Paid Endpoints
&lt;/h2&gt;

&lt;p&gt;This is where MPP Testkit really shines. Integration testing paid APIs usually means mocking the payment layer, so your tests never exercise the payment logic at all.&lt;/p&gt;

&lt;p&gt;With MPP Testkit on devnet, you test the real flow for free:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// server.test.js&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;createTestClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mpp-test-sdk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;beforeAll&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;afterAll&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;vitest&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;beforeAll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Start the server&lt;/span&gt;
  &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./server.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Create a test client - wallet + airdrop happen here once&lt;/span&gt;
  &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;createTestClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;devnet&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nf"&gt;afterAll&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Free endpoints&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GET /api/ping returns 200 without payment&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:3001/api/ping&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toHaveProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;status&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ok&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Paid endpoints&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GET /api/weather: pays 0.001 SOL, returns weather data&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:3001/api/weather&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toHaveProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;temp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;paid&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GET /api/forecast: pays 0.005 SOL, returns 3-day forecast&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:3001/api/forecast&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;forecast&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toHaveLength&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Direct request without payment returns 402&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:3001/api/weather&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;402&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;header&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Payment-Request&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;header&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toMatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/solana/&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;header&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toMatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/amount="0.001"/&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Request with invalid receipt returns 403&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:3001/api/weather&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Payment-Receipt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;solana; signature=fakesig123&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;403&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx vitest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are real integration tests. No mocks. No stubs. The payment flow runs against devnet on every test run. If your receipt verification logic breaks, the tests catch it - because they actually verify receipts on-chain.&lt;/p&gt;




&lt;h2&gt;
  
  
  Handling Errors Gracefully
&lt;/h2&gt;

&lt;p&gt;Real code needs error handling. The SDK throws typed errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;mppFetch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;MppFaucetError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;MppPaymentError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;MppTimeoutError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mpp-test-sdk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// small sleep helper used by the faucet retry below&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sleep&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ms&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;fetchWithFallback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;mppFetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="k"&gt;instanceof&lt;/span&gt; &lt;span class="nx"&gt;MppFaucetError&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Devnet faucet rate-limited - common in CI with many parallel runs&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Faucet unavailable, retrying in 60s...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;mppFetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// retry once&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="k"&gt;instanceof&lt;/span&gt; &lt;span class="nx"&gt;MppPaymentError&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Transaction rejected - log and rethrow&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Payment rejected: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; → &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="k"&gt;instanceof&lt;/span&gt; &lt;span class="nx"&gt;MppTimeoutError&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Flow took too long - Solana can be slow under load&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Timed out after &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;timeoutMs&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;ms`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
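&lt;p&gt;The handler above hard-codes a single retry for faucet errors. The same shape generalizes into a small retry wrapper; the sketch below is self-contained, so &lt;code&gt;FaucetError&lt;/code&gt; and the short delay are stand-ins for the SDK's &lt;code&gt;MppFaucetError&lt;/code&gt; and the 60-second wait:&lt;/p&gt;

```javascript
// Generic retry wrapper. FaucetError stands in for the SDK's
// MppFaucetError; delayMs is shortened so the sketch runs instantly.
class FaucetError extends Error {}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry(fn, { retries = 1, delayMs = 10 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Only faucet rate limits are worth retrying; payment rejections
      // and timeouts should surface to the caller immediately.
      if (!(err instanceof FaucetError) || attempt >= retries) throw err;
      await sleep(delayMs);
    }
  }
}

// Demo: fail once with a faucet error, then succeed on the retry.
let calls = 0;
const result = await withRetry(async () => {
  calls += 1;
  if (calls === 1) throw new FaucetError("rate-limited");
  return "ok";
});
```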






&lt;h2&gt;
  
  
  Taking It to Production (Mainnet)
&lt;/h2&gt;

&lt;p&gt;When you're ready to charge real SOL:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Server side&lt;/strong&gt; - pin a stable recipient wallet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mpp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createTestServer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;secretKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SERVER_SECRET_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// base64 or JSON array&lt;/span&gt;
  &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mainnet&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Client side&lt;/strong&gt; - provide a funded wallet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;createTestClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mainnet&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;secretKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CLIENT_SECRET_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// pre-funded with real SOL&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The protocol is identical. The only difference is &lt;code&gt;network: "mainnet"&lt;/code&gt; and no auto-airdrop. Your application code doesn't change.&lt;/p&gt;
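&lt;p&gt;Since only the &lt;code&gt;network&lt;/code&gt; value changes, it's natural to drive it from the environment. A minimal sketch - the &lt;code&gt;MPP_NETWORK&lt;/code&gt; variable name is my assumption, not part of the SDK:&lt;/p&gt;

```javascript
// Resolve the network from the environment so one codebase serves
// devnet in CI and mainnet in production. MPP_NETWORK and the key
// variable names are assumptions, not part of the SDK.
function pickNetwork(env) {
  const allowed = ["devnet", "testnet", "mainnet"];
  const network = env.MPP_NETWORK || "devnet"; // default to free devnet
  if (!allowed.includes(network)) {
    throw new Error("Unknown network: " + network);
  }
  if (network === "mainnet") {
    // Mainnet charges real SOL: refuse to start without an explicit key.
    if (!env.SERVER_SECRET_KEY) {
      throw new Error("mainnet requires SERVER_SECRET_KEY");
    }
  }
  return network;
}
```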




&lt;h2&gt;
  
  
  The Creator Economy Angle
&lt;/h2&gt;

&lt;p&gt;I want to zoom out for a second, because this is bigger than just APIs.&lt;/p&gt;

&lt;p&gt;If you're a developer who creates things - libraries, datasets, AI models, tools - the standard monetization paths are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Open source&lt;/strong&gt; - free, you get nothing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SaaS&lt;/strong&gt; - build a subscription billing system (Stripe, auth, database, dashboard)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One-time license&lt;/strong&gt; - Gumroad, Paddle, etc. - purchase gate, license key management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Usage-based&lt;/strong&gt; - Stripe metered billing - still requires subscription, monthly invoicing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these require you to build infrastructure around your actual product.&lt;/p&gt;

&lt;p&gt;HTTP 402 with Solana adds a fifth option: &lt;strong&gt;embed payment in the protocol itself&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Your dataset endpoint charges $0.0001 per query. Your AI model charges $0.01 per inference. Your code analysis tool charges $0.005 per file. No subscription. No free tier decision. No pricing page. The price is in the API response.&lt;/p&gt;

&lt;p&gt;Consumers - including AI agents - just pay and use. Your server accumulates SOL. You withdraw to your wallet.&lt;/p&gt;

&lt;p&gt;The entire billing system is the blockchain.&lt;/p&gt;
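&lt;p&gt;One practical note on micro-prices like these: account for them in integer lamports (1 SOL = 1,000,000,000 lamports) rather than floating-point SOL, so thousands of tiny charges sum exactly. A sketch with illustrative per-call prices:&lt;/p&gt;

```javascript
// Keep micro-payment accounting in integer lamports, not float SOL.
// 1 SOL = 1,000,000,000 lamports. Per-call prices are illustrative.
const LAMPORTS_PER_SOL = 1000000000;

function solToLamports(sol) {
  return Math.round(sol * LAMPORTS_PER_SOL);
}

function lamportsToSol(lamports) {
  return lamports / LAMPORTS_PER_SOL;
}

// A dataset endpoint at 0.0001 SOL per query, 2500 queries served:
const pricePerQuery = solToLamports(0.0001);
const earned = pricePerQuery * 2500;
```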




&lt;h2&gt;
  
  
  Why This Matters for AI Agents Right Now
&lt;/h2&gt;

&lt;p&gt;We're at an inflection point: AI agents are being deployed that autonomously call tools, APIs, and services. The problem is that every one of those tools currently requires a human to sign up, get an API key, and manage billing.&lt;/p&gt;

&lt;p&gt;That doesn't scale. An agent that spins up 50 tool-calling sessions needs 50 sets of credentials managed by a human. Or one set of credentials shared across everything - which is a security disaster.&lt;/p&gt;

&lt;p&gt;HTTP 402 with Solana gives agents a payment identity without a human in the loop:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Agent generates an ephemeral keypair on first run&lt;/li&gt;
&lt;li&gt;Agent funds wallet from its operator's SOL balance&lt;/li&gt;
&lt;li&gt;Agent hits any 402-gated endpoint and pays automatically&lt;/li&gt;
&lt;li&gt;Operator monitors spending via on-chain transactions&lt;/li&gt;
&lt;/ol&gt;
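&lt;p&gt;The four steps can be sketched against an in-memory mock of a 402 gate. Everything here - the ledger, the receipt format, the price - is a simplified stand-in for the real on-chain flow:&lt;/p&gt;

```javascript
// In-memory sketch of the agent flow. The ledger, receipt format, and
// price are simplified stand-ins - the real flow settles on Solana.
const ledger = new Map(); // wallet -> balance in lamports
const receipts = new Set();

function fundWallet(wallet, lamports) {
  ledger.set(wallet, (ledger.get(wallet) || 0) + lamports);
}

// A 402-gated endpoint: without a known receipt it quotes a price.
function endpoint(receipt) {
  if (receipts.has(receipt)) return { status: 200, body: "premium data" };
  return { status: 402, priceLamports: 100000 };
}

// "Paying" debits the wallet and mints a receipt (a tx signature on-chain).
function pay(wallet, priceLamports) {
  const balance = ledger.get(wallet) || 0;
  if (priceLamports > balance) throw new Error("insufficient funds");
  ledger.set(wallet, balance - priceLamports);
  const receipt = "sig-" + receipts.size;
  receipts.add(receipt);
  return receipt;
}

const wallet = "agent-ephemeral-keypair"; // step 1: identity (stand-in)
fundWallet(wallet, 500000); // step 2: operator funds the agent
let res = endpoint(null); // step 3: first hit returns 402
const receipt = pay(wallet, res.priceLamports); // step 4: pay the quote
res = endpoint(receipt); // retry with the receipt
```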

&lt;p&gt;No API keys. No credentials database. No revocation system. The wallet is the identity. The payment is the access token.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://agent.mpptestkit.com" rel="noopener noreferrer"&gt;Auton demo&lt;/a&gt; shows this working end-to-end - an agent that generates its own wallet, gets funded, navigates a 402 gate, pays, and retrieves premium data. Every step streams live to the browser. Watch it once and the architecture becomes obvious.&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick Reference
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Install
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;mpp-test-sdk       &lt;span class="c"&gt;# TypeScript/JavaScript&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;mpp-test-sdk       &lt;span class="c"&gt;# Python&lt;/span&gt;
go get github.com/mpptestkit/mpp-test-sdk-go  &lt;span class="c"&gt;# Go&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Minimal server
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;createTestServer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mpp-test-sdk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mpp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createTestServer&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/paid&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;mpp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;charge&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0.001&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Minimal client
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;mppFetch&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mpp-test-sdk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;mppFetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://your-api.com/paid&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Networks
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Network&lt;/th&gt;
&lt;th&gt;Auto-funded&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;devnet&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Yes (2 SOL)&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;testnet&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Yes (2 SOL)&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mainnet&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Real SOL&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Error types
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Error&lt;/th&gt;
&lt;th&gt;Cause&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;MppFaucetError&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Devnet/testnet faucet rate-limited&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;MppPaymentError&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;On-chain transaction rejected&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;MppTimeoutError&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Full flow exceeded timeout&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;MppNetworkError&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Mainnet attempted without funded wallet&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Once you have the basics running:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Add multiple price tiers&lt;/strong&gt; - &lt;code&gt;mpp.charge({ amount: "0.01" })&lt;/code&gt; on premium endpoints, &lt;code&gt;mpp.charge({ amount: "0.001" })&lt;/code&gt; on standard&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use the lifecycle events&lt;/strong&gt; - stream payment status to your frontend in real-time (the playground does this)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrate with your existing auth&lt;/strong&gt; - 402 and API keys aren't mutually exclusive; use 402 for metered access, API keys for identity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Move to mainnet&lt;/strong&gt; - same code, swap &lt;code&gt;network: "devnet"&lt;/code&gt; to &lt;code&gt;network: "mainnet"&lt;/code&gt;, provide funded wallets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;a href="https://mpptestkit.com" rel="noopener noreferrer"&gt;interactive playground&lt;/a&gt; is the fastest way to see the full flow before you build anything. It runs against a live server - no setup, no install.&lt;/p&gt;




&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://mpptestkit.com" rel="noopener noreferrer"&gt;mpptestkit.com&lt;/a&gt; - Playground + full protocol docs&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://agent.mpptestkit.com" rel="noopener noreferrer"&gt;agent.mpptestkit.com&lt;/a&gt; - Auton: autonomous agent payment demo&lt;/li&gt;
&lt;li&gt;npm: &lt;code&gt;mpp-test-sdk&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;PyPI: &lt;code&gt;mpp-test-sdk&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Go: &lt;code&gt;github.com/mpptestkit/sdk-go&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;The billing system your API never had to build is already in the HTTP spec. It just needed Solana to make it real.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why I Shipped Two Artifact Mechanisms In My VS Code Extension — Not One</title>
      <dc:creator>Thomas Landgraf</dc:creator>
      <pubDate>Sat, 09 May 2026 10:51:18 +0000</pubDate>
      <link>https://gg.forem.com/thlandgraf/why-i-shipped-two-artifact-mechanisms-in-my-vs-code-extension-not-one-50pi</link>
      <guid>https://gg.forem.com/thlandgraf/why-i-shipped-two-artifact-mechanisms-in-my-vs-code-extension-not-one-50pi</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm1d0znkkwo4yvli9i62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm1d0znkkwo4yvli9i62.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A specification is more than text. It comes with a wireframe, the regulatory PDF it answers to, the API contract it has to honour, the stakeholder slide deck someone negotiated against. For a long time, none of that lived in my spec tree. The Markdown files were git-tracked; the evidence behind them rotted in Confluence, in shared drives, in pasted-and-lost screenshots in chat.&lt;/p&gt;

&lt;p&gt;This week I shipped a fix in v0.9.7 of the VS Code extension I maintain. The shape of the fix is the part I want to write about, because the obvious version of it would have been wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full disclosure:&lt;/strong&gt; I'm the creator of &lt;a href="https://marketplace.visualstudio.com/items?itemName=DigitalDividend.speclan-vscode-extension" rel="noopener noreferrer"&gt;SPECLAN&lt;/a&gt;, a VS Code extension that manages product specifications as Markdown files with YAML frontmatter — Git-native, one file per requirement, organized in a hierarchical tree. The pattern (Markdown + YAML + Git) works without the tool; SPECLAN is just where I observed and engineered around the design problem below.&lt;/p&gt;

&lt;h2&gt;
  
  
  The obvious version: one artifact mechanism, governance everywhere
&lt;/h2&gt;

&lt;p&gt;Specs travel in a lifecycle: &lt;code&gt;draft → review → approved → in-development → under-test → released → deprecated&lt;/code&gt;. Once a spec is approved, the team has agreed on its content; the implementation gets built, tested, and shipped against that agreement. The natural next thought: artifact attachments should follow the same discipline. If you can't silently rewrite a &lt;code&gt;released&lt;/code&gt; requirement's body, you also can't silently swap out the API contract attached to it. So: one universal artifact mechanism, Change Request governance everywhere.&lt;/p&gt;

&lt;p&gt;I sketched that. I didn't ship it. Two days into testing, I realized the obvious version forced ceremony onto material that had no business going through a review cycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  The thing the obvious version got wrong
&lt;/h2&gt;

&lt;p&gt;Consider the artifacts a project actually accumulates over a year:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;login-flow-mockup.png&lt;/code&gt; attached to a specific feature. Owned by that feature. Means nothing without it.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;api-contract.json&lt;/code&gt; attached to a specific requirement. Verified against that requirement. Drift from it is a real bug.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;architecture/system-overview.png&lt;/code&gt;. Referenced by half the specs. Owned by no spec.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;brand-guidelines.pdf&lt;/code&gt;. Cited by every customer-facing surface. Owned by no spec.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;regulatory/pet-handling-compliance.md&lt;/code&gt;. Read whenever a new compliance-touching feature is drafted. Owned by no spec.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;meeting-notes/2026-04-22-kickoff.md&lt;/code&gt;. Reference material for context. Owned by no spec.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first two are evidence pinned to a specific entity, with a specific lifecycle, where governance is the whole point. The other four are reference material — shared across many specs, owned by none, not bound to any specification's release cycle.&lt;/p&gt;

&lt;p&gt;If I force-march the second class through a Change Request workflow, every architecture-diagram update needs a 4-stage review against… what spec? It doesn't belong to one. The reviewer would be approving a change with no parent entity to compare against. The ceremony has no anchor.&lt;/p&gt;

&lt;p&gt;So I shipped two mechanisms with deliberately different governance:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Axis&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Spec Artifacts&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Project Artifacts&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Hierarchy&lt;/td&gt;
&lt;td&gt;Flat — top-level files only&lt;/td&gt;
&lt;td&gt;Full nested filetree&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Change Request governance&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Mandatory on locked specs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;None&lt;/strong&gt; — direct file ops always&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Filename sanitation&lt;/td&gt;
&lt;td&gt;Enforced at every Add path&lt;/td&gt;
&lt;td&gt;Whatever the filesystem accepts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scope&lt;/td&gt;
&lt;td&gt;Pinned to one spec entity&lt;/td&gt;
&lt;td&gt;Project-wide reference, owned by no one&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Visual surface&lt;/td&gt;
&lt;td&gt;Section at the bottom of a spec page&lt;/td&gt;
&lt;td&gt;Third entry in the project tree&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The split falls out of one observation: &lt;strong&gt;locking applies to entities with status, and project folders don't have one.&lt;/strong&gt; There is no "released project" to gate changes against, so a unified governance mechanism would have no anchor for one of the two cases.&lt;/p&gt;
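&lt;p&gt;The dispatch is small enough to sketch as a pure function. One assumption to flag: treating &lt;code&gt;approved&lt;/code&gt; and every later status as locked is my reading of the lifecycle described here, not necessarily SPECLAN's exact rule:&lt;/p&gt;

```javascript
// Status dispatch sketch. Treating "approved" and later as locked is my
// reading of the lifecycle, not necessarily SPECLAN's exact rule.
const LIFECYCLE = [
  "draft", "review", "approved",
  "in-development", "under-test", "released", "deprecated",
];

function artifactGovernance(entity) {
  // Project artifacts carry no status at all: direct file ops, always.
  if (entity.kind === "project") return "direct";
  if (entity.status === "deprecated") return "disabled";
  const locked =
    LIFECYCLE.indexOf(entity.status) >= LIFECYCLE.indexOf("approved");
  return locked ? "change-request" : "direct";
}
```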

&lt;h2&gt;
  
  
  How Spec Artifacts work
&lt;/h2&gt;

&lt;p&gt;Every feature, requirement, or change request gets its own &lt;code&gt;artifacts/&lt;/code&gt; folder right next to its &lt;code&gt;.md&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;speclan/features/F-2419-login-flow/
├── F-2419-login-flow.md
├── artifacts/
│   ├── login-mockup.png
│   ├── api-response-schema.json
│   └── stakeholder-approval.pdf
└── change-requests/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the WYSIWYG editor, an Artifacts section appears at the bottom of the spec page. Drag-drop or pick to add. Click to open in the registered default viewer. Image artifacts (PNG, JPEG, GIF, WebP, SVG) get an extra trick: they can be embedded inline in the spec body as illustrations, diagrams, or mockups — not as separate attachment rows. The on-disk artifact stays a plain &lt;code&gt;![alt](artifacts/file.png)&lt;/code&gt; markdown link, so the spec is portable to any markdown viewer.&lt;/p&gt;

&lt;p&gt;The Change Request gate kicks in on locked specs. The dispatch table:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parent spec status&lt;/th&gt;
&lt;th&gt;Add / Remove behavior&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;draft&lt;/code&gt;, &lt;code&gt;review&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Direct file ops&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;approved&lt;/code&gt; through &lt;code&gt;released&lt;/code&gt; (locked)&lt;/td&gt;
&lt;td&gt;Staged through a Change Request&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;deprecated&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Add/remove disabled&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;On a locked spec, dropping in an artifact doesn't overwrite the canonical file. The system creates a Change Request in &lt;code&gt;draft&lt;/code&gt; status and stages your file under a CR-suffixed disk name. The CR flows through the standard &lt;code&gt;draft → review → approved → in-development → under-test → released&lt;/code&gt; lifecycle. When you click Merge, the staged file becomes canonical.&lt;/p&gt;
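&lt;p&gt;As a purely hypothetical illustration of the staging step - SPECLAN's actual on-disk suffix scheme may differ - a CR-suffixed name can be derived like this, keeping the extension so the staged file still opens in its default viewer:&lt;/p&gt;

```javascript
// Hypothetical staging-name helper: the suffix scheme is illustrative,
// not SPECLAN's documented on-disk format.
function stagedName(filename, crId) {
  const dot = filename.lastIndexOf(".");
  if (dot === -1) return filename + "." + crId; // no extension
  return filename.slice(0, dot) + "." + crId + filename.slice(dot);
}
```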

&lt;p&gt;The reason for this discipline isn't audit hygiene — it's &lt;strong&gt;artifact / implementation drift.&lt;/strong&gt; A spec in &lt;code&gt;released&lt;/code&gt; is one whose word the implementation team has built against. The API contract attached to it is the contract the build was verified against. If that contract silently shifts, the implementation no longer matches the evidence and nobody knows. The CR-staging mechanism prevents the silent shift; an approved CR doubles as a signal into the implementation flow — same trigger releases the new evidence and tells the implementation it needs to update.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Project Artifacts work
&lt;/h2&gt;

&lt;p&gt;A single project-wide directory at &lt;code&gt;speclan/artifacts/&lt;/code&gt;. Folders nest to any depth. Drop files in via the picker, drag from your OS file manager, or organize subfolders directly through the filesystem — the on-disk filetree is the source of truth, the editor's tree view auto-refreshes via a filesystem watcher.&lt;/p&gt;

&lt;p&gt;No CR governance. No filename sanitation. No status check. Direct file ops always.&lt;/p&gt;

&lt;p&gt;This isn't laziness; it's the second half of the design choice. Reference material doesn't have a release cycle of its own. An architecture diagram updates when the architecture updates. A brand-guidelines PDF updates when marketing pushes a new revision. The project's own lifecycle drives those changes — there is no spec entity with a status that owns them, so there's nothing to gate against.&lt;/p&gt;

&lt;p&gt;The two mechanisms compose: a spec body can link to a project artifact via plain relative markdown (&lt;code&gt;[brand guidelines](../../../artifacts/brand-guidelines.pdf)&lt;/code&gt;). The linkage is plain markdown — SPECLAN doesn't track it as a referential relationship, doesn't auto-stage it under a CR, and doesn't validate the path. They share one thing: a consistent icon vocabulary. A &lt;code&gt;.pdf&lt;/code&gt; artifact gets the same icon in the spec's Artifacts section as it does in the project tree. Different governance, same visual language.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the extension does NOT do with artifact bytes
&lt;/h2&gt;

&lt;p&gt;One boundary worth being explicit about, because the question gets asked: SPECLAN does not read, parse, summarise, or interpret the bytes of your artifacts. None of the AI features (clarification assistants, code-walking inference, change-request merging) consume artifact contents. The file is stored, referenced, surfaced in the UI, governed through CRs, and kept in sync on rename — that's the whole interaction the extension has with it.&lt;/p&gt;

&lt;p&gt;What the extension does is make sure the &lt;strong&gt;implementation agent&lt;/strong&gt; — Claude Code, Codex, Cursor, whatever you hand the spec to — can find the artifacts and decide for itself how to read them. Markdown, JSON, source code, and most images are read natively by modern coding agents. PDFs, DOCX, PPTX usually need a skill attached to the agent or a pre-extraction step into a sibling Markdown artifact before the implementation hand-off.&lt;/p&gt;

&lt;p&gt;This is deliberate. Bundling PDF/DOCX parsers into the extension would (a) bloat it, (b) lock users into one extraction pipeline, and (c) silently expose binary content to AI providers users may not have authorised for that scope. Artifacts are evidence the implementation agent can find — not pre-digested input the AI has already consumed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The one-click bridge to implementation
&lt;/h2&gt;

&lt;p&gt;While the architecture work was the headline of this release, the quality-of-life win that ships with it is a button the extension has needed since v0.9.0: &lt;strong&gt;Quick Impl.&lt;/strong&gt; A pill-shaped button in the editor topbar, visible on Features, Requirements, and Change Requests, that turns an &lt;code&gt;approved&lt;/code&gt; spec into a paste-ready implementation prompt with a single click.&lt;/p&gt;

&lt;p&gt;The prompt is structured to read the spec from its relative path, ask the user which implementation technique to use, set the spec's status to &lt;code&gt;in-development&lt;/code&gt; for the duration of the work, and bookend the lifecycle by flipping to &lt;code&gt;under-test&lt;/code&gt; when development is complete. It's a single-spec, fire-and-forget hand-off — the explicit alternative to the planfile-based workflow for "I have one approved feature and just want to ship it now."&lt;/p&gt;
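&lt;p&gt;To make the shape concrete, here is a hypothetical sketch of that kind of prompt builder - SPECLAN's real template is its own; this only mirrors the four structural elements described above:&lt;/p&gt;

```javascript
// Hypothetical Quick Impl-style prompt builder mirroring the structure
// described above; SPECLAN's real template is its own.
function buildQuickImplPrompt(specPath) {
  return [
    "Read the spec at " + specPath + ".",
    "Ask which implementation technique to use before writing code.",
    "Set the spec's status to in-development while implementing.",
    "When development is complete, flip the status to under-test.",
  ].join("\n");
}

const prompt = buildQuickImplPrompt("speclan/features/F-1/F-1.md");
```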

&lt;h2&gt;
  
  
  What this changed about my own workflow
&lt;/h2&gt;

&lt;p&gt;For a year I'd been treating the spec body as the only thing that lived in git. The wireframes lived in Figma; the contracts lived in OpenAPI files in another repo; the regulatory references lived in shared-drive PDFs that I emailed myself when I needed them. The day I started attaching them all to the spec tree under v0.9.7, I noticed how much friction the previous pattern carried that I'd just become numb to.&lt;/p&gt;

&lt;p&gt;The deliberate split — governed Spec Artifacts, ungoverned Project Artifacts — is the kind of design choice that's obvious in hindsight but easy to get wrong upfront. A less careful version would have shipped one universal artifact mechanism with CR governance everywhere, and users (myself included) would have spent months filing bugs about why the architecture diagram needs a 4-stage review cycle to update. Two mechanisms with different governance is what falls out of taking the entity-lifecycle abstraction seriously.&lt;/p&gt;

&lt;p&gt;If you're building tooling that touches a spec lifecycle, the practical lesson generalizes: &lt;strong&gt;governance is determined by what the parent entity needs, not by what's most uniform.&lt;/strong&gt; A unified mechanism is satisfying from the architect's seat. From the user's seat, it forces ceremony on material that has none.&lt;/p&gt;

&lt;p&gt;For the SPECLAN-specific tour, the &lt;a href="https://speclan.net/news/" rel="noopener noreferrer"&gt;release notes&lt;/a&gt; and the &lt;a href="https://speclan.net/help/reference/artifacts/" rel="noopener noreferrer"&gt;Artifacts help page&lt;/a&gt; walk through the user surfaces in detail. And if you're curious which of today's frontier models writes specs you'd actually trust to carry real artifacts, &lt;a href="https://speclan.net/compare/" rel="noopener noreferrer"&gt;speclan.net/compare&lt;/a&gt; parks 13 models' output on the same brief side-by-side.&lt;/p&gt;

&lt;p&gt;What's the cleanest split you've made between governed and ungoverned state in tooling you've shipped? Curious whether the "what does the parent entity need" lens lands the same way elsewhere.&lt;/p&gt;

</description>
      <category>claude</category>
      <category>githubcopilot</category>
      <category>chatgpt</category>
      <category>vscode</category>
    </item>
    <item>
      <title>18 installs, 0 signups. What Chrome extension onboarding actually looks like.</title>
      <dc:creator>HelixLabs-dev</dc:creator>
      <pubDate>Sat, 09 May 2026 10:50:27 +0000</pubDate>
      <link>https://gg.forem.com/helix_labs_dev/18-installs-0-signups-what-chrome-extension-onboarding-actually-looks-like-1j59</link>
      <guid>https://gg.forem.com/helix_labs_dev/18-installs-0-signups-what-chrome-extension-onboarding-actually-looks-like-1j59</guid>
      <description>&lt;p&gt;Two weeks ago I launched FocusForge which is my second Chrome extension. AI powered focus tool with time tracking, site blocking, grayscale mode, and a Nuclear Option that locks every distracting site for up to 8 hours with zero bypass.&lt;/p&gt;

&lt;p&gt;18 installs from organic Chrome Store search. 0 signups. 0 revenue.&lt;br&gt;
Here's what that taught me.&lt;/p&gt;

&lt;p&gt;The free tier problem. Again.&lt;/p&gt;

&lt;p&gt;Prompt Helix, my first extension, launched with unlimited free usage. Nobody upgraded because nobody needed to. I fixed that in v1.0.2 with a 25-query daily limit.&lt;/p&gt;

&lt;p&gt;FocusForge launched with AI coaching and Nuclear Option locked behind a paywall. Core features — time tracking, site blocking, grayscale mode, daily reports — all free with no account needed.&lt;/p&gt;

&lt;p&gt;Same mistake. Different product.&lt;/p&gt;

&lt;p&gt;People install it, get real value from the free features, and have zero reason to create an account or upgrade. If the free tier is complete enough to solve the problem, the paid tier is invisible.&lt;/p&gt;

&lt;p&gt;The return behaviour problem.&lt;/p&gt;

&lt;p&gt;The conversion trigger only works if people come back to hit it. Someone who installs FocusForge, opens it twice, and forgets it exists will never see an upgrade prompt regardless of how well designed it is.&lt;/p&gt;

&lt;p&gt;This is the problem I haven't solved yet. The first session has to be compelling enough that they think about it the next day without being reminded. For FocusForge that means the first time someone sets a time limit and gets blocked from a site has to feel genuinely useful — not annoying, not intrusive, actually helpful.&lt;/p&gt;

&lt;p&gt;I don't know if that's happening yet. With 0 signups I have no data on what the first session actually looks like for real users.&lt;/p&gt;

&lt;p&gt;What I'm building next to fix this.&lt;/p&gt;

&lt;p&gt;A proper onboarding sequence. Right now someone installs FocusForge and sees the popup with no guidance on what to do first. The first session needs to walk them through setting their first time limit, show them their first daily report, and give them a reason to come back tomorrow.&lt;/p&gt;

&lt;p&gt;The re-engagement email is the other piece, but I don't have the email infrastructure set up yet. That's the next technical task.&lt;/p&gt;

&lt;p&gt;The honest builder lesson.&lt;/p&gt;

&lt;p&gt;Every metric problem in a Chrome extension eventually traces back to one of three things. Not enough installs, not enough return visits, or not enough upgrade triggers. I've been focused on installs through promotion. The return visit and upgrade trigger problems are what actually need solving now.&lt;/p&gt;

&lt;p&gt;If you've built a Chrome extension and solved the day 1 to day 7 retention problem I'd genuinely love to know what worked.&lt;/p&gt;

&lt;p&gt;Chrome Store: chromewebstore.google.com/detail/focusforge/hdkabchfflgnnonnhffkcmhgbenfoaci&lt;/p&gt;

&lt;p&gt;helixlabs.studio&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Building Your First n8n Workflow in 30 Minutes: A Hands-On Tutorial</title>
      <dc:creator>TrackStack</dc:creator>
      <pubDate>Sat, 09 May 2026 10:48:32 +0000</pubDate>
      <link>https://gg.forem.com/trackstack/building-your-first-n8n-workflow-in-30-minutes-a-hands-on-tutorial-3f03</link>
      <guid>https://gg.forem.com/trackstack/building-your-first-n8n-workflow-in-30-minutes-a-hands-on-tutorial-3f03</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;— Build a real, working n8n workflow from scratch in ~30 minutes. We'll fetch the Bitcoin price every weekday at 9 AM, branch on whether it's above $100k, and notify either by email or Slack. Free tier only, no prior experience required. By the end you'll understand triggers, nodes, expressions, and conditional logic — the foundation everything else builds on.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I've onboarded a few colleagues to n8n over the past year. The pattern is the same every time: they start with the official "Schedule + NASA solar flares" tutorial, build something that works, then have no idea how to apply it to their actual job. The missing piece is a workflow that uses a real-world API and demonstrates &lt;em&gt;why&lt;/em&gt; you'd use each node type — not just &lt;em&gt;how&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Here's that workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Five concepts you need before clicking anything
&lt;/h2&gt;

&lt;p&gt;Internalize these. They'll save you hours of confusion later.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Workflow&lt;/strong&gt; — a collection of connected nodes that automates a process. One workflow = one automation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node&lt;/strong&gt; — a single step. Each does one thing: trigger, fetch, transform, send.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trigger node&lt;/strong&gt; — the first node. Decides &lt;em&gt;when&lt;/em&gt; the workflow runs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execution&lt;/strong&gt; — one full run of the workflow, top to bottom. n8n logs every execution for debugging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expression&lt;/strong&gt; — JavaScript-flavored snippets in &lt;code&gt;{{ }}&lt;/code&gt; that reference data from previous nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When something breaks, ask yourself: &lt;em&gt;which node failed, what data did it receive, what did it try to do with that data?&lt;/em&gt; Almost every problem maps to those three questions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we're building
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Schedule (every weekday 9 AM)
        ↓
HTTP Request (fetch BTC price from CoinGecko)
        ↓
Edit Fields (extract price as a clean number)
        ↓
   If (price &amp;gt; 100,000?)
   ┌────┴────┐
   true     false
    ↓         ↓
  Email    Slack
(celebrate) (notify)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Six nodes, two branches, one schedule. Real API, real notifications, every concept a beginner needs.&lt;/p&gt;
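&lt;p&gt;If it helps to see the same flow as plain code, here is a rough Python equivalent of the six nodes. The fetch function is a stand-in for the HTTP Request node (the real workflow calls CoinGecko live), and the notify functions stand in for the Email and Slack nodes:&lt;/p&gt;

```python
# Rough Python equivalent of the six-node workflow. fetch_price is a
# stand-in for the HTTP Request node; notify_email / notify_slack stand in
# for the two action nodes on the true/false branches.

def fetch_price() -> dict:
    # HTTP Request node: the real workflow gets {"bitcoin": {"usd": ...}}
    # from CoinGecko; hardcoded here so the sketch runs offline.
    return {"bitcoin": {"usd": 105432}}

def extract_price(payload: dict) -> float:
    # Edit Fields node: promote the nested value to a top-level "price"
    return payload["bitcoin"]["usd"]

def notify_email(price: float) -> str:
    return f"BTC just hit $100k! Currently at ${price}"

def notify_slack(price: float) -> str:
    return f"BTC is at ${price}, still under $100k."

def run_workflow() -> str:
    price = extract_price(fetch_price())
    # If node: Operation "Number / Larger", Value 2 = 100000
    if price > 100000:
        return notify_email(price)   # true branch
    return notify_slack(price)       # false branch

print(run_workflow())
```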

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Two paths to start:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;n8n Cloud&lt;/strong&gt; — sign up at n8n.io, 14-day free trial, no credit card. After trial, paid plans from €24/month for 2,500 executions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-hosted&lt;/strong&gt; — free forever, runs on your own server with Docker. Requires basic Linux comfort. Full production setup walkthrough in our &lt;a href="https://trackstack.tech/en/n8n-self-hosting-guide-2026/" rel="noopener noreferrer"&gt;n8n self-hosting guide&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this tutorial, Cloud is faster — every step works identically on self-hosted. After signup, click &lt;strong&gt;Create Workflow&lt;/strong&gt; in the upper-right. You'll see an empty canvas with one button: &lt;strong&gt;Add first step&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Schedule trigger
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Add first step&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Search &lt;code&gt;Schedule&lt;/code&gt; and pick &lt;strong&gt;Schedule Trigger&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trigger Interval:&lt;/strong&gt; &lt;code&gt;Days&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Days Between Triggers:&lt;/strong&gt; &lt;code&gt;1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trigger at Hour:&lt;/strong&gt; &lt;code&gt;9am&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Optionally: under &lt;strong&gt;Trigger on Weekdays&lt;/strong&gt;, select Mon–Fri only.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Critical detail: the Schedule trigger only fires when the workflow is &lt;strong&gt;published&lt;/strong&gt;. While building, run the workflow manually with the &lt;strong&gt;Execute Workflow&lt;/strong&gt; button at the bottom of the canvas.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: HTTP Request — fetch BTC price
&lt;/h2&gt;

&lt;p&gt;The HTTP Request node is the most powerful node in n8n. It calls any public API, even ones without dedicated integrations.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;code&gt;+&lt;/code&gt; on the right of the Schedule trigger.&lt;/li&gt;
&lt;li&gt;Search &lt;code&gt;HTTP Request&lt;/code&gt;, select it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;URL:&lt;/strong&gt; &lt;code&gt;https://api.coingecko.com/api/v3/simple/price?ids=bitcoin&amp;amp;vs_currencies=usd&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication:&lt;/strong&gt; &lt;code&gt;None&lt;/code&gt; (CoinGecko allows unauthenticated calls).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Method:&lt;/strong&gt; &lt;code&gt;GET&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Execute step&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You should see output like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"bitcoin"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"usd"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;105432&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's the live BTC price. Close the node panel — we'll use this data next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pro tip that saves debugging hours:&lt;/strong&gt; &lt;strong&gt;Execute step&lt;/strong&gt; runs only that single node, with sample data, without firing the entire workflow. Use it on every new node before connecting the next one. This catches 80% of mistakes early, when they're easy to fix.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Edit Fields — clean up the data shape
&lt;/h2&gt;

&lt;p&gt;The CoinGecko response nests the price inside &lt;code&gt;bitcoin.usd&lt;/code&gt;. To make later steps cleaner, let's promote it to a top-level &lt;code&gt;price&lt;/code&gt; field.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;code&gt;+&lt;/code&gt; on the HTTP Request node.&lt;/li&gt;
&lt;li&gt;Search &lt;code&gt;Edit Fields&lt;/code&gt; (also called "Set" in some versions).&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Fields to Set&lt;/strong&gt;, click &lt;strong&gt;Add Field&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Name:&lt;/strong&gt; &lt;code&gt;price&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Value:&lt;/strong&gt; toggle the &lt;code&gt;=&lt;/code&gt; icon to red (this enables expression mode).&lt;/li&gt;
&lt;li&gt;Drag &lt;code&gt;bitcoin.usd&lt;/code&gt; from the left panel into the value field. The expression becomes &lt;code&gt;{{ $json.bitcoin.usd }}&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Execute step&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Output should now be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"price"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;105432&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expressions are how you reference data from previous nodes. Anything inside &lt;code&gt;{{ }}&lt;/code&gt; is JavaScript. &lt;code&gt;$json&lt;/code&gt; means "data the previous node returned." You don't need to memorize syntax — drag fields from the left panel and n8n writes the expression for you.&lt;/p&gt;
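&lt;p&gt;A Python analogy (n8n expressions are JavaScript, but the lookup works the same way): &lt;code&gt;$json&lt;/code&gt; is the previous node's output object, and the dotted path walks into it one key at a time.&lt;/p&gt;

```python
from functools import reduce

# What an expression like {{ $json.bitcoin.usd }} evaluates to, sketched in
# Python. previous_output plays the role of $json.
previous_output = {"bitcoin": {"usd": 105432}}

def resolve(data, dotted_path):
    # Walk "bitcoin.usd" one key at a time, like the expression lookup does
    return reduce(lambda obj, key: obj[key], dotted_path.split("."), data)

price = resolve(previous_output, "bitcoin.usd")
print(price)  # 105432
```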

&lt;h2&gt;
  
  
  Step 4: If node — conditional branching
&lt;/h2&gt;

&lt;p&gt;The If node creates two branches. We'll route based on whether BTC is above $100k.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;code&gt;+&lt;/code&gt; on the Edit Fields node.&lt;/li&gt;
&lt;li&gt;Search &lt;code&gt;If&lt;/code&gt;, select it.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Conditions&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Value 1:&lt;/strong&gt; drag &lt;code&gt;price&lt;/code&gt; from the left panel → &lt;code&gt;{{ $json.price }}&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operation:&lt;/strong&gt; &lt;code&gt;Number &amp;gt; Larger&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Value 2:&lt;/strong&gt; &lt;code&gt;100000&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Execute step&lt;/strong&gt; to verify.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The If node now exposes two output connectors: &lt;strong&gt;true&lt;/strong&gt; (top) and &lt;strong&gt;false&lt;/strong&gt; (bottom).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common gotcha:&lt;/strong&gt; make sure &lt;strong&gt;Operation&lt;/strong&gt; is set to &lt;code&gt;Number&lt;/code&gt;, not &lt;code&gt;String&lt;/code&gt;. String comparison treats &lt;code&gt;"5" &amp;gt; "100000"&lt;/code&gt; as &lt;code&gt;true&lt;/code&gt; (alphabetic order), which silently breaks your logic. This bites everyone at least once.&lt;/p&gt;
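&lt;p&gt;You can reproduce the gotcha outside n8n; any language with lexicographic string comparison behaves the same way:&lt;/p&gt;

```python
# The String-vs-Number gotcha, reproduced in plain Python. Lexicographic
# comparison looks at characters, not magnitude: "5" sorts after "100000"
# because the character "5" comes after "1".
string_result = "5" > "100000"   # character-by-character comparison
number_result = 5 > 100000       # numeric comparison

print(string_result)  # True  -- silently wrong for prices
print(number_result)  # False -- what the If node should compute
```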

&lt;h2&gt;
  
  
  Step 5: Action nodes (email + Slack)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Email on the "true" branch
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;code&gt;+&lt;/code&gt; labeled &lt;strong&gt;true&lt;/strong&gt; on the If node.&lt;/li&gt;
&lt;li&gt;Search &lt;code&gt;Send Email&lt;/code&gt;, select it.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create new credential&lt;/strong&gt; → configure SMTP. Gmail needs an app-specific password; most ESPs accept standard SMTP creds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;To Email:&lt;/strong&gt; your address.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subject:&lt;/strong&gt; &lt;code&gt;BTC just hit $100k!&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text&lt;/strong&gt; (expression mode): &lt;code&gt;BTC is currently at ${{ $json.price }} — celebration time!&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Execute step&lt;/strong&gt; to send a test email.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Slack on the "false" branch
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Back on the canvas. Click &lt;code&gt;+&lt;/code&gt; labeled &lt;strong&gt;false&lt;/strong&gt; on the If node.&lt;/li&gt;
&lt;li&gt;Search &lt;code&gt;Slack&lt;/code&gt;, select it.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create new credential&lt;/strong&gt; → OAuth2 flow connects your workspace.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource:&lt;/strong&gt; &lt;code&gt;Message&lt;/code&gt;, &lt;strong&gt;Operation:&lt;/strong&gt; &lt;code&gt;Send&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Pick a channel (e.g., &lt;code&gt;#general&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text&lt;/strong&gt; (expression mode): &lt;code&gt;BTC is at ${{ $json.price }} — still under $100k.&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Execute step&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If both test sends worked, the workflow is functionally complete. &lt;strong&gt;Save it now&lt;/strong&gt; — &lt;code&gt;Cmd/Ctrl + S&lt;/code&gt; or click Save at the top right. n8n doesn't auto-save while you build.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Test, then publish
&lt;/h2&gt;

&lt;p&gt;Two final steps separate "kinda works" from "actually runs reliably."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run the full workflow once manually.&lt;/strong&gt; Click &lt;strong&gt;Execute Workflow&lt;/strong&gt; at the bottom. Every node turns green on success or red on failure. If anything fails, click the failed node — n8n shows the exact error and the input data that triggered it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Publish.&lt;/strong&gt; Toggle &lt;strong&gt;Publish&lt;/strong&gt; at the top of the editor to active. Now the Schedule trigger fires every weekday at 9 AM automatically.&lt;/p&gt;

&lt;p&gt;To verify it actually fires, open &lt;strong&gt;Executions&lt;/strong&gt; in the left sidebar. After 9 AM tomorrow, you'll see a fresh execution logged. Click into any execution to see what data flowed through each node — invaluable for debugging.&lt;/p&gt;

&lt;h2&gt;
  
  
  Six gotchas that bite everyone
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Forgetting to publish.&lt;/strong&gt; Most common reason "the schedule isn't firing." If the toggle isn't on &lt;strong&gt;Published&lt;/strong&gt;, the trigger is dormant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;String vs Number in If nodes.&lt;/strong&gt; &lt;code&gt;"5" &amp;gt; "10"&lt;/code&gt; returns true alphabetically. Always pick the right operation type.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hardcoding what should be an expression.&lt;/strong&gt; Typing the literal text &lt;code&gt;$json.price&lt;/code&gt; into a regular field doesn't work. Toggle the &lt;code&gt;=&lt;/code&gt; icon to red first.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Polling APIs every minute on n8n Cloud.&lt;/strong&gt; A 1-minute schedule = 43,200 executions/month, which exceeds most paid plan limits. Use webhooks where possible.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Not testing each node individually.&lt;/strong&gt; Click &lt;strong&gt;Execute step&lt;/strong&gt; on every new node before connecting the next. Prevents 80% of debugging pain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No backup of N8N_ENCRYPTION_KEY (self-hosted).&lt;/strong&gt; Lose this and every saved credential is unrecoverable. Back it up to a password manager the moment you generate it.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What to build next
&lt;/h2&gt;

&lt;p&gt;You now know the foundation. Three productive next steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Replace the Schedule trigger with a Webhook trigger&lt;/strong&gt; to react to events from external systems instead of polling. Big efficiency gain. New to webhooks? Read this &lt;a href="https://trackstack.tech/en/what-is-a-webhook-and-how-to-test-it/" rel="noopener noreferrer"&gt;practical webhook primer including testing&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add an error-handling node&lt;/strong&gt; that fires when any step fails. Without it, silent failures will burn you eventually. The pattern: every critical workflow ends with a "Send Email/Slack on Error" branch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build something for your actual job.&lt;/strong&gt; Pick one repetitive task you do every week (compiling stats, posting reports, syncing data) and rebuild it as an n8n workflow. The fastest way to learn is to solve a real problem.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you're comparing n8n with other automation platforms before committing more time, here's our &lt;a href="https://trackstack.tech/en/zapier-vs-make-2026/" rel="noopener noreferrer"&gt;Zapier vs Make 2026 breakdown&lt;/a&gt; covering trade-offs across hosted alternatives.&lt;/p&gt;

&lt;h2&gt;
  
  
  The most useful first workflow for SMBs
&lt;/h2&gt;

&lt;p&gt;The pattern that pays back fastest: &lt;strong&gt;form submission → CRM record → notification&lt;/strong&gt;. New lead fills a form, data lands in CRM with proper tagging, your team gets notified instantly. Eliminates manual data entry, reduces lead response time from hours to seconds, and uses every concept from this tutorial.&lt;/p&gt;

&lt;p&gt;Build this version once and you'll see why n8n's learning curve is worth it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://trackstack.tech/en/how-to-build-first-workflow-n8n-beginner-tutorial-2026/" rel="noopener noreferrer"&gt;TrackStack&lt;/a&gt; — practical write-ups on automation, tracking, and infrastructure for SMBs. If you got stuck on any step, drop a comment with what broke and I'll help debug.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>automation</category>
      <category>tutorial</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>I Gave My Newsletter a Voice (Literally)</title>
      <dc:creator>Andrea Liliana Griffiths</dc:creator>
      <pubDate>Sat, 09 May 2026 10:48:10 +0000</pubDate>
      <link>https://gg.forem.com/andreagriffiths11/i-gave-my-newsletter-a-voice-literally-3k5p</link>
      <guid>https://gg.forem.com/andreagriffiths11/i-gave-my-newsletter-a-voice-literally-3k5p</guid>
      <description>

&lt;p&gt;My newsletter site has a chat widget now. You type a question, it searches through every issue I've ever written, and gives you an answer with sources.&lt;/p&gt;

&lt;p&gt;That took an evening. Cool, but not interesting enough to write about.&lt;/p&gt;

&lt;p&gt;What made me write this: I added a microphone button next to the text input. Click it, and you're in a real-time voice conversation with an AI agent that knows my content. You talk, it listens, it talks back. Not a recording. Not text-to-speech over a chat response. An actual voice conversation.&lt;/p&gt;

&lt;p&gt;The stack behind it is LiveKit, and I want to walk through how it works because it's simpler than I expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  What LiveKit Actually Does
&lt;/h2&gt;

&lt;p&gt;LiveKit is real-time communication infrastructure. Think "Zoom but programmable." It handles all the WebRTC complexity — rooms, audio routing, codecs, latency optimization — so you don't have to.&lt;/p&gt;

&lt;p&gt;The part that matters for AI voice agents: LiveKit has an agent framework. You write a Python worker that connects to their cloud service and waits. When a user joins a room, LiveKit dispatches your agent into that room. The agent listens to the user's microphone, processes speech, thinks, and talks back. All in real time.&lt;/p&gt;

&lt;p&gt;The latency is wild. It feels like talking to someone, not waiting for a computer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;Three pieces:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The API&lt;/strong&gt; — A FastAPI server that handles text chat (RAG search over my newsletter content) and generates LiveKit room tokens when someone clicks the mic button.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The voice agent&lt;/strong&gt; — A Python worker running LiveKit's agent SDK. It connects outbound to LiveKit Cloud and waits for rooms. When someone joins, it gets dispatched. Inside the agent: voice activity detection (Silero VAD), speech-to-text (Azure Speech Services), an LLM (GPT-4.1-mini via GitHub Models), and text-to-speech (Azure Speech Services). Azure handles both ends of the voice pipeline — turning your speech into text the LLM can understand, then turning the LLM's response back into natural-sounding speech. Before every response, it searches my knowledge base for relevant context — same RAG pipeline the text chat uses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The frontend&lt;/strong&gt; — An Astro component with a mic button. Clicking it loads the LiveKit client SDK, requests a room token from my API, and connects to the room via WebRTC. The agent joins, and they're talking.&lt;/p&gt;

&lt;p&gt;Both the API and the voice agent run in a single Railway container. A bash script starts both processes — if either dies, the container exits and Railway restarts it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The RAG Part
&lt;/h2&gt;

&lt;p&gt;Every time the user says something, the agent:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Transcribes the speech — Azure Speech Services converts your voice to text in real time&lt;/li&gt;
&lt;li&gt;Embeds the transcript using GitHub Models API (text-embedding-3-small, 1536 dimensions)&lt;/li&gt;
&lt;li&gt;Searches a SQLite vector database (sqlite-vec) for the most relevant newsletter chunks&lt;/li&gt;
&lt;li&gt;Rebuilds the system prompt with fresh context&lt;/li&gt;
&lt;li&gt;Generates a response (GPT-4.1-mini via GitHub Models)&lt;/li&gt;
&lt;li&gt;Speaks it back — Azure Speech Services (en-US-JennyNeural voice) converts the text response to natural-sounding speech&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This happens per utterance. The agent's knowledge stays current with whatever the user is asking about, not stuck on whatever the first question was.&lt;/p&gt;
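&lt;p&gt;The loop can be sketched roughly as follows. The embedding and search here are toy stand-ins (bag-of-words counts plus brute-force cosine similarity), not the GitHub Models API or sqlite-vec; they only show the shape of the per-utterance cycle:&lt;/p&gt;

```python
import math

# Toy stand-ins for the real pipeline: the agent embeds utterances with
# text-embedding-3-small via the GitHub Models API and searches sqlite-vec.
# Bag-of-words vectors and brute-force cosine similarity stand in here,
# purely to show the shape of the per-utterance loop.
CHUNKS = [
    "issue-19 covered GitHub Copilot agents",
    "issue-20 covered LiveKit voice pipelines",
]
VOCAB = sorted({w for doc in CHUNKS for w in doc.lower().split()})

def embed(text):
    # Count vector over the corpus vocabulary
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(utterance, k=1):
    # Steps 2-3: embed the transcript, rank chunks by similarity
    q = embed(utterance)
    return sorted(CHUNKS, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_system_prompt(utterance):
    # Step 4: rebuild the system prompt with fresh context every utterance
    context = "\n".join(retrieve(utterance))
    return f"Answer using this newsletter context:\n{context}"

print(build_system_prompt("tell me about GitHub Copilot"))
```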

&lt;p&gt;&lt;strong&gt;The hybrid retrieval trick:&lt;/strong&gt; Vector search alone can't answer "what's the latest issue?" because semantic similarity doesn't understand ordering. The solution: at startup, the agent queries the database for all newsletter issue URLs, extracts the numbers, and injects a content index into every system prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Available newsletter issues: issue-1, issue-2, ..., issue-20
The latest/most recent issue is issue-20
Total issues: 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the LLM gets both semantic context from vector search &lt;em&gt;and&lt;/em&gt; structural metadata it can't learn from embeddings. Ask "what's the latest issue?" and it knows. Ask "tell me about GitHub Copilot" and vector search finds the right chunks. Hybrid retrieval.&lt;/p&gt;
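&lt;p&gt;The startup-time index is easy to sketch. The URL patterns below are illustrative (the real agent reads them from its SQLite database), but the output format mirrors the injected block:&lt;/p&gt;

```python
import re

# Sketch of building the content index at startup. The URLs are
# illustrative; the real agent queries them from its sqlite database.
urls = [
    "https://mainbranch.dev/issues/issue-1",
    "https://mainbranch.dev/issues/issue-7",
    "https://mainbranch.dev/issues/issue-20",
]

def build_content_index(issue_urls):
    # Extract issue numbers and sort numerically, so "latest" is well-defined
    numbers = sorted(int(m.group(1)) for u in issue_urls
                     if (m := re.search(r"issue-(\d+)", u)))
    names = ", ".join(f"issue-{n}" for n in numbers)
    return (f"Available newsletter issues: {names}\n"
            f"The latest/most recent issue is issue-{numbers[-1]}\n"
            f"Total issues: {len(numbers)}")

print(build_content_index(urls))
```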

&lt;h2&gt;
  
  
  Why Azure Speech Services?
&lt;/h2&gt;

&lt;p&gt;The voice pipeline has two critical pieces where quality matters:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Speech-to-text accuracy.&lt;/strong&gt; If the transcription is wrong, the whole conversation breaks. Azure Speech Services gives me high-accuracy transcription with low latency — critical for real-time voice. It handles accents, background noise, and technical terminology better than the alternatives I tested.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Natural-sounding TTS.&lt;/strong&gt; Azure's neural voices sound human. Not robotic, not uncanny valley. The "en-US-JennyNeural" voice I use for the agent has natural pacing, intonation, and emotion. When the agent says "Let me search my knowledge base for that," it sounds like a person helping you, not a computer reading text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streaming support.&lt;/strong&gt; Both Azure STT and TTS support streaming. The agent starts transcribing as you speak (no waiting for you to finish), and starts speaking as soon as the first chunk of TTS audio is ready (no waiting for the full LLM response). This cuts perceived latency in half.&lt;/p&gt;

&lt;p&gt;LiveKit integrates with Azure Speech Services out of the box via their plugins (&lt;code&gt;livekit-plugins-azure&lt;/code&gt;). Configuration is straightforward — set your Azure subscription key and region, pick a voice, done.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Surprised Me
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;LiveKit agents are workers, not servers.&lt;/strong&gt; They don't listen on a port. They connect outbound to LiveKit Cloud and get dispatched into rooms. This threw me at first — I kept trying to think of it as another HTTP service. It's not. It's a background worker that happens to handle real-time audio.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The voice pipeline has real latency requirements.&lt;/strong&gt; Text chat can take 2-3 seconds and nobody cares. Voice? If there's a 2-second gap after someone finishes talking, it feels broken. LiveKit's streaming architecture handles this — the TTS starts speaking before the full LLM response is complete.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;sqlite-vec is underrated.&lt;/strong&gt; I'm running vector search in SQLite. No Pinecone, no Weaviate, no managed vector database. For a knowledge base of ~130 newsletter chunks (all 20 issues, articles, and GitHub blog posts), this is more than enough. The query takes single-digit milliseconds. Embeddings come from GitHub Models API during ingestion — free during preview, high quality (text-embedding-3-small, 1536 dims), and no local model loading headaches.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debugging production is different.&lt;/strong&gt; The voice agent worked locally but failed silently in Railway. The agent would listen, transcribe perfectly, but always respond with "I don't have that information." Turned out the embeddings API was returning 400 errors because an old environment variable (&lt;code&gt;LIVEBRAIN_EMBEDDING_MODEL&lt;/code&gt;) was still set to a local model name (&lt;code&gt;all-MiniLM-L6-v2&lt;/code&gt;) that the API didn't recognize. The fix: delete the variable and let it default to &lt;code&gt;text-embedding-3-small&lt;/code&gt;. Real-time logging made this visible — without &lt;code&gt;print()&lt;/code&gt; statements showing chunk retrieval counts and similarities, I would have been guessing for hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;I'm extracting the reusable parts of this into an open-source framework. The idea: point it at a YAML file with your content sources, run an ingestion script, and you get a voice agent that knows your stuff. Newsletter, documentation, blog — whatever you feed it.&lt;/p&gt;

&lt;p&gt;It's not ready yet. The mainbranch-agent version works, but the generic framework needs cleanup before anyone else can use it. I'll open-source it when it's actually good, not when it's "minimum viable."&lt;/p&gt;

&lt;p&gt;If you want to see it in action, go to &lt;a href="https://mainbranch.dev" rel="noopener noreferrer"&gt;mainbranch.dev&lt;/a&gt; and click the chat bubble. The mic button is right there.&lt;/p&gt;

</description>
      <category>livekit</category>
      <category>voiceai</category>
      <category>rag</category>
      <category>githubmodels</category>
    </item>
    <item>
      <title>U.S. Cyber Trust Mark: what IoT firmware teams should prepare</title>
      <dc:creator>Marco</dc:creator>
      <pubDate>Sat, 09 May 2026 10:47:26 +0000</pubDate>
      <link>https://gg.forem.com/pezzullo/us-cyber-trust-mark-what-iot-firmware-teams-should-prepare-3k28</link>
      <guid>https://gg.forem.com/pezzullo/us-cyber-trust-mark-what-iot-firmware-teams-should-prepare-3k28</guid>
      <description>&lt;p&gt;IoT security labels are turning cybersecurity from an internal engineering topic into a visible product requirement.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is an English DEV.to draft based on a Silicon LogiX technical article. The canonical source is linked at the end.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why it matters
&lt;/h2&gt;

&lt;p&gt;The U.S. Cyber Trust Mark is voluntary, but it can influence buyers, retailers and procurement teams.&lt;/p&gt;

&lt;p&gt;For firmware teams, the important part is not the label itself. It is the discipline required to earn and maintain trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture notes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Products need updateable firmware, protected credentials, secure configuration and documented support periods.&lt;/li&gt;
&lt;li&gt;A public registry or QR-linked label makes lifecycle information easier to compare.&lt;/li&gt;
&lt;li&gt;The requirements overlap with broader global trends such as the EU Cyber Resilience Act.&lt;/li&gt;
&lt;li&gt;Security evidence becomes part of the product package, not only an internal checklist.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Practical checklist
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Document supported lifetime, update policy and vulnerability disclosure process.&lt;/li&gt;
&lt;li&gt;[ ] Implement signed firmware updates and rollback protection.&lt;/li&gt;
&lt;li&gt;[ ] Remove default passwords and protect commissioning flows.&lt;/li&gt;
&lt;li&gt;[ ] Track software components and known vulnerabilities.&lt;/li&gt;
&lt;li&gt;[ ] Prepare test evidence that non-security stakeholders can understand.&lt;/li&gt;
&lt;/ul&gt;
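&lt;p&gt;The signed-update and rollback items above can be sketched in a few lines. This is a toy model, not firmware: real devices verify an asymmetric signature (e.g. Ed25519) against a public key in ROM, and the HMAC here only stands in so the example stays self-contained.&lt;/p&gt;

```python
import hmac
import hashlib

def accept_update(image: bytes, tag: bytes, new_version: int,
                  current_version: int, key: bytes) -> bool:
    # 1) Authenticity: recompute the tag over the image and compare in
    #    constant time (in a real product: verify a signature instead).
    expected = hmac.new(key, image, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False
    # 2) Rollback protection: never accept an image whose version does
    #    not strictly increase over the running one.
    return new_version > current_version
```

&lt;p&gt;Both checks matter independently: a validly signed but older image is still a downgrade attack.&lt;/p&gt;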

&lt;h2&gt;
  
  
  Common mistakes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Treating the label as a marketing task after development is finished.&lt;/li&gt;
&lt;li&gt;Shipping products that cannot be patched in the field.&lt;/li&gt;
&lt;li&gt;Documenting security claims that the firmware architecture cannot support.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final takeaway
&lt;/h2&gt;

&lt;p&gt;Security labels raise the bar because they make product lifecycle promises visible. Firmware architecture has to be ready before certification conversations begin.&lt;/p&gt;




&lt;p&gt;Canonical source: &lt;a href="https://www.siliconlogix.it/en/article/us-cyber-trust-mark-what-iot-firmware-teams-should-prepare" rel="noopener noreferrer"&gt;U.S. Cyber Trust Mark: what IoT firmware teams should prepare&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you build embedded, IoT or firmware products and want a second pair of eyes on architecture, update strategy or security, Silicon LogiX can help turn prototypes into maintainable systems.&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>iot</category>
      <category>news</category>
      <category>security</category>
    </item>
    <item>
      <title>QUIC in embedded systems: when it makes sense over TCP and UDP</title>
      <dc:creator>Marco</dc:creator>
      <pubDate>Sat, 09 May 2026 10:47:25 +0000</pubDate>
      <link>https://gg.forem.com/pezzullo/quic-in-embedded-systems-when-it-makes-sense-over-tcp-and-udp-4eba</link>
      <guid>https://gg.forem.com/pezzullo/quic-in-embedded-systems-when-it-makes-sense-over-tcp-and-udp-4eba</guid>
      <description>&lt;p&gt;QUIC is often described as a replacement for TCP or UDP. For embedded products, the useful question is narrower: when does it improve the system?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is an English DEV.to draft based on a Silicon LogiX technical article. The canonical source is linked at the end.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why it matters
&lt;/h2&gt;

&lt;p&gt;QUIC runs over UDP but provides secure, reliable streams with faster connection setup and better multiplexing behavior.&lt;/p&gt;

&lt;p&gt;It can be valuable for devices that interact with modern HTTP/3 services, cloud APIs or dashboards with many parallel requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture notes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;QUIC integrates TLS 1.3 into the transport model instead of layering TLS over TCP.&lt;/li&gt;
&lt;li&gt;Stream multiplexing avoids some head-of-line blocking problems seen with TCP-based HTTP/2.&lt;/li&gt;
&lt;li&gt;Connection migration can help mobile devices or scenarios where the network path changes, though not every embedded device needs it.&lt;/li&gt;
&lt;li&gt;The costs are stack complexity, higher memory usage, harder diagnostics and less predictable firewall/NAT behavior.&lt;/li&gt;
&lt;/ul&gt;
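&lt;p&gt;The head-of-line blocking point above can be shown with a toy delivery model — pure simulation, no QUIC stack involved. Packet 1 (stream A) is lost; packets 2 and 3 (stream B) have arrived. A single ordered byte stream stalls everything behind the hole, while per-stream ordering lets stream B proceed.&lt;/p&gt;

```python
def tcp_deliverable(received):
    # One in-order stream: nothing after a sequence gap is deliverable.
    out, expected = [], 1
    for seq, stream, data in sorted(received):
        if seq != expected:
            break
        out.append((stream, data))
        expected += 1
    return out

def quic_deliverable(received):
    # Ordering is per stream, so stream B need not wait for stream A's
    # retransmission. (Toy model: each stream's packets arrive in order.)
    return [(stream, data) for seq, stream, data in sorted(received)]
```

&lt;p&gt;With &lt;code&gt;received = [(2, "B", "b1"), (3, "B", "b2")]&lt;/code&gt;, the TCP model delivers nothing, while the per-stream model delivers both of stream B's packets.&lt;/p&gt;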

&lt;h2&gt;
  
  
  Practical checklist
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Use QUIC when connection setup latency or multiplexing changes user-visible behavior.&lt;/li&gt;
&lt;li&gt;[ ] Stay with TCP when simplicity, tooling and compatibility are more important.&lt;/li&gt;
&lt;li&gt;[ ] Use raw UDP only when the application can own reliability and security correctly.&lt;/li&gt;
&lt;li&gt;[ ] Measure RAM, CPU and handshake behavior on target hardware.&lt;/li&gt;
&lt;li&gt;[ ] Plan how field technicians will diagnose QUIC failures.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Common mistakes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Choosing QUIC because it is newer, not because it solves a product problem.&lt;/li&gt;
&lt;li&gt;Ignoring network environments that block or degrade UDP.&lt;/li&gt;
&lt;li&gt;Underestimating observability and debugging cost.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final takeaway
&lt;/h2&gt;

&lt;p&gt;QUIC is a useful tool for specific embedded networking problems, not a universal upgrade over TCP and UDP.&lt;/p&gt;




&lt;p&gt;Canonical source: &lt;a href="https://www.siliconlogix.it/en/article/quic-in-embedded-systems-when-it-makes-sense-over-tcp-and-udp" rel="noopener noreferrer"&gt;QUIC in embedded systems: when it makes sense over TCP and UDP&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you build embedded, IoT or firmware products and want a second pair of eyes on architecture, update strategy or security, Silicon LogiX can help turn prototypes into maintainable systems.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>iot</category>
      <category>networking</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
