<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Daneel AI Research</title>
    <link>https://daneel.injen.io/research/</link>
    <description>In-depth research papers from Daneel AI on local inference, model architecture, quantization, and adjacent technologies.</description>
    <language>en-us</language>
    <atom:link href="https://daneel.injen.io/research/rss.xml" rel="self" type="application/rss+xml"/>
    
  <item>
    <title>From community ports to first-party releases: small-to-medium browser LLMs in March 2026</title>
    <link>https://daneel.injen.io/research/state-of-browser-small-medium-llms-march-2026.html?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=news_syndication</link>
    <guid isPermaLink="true">https://daneel.injen.io/research/state-of-browser-small-medium-llms-march-2026.html</guid>
    <description>In late 2024 the in-browser open-source LLM catalog was effectively three community ports: Microsoft Phi-3, Meta Llama 3.2, and HuggingFace SmolLM2. By March 2026 it runs to roughly two dozen first-party releases from a multi-vendor cohort that did not exist eighteen months earlier, including IBM Granite 4 with explicit ONNX-web variants, OpenAI's first open-weight family in years, Liquid AI's hybrid LFM2.5 line, and a 1-bit entrant from Caltech that emerged from stealth on the last day of the window. This paper maps the trajectory through its inflection points, the labs driving releases, the catalog as it stands, and what 2026-2027 has telegraphed.</description>
    <pubDate>Sat, 25 Apr 2026 00:00:00 GMT</pubDate>
    <enclosure url="https://daneel.injen.io/medias/research.small.models.jpg" type="image/jpeg"/>
    <category>small models</category><category>ONNX</category><category>transformers.js</category><category>open-source LLM</category><category>on-device AI</category>
  </item>

  <item>
    <title>The three walls of in-browser LLM inference: the state of affairs in April 2026</title>
    <link>https://daneel.injen.io/research/three-walls-browser-llm-inference-2026.html?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=news_syndication</link>
    <guid isPermaLink="true">https://daneel.injen.io/research/three-walls-browser-llm-inference-2026.html</guid>
    <description>In-browser LLM inference has lived under three constraint walls in sequence: protobuf's 2 GB cap on `.onnx` files, WebAssembly's 4 GB linear-memory limit, and the browser-allocated WebGPU VRAM budget. As of April 2026, the first two are effectively cleared, the first by ONNX's External Data format and the second by ONNX Runtime's new C++ WebGPU execution provider, while the third remains the binding constraint, with no portable spec query and substantial per-platform variance. This paper traces how each wall arose, how the February 2026 transformers.js v4 / ORT C++ EP inflection collapsed two of them, and what genuinely fits in a browser tab today across Apple Silicon, NVIDIA, AMD, Intel, and mobile.</description>
    <pubDate>Sat, 25 Apr 2026 00:00:00 GMT</pubDate>
    <enclosure url="https://daneel.injen.io/medias/research.webgpu.color.png" type="image/png"/>
    <category>WebGPU</category><category>transformers.js</category><category>ONNX Runtime</category><category>in-browser LLM</category><category>WebAssembly</category>
  </item>
  </channel>
</rss>