diff --git a/README.md b/README.md
index 6e41332..4f1cbb2 100644
--- a/README.md
+++ b/README.md
@@ -55,7 +55,7 @@ Once published on arXiv:
 1. **Quick Overview**: Read `executive_summary.md` (2 pages)
 2. **Technical Summary**: Read `two_page_summary.tex` (2 pages, compile to PDF)
-3. **Full Paper**: Read `main.tex` (24 pages, compile to PDF)
+3. **Full Paper**: Read `main.tex` (25 pages, compile to PDF)
 4. **Try It Yourself**: Visit the [experiments repository](https://github.com/sqrtspace/sqrtspace-experiments)

 ## Contact
diff --git a/figures/ollama_sqrt_validation.png b/figures/ollama_sqrt_validation.png
new file mode 100644
index 0000000..3ae4ea5
Binary files /dev/null and b/figures/ollama_sqrt_validation.png differ
diff --git a/main.tex b/main.tex
index feddd2f..f46efcf 100644
--- a/main.tex
+++ b/main.tex
@@ -505,6 +505,13 @@ Chunked $\sqrt{n}$ & 54.10 $\pm$ 2.71s & 2.41 MB & 122 & 18.3× \\
 The 18.3× slowdown aligns more closely with theoretical predictions than our simulated results, demonstrating that real models exhibit the expected space-time tradeoffs when processing is dominated by model inference rather than memory bandwidth.

+\begin{figure}[htbp]
+\centering
+\includegraphics[width=0.75\textwidth]{figures/ollama_sqrt_validation.png}
+\caption{Validation that our Ollama context chunking follows the theoretical $\sqrt{n}$ pattern. For 14,750 characters of input, we use 122 chunks of 121 characters each, precisely following $\sqrt{n}$ chunking.}
+\label{fig:ollama_sqrt}
+\end{figure}
+
 \begin{figure}[htbp]
 \centering
 \includegraphics[width=0.95\textwidth]{figures/ollama_spacetime_results.png}
diff --git a/ubiquity.pdf b/ubiquity.pdf
index bd6dba6..f9d1e0b 100644
Binary files a/ubiquity.pdf and b/ubiquity.pdf differ
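The arithmetic in the new figure caption (14,750 characters split into 122 chunks of 121 characters) can be checked with a minimal sketch of $\sqrt{n}$ chunking. This is an illustration only, assuming plain character-based splitting with chunk size $\lfloor\sqrt{n}\rfloor$; the function name `sqrt_chunks` is hypothetical and not from the patched code:

```python
import math

def sqrt_chunks(text):
    """Split text into chunks of floor(sqrt(n)) characters,
    yielding roughly sqrt(n) chunks (the last one may be shorter).
    Illustrative sketch only, not the repository's implementation."""
    n = len(text)
    size = max(1, math.isqrt(n))  # floor(sqrt(14750)) == 121
    return [text[i:i + size] for i in range(0, n, size)]

chunks = sqrt_chunks("x" * 14750)
print(len(chunks), len(chunks[0]))  # 122 chunks, 121 characters each
```

With $n = 14{,}750$, $\lfloor\sqrt{n}\rfloor = 121$ and $\lceil 14{,}750 / 121 \rceil = 122$, matching the caption's figures.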