OUR MISSION
UNDERSTANDING HOW NEURAL NETWORKS
LEARN, FORGET & GENERALIZE
Axion Deep Labs conducts original experimental research in deep learning theory. We investigate the structural conditions that govern knowledge persistence, information capacity, and generalization in neural networks — with an emphasis on reproducibility, open methodology, and cross-disciplinary rigor.
Research Domains
Our work spans six interconnected domains, unified by a central question: what structural properties of neural networks determine their capacity to learn, retain, and generalize knowledge?
Continual Learning & Catastrophic Forgetting
Why do neural networks forget previously learned tasks when trained on new data? We investigate the structural and topological conditions under which knowledge persists or degrades across sequential training regimes. A toy sketch of the setup appears below.
Active — experimental data collected
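To make the phenomenon concrete, here is a minimal sketch that trains a small network on one task, then on a second, and measures the accuracy drop on the first. The tasks, model, and hyperparameters are synthetic placeholders, not our experimental configuration.

```python
# Minimal sketch: measuring catastrophic forgetting on two synthetic tasks.
# Task definitions, model size, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # Two-class problem on Gaussian inputs; the decision boundary moves with `shift`.
    x = torch.randn(2000, 20) + shift
    y = (x[:, 0] > shift).long()
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=500):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

task_a, task_b = make_task(0.0), make_task(3.0)

train(*task_a)
acc_a_before = accuracy(model, *task_a)
train(*task_b)                      # sequential training on the new task
acc_a_after = accuracy(model, *task_a)
print(f"Task A accuracy: {acc_a_before:.3f} -> {acc_a_after:.3f} (forgetting)")
```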
Topological Data Analysis
Applying persistent homology to characterize the shape of neural network loss landscapes. We measure how topological features (connected components, loops, voids) relate to learning dynamics and generalization. A minimal example of the computation appears below.
Active — cross-architecture study in progress
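As a toy illustration of the measurement itself, the sketch below computes persistence diagrams with Ripser for a small point cloud. The noisy-circle input is a placeholder, not data from our experiments; in practice the point cloud would be derived from the loss landscape under a sampling scheme of its own.

```python
# Minimal sketch: persistent homology of a point cloud with ripser.
# The noisy circle is a stand-in input, chosen so H0 and H1 are easy to interpret.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)

# Noisy circle: one connected component and one loop at coarse scale.
theta = rng.uniform(0, 2 * np.pi, 200)
cloud = np.stack([np.cos(theta), np.sin(theta)], axis=1)
cloud += 0.05 * rng.normal(size=cloud.shape)

# Vietoris-Rips persistence up to dimension 2 (components, loops, voids).
diagrams = ripser(cloud, maxdim=2)["dgms"]

for dim, dgm in enumerate(diagrams):
    # Lifetime = death - birth; long-lived features are topologically meaningful.
    finite = dgm[np.isfinite(dgm[:, 1])]
    lifetimes = finite[:, 1] - finite[:, 0]
    longest = lifetimes.max() if len(lifetimes) else 0.0
    print(f"H{dim}: {len(dgm)} features, longest lifetime {longest:.3f}")
```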
Information Capacity & Scaling Laws
Does neural network information capacity follow area laws or volume laws? We test whether capacity scales with boundary parameters (a computational analog of the Bekenstein bound) rather than total parameter count. A sketch of one way to frame the fit follows below.
Protocol defined — pending execution
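One way to operationalize the comparison is to fit power-law exponents of measured capacity against boundary parameter count and against total parameter count, then compare the exponents and fit quality. The sketch below shows that fitting machinery on fabricated numbers; the parameter counts and capacity values are illustrative assumptions.

```python
# Minimal sketch: comparing area-law vs volume-law fits.
# All numbers below are fabricated for illustration; in the actual protocol they
# would come from per-architecture capacity measurements.
import numpy as np

# Hypothetical architectures: boundary parameter count, total parameter count.
boundary = np.array([1.2e4, 4.0e4, 9.5e4, 3.1e5, 1.1e6])
total = np.array([2.0e6, 6.5e6, 9.0e7, 1.2e9, 6.8e9])
capacity = 2.0 * boundary ** 0.97   # synthetic: capacity tracks the boundary count

def fit_exponent(x, y):
    """Least-squares fit of y ~ c * x**alpha in log-log space; returns (alpha, R^2)."""
    alpha, intercept = np.polyfit(np.log(x), np.log(y), 1)
    residuals = np.log(y) - (alpha * np.log(x) + intercept)
    r_squared = 1.0 - residuals.var() / np.log(y).var()
    return alpha, r_squared

for label, params in [("area law (boundary)", boundary), ("volume law (total)", total)]:
    alpha, r2 = fit_exponent(params, capacity)
    print(f"{label}: exponent = {alpha:.2f}, R^2 = {r2:.3f}")
```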
Integrated Information Measurement
Adapting Integrated Information Theory (Tononi, 2004) from neuroscience to computational systems. We measure Phi across deep learning architecture families and test its correlation with generalization and robustness. A simplified integration proxy is sketched below.
Protocol defined — pending execution
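Exact Phi is combinatorially expensive, so the toy sketch below uses a Gaussian total-correlation (multi-information) measure as a simplified stand-in for how integrated a set of activations is. This proxy is an illustrative assumption, not the Phi estimator defined in our protocol.

```python
# Minimal sketch: a Gaussian "integration" proxy (total correlation), not full IIT Phi.
# Treating activations as jointly Gaussian is an illustrative simplification.
import numpy as np

def total_correlation(activations):
    """Multi-information of jointly Gaussian activations (rows = samples)."""
    cov = np.cov(activations, rowvar=False)
    # 0.5 * (sum of marginal log-variances - log-determinant of the joint covariance)
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (np.sum(np.log(np.diag(cov))) - logdet)

rng = np.random.default_rng(0)

independent = rng.normal(size=(5000, 8))     # statistically independent units
mixing = rng.normal(size=(8, 8))
coupled = independent @ mixing               # strongly coupled units

print(f"independent units: {total_correlation(independent):.3f}")
print(f"coupled units:     {total_correlation(coupled):.3f}")
```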
Quantum System Behavior
Characterizing stability degradation in quantum state evolution under repeated operator application. We investigate how operator ordering and diversity affect behavioral uncertainty in regimes beyond closed-form prediction. A toy single-qubit illustration appears below.
Active — theoretical framework established
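The toy sketch below captures the flavor of the setup on a single qubit: apply a fixed sequence of rotations with small random over-rotation noise and record the spread of fidelities against the noiseless trajectory. The operators, circuit depth, and noise model are illustrative assumptions.

```python
# Minimal sketch: fidelity spread of a single-qubit state under repeated noisy unitaries.
# Operator choice, depth, and noise strength are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def rx(theta):
    """Single-qubit rotation about the X axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def evolve(state, angles, noise=0.0):
    # Apply the rotation sequence, optionally with random over-rotation per step.
    for theta in angles:
        state = rx(theta + noise * rng.normal()) @ state
    return state

ideal_angles = np.full(50, np.pi / 7)   # 50 repeated applications of the same operator
psi0 = np.array([1.0 + 0j, 0.0])        # |0>

ideal = evolve(psi0, ideal_angles)
# Behavioral uncertainty: spread of fidelities across noise realizations.
fidelities = [abs(np.vdot(ideal, evolve(psi0, ideal_angles, noise=0.02))) ** 2
              for _ in range(200)]
print(f"mean fidelity {np.mean(fidelities):.4f}, std {np.std(fidelities):.4f}")
```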
Loss Landscape Geometry
Studying the geometric and topological structure of optimization landscapes in deep neural networks. We analyze how architecture choices, training regimes, and data distribution shape the loss surface. A minimal loss-slice sketch follows below.
Integrated across active programs
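A common probe of this structure is a low-dimensional slice of the loss surface. The sketch below evaluates the loss along one random direction in parameter space for a toy model; the model and data are placeholders.

```python
# Minimal sketch: a 1-D slice of the loss surface along a random direction in
# parameter space. Model, data, and step sizes are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 1))
x, y = torch.randn(256, 10), torch.randn(256, 1)
loss_fn = nn.MSELoss()

# Random direction with the same shapes as the parameters, plus a saved base point.
direction = [torch.randn_like(p) for p in model.parameters()]
base = [p.detach().clone() for p in model.parameters()]

def loss_at(alpha):
    # Evaluate the loss at base + alpha * direction without tracking gradients.
    with torch.no_grad():
        for p, b, d in zip(model.parameters(), base, direction):
            p.copy_(b + alpha * d)
        return loss_fn(model(x), y).item()

for alpha in [a / 4 for a in range(-4, 5)]:
    print(f"alpha={alpha:+.2f}  loss={loss_at(alpha):.4f}")

# Restore the original parameters after probing the slice.
with torch.no_grad():
    for p, b in zip(model.parameters(), base):
        p.copy_(b)
```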
Research Methodology
Every experiment follows a structured protocol designed for independent reproducibility. We version-control configurations, pin dependencies, and publish all code and data.
Formulate
Identify open questions in the literature and formulate testable hypotheses. Each research program begins with a specific, falsifiable prediction grounded in prior work.
Design
Create reproducible experimental protocols with version-controlled YAML configurations, deterministic seeding, and full dependency pinning. Every experiment is designed to be independently replicable.
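A minimal sketch of the determinism setup such a protocol implies is shown below; the exact flags and seed handling in our configurations may differ.

```python
# Minimal sketch: deterministic setup applied before training. Illustrative only;
# the flags used in a given protocol may differ.
import os
import random

import numpy as np
import torch

def set_deterministic(seed: int) -> None:
    """Seed every RNG the training stack touches and force deterministic kernels."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    # Required by some CUDA ops when deterministic algorithms are enforced.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    torch.use_deterministic_algorithms(True)

set_deterministic(42)
```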
Execute
Run controlled experiments on local GPU infrastructure with automated tracking. We use ClearML for experiment management, PyTorch for model training, and Ripser/scikit-tda for topological computation.
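For illustration, registering a run with ClearML looks roughly like the sketch below; the project name, task name, and logged values are placeholders, and a configured ClearML server is assumed.

```python
# Minimal sketch: registering an experiment with ClearML and logging its config.
# Names and values are placeholders; assumes ClearML credentials are configured.
from clearml import Task

task = Task.init(project_name="axion/EXP-01", task_name="ewc-baseline-cifar100")

config = {"seed": 42, "lr": 1e-3, "epochs": 50, "lambda_ewc": 10.0}
task.connect(config)                 # hyperparameters become visible in the ClearML UI

logger = task.get_logger()
for step in range(3):
    # report_scalar(title, series, value, iteration) records one training-curve point.
    logger.report_scalar("loss", "train", value=1.0 / (step + 1), iteration=step)

task.close()
```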
Publish
Share findings through peer-reviewed venues (NeurIPS, ICML, Nature) and open-source code repositories. All experimental code, configurations, and raw data are made publicly available.
Current Focus: Catastrophic Forgetting
Our flagship experiment (EXP-01) has completed a preliminary proof of concept, demonstrating across 19 small-to-medium architectures and 3 datasets that loss landscape topology predicts mitigation benefit at small scale (H0 predicts EWC benefit: CIFAR-100 ρ = 0.76, RESISC-45 ρ = 0.86). These results are preliminary: all models are under 45M parameters. The critical open question is whether the signal survives at production scale (100M-7B+ parameters), which will require supercomputer resources and potentially novel distributed persistent homology algorithms. Phase I scale validation is planned, pending supercomputer allocation.
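For reference, the rank correlations quoted above are ordinary Spearman coefficients. The sketch below shows the computation on placeholder arrays rather than the actual per-architecture measurements.

```python
# Minimal sketch: Spearman rank correlation between a topological feature count
# and a mitigation-benefit score. Arrays are placeholders, not experimental data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

h0_features = rng.integers(5, 120, size=19)                   # one value per architecture
ewc_benefit = 0.01 * h0_features + rng.normal(0, 0.2, 19)     # placeholder benefit scores

rho, p_value = spearmanr(h0_features, ewc_benefit)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")
```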
Research Collaborations
We welcome inquiries from funding agencies, academic collaborators, and researchers working on related problems in deep learning theory and continual learning.
