Professional resume of Andrew Charneski, a senior software engineer and AI architect with 20+ years of experience in AI/ML research, distributed systems, cloud infrastructure, and full-stack development.
An exploration of Generation-Augmented Retrieval (GAR) and the Model Context Protocol (MCP), navigating the shift from traditional RAG through a unique pirate-themed narrative.
A comprehensive system architecture enabling LLMs to design, simulate, and visualize analog circuits through a Central Intermediate Representation (CIR-JSON), combining eecircuit simulation with tscircuit visualization.
A neural network layer that projects input vectors into Minkowski spacetime and computes complex-valued pseudo-distances to learned reference points, encoding causal structure through timelike vs. spacelike intervals.
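The core computation described above can be sketched in a few lines of numpy. This is a minimal illustration, not the project's implementation: the function name `minkowski_layer` and the choice of a (-, +, +, ...) metric signature with the first coordinate as "time" are assumptions for the sketch. Negative (timelike) intervals yield purely imaginary pseudo-distances; positive (spacelike) intervals yield real ones.

```python
import numpy as np

def minkowski_layer(x, refs):
    """x: (batch, d) input points; refs: (k, d) learned reference points.
    Returns (batch, k) complex-valued pseudo-distances under the
    Minkowski metric diag(-1, +1, ..., +1)."""
    diff = x[:, None, :] - refs[None, :, :]                      # (batch, k, d)
    interval = -diff[..., 0]**2 + np.sum(diff[..., 1:]**2, -1)   # signed s^2
    # sqrt of a possibly negative interval -> complex output:
    # timelike (interval < 0) gives imaginary pseudo-distances,
    # spacelike (interval > 0) gives real ones.
    return np.sqrt(interval.astype(complex))

x = np.array([[2.0, 0.5],    # mostly "time" separation -> timelike
              [0.1, 3.0]])   # mostly "space" separation -> spacelike
refs = np.zeros((1, 2))
print(minkowski_layer(x, refs))
```

Downstream layers could then treat the real and imaginary parts as separate channels, which is one natural way to let the network exploit the causal (timelike/spacelike) distinction.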
Comprehensive technical analysis of the InterpolatedDensityEntropy neural network layer, including forward/backward passes, gradient derivations, stability analysis, and reference implementations.
A novel computational framework combining wavelet-decomposed geographic topology with deep neural cellular automata for learning geospatial dynamics from observational data.
A novel regularization framework for large language models using spherical harmonic decomposition to control semantic frequencies and enable principled hallucination suppression.
A novel computational framework for automated discovery of analytical maximum entropy distribution families using genetic programming validated against parameterizable data generators.
Exploring the profound parallels between quantum decoherence and neural network dropout to develop unified frameworks for robust information processing across computational paradigms.
A novel computational paradigm proposing Probabilistic Neural Substrates (PNS) that maintain continuous probability distributions through cross-entropy optimization, enabling self-organizing recurrent intelligence with unprecedented interpretability and uncertainty quantification.
A synthesis of geometric optimization with Probabilistic Neural Substrates, aiming to create self-organizing intelligent systems within a single mathematical framework.
A novel dual-constraint training methodology that preserves intellectual diversity while enabling continued learning in neural networks through adaptive anomaly preservation and trust region approaches.
Comprehensive analysis of AI's transformative impact on software development, examining current trends, future projections, and best practices for conscious evolution in the age of autonomous development.
First systematic study of LLM performance degradation when processing self-referential and meta-cognitive content, revealing failure rates that grow exponentially with recursive depth.
Comprehensive analysis of quantum field theory generalizations using Taylor expansion frameworks, covering effective field theory, experimental constraints, and machine learning applications.
Novel technique for generating mathematically symmetric textures using neural networks with geometric constraints, exploring Euclidean, spherical, and hyperbolic geometries.
A novel optimization algorithm that improves deep neural network training by decomposing gradients into layer-wise components and using meta-optimization to find optimal combinations.
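The idea above can be illustrated with a toy meta-optimization step. This is a hedged sketch under simplifying assumptions, not the algorithm itself: the "layers" are two parameter groups of a quadratic loss with very different curvature, and the meta-optimizer is a plain grid search over per-group step sizes in place of a single global learning rate.

```python
import numpy as np

# Toy loss: quadratic with different curvature per "layer" (parameter group).
# A single global learning rate must compromise between the two scales.
def loss(w1, w2):
    return 10.0 * np.sum(w1**2) + 0.1 * np.sum(w2**2)

def grads(w1, w2):
    return 20.0 * w1, 0.2 * w2

w1, w2 = np.ones(3), np.ones(3)
g1, g2 = grads(w1, w2)

# Meta-optimization: search per-layer combination coefficients (a, b) for the
# decomposed update w1 - a*g1, w2 - b*g2, instead of one shared step size.
best = None
for a in np.linspace(0.0, 0.1, 21):
    for b in np.linspace(0.0, 10.0, 21):
        val = loss(w1 - a * g1, w2 - b * g2)
        if best is None or val < best[0]:
            best = (val, a, b)

print(best)  # the best per-layer steps differ by two orders of magnitude
```

The optimal coefficients land near 0.05 for the stiff group and 5.0 for the flat one, which is the kind of per-layer disparity that motivates decomposing the gradient rather than scaling it uniformly.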
A comprehensive framework for automated prompt optimization using genetic algorithms, enabling systematic improvement of Large Language Model prompts through evolutionary computation.
Technical analysis of MindsEye's modular optimization architecture, examining its four-layer decomposition and innovative approaches to machine learning framework design.
Novel method for modeling probability distributions using volumetric density trees with quadratic polynomial constraints, addressing complex geometric boundaries in 2-4D spaces.
Explore alternative loss functions for regression beyond least-squares, including zero-loss zones, robust methods, and practical applications in engineering and ML.
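Two of the alternatives mentioned can be sketched directly; this is an illustrative snippet, with function names of my choosing, not code from the article. The epsilon-insensitive loss (familiar from support vector regression) has a literal zero-loss zone around zero residual, and the Huber loss is a standard robust method that blends quadratic and linear penalties.

```python
import numpy as np

def epsilon_insensitive(residual, eps=0.5):
    # Zero-loss zone: residuals within +/- eps incur no penalty at all.
    return np.maximum(0.0, np.abs(residual) - eps)

def huber(residual, delta=1.0):
    # Quadratic near zero, linear in the tails -> robust to outliers.
    a = np.abs(residual)
    return np.where(a <= delta, 0.5 * a**2, delta * (a - 0.5 * delta))

r = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
print(epsilon_insensitive(r))  # middle three residuals fall in the dead zone
print(huber(r))                # large residuals penalized only linearly
```

Compared with least squares, the dead zone tolerates measurement noise below a known threshold, while the linear tails keep a single outlier from dominating the fit.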
Novel extension to decision tree methodology that models joint probability distributions using cross-entropy optimization between prior and posterior distributions.
A comprehensive methodology for implementing scalable 2D convolution layers in neural networks, addressing GPU memory constraints through dynamic partitioning.
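The partitioning idea can be sketched in plain numpy: split the output into tiles, convolve each tile's input patch (extended by a kernel-sized halo) independently, and stitch the results. This is a minimal CPU sketch of the general technique, assuming a "valid" convolution; the tile size, helper names, and loop-based reference convolution are illustrative, not the methodology's actual GPU implementation.

```python
import numpy as np

def conv2d_valid(img, k):
    """Reference 'valid' 2D convolution (correlation form), loop-based."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def tiled_conv2d(img, k, tile=32):
    """Compute the same result tile by tile. Each tile reads a patch extended
    by a (kh-1, kw-1) halo, so only one patch needs to be resident at a time --
    the same trick used to fit large convolutions into limited GPU memory."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for r in range(0, out.shape[0], tile):
        for c in range(0, out.shape[1], tile):
            r2 = min(r + tile, out.shape[0])
            c2 = min(c + tile, out.shape[1])
            patch = img[r:r2 + kh - 1, c:c2 + kw - 1]  # tile input + halo
            out[r:r2, c:c2] = conv2d_valid(patch, k)
    return out
```

Because each tile is independent, the same decomposition also parallelizes naturally across devices; a dynamic partitioner would choose `tile` from the available memory budget rather than fixing it.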