← Back to Blog

Fundamental Launches NEXUS to Automate ETL While TTT-Discover Doubles GPU Performance

Executive Summary

AI is moving from a novelty interface to a structural layer for the enterprise. Fundamental just launched a model that handles tabular data directly, which could eliminate the costly manual labor involved in traditional data prep. We're also seeing a clear shift in how companies interact with these tools, moving away from simple chat toward managing autonomous agents that actually execute tasks.

The capital expenditure race between Amazon and Google shows no signs of slowing, but the focus is shifting toward efficiency. TTT-Discover is now optimizing GPU kernels twice as fast as human experts by training during inference. This software-level breakthrough justifies the high hardware spend by squeezing more value out of every chip, making the current bullish sentiment feel grounded in technical reality.

Expect the next wave of ROI to come from these hidden infrastructure wins rather than public-facing chatbots. As data processing becomes native to the models themselves, the companies that control the underlying compute will see their margins improve through sheer efficiency.

Continue Reading:

  1. Beyond the lakehouse: Fundamental's NEXUS bypasses manual ETL with a n... (feeds.feedburner.com)
  2. AI companies want you to stop chatting with bots and start managing th... (feeds.arstechnica.com)
  3. TTT-Discover optimizes GPU kernels 2x faster than human experts — by t... (feeds.feedburner.com)
  4. Amazon and Google are winning the AI capex race — but what’s the... (techcrunch.com)
  5. V-Retrver: Evidence-Driven Agentic Reasoning for Universal Multimodal ... (arXiv)

Technical Breakthroughs

Researchers have introduced TTT-Discover, a system that automates GPU kernel optimization and outperforms human experts by 2x. It tackles one of the most expensive bottlenecks in AI: the manual labor required to tune CUDA code for maximum hardware efficiency. Instead of relying on a small pool of specialized engineers, the system uses test-time training to optimize kernels during the inference process itself.

This shift allows models to adapt their internal mathematical operations to the specific data they see in real time. It's a practical way to extend the life of existing compute clusters and reduce the total cost of ownership for large-scale deployments. We expect this to move hardware optimization from a manual, slow-moving craft to an automated software process that scales with the hardware.
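
The report doesn't spell out TTT-Discover's kernel-search pipeline, but the core idea of test-time training is easy to sketch. Below is a minimal, generic PyTorch illustration, not the paper's method: it clones a model at inference time and takes a few self-supervised gradient steps on the very batch it is about to serve. The test_time_adapt helper and the noise-consistency loss are assumptions chosen for brevity.

    import copy
    import torch
    import torch.nn as nn

    def test_time_adapt(model: nn.Module, x: torch.Tensor,
                        steps: int = 5, lr: float = 1e-3) -> nn.Module:
        # Clone so the shared base model is never mutated by per-batch adaptation.
        adapted = copy.deepcopy(model)
        opt = torch.optim.SGD(adapted.parameters(), lr=lr)
        for _ in range(steps):
            # Self-supervised proxy objective built only from the incoming batch:
            # the adapted copy should give consistent outputs under small noise.
            noisy = x + 0.01 * torch.randn_like(x)
            loss = ((adapted(noisy) - adapted(x).detach()) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return adapted

    # Usage: adapt on the live batch, then serve predictions from the adapted copy.
    base = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
    batch = torch.randn(32, 16)
    specialized = test_time_adapt(base, batch)
    with torch.no_grad():
        output = specialized(batch)

The point is that the adaptation cost is paid in software at serving time, which is why the technique pairs naturally with the efficiency argument above.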

Continue Reading:

  1. TTT-Discover optimizes GPU kernels 2x faster than human experts — by t... (feeds.feedburner.com)

Product Launches

Fundamental emerged from stealth with NEXUS, a foundation model trained specifically on tabular data to replace the manual slog of ETL (Extract, Transform, Load) processes. Enterprise data teams often spend 80% of their time cleaning and moving data, a bottleneck that has historically limited the ROI on large-scale machine learning projects. By applying transformer architectures to structured databases rather than just text, Fundamental is targeting a significant segment of the global data management market. This move signals a shift from general-purpose AI toward specialized tools that solve expensive, boring problems for the Fortune 500.
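
NEXUS's interface isn't documented in this digest, so the snippet below is purely hypothetical. It sketches one common way a tabular foundation model can ingest raw rows without a hand-written ETL pipeline: serialize each row into "column = value" tokens and let a pretrained model handle typing, imputation, and normalization internally. The row_to_tokens helper and the commented-out model.predict call are illustrative assumptions.

    import pandas as pd

    def row_to_tokens(row: pd.Series) -> list[str]:
        # Serialize a raw row into "column = value" strings so a sequence model
        # can consume it directly, messy values and all.
        return [f"{col} = {row[col]}" for col in row.index]

    raw = pd.DataFrame({
        "customer_id": [101, 102],
        "signup_date": ["2024-01-05", None],       # missing value left as-is
        "lifetime_value": ["1,250.00", "980"],     # inconsistent formatting left as-is
    })

    for _, row in raw.iterrows():
        tokens = row_to_tokens(row)
        # model.predict(tokens)  # hypothetical call: a pretrained tabular model
        #                        # would do the cleaning that ETL jobs used to do
        print(tokens)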

This automation facilitates a broader industry transition from conversational "chat" interfaces toward autonomous agent management. Major players are increasingly asking users to stop talking to bots and start overseeing them as they execute complex, multi-step workflows. If Fundamental successfully automates the data layer, the human role shifts from manual labor to high-level orchestration of specialized digital workers. The long-term value here lies in the "managerial" interface: the platform that monitors these agents most effectively becomes the new operating system for the enterprise.

Continue Reading:

  1. Beyond the lakehouse: Fundamental's NEXUS bypasses manual ETL with a n... (feeds.feedburner.com)
  2. AI companies want you to stop chatting with bots and start managing th... (feeds.arstechnica.com)

Research & Development

The focus in R&D is shifting from models that simply talk to models that understand how the physical world works. This week's batch of research highlights a move toward spatial intelligence and "world models" that could finally unlock the robotics market. Researchers are testing whether vision language models can learn intuitive physics through interaction (2602.06033), while others are training AI to predict camera poses from text descriptions (2602.06041). If a model can translate a sentence like "the view from the balcony" into precise geometric coordinates, it's a massive win for autonomous systems and augmented reality hardware.
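
To make "precise geometric coordinates" concrete: a camera pose is typically a 3D position plus an orientation. The sketch below shows only the shape of that text-to-pose mapping; the CameraPose type and the predict_pose stub are illustrative assumptions, not the paper's interface.

    from dataclasses import dataclass

    @dataclass
    class CameraPose:
        position: tuple[float, float, float]          # x, y, z in scene coordinates
        rotation: tuple[float, float, float, float]   # orientation as a unit quaternion

    def predict_pose(description: str) -> CameraPose:
        # Placeholder for a learned text-to-pose model; a fixed pose stands in
        # here just to show the input/output contract.
        return CameraPose(position=(2.0, 1.6, 3.5),
                          rotation=(0.0, 0.7071, 0.0, 0.7071))

    print(predict_pose("the view from the balcony"))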

Companies like Google and Meta are also hunting for ways to make messy, multimodal data more searchable for enterprise clients. A new framework called V-Retrver (2602.06034) introduces agentic reasoning to the retrieval process. It doesn't just look for keywords across text and video. It acts as a digital investigator that cross-references different media types to find evidence-backed answers. This is the "reasoning-first" architecture required to turn a basic search bar into a functional corporate intelligence tool that actually works for high-stakes decisions.
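
As a rough illustration of what "evidence-backed" means here, the sketch below gathers candidates from two modalities and only commits to an answer when independent sources corroborate each other. The in-memory indices and the corroboration rule are assumptions for illustration, not V-Retrver's published algorithm.

    # Toy stand-ins for a text index and a video-transcript index.
    TEXT_INDEX = {
        "q3 outage": "Incident report: checkout service down 41 minutes on Sep 14.",
    }
    VIDEO_INDEX = {
        "q3 outage": "All-hands recording, 12:03: CTO confirms the Sep 14 checkout outage.",
    }

    def retrieve_with_evidence(query: str) -> dict:
        # Gather candidates per modality, then require agreement across at
        # least two independent sources before returning an answer.
        evidence = []
        for name, index in (("text", TEXT_INDEX), ("video", VIDEO_INDEX)):
            hit = index.get(query)
            if hit:
                evidence.append({"modality": name, "snippet": hit})
        corroborated = len({e["modality"] for e in evidence}) >= 2
        return {"query": query,
                "answer": evidence[0]["snippet"] if corroborated else None,
                "evidence": evidence}

    print(retrieve_with_evidence("q3 outage"))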

Under the hood, we're seeing a push for more efficient mathematical architectures. Pseudo-Invertible Neural Networks (2602.06042) offer a workaround for the rigid constraints of traditional invertible models. Invertible models are useful for "inverse problems," where you need to map outputs back to their original inputs, but they're usually too slow or restrictive for complex tasks. This "pseudo" approach suggests we can get the benefits of reversibility without the typical performance tax, which is the kind of quiet engineering progress that eventually slashes inference costs for billion-dollar-plus compute clusters.
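
For context on what "invertible" buys you, here is a minimal additive coupling layer, the standard building block of exactly invertible networks, written in NumPy. The paper's pseudo-invertible relaxation is different; this sketch only shows the rigid baseline whose constraints it loosens.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 4))    # parameters of the inner (non-invertible) net

    def inner(half: np.ndarray) -> np.ndarray:
        # Any function works here; it is never inverted, only re-evaluated.
        return np.tanh(half @ W)

    def forward(x: np.ndarray) -> np.ndarray:
        x1, x2 = x[:, :4], x[:, 4:]
        y1 = x1 + inner(x2)            # only half the dimensions are transformed
        return np.concatenate([y1, x2], axis=1)

    def inverse(y: np.ndarray) -> np.ndarray:
        y1, y2 = y[:, :4], y[:, 4:]
        x1 = y1 - inner(y2)            # exact inverse: subtract the same term back
        return np.concatenate([x1, y2], axis=1)

    x = rng.standard_normal((2, 8))
    assert np.allclose(inverse(forward(x)), x)   # outputs map back to inputs

The exactness is what makes classic invertible models attractive for inverse problems, and also what makes them rigid; relaxing it is where the "pseudo" approach claims its efficiency gains.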

The recurring theme across these papers is that raw scale is no longer the only lever for performance. We're seeing a deliberate move toward "embodied" reasoning and structural efficiency that mimics how humans actually perceive and navigate 3D space. Investors should watch for which labs successfully port these spatial reasoning capabilities into hardware, as that's where the next multi-billion dollar software-to-physical-world bridge will be built.

Continue Reading:

  1. V-Retrver: Evidence-Driven Agentic Reasoning for Universal Multimodal ... (arXiv)
  2. Pseudo-Invertible Neural Networks (arXiv)
  3. Can vision language models learn intuitive physics from interaction? (arXiv)
  4. Predicting Camera Pose from Perspective Descriptions for Spatial Reaso... (arXiv)

Sources gathered by our internal agentic system. Article processed and written by Gemini 3.0 Pro (gemini-3-flash-preview).

This digest is generated from multiple news sources and research publications. Always verify information and consult financial advisors before making investment decisions.