NVIDIA Unveils GPT-5.5-Powered Codex for Enhanced Developer Workflows
NVIDIA has launched Codex, an AI-driven coding application now powered by OpenAI’s latest model, GPT-5.5. The company aims to transform knowledge work by improving information processing, problem-solving, and innovation across industries. More than 10,000 NVIDIA employees already report significant productivity gains from the technology.
Transforming Developer Productivity
The integration of GPT-5.5 into Codex is set to redefine developer workflows. NVIDIA’s new infrastructure, built on GB200 NVL72 rack-scale systems, delivers 35 times lower cost per million tokens and 50 times higher token output per second per megawatt than previous-generation systems. These gains make it feasible for enterprises to deploy frontier-model inference at scale.
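Taken at face value, the two headline factors can be applied directly to a previous-generation baseline to estimate new serving economics. A minimal sketch, where the baseline figures are illustrative placeholders rather than published prices:

```python
# Illustrative arithmetic for the stated GB200 NVL72 improvements:
# 35x lower cost per million tokens, 50x more tokens/sec per megawatt.
# The baseline numbers used below are hypothetical, not real prices.

COST_REDUCTION = 35.0    # stated cost-per-million-tokens factor
THROUGHPUT_GAIN = 50.0   # stated tokens-per-second-per-megawatt factor

def new_cost_per_million_tokens(prev_cost: float) -> float:
    """Estimate new-generation cost per million tokens from a baseline."""
    return prev_cost / COST_REDUCTION

def new_tokens_per_sec_per_mw(prev_rate: float) -> float:
    """Estimate new-generation token throughput per megawatt."""
    return prev_rate * THROUGHPUT_GAIN

# Example: a hypothetical $7.00 per million tokens on the prior generation
# drops to $0.20, and 1M tokens/sec per MW rises to 50M.
print(new_cost_per_million_tokens(7.00))
print(new_tokens_per_sec_per_mw(1e6))
```

The point of the sketch is only that both factors compound in an operator's favor: the same power budget serves far more tokens, each of which costs far less.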
Early adopters within NVIDIA have reported drastic reductions in debugging cycles, which have shrunk from days to mere hours. Complex codebases that once required weeks of experimentation can now see significant advancements overnight. Teams are able to deliver end-to-end features directly from natural-language prompts with improved reliability and efficiency.
This progress illustrates not only OpenAI’s capabilities but also NVIDIA’s commitment to enhancing AI agent utilization within its own operations while supporting partners in developing efficient AI models.
Enterprise Security Measures
To ensure the safe deployment of Codex in enterprise environments, NVIDIA has implemented robust security protocols. The Codex application facilitates remote Secure Shell (SSH) connections to approved cloud virtual machines (VMs), allowing agents to access real company data securely without external exposure.
NVIDIA’s IT department has provisioned cloud VMs for all employees, creating a dedicated sandbox environment for the Codex agents. This setup ensures maximum operational capability while maintaining full auditability of actions taken by the agents. Users interact with the Codex agent through a familiar interface while adhering to stringent security measures.
A zero-data-retention policy governs this deployment, and agents operate with read-only permissions when accessing production systems via command-line interfaces and Skills, an agentic toolkit used for automation across NVIDIA.
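The access pattern described above can be sketched as a thin wrapper that checks a requested command against a read-only allowlist, records an audit entry, and only then builds the SSH invocation for the sandbox VM. The hostnames, key paths, allowlist, and audit format here are hypothetical illustrations, not NVIDIA’s actual configuration:

```python
# Sketch of an auditable, read-only SSH command builder for a sandbox VM.
# All names (host, user, key path, allowlist) are hypothetical placeholders.
import datetime
import json
from typing import List

SANDBOX_HOST = "codex-sandbox.internal.example"  # hypothetical VM hostname
SSH_USER = "codex-agent"
SSH_KEY = "/etc/codex/agent_key"                 # hypothetical key path

# Commands the agent may run; anything else is rejected (read-only posture).
READ_ONLY_ALLOWLIST = {"cat", "ls", "grep", "head", "tail"}

def build_ssh_invocation(remote_cmd: List[str]) -> List[str]:
    """Validate a command against the read-only allowlist and return
    the full ssh argv that would execute it on the sandbox VM."""
    if not remote_cmd or remote_cmd[0] not in READ_ONLY_ALLOWLIST:
        raise PermissionError(f"command not allowed: {remote_cmd!r}")
    return ["ssh", "-i", SSH_KEY,
            f"{SSH_USER}@{SANDBOX_HOST}", "--", *remote_cmd]

def audit_entry(remote_cmd: List[str]) -> str:
    """Produce one JSON audit-log line for the attempted command."""
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "host": SANDBOX_HOST,
        "user": SSH_USER,
        "cmd": remote_cmd,
    })

print(build_ssh_invocation(["cat", "/var/log/app.log"]))
print(audit_entry(["cat", "/var/log/app.log"]))
```

Centralizing the allowlist check and the audit log in one place is what makes the “full auditability” claim enforceable: every command an agent attempts, allowed or not, passes through the same chokepoint.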
A Decade of Collaboration with OpenAI
The launch of GPT-5.5 and its integration into Codex is the culmination of over ten years of collaboration between NVIDIA and OpenAI. This partnership began in 2016 when NVIDIA founder Jensen Huang delivered the first DGX-1 AI supercomputer to OpenAI’s headquarters in San Francisco.
Since then, both companies have worked closely together across the entire AI stack. NVIDIA was a foundational partner during the launch of OpenAI’s gpt-oss open-weight model, optimizing it for use with NVIDIA TensorRT-LLM and other ecosystem frameworks like vLLM and Ollama.
OpenAI has committed to deploying over 10 gigawatts of NVIDIA systems for its next-generation AI infrastructure, which will leverage millions of GPUs for model training and inference in the coming years. Additionally, both companies engage in early silicon co-design partnerships where feedback from OpenAI informs NVIDIA’s hardware roadmap while granting OpenAI early access to new architectures.
This collaboration led to significant milestones such as the joint development of the GB200 NVL72 cluster, capable of managing 100,000 GPUs, which completed multiple large-scale training runs and established new benchmarks for reliability at frontier scale.
What This Means
The introduction of GPT-5.5-powered Codex marks a pivotal moment for developers and knowledge workers alike, promising enhanced productivity and innovation capabilities within secure enterprise environments. As companies increasingly adopt AI technologies like Codex, they can expect substantial improvements in workflow efficiency and problem-solving capabilities. The ongoing collaboration between NVIDIA and OpenAI sets a precedent for future advancements in AI infrastructure that could redefine how businesses operate across various sectors.
For more information, read the original report here.