World Byte News
    English News

    Dog x-rays, art history and a ‘never say never’ attitude: the surprising toolbox of professional conservators (January 7, 2025 at 2:00 pm)

    By MAK Gojar | January 7, 2025

    Restoration demands a marriage of scientific and technical expertise with knowledge of art and incredible patience.

    When Cecilia Giménez noticed a…

    Sports

    UTSA Roadrunners vs. Tulsa Golden Hurricane: How to watch NCAA Basketball online, TV channel, live stream info, start time (January 7, 2025 at 1:00 pm)

    By MAK Gojar | January 7, 2025

    How to watch Texas-San Antonio vs. Tulsa basketball game. Who’s Playing…

    Sports

    How to watch Iowa State Cyclones vs. Utah Utes: TV channel, NCAA Basketball live stream info, start time (January 7, 2025 at 1:00 pm)

    By MAK Gojar | January 7, 2025

    How to watch Iowa State vs. Utah basketball game. Who’s Playing…

    Sports

    Missouri State Bears vs. UIC Flames: How to watch online, live stream info, start time, TV channel (January 7, 2025 at 1:00 pm)

    By MAK Gojar | January 7, 2025

    How to watch Missouri State vs. Illinois-Chicago basketball game. Who’s Playing…

    Sports

    How to watch Iowa Hawkeyes vs. Nebraska Cornhuskers: Live stream, TV channel, start time for Tuesday’s NCAA Basketball game (January 7, 2025 at 1:01 pm)

    By MAK Gojar | January 7, 2025

    How to watch Iowa vs. Nebraska basketball game. Who’s Playing Nebraska Cornhuskers…

    Sports

    2025 NFL Mock Draft: Titans stick at No. 1, Browns replace Deshaun Watson and Giants take Travis Hunter (January 7, 2025 at 1:01 pm)

    By MAK Gojar | January 7, 2025

    The Tennessee Titans are officially on the clock. (Getty Images) The Tennessee…

    English News

    NVIDIA Launches DRIVE AI Systems Inspection Lab, Achieves New Industry Safety Milestones (January 7, 2025 at 2:30 am)

    By Calisa Cole | January 7, 2025


    A new NVIDIA DRIVE AI Systems Inspection Lab will help automotive ecosystem partners navigate evolving industry standards for autonomous vehicle safety.

    The lab, launched today, will focus on inspecting and verifying that automotive partner software and systems on the NVIDIA DRIVE AGX platform meet the automotive industry’s stringent safety and cybersecurity standards, including AI functional safety.

    The lab has been accredited by the ANSI National Accreditation Board (ANAB) in accordance with ISO/IEC 17020 to assess conformance with standards including:

    • Functional safety (ISO 26262)
    • SOTIF (ISO 21448)
    • Cybersecurity (ISO 21434)
    • UN-R regulations, including UN-R 79, UN-R 13-H, UN-R 152, UN-R 155, UN-R 157 and UN-R 171
    • AI functional safety (ISO PAS 8800 and ISO/IEC TR 5469)

    “The launch of this new lab will help partners in the global automotive ecosystem create safe, reliable autonomous driving technology,” said Ali Kani, vice president of automotive at NVIDIA. “With accreditation by ANAB, the lab will carry out an inspection plan that combines functional safety, cybersecurity and AI — bolstering adherence to the industry’s safety standards.”

    “ANAB is proud to be the accreditation body for the NVIDIA DRIVE AI Systems Inspection Lab,” said R. Douglas Leonard Jr., executive director of ANAB. “NVIDIA’s comprehensive evaluation verifies the demonstration of competence and compliance with internationally recognized standards, helping ensure that DRIVE ecosystem partners meet the highest benchmarks for functional safety, cybersecurity and AI integration.”

    The new lab builds on NVIDIA’s ongoing safety compliance work with Mercedes-Benz and JLR. Inaugural participants in the lab include Continental and Sony SSS-America.

    “We are pleased to participate in the newly launched NVIDIA Drive AI Systems Inspection Lab and to further intensify the fruitful, ongoing collaboration between our two companies,” said Norbert Hammerschmidt, head of components business at Continental.

    “Self-driving vehicles have the capability to significantly enhance safety on roads,” said Marius Evensen, head of automotive image sensors at Sony SSS-America. “We look forward to working with NVIDIA’s DRIVE AI Systems Inspection Lab to help us deliver the highest levels of safety to our customers.”

    “Compliance with functional safety, SOTIF and cybersecurity is particularly challenging for complex systems such as AI-based autonomous vehicles,” said Riccardo Mariani, head of industry safety at NVIDIA. “Through the DRIVE AI Systems Inspection Lab, the correctness of the integration of our partners’ products with DRIVE safety and cybersecurity requirements can be inspected and verified.”

    Now open to all NVIDIA DRIVE AGX platform partners, the lab is expected to expand to include additional automotive and robotics products and add a testing component.

    Complementing International Automotive Safety Standards

    The NVIDIA DRIVE AI Systems Inspection Lab complements the missions of independent third-party certification bodies, including technical service organizations such as TÜV SÜD, TÜV Rheinland and exida, as well as vehicle certification agencies such as VCA and KBA.

    Today’s announcement dovetails with recent significant safety certifications and assessments of NVIDIA automotive products:

    TÜV SÜD granted the ISO 21434 Cybersecurity Process certification to NVIDIA for its automotive system-on-a-chip, platform and software engineering processes. Once the certification is released, the NVIDIA DriveOS 6.0 operating system will conform to ISO 26262 Automotive Safety Integrity Level (ASIL) D standards.

    “Meeting cybersecurity process requirements is of fundamental importance in the autonomous vehicle era,” said Martin Webhofer, CEO of TÜV SÜD Rail GmbH. “NVIDIA has successfully established processes, activities and procedures that fulfill the stringent requirements of ISO 21434. Additionally, NVIDIA DriveOS 6.0 conforms to ISO 26262 ASIL D standards, pending final certification activities.”

    TÜV Rheinland performed an independent United Nations Economic Commission for Europe safety assessment of NVIDIA DRIVE AV related to safety requirements for complex electronic systems.

    “NVIDIA has demonstrated thorough, high-quality, safety-oriented processes and technologies in the context of the assessment of the generic, non-OEM-specific parts of the SAE level 2 NVIDIA DRIVE system,” said Dominik Strixner, global lead functional safety automotive mobility at TÜV Rheinland.

    To learn more about NVIDIA’s work in advancing autonomous driving safety, read the NVIDIA Self-Driving Safety Report.

    Categories: Driving
    Tags: Artificial Intelligence | CES 2025 | Cybersecurity | NVIDIA DRIVE | Transportation

    English News

    NVIDIA DRIVE Partners Showcase Latest Mobility Innovations at CES (January 7, 2025 at 2:30 am)

    By Jessica Soares | January 7, 2025


    Leading global transportation companies — spanning the makers of passenger vehicles, trucks, robotaxis and autonomous delivery systems — are turning to the NVIDIA DRIVE AGX platform and AI to build the future of mobility.

    NVIDIA’s automotive business provides a range of next-generation highly automated and autonomous vehicle (AV) development technologies, including cloud-based AI training, simulation and in-vehicle compute.

    At the CES trade show in Las Vegas this week, NVIDIA’s customers and partners are showcasing their latest mobility innovations built on NVIDIA accelerated computing and AI.

    Readying Future Vehicle Roadmaps With NVIDIA DRIVE Thor, Built on NVIDIA Blackwell

    The NVIDIA DRIVE AGX Thor system-on-a-chip (SoC), built on the NVIDIA Blackwell architecture, is engineered to handle the transportation industry’s most demanding data-intensive workloads, including those involving generative AI, vision language models and large language models.

    Delivering 1,000 teraflops of accelerated compute performance, DRIVE Thor is equipped to accelerate the inference tasks critical for autonomous vehicles to understand and navigate the world around them, such as recognizing pedestrians and adjusting to inclement weather.

    DRIVE Ecosystem Partners Transform the Show Floor and Industry at Large

    NVIDIA partners are pushing the boundaries of automotive innovation with their latest developments and demos, using NVIDIA technologies and accelerated computing to advance everything from sensors, simulation and training to generative AI and teledriving. Highlights include:

    At CES, Aurora, Continental and NVIDIA announced a long-term strategic partnership to deploy driverless trucks at scale, powered by the next-generation NVIDIA DRIVE Thor SoC. NVIDIA DRIVE Thor and DriveOS will be integrated into the Aurora Driver, an SAE level 4 autonomous driving system that Continental plans to mass-manufacture in 2027.

    Arm, one of NVIDIA’s key technology partners, is the compute platform of choice for a number of innovations at CES. The Arm Neoverse V3AE CPU, designed to meet the specific safety and performance demands of automotive, is integrated with DRIVE Thor. This marks the first implementation of Arm’s next-generation automotive CPU, which combines Arm v9-based technologies with data-center-class single-thread performance, alongside essential safety and security features.

    Tried and True — DRIVE Orin Mainstream Adoption Continues

    NVIDIA DRIVE AGX Orin, the predecessor of DRIVE Thor, continues to be a production-proven advanced driver-assistance system computer widely used in cars today — delivering 254 trillion operations per second of accelerated compute to process sensor data for safe, real-time driving decisions.

    Toyota, the world’s largest automaker, will build its next-generation vehicles on the high-performance, automotive-grade NVIDIA DRIVE Orin SoC, running the safety-certified NVIDIA DriveOS. These vehicles will offer functionally safe advanced driving-assistance capabilities.

    At the NVIDIA showcase on the fourth floor of the Fontainebleau, Volvo Cars’ software-defined EX90 and Nuro’s autonomous driving technology — the Nuro Driver platform — will be on display, built on NVIDIA DRIVE AGX.

    Other vehicles powered by NVIDIA DRIVE Orin on display during CES include:

    • Zeekr Mix and Zeekr 001, which feature DRIVE Orin, will be on display, along with the debut of Zeekr’s self-developed ultra-high-performance intelligent driving domain controller built on DRIVE Thor and the NVIDIA Blackwell architecture (LVCC West Hall, booth 5640)
    • Lotus Eletre Carbon (LVCC West Hall, booth 4266 with P3 and 3SS and booth 3500 with HERE)
    • Rivian R1S and Polestar 3 activated with Dolby — vehicles on display and demos available by appointment (Park MGM/NoMad Hotel next to Dolby Live)
    • Lucid Air (LVCC West Hall booth 4964 with SoundHound AI)
    [Images: Zeekr MIX and Rivian R1S]

    NVIDIA’s partners will also showcase their automotive solutions built on NVIDIA technologies, including:

    • Arbe: Delivering next-generation, ultra-high-definition radar technology, integrating with NVIDIA DRIVE AGX to revolutionize radar-based free-space mapping with cutting-edge AI capabilities. The integration empowers manufacturers to incorporate radar data effortlessly into their perception systems, enhancing safety applications and autonomous driving. (LVCC, West Hall 7406, Diamond Lot 323)
    • Cerence: Collaborating with NVIDIA to enhance its CaLLM family of language models, including the cloud-based Cerence Automotive Large Language Model, or CaLLM, powered by DRIVE Orin.
    • Foretellix: Integrating NVIDIA Omniverse Sensor RTX APIs into its Foretify AV test management platform, enhancing object-level simulation with physically accurate sensor simulations.
    • Imagry: Building AI-driven, HD-mapless autonomous driving solutions, accelerated by NVIDIA technology, that are designed for both self-driving passenger vehicles and urban buses. (LVCC, West Hall, 5976)
    • Lenovo Vehicle Computing: Previewing (by appointment) its Lenovo AD1, a powerful automotive-grade domain controller built on the NVIDIA DRIVE Thor platform, and tailored for SAE level 4 autonomous driving.
    • Provizio: Showcasing Provizio’s 5D perception Imaging Radar, accelerated by NVIDIA technology, that delivers unprecedented, scalable, on-the-edge radar perception capabilities, with on-vehicle demonstration rides at CES.
    • Quanta: Demonstrating (by appointment) in-house NVIDIA DRIVE AGX Hyperion cameras running on its electronic control unit powered by DRIVE Orin.
    • SoundHound AI: Showcasing its work with NVIDIA to bring voice generative AI directly to the edge, bringing the intelligence of cloud-based LLMs directly to vehicles. (LVCC, West Hall, 4964)
    • Vay: Offering innovative door-to-door mobility services by combining Vay’s remote driving capabilities with NVIDIA DRIVE advanced AI and computing power.
    • Zoox: Showcasing its latest robotaxi, which leverages NVIDIA technology, driving autonomously on the streets of Las Vegas and parked in the Zoox booth. (LVCC, West Hall 3316).

    Safety Is the Way for Autonomous Innovation 

    At CES, NVIDIA also announced that its DRIVE AGX Hyperion platform has achieved safety certifications from TÜV SÜD and TÜV Rheinland, setting new standards for autonomous vehicle safety and innovation.

    To enhance safety measures, NVIDIA also launched the DRIVE AI Systems Inspection Lab, designed to help partners meet rigorous autonomous vehicle safety and cybersecurity requirements.

    In addition, complementing its three computers designed to accelerate AV development — NVIDIA AGX, NVIDIA Omniverse running on OVX and NVIDIA DGX — NVIDIA has introduced the NVIDIA Cosmos platform. Cosmos’ world foundation models and advanced data processing pipelines can dramatically scale generated data and speed up physical AI system development. With the platform’s data flywheel capability, developers can effectively transform thousands of real-world driven miles into billions of virtual miles.

    Transportation leaders using Cosmos to build physical AI for AVs include Foretellix, Uber, Waabi and Wayve.

    Learn more about NVIDIA’s latest automotive news by watching NVIDIA founder and CEO Jensen Huang’s opening keynote at CES.

    See notice regarding software product information.

    Categories: Driving
    Tags: Artificial Intelligence | CES 2025 | Customer Stories | NVIDIA DRIVE | Transportation

    English News

    PC Gaming in the Cloud Goes Everywhere With New Devices and AAA Games on GeForce NOW (January 7, 2025 at 2:30 am)

    By Andrew Fear | January 7, 2025


    GeForce NOW turns any device into a GeForce RTX gaming PC, and is bringing cloud gaming and AAA titles to more devices and regions.

    Announced today at the CES trade show, gamers will soon be able to play titles from their Steam library at GeForce RTX quality with the launch of a native GeForce NOW app for the Steam Deck. NVIDIA is working to bring cloud gaming to the popular PC gaming handheld device later this year.

    In collaboration with Apple, Meta and ByteDance, NVIDIA is expanding GeForce NOW cloud gaming to Apple Vision Pro spatial computers, Meta Quest 3 and 3S and Pico virtual- and mixed-reality devices — with all the bells and whistles of NVIDIA technologies, including ray tracing and NVIDIA DLSS.

    In addition, NVIDIA is launching the first GeForce RTX-powered data center in India, making gaming more accessible around the world.

    Plus, GeForce NOW’s extensive library of over 2,100 supported titles is expanding with highly anticipated AAA titles. DOOM: The Dark Ages and Avowed will join the cloud when they launch on PC this year.

    RTX on Deck

    The Steam Deck’s portability paired with GeForce NOW opens up new possibilities for high-fidelity gaming everywhere. The native GeForce NOW app will offer up to 4K resolution and 60 frames per second with high dynamic range on Valve’s innovative Steam Deck handheld when connected to a TV, streaming from GeForce RTX-powered gaming rigs in the cloud.

    Last year, GeForce NOW rolled out a beta installation method that was eagerly welcomed by the gaming community. Later this year, members will be able to download the native GeForce NOW app and install it on Steam Deck.

    Steam Deck gamers can gain access to all the same benefits as GeForce RTX 4080 GPU owners with a GeForce NOW Ultimate membership, including NVIDIA DLSS 3 technology for the highest frame rates and NVIDIA Reflex for ultra-low latency. Because GeForce NOW streams from an RTX gaming rig in the cloud, the Steam Deck uses less processing power, which extends battery life compared with playing locally.

    The streaming experience with GeForce NOW looks stunning, whichever way Steam Deck users want to play — whether that’s in handheld mode for HDR-quality graphics, connected to a monitor for up to 1440p 120 fps HDR or hooked up to a TV for big-screen streaming at up to 4K 60 HDR. GeForce NOW members can take advantage of RTX ON with the Steam Deck for photorealistic gameplay on supported titles, as well as HDR10 and SDR10 when connected to a compatible display for richer, more accurate color gradients.

    Get ready for major upgrades to streaming on the go when the GeForce NOW app launches on the Steam Deck later this year.

    Stream Beyond Reality

    Get immersed in a new dimension of big-screen gaming as GeForce NOW brings AAA titles to life on Apple Vision Pro spatial computers, Meta Quest 3 and 3S and Pico virtual- and mixed-reality headsets. When the newest app update, version 2.0.70, starts rolling out later this month, these supported devices will give members access to an extensive library of games to stream through GeForce NOW by opening a browser to play.geforcenow.com.

    [Image: Jump into a whole new gaming dimension with GeForce NOW.]

    Members can transform the space around them into a personal gaming theater with GeForce NOW. The streaming experience on these devices will support gamepad-compatible titles for members to play their favorite PC games on a massive virtual screen.

    For an even more enhanced visual experience, GeForce NOW Ultimate and Performance members using these devices can tap into RTX and DLSS technologies in supported games. Members will be able to step into a world where games come to life on a grand scale, powered by GeForce NOW technologies.

    Land of a Thousand Lights … and Games

    [Image: New year, new data center.]

    NVIDIA is broadening cloud gaming in India and Latin America. The first GeForce RTX 4080-powered data center in India will launch in the first half of this year. This follows last year’s launch of GeForce NOW in Japan, as well as in Colombia and Chile, where the service is operated by GeForce NOW Alliance partner Digevo.

    GeForce RTX-powered gaming in the rapidly growing Indian gaming market will provide the ability to stream AAA games without the latest hardware. Gamers in the region can look forward to the launch of Ultimate memberships, along with all the new games and technological advancements announced at CES.

    Send in the Games

    AAA content from celebrated publishers is coming to the cloud. Avowed from Obsidian Entertainment, known for iconic titles such as Fallout: New Vegas, will join GeForce NOW. The cloud gaming platform will also bring DOOM: The Dark Ages from id Software, the legendary studio behind the DOOM franchise. All will be available at launch on PC this year.

    [Image: Get ready to jump into the Living Lands.]

    Avowed, a first-person fantasy role-playing game, will join the cloud when it launches on PC on Tuesday, Feb. 18. Welcome to the Living Lands, an island full of mysteries and secrets, danger and adventure, choices and consequences and untamed wilderness. Take on the role of an Aedyr Empire envoy tasked with investigating a mysterious plague. Freely combine weapons and magic — harness dual-wield wands, pair a sword with a pistol or opt for a more traditional sword-and-shield approach. In-game companions — which join the players’ parties — have unique abilities and storylines that can be influenced by gamers’ choices.

    [Image: Have a hell of a time in the cloud.]

    DOOM: The Dark Ages is the single-player, action first-person shooter prequel to the critically acclaimed DOOM (2016) and DOOM Eternal. Play as the DOOM Slayer, the legendary demon-killing warrior fighting endlessly against Hell. Experience the epic cinematic origin story of the DOOM Slayer’s rage this year.

    Get ready to play these titles and more at high performance when they join GeForce NOW at launch. Ultimate members will be able to stream at up to 4K resolution and 120 fps with support for NVIDIA DLSS and Reflex technology, and experience the action even on low-powered devices. Keep an eye out on GFN Thursdays for the latest on their release dates in the cloud.

    GeForce NOW is making popular devices cloud-gaming-ready while consistently delivering quality titles from top publishers to bring another ultimate year of gaming to members across the globe.

    See notice regarding software product information.

    Categories: Gaming
    Tags: Cloud Gaming | GeForce NOW

    English News

    NVIDIA Makes Cosmos World Foundation Models Openly Available to Physical AI Developer Community (January 7, 2025 at 2:30 am)

    By Ming-Yu Liu | January 7, 2025


    NVIDIA Cosmos, a platform for accelerating physical AI development, introduces a family of world foundation models — neural networks that can predict and generate physics-aware videos of the future state of a virtual environment — to help developers build next-generation robots and autonomous vehicles (AVs).

    World foundation models, or WFMs, are as fundamental as large language models. They use input data, including text, image, video and movement, to generate and simulate virtual worlds in a way that accurately models the spatial relationships of objects in the scene and their physical interactions.

    Announced today at CES, NVIDIA is making available the first wave of Cosmos WFMs for physics-based simulation and synthetic data generation — plus state-of-the-art tokenizers, guardrails, an accelerated data processing and curation pipeline, and a framework for model customization and optimization.

    Researchers and developers, regardless of their company size, can freely use the Cosmos models under NVIDIA’s permissive open model license that allows commercial usage. Enterprises building AI agents can also use new open NVIDIA Llama Nemotron and Cosmos Nemotron models, unveiled at CES.

    The openness of Cosmos’ state-of-the-art models unblocks physical AI developers building robotics and AV technology and enables enterprises of all sizes to more quickly bring their physical AI applications to market. Developers can use Cosmos models directly to generate physics-based synthetic data, or they can harness the NVIDIA NeMo framework to fine-tune the models with their own videos for specific physical AI setups.

    Physical AI leaders — including robotics companies 1X, Agility Robotics and XPENG, and AV developers Uber and Waabi — are already working with Cosmos to accelerate and enhance model development.

    Developers can preview the first Cosmos autoregressive and diffusion models on the NVIDIA API catalog, and download the family of models and fine-tuning framework from the NVIDIA NGC catalog and Hugging Face.

    World Foundation Models for Physical AI

    Cosmos world foundation models are a suite of open diffusion and autoregressive transformer models for physics-aware video generation. The models have been trained on 9,000 trillion tokens from 20 million hours of real-world human interactions, environment, industrial, robotics and driving data.

    The models come in three categories: Nano, for models optimized for real-time, low-latency inference and edge deployment; Super, for highly performant baseline models; and Ultra, for maximum quality and fidelity, best used for distilling custom models.

    When paired with NVIDIA Omniverse 3D outputs, the diffusion models generate controllable, high-quality synthetic video data to bootstrap training of robotic and AV perception models. The autoregressive models predict what should come next in a sequence of video frames based on input frames and text. This enables real-time next-token prediction, giving physical AI models the foresight to predict their next best action.
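    The autoregressive setup described above can be sketched as a simple rollout loop. In the sketch below, `predict_next_token` is a hypothetical stand-in for the learned transformer (the function name and token scheme are illustrative, not the Cosmos API):

```python
# Sketch of autoregressive next-token prediction over a token sequence.
# predict_next_token is a toy, deterministic stand-in for a trained model;
# a real world foundation model maps (past video tokens, text) -> next token.
def predict_next_token(context):
    # Hypothetical rule, for illustration only.
    return (sum(context) + 1) % 65536

def rollout(prompt_tokens, n_new):
    """Extend a token sequence one predicted token at a time."""
    tokens = list(prompt_tokens)
    for _ in range(n_new):
        tokens.append(predict_next_token(tokens))
    return tokens

print(rollout([1, 2, 3], 2))  # [1, 2, 3, 7, 14]
```

    In a real deployment the predicted tokens would be decoded back into video frames, giving the model a rolling forecast of the scene.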

    Developers can use Cosmos’ open models for text-to-world and video-to-world generation. Versions of the diffusion and autoregressive models, with between 4 and 14 billion parameters each, are available now on the NGC catalog and Hugging Face.

    Also available are a 12-billion-parameter upsampling model for refining text prompts, a 7-billion-parameter video decoder optimized for augmented reality, and guardrail models to ensure responsible, safe use.

    To demonstrate opportunities for customization, NVIDIA is also releasing fine-tuned model samples for vertical applications, such as generating multisensor views for AVs.

    Advancing Robotics, Autonomous Vehicle Applications

    Cosmos world foundation models can enable synthetic data generation to augment training datasets, simulation to test and debug physical AI models before they’re deployed in the real world, and reinforcement learning in virtual environments to accelerate AI agent learning.

    Developers can generate massive amounts of controllable, physics-based synthetic data by conditioning Cosmos with composed 3D scenes from NVIDIA Omniverse.

    Waabi, a company pioneering generative AI for the physical world, starting with autonomous vehicles, is evaluating the use of Cosmos for the search and curation of video data for AV software development and simulation. This will further accelerate the company’s industry-leading approach to safety, which is based on Waabi World, a generative AI simulator that can create any situation a vehicle might encounter with the same level of realism as if it happened in the real world.

    In robotics, WFMs can generate synthetic virtual environments or worlds to provide a less expensive, more efficient and controlled space for robot learning. Embodied AI startup Hillbot is boosting its data pipeline by using Cosmos to generate terabytes of high-fidelity 3D environments. This AI-generated data will help the company refine its robotic training and operations, enabling faster, more efficient robotic skilling and improved performance for industrial and domestic tasks.

    In both industries, developers can use NVIDIA Omniverse and Cosmos as a multiverse simulation engine, allowing a physical AI policy model to simulate every possible future path it could take to execute a particular task — which in turn helps the model select the best of these paths.

    Data curation and the training of Cosmos models relied on thousands of NVIDIA GPUs through NVIDIA DGX Cloud, a high-performance, fully managed AI platform that provides accelerated computing clusters in every leading cloud.

    Developers adopting Cosmos can use DGX Cloud to deploy Cosmos models easily, with further support available through the NVIDIA AI Enterprise software platform.

    Customize and Deploy With NVIDIA Cosmos

    In addition to foundation models, the Cosmos platform includes a data processing and curation pipeline powered by NVIDIA NeMo Curator and optimized for NVIDIA data center GPUs.

    Robotics and AV developers collect millions or billions of hours of real-world recorded video, resulting in petabytes of data. Cosmos enables developers to process 20 million hours of data in just 40 days on NVIDIA Hopper GPUs, or as little as 14 days on NVIDIA Blackwell GPUs. Using unoptimized pipelines running on a CPU system with equivalent power consumption, processing the same amount of data would take over three years.
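A quick back-of-the-envelope check of these figures, treating “over three years” as a 3-year lower bound since the exact CPU runtime is not stated:

```python
# Sanity-check the quoted throughput: 20 million hours of video in 40 days on
# Hopper GPUs, 14 days on Blackwell, versus "over three years" on CPUs.
# CPU_DAYS is a 3-year lower bound implied by the text, not an exact figure.

HOURS_OF_VIDEO = 20_000_000
HOPPER_DAYS, BLACKWELL_DAYS = 40, 14
CPU_DAYS = 3 * 365  # lower bound for "over three years"

hopper_rate = HOURS_OF_VIDEO / HOPPER_DAYS        # hours of video per day
blackwell_rate = HOURS_OF_VIDEO / BLACKWELL_DAYS

print(round(hopper_rate))                  # 500000 hours/day on Hopper
print(round(CPU_DAYS / HOPPER_DAYS, 1))    # at least ~27.4x faster than CPU
print(round(CPU_DAYS / BLACKWELL_DAYS, 1)) # at least ~78.2x faster than CPU
```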

    The platform also features a suite of powerful video and image tokenizers that can convert videos into tokens at different video compression ratios for training various transformer models.

    The Cosmos tokenizers deliver 8x more total compression than state-of-the-art methods and 12x faster processing speed, which offers superior quality and reduced computational costs in both training and inference. Developers can access these tokenizers, available under NVIDIA’s open model license, via Hugging Face and GitHub.
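To see why compression ratio matters for training cost, here is a back-of-the-envelope token budget. The 8x spatial and 4x temporal factors below are illustrative assumptions chosen to show the arithmetic, not the published Cosmos tokenizer configuration:

```python
# How compression factors translate into transformer sequence length.
# The clip dimensions and the 8x spatial / 4x temporal factors are
# illustrative assumptions, not the actual Cosmos tokenizer settings.

frames, height, width = 121, 704, 1280  # example video clip
s, t = 8, 4                             # assumed spatial / temporal compression

tokens = (height // s) * (width // s) * (frames // t)
print(tokens)  # 422400 tokens for this clip; higher compression shrinks this
```

Doubling either compression factor cuts the sequence length, and hence attention cost, proportionally, which is why tokenizer efficiency directly lowers training and inference expense.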

    Developers using Cosmos can also harness the model training and fine-tuning capabilities of the NVIDIA NeMo framework, which enables high-throughput, GPU-accelerated AI training.

    Developing Safe, Responsible AI Models

    Now available to developers under the NVIDIA Open Model License Agreement, Cosmos was developed in line with NVIDIA’s trustworthy AI principles, which include nondiscrimination, privacy, safety, security and transparency.

    The Cosmos platform includes Cosmos Guardrails, a dedicated suite of models that, among other capabilities, mitigates harmful text and image inputs during preprocessing and screens generated videos during postprocessing for safety. Developers can further enhance these guardrails for their custom applications.
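The two-stage design can be pictured with a minimal sketch. The blocklist and check functions below are toy stand-ins; the actual Cosmos Guardrails are learned safety models, not keyword filters:

```python
# Toy sketch of a pre/post guardrail pipeline: screen text inputs before
# generation, screen generated frames after. BLOCKED_TERMS, pre_guard,
# post_guard and generate_world are illustrative stand-ins, not the
# actual Cosmos Guardrails API.

BLOCKED_TERMS = {"weapon schematic"}  # illustrative keyword pre-filter

def pre_guard(prompt: str) -> bool:
    """Screen harmful text inputs during preprocessing."""
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)

def generate_world(prompt: str) -> list:
    """Stand-in for a world foundation model call; returns tagged frames."""
    return [{"frame": i, "safe": True} for i in range(3)]

def post_guard(frames: list) -> bool:
    """Screen generated frames during postprocessing."""
    return all(f.get("safe", True) for f in frames)

def guarded_generate(prompt: str):
    if not pre_guard(prompt):
        return None                                    # blocked before generation
    frames = generate_world(prompt)
    return frames if post_guard(frames) else None      # blocked after generation

print(guarded_generate("a robot arm stacking crates") is not None)  # True
print(guarded_generate("weapon schematic close-up"))                # None
```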

    Cosmos models on the NVIDIA API catalog also feature an inbuilt watermarking system that enables identification of AI-generated sequences.

    NVIDIA Cosmos was developed by NVIDIA Research. Read the research paper, “Cosmos World Foundation Model Platform for Physical AI,” for more details on model development and benchmarks. Model cards providing additional information are available on Hugging Face.

    Learn more about world foundation models in an AI Podcast episode, airing Jan. 7, that features Ming-Yu Liu, vice president of research at NVIDIA. 

    Get started with NVIDIA Cosmos and join NVIDIA at CES to watch the Cosmos demo and Huang’s keynote.

    See notice regarding software product information.

    Categories: Driving | Generative AI | Robotics | Software
    Tags: Artificial Intelligence | CES 2025 | Cosmos | DGX Cloud | Jetson | NVIDIA DRIVE | NVIDIA NeMo | NVIDIA Research | Omniverse | Robotics | Simulation and Design | Synthetic Data Generation | Transportation

     


    NVIDIA Announces Isaac GR00T Blueprint to Accelerate Humanoid Robotics Development

    By Spencer Huang, January 7, 2025


    Over the next two decades, the market for humanoid robots is expected to reach $38 billion. To address this significant demand, particularly in industrial and manufacturing sectors, NVIDIA is releasing a collection of robot foundation models, data pipelines and simulation frameworks to accelerate next-generation humanoid robot development efforts.

    Announced by NVIDIA founder and CEO Jensen Huang today at the CES trade show, the NVIDIA Isaac GR00T Blueprint for synthetic motion generation helps developers generate exponentially large synthetic motion data to train their humanoids using imitation learning.

    Imitation learning — a subset of robot learning — enables humanoids to acquire new skills by observing and mimicking expert human demonstrations. Collecting these extensive, high-quality datasets in the real world is tedious, time-consuming and often prohibitively expensive. Implementing the Isaac GR00T blueprint for synthetic motion generation allows developers to easily generate exponentially large synthetic datasets from just a small number of human demonstrations.
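The multiplication step can be sketched in miniature. The Gaussian perturbation below is an illustrative assumption standing in for the blueprint’s actual augmentation scheme:

```python
import random

# Minimal sketch of turning a few human demonstrations into a much larger
# synthetic motion dataset. The Gaussian noise perturbation is an
# illustrative assumption, not NVIDIA's actual augmentation algorithm.

def augment(demo, copies, noise=0.01, seed=0):
    """Yield perturbed copies of one demonstration trajectory."""
    rng = random.Random(seed)
    for _ in range(copies):
        yield [p + rng.gauss(0.0, noise) for p in demo]

human_demos = [[0.0, 0.1, 0.25, 0.4]]           # one recorded joint trajectory
synthetic = [traj for demo in human_demos
             for traj in augment(demo, copies=1000)]
print(len(synthetic))  # 1000 synthetic trajectories from 1 demonstration
```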

    Starting with the GR00T-Teleop workflow, users can tap into the Apple Vision Pro to capture human actions in a digital twin. These human actions are mimicked by a robot in simulation and recorded for use as ground truth.

    The GR00T-Mimic workflow then multiplies the captured human demonstration into a larger synthetic motion dataset. Finally, the GR00T-Gen workflow, built on the NVIDIA Omniverse and NVIDIA Cosmos platforms, exponentially expands this dataset through domain randomization and 3D upscaling.

    The dataset can then be used as an input to the robot policy, which teaches robots how to move and interact with their environment effectively and safely in NVIDIA Isaac Lab, an open-source and modular framework for robot learning.

    World Foundation Models Narrow the Sim-to-Real Gap 

    NVIDIA also announced Cosmos at CES, a platform featuring a family of open, pretrained world foundation models purpose-built for generating physics-aware videos and world states for physical AI development. It includes autoregressive and diffusion models in a variety of sizes and input data formats. The models were trained on 18 quadrillion tokens, including 2 million hours of autonomous driving, robotics, drone footage and synthetic data.

    In addition to helping generate large datasets, Cosmos can reduce the simulation-to-real gap by upscaling images from 3D simulation toward real-world fidelity. Combining Omniverse — a developer platform of application programming interfaces and microservices for building 3D applications and services — with Cosmos is critical, because it helps minimize potential hallucinations commonly associated with world models by providing crucial safeguards through its highly controllable, physically accurate simulations.

    An Expanding Ecosystem 

    Collectively, NVIDIA Isaac GR00T, Omniverse and Cosmos are helping physical AI and humanoid innovation take a giant leap forward. Major robotics companies, including Boston Dynamics and Figure, have started adopting Isaac GR00T and demonstrating results with it.

    Humanoid software, hardware and robot manufacturers can apply for early access to NVIDIA’s humanoid robot developer program.

    Watch the CES opening keynote from NVIDIA founder and CEO Jensen Huang, and stay up to date by subscribing to the newsletter and following NVIDIA Robotics on LinkedIn, Instagram, X and Facebook.

    See notice regarding software product information.

    Categories: Robotics
    Tags: Artificial Intelligence | CES 2025 | Cosmos | Digital Twin | Isaac | Omniverse | Robotics | Synthetic Data Generation

     


    NVIDIA Media2 Transforms Content Creation, Streaming and Audience Experiences With AI

    By Richard Kerris, January 7, 2025


    From creating the GPU, RTX real-time ray tracing and neural rendering to now reinventing computing for AI, NVIDIA has for decades been at the forefront of computer graphics — pushing the boundaries of what’s possible in media and entertainment.

    NVIDIA Media2 is the latest AI-powered initiative transforming content creation, streaming and live media experiences.

    Built on technologies like NVIDIA NIM microservices and AI Blueprints — and breakthrough AI applications from startups and software partners — Media2 uses AI to drive the creation of smarter, more tailored and more impactful content that can adapt to individual viewer preferences.

    Amid this rapid creative transformation, companies embracing NVIDIA Media2 can stay on the $3 trillion media and entertainment industry’s cutting edge, reshaping how audiences consume and engage with content.

    NVIDIA Media2 technology stack

    NVIDIA Technologies at the Heart of Media2

    As the media and entertainment industry embraces generative AI and accelerated computing, NVIDIA technologies are transforming how content is created, delivered and experienced.

    NVIDIA Holoscan for Media is a software-defined, AI-enabled platform that allows companies in broadcast, streaming and live sports to run live video pipelines on the same infrastructure as AI. The platform delivers applications from vendors across the industry on NVIDIA-accelerated infrastructure.

    NVIDIA Holoscan for Media

    Delivering the power needed to drive the next wave of data-enhanced intelligent content creation and hyper-personalized media is the NVIDIA Blackwell architecture, built to handle data-center-scale generative AI workflows with up to 25x greater energy efficiency than the NVIDIA Hopper generation. Blackwell integrates six types of chips: GPUs, CPUs, DPUs, NVIDIA NVLink Switch chips, NVIDIA InfiniBand switches and Ethernet switches.

    NVIDIA Blackwell architecture

    Blackwell is supported by NVIDIA AI Enterprise, an end-to-end software platform for production-grade AI. NVIDIA AI Enterprise comprises NVIDIA NIM microservices, AI frameworks, libraries and tools that media companies can deploy on NVIDIA-accelerated clouds, data centers and workstations. The expanding list includes:

    • The Llama 3.1-405B-Instruct NIM microservice, which enables synthetic data generation, distillation and inference for chatbots, coding and domain-specific tasks.
    • The Mistral-NeMo-12B-Instruct NIM microservice, which enables multilingual information retrieval — the ability to search, process and retrieve knowledge across languages. This is key in enhancing an AI model’s outputs with greater accuracy and global relevancy.
    • The NVIDIA Omniverse Blueprint for 3D conditioning for precise visual generative AI, which can help advertisers easily build personalized, on-brand and product-accurate marketing content at scale using real-time rendering and generative AI without affecting a hero product asset.
    • NVIDIA NeMo Retriever embedding and reranking NIM microservices, which can vectorize text documents, transcripts, news articles and other written content. Media companies can use these to expand their generative AI efforts and build accurate, multilingual systems.
    • The NVIDIA Cosmos Nemotron vision language model NIM microservice, which is a multimodal VLM that can understand the meaning and context of text, images and video. With the microservice, media companies can query images and videos with natural language and receive informative responses.
    • The NVIDIA AI Blueprint for video search and summarization (VSS), which integrates VLMs and LLMs and provides cloud-native building blocks to build video analytics, search and summarization applications.
    • The NVIDIA Edify multimodal generative AI architecture, which can generate visual assets — like images, 3D models and HDRi environments — from text or image prompts. It offers advanced editing tools and efficient training for developers. With NVIDIA AI Foundry, service providers can customize Edify models for commercial visual services using NVIDIA NIM microservices.
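To illustrate the embed-then-rerank pattern that the NeMo Retriever microservices above implement, here is a minimal sketch. The bag-of-words “embeddings” and overlap-based “reranker” are toy stand-ins for the actual retrieval models:

```python
# Two-stage retrieval in miniature: an embedding similarity pass shortlists
# candidates, then a finer-grained reranker orders them. Counter-based
# bag-of-words vectors and term overlap are toy stand-ins for real
# embedding and reranking models.
from collections import Counter
from math import sqrt

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

DOCS = [
    "evening news broadcast transcript",
    "media industry market report",
    "sourdough bread recipe",
]

def retrieve(query, k=2):
    q = embed(query)
    # stage 1: embedding similarity shortlists k candidates
    shortlist = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
    # stage 2: a (toy) reranker picks by exact term overlap
    return max(shortlist, key=lambda d: len(set(query.lower().split()) & set(d.split())))

print(retrieve("news broadcast"))  # evening news broadcast transcript
```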

    Partners in the Media2 Ecosystem

    Partners across the industry are adopting NVIDIA technology to reshape the next chapter of storytelling.

    Getty Images and Shutterstock offer intelligent content creation services built with NVIDIA Edify. The AI models have also been optimized and packaged for maximum performance with NVIDIA NIM microservices.

    Bria is a commercial-first visual generative AI platform designed for developers. It’s trained on 100% licensed data and built on responsible AI principles. The platform offers tools for custom pipelines, seamless integration and flexible deployment, ensuring enterprise-grade compliance and scalable, predictable content generation. Optimized with NVIDIA NIM microservices, Bria delivers faster, safer and more scalable production-ready solutions.

    Runway is an AI platform that provides advanced creative tools for artists and filmmakers. The company’s Gen-3 Alpha Turbo model excels in video generation and includes a new Camera Control feature that allows for precise camera movements like pan, tilt and zoom. Runway’s integration of the NVIDIA CV-CUDA open-source library combined with NVIDIA GPUs accelerates preprocessing for high-resolution videos in its segmentation model.

    Wonder Dynamics, an Autodesk company, recently launched the beta version of Wonder Animation, featuring powerful new video-to-3D scene technology that can turn any video sequence into a 3D-animated scene for animated film production. Accelerated by NVIDIA GPU technology, Wonder Animation provides visual effects artists and animators with an easy-to-use, flexible tool that significantly reduces the time, complexity and efforts traditionally associated with 3D animation and visual effects workflows — while allowing the artist to maintain full creative control.

    Comcast’s Sky innovation team is collaborating with NVIDIA on lab testing NVIDIA NIM microservices and partner models for its global platforms. The integration could lead to greater interactivity and accessibility for customers around the world, such as enabling the use of voice commands to request summaries during live sports and access other contextual information.

    Vū, a creative technology company and home to the largest network of virtual studios, is broadening access to the creation of virtual environments and immersive content with NVIDIA-accelerated generative AI technologies.

    Twelve Labs, a member of the NVIDIA Inception program for startups, is developing advanced multimodal foundation models that can understand videos like humans, enabling precise semantic search, content analysis and video-to-text generation. Twelve Labs uses NVIDIA H100 GPUs to significantly improve the models’ inference performance, achieving up to a 7x improvement in requests served per second.

    S4 Capital’s Monks is using cutting-edge AI technologies to enhance live broadcasts with real-time content segmentation and personalized fan experiences. Powered by NVIDIA Holoscan for Media, the company’s solution is integrated with tools like NVIDIA VILA to generate contextual metadata for injection within a time-addressable media store framework — enabling precise, action-based searching within video content.

    Additionally, Monks uses NVIDIA NeMo Curator to help process data to build tailored AI models for sports leagues and IP holders, unlocking new monetization opportunities through licensing. By combining these technologies, broadcasters can seamlessly deliver hyper-relevant content to fans as events unfold, while adapting to the evolving demands of modern audiences.

    Media companies manage vast amounts of video content, which can be challenging and time-consuming to locate, catalog and compile into finished assets. Leading media-focused consultant and system integrator Qvest has developed an AI video discovery engine, built on NIM microservices, that accelerates this process by automating the data capture of video files. This streamlines a user’s ability to both discover and contextualize how videos can fit in their intended story.

    Verizon is transforming global enterprise operations, as well as live media and sports content, by integrating its reliable, secure private 5G network with NVIDIA’s full-stack AI platform, including NVIDIA AI Enterprise and NIM microservices, to deliver the latest AI solutions at the edge.

    Using this solution, streamers, sports leagues and rights holders can enhance fan experiences with greater interactivity and immersion by deploying high-performance 5G connectivity along with generative AI, agentic AI, extended reality and streaming applications that enable personalized content delivery. These technologies also help elevate player performance and viewer engagement by offering real-time data analytics to coaches, players, referees and fans. The solution can also power private 5G-based enterprise AI use cases that drive automation and productivity.

    Welcome to NVIDIA Media2

    The NVIDIA Media2 initiative empowers companies to redefine the future of media and entertainment through intelligent, data-driven and immersive technologies — giving them a competitive edge while equipping them to drive innovation across the industry.

    NIM microservices from NVIDIA and model developers are now available to try, with additional models added regularly.

    Get started with NVIDIA NIM and AI Blueprints, and watch the CES opening keynote delivered by NVIDIA founder and CEO Jensen Huang to hear the latest advancements in AI.

    See notice regarding software product information.

    Categories: Generative AI | Pro Graphics
    Tags: 3D | Artificial Intelligence | CES 2025 | Cloud Services | Cosmos | Creators | Holoscan for Media | Inception | Media and Entertainment | NVIDIA AI Enterprise | NVIDIA Blueprints | NVIDIA NeMo | NVIDIA NIM | Omniverse

     


    NVIDIA and Partners Launch Agentic AI Blueprints to Automate Work for Every Enterprise

    By Justin Boitano, January 7, 2025


     

    New NVIDIA AI Blueprints for building agentic AI applications are poised to help enterprises everywhere automate work.

    With the blueprints, developers can now build and deploy custom AI agents. These AI agents act like “knowledge robots” that can reason, plan and take action to quickly analyze large quantities of data, and summarize and distill real-time insights from video, PDFs and images.

    CrewAI, Daily, LangChain, LlamaIndex and Weights & Biases are among leading providers of agentic AI orchestration and management tools that have worked with NVIDIA to build blueprints that integrate the NVIDIA AI Enterprise software platform, including NVIDIA NIM microservices and NVIDIA NeMo, with their platforms. These five blueprints — comprising a new category of partner blueprints for agentic AI — provide the building blocks for developers to create the next wave of AI applications that will transform every industry.

    In addition to the partner blueprints, NVIDIA is introducing its own new AI Blueprint for PDF to podcast, as well as another to build AI agents for video search and summarization. These are joined by four additional NVIDIA Omniverse Blueprints that make it easier for developers to build simulation-ready digital twins for physical AI.

    To help enterprises rapidly take AI agents into production, Accenture is announcing AI Refinery for Industry built with NVIDIA AI Enterprise, including NVIDIA NeMo, NVIDIA NIM microservices and AI Blueprints.

    The AI Refinery for Industry solutions — powered by Accenture AI Refinery with NVIDIA — can help enterprises rapidly launch agentic AI across fields like automotive, technology, manufacturing, consumer goods and more.

    Agentic AI Orchestration Tools Conduct a Symphony of Agents

    Agentic AI represents the next wave in the evolution of generative AI. It enables applications to move beyond simple chatbot interactions to tackle complex, multi-step problems through sophisticated reasoning and planning. As explained in NVIDIA founder and CEO Jensen Huang’s CES keynote, enterprise AI agents will become a centerpiece of AI factories that generate tokens to create unprecedented intelligence and productivity across industries.

    Agentic AI orchestration is a sophisticated system designed to manage, monitor and coordinate multiple AI agents working together — key to developing reliable enterprise agentic AI systems. The agentic AI orchestration layer from NVIDIA partners provides the glue needed for AI agents to effectively work together.
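In miniature, that coordination role looks something like the sketch below, where plain functions stand in for agents and a fixed pipeline stands in for a real orchestration framework:

```python
# Generic sketch of agentic orchestration: a coordinator routes a task
# through several "agents" (plain functions here), passing shared state
# between them. This illustrates the coordination role only; it is not
# any partner framework's actual API.

def research(task):
    return {**task, "notes": f"notes on {task['topic']}"}

def draft(task):
    return {**task, "draft": task["notes"].upper()}

def review(task):
    return {**task, "approved": "NOTES" in task["draft"]}

PIPELINE = [research, draft, review]

def orchestrate(topic):
    state = {"topic": topic}
    for agent in PIPELINE:   # coordinator: run agents in order,
        state = agent(state) # monitoring and threading shared state
    return state

result = orchestrate("edge AI")
print(result["approved"])  # True
```

A production orchestration layer adds what this sketch omits: retries, monitoring, branching between agents and tool calls, which is exactly the glue the partner platforms provide.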

    The new partner blueprints, now available from agentic AI orchestration leaders, offer integrations with NVIDIA AI Enterprise software, including NIM microservices and NVIDIA NeMo Retriever, to boost retrieval accuracy and reduce latency of agent workflows. For example:

    • CrewAI is using new Llama 3.3 70B NVIDIA NIM microservices and the NVIDIA NeMo Retriever embedding NIM microservice for its blueprint for code documentation for software development. The blueprint helps ensure code repositories remain comprehensive and easy to navigate.
    • Daily’s voice agent blueprint, powered by the company’s open-source Pipecat framework, uses the NVIDIA Riva automatic speech recognition and text-to-speech NIM microservice, along with the Llama 3.3 70B NIM microservice to achieve real-time conversational AI.
    • LangChain is adding Llama 3.3 70B NVIDIA NIM microservices to its structured report generation blueprint. Built on LangGraph, the blueprint allows users to define a topic and specify an outline to guide an agent in searching the web for relevant information, so it can return a report in the requested format.
    • LlamaIndex’s document research assistant for blog creation blueprint harnesses NVIDIA NIM microservices and NeMo Retriever to help content creators produce high-quality blogs. It can tap into agentic-driven retrieval-augmented generation with NeMo Retriever to automatically research, outline and generate compelling content with source attribution.
    • Weights & Biases is adding its W&B Weave capability to the AI Blueprint for AI virtual assistants, which features the Llama 3.1 70B NIM microservice. The blueprint streamlines debugging, evaluating, iterating, tracking production performance and collecting human feedback, supporting seamless integration and faster iterations when building and deploying agentic AI applications.

    Summarize Many, Complex PDFs While Keeping Proprietary Data Secure 

    With trillions of PDF files — from financial reports to technical research papers — generated every year, it’s a constant challenge to stay up to date with information.

    NVIDIA’s PDF to podcast AI Blueprint provides a recipe developers can use to turn multiple long and complex PDFs into AI-generated readouts that can help professionals, students and researchers efficiently learn about virtually any topic and quickly understand key takeaways.

    The blueprint — built on NIM microservices and text-to-speech models — allows developers to build applications that extract images, tables and text from PDFs, and convert the data into easily digestible audio content, all while keeping data secure.

    For example, developers can build AI agents that can understand context, identify key points and generate a concise summary as a monologue or a conversation-style podcast, narrated in a natural voice. This offers users an engaging, time-efficient way to absorb information at their desired speed.
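The pipeline shape can be sketched as follows. The extract, summarize and speech functions are stubs standing in for the blueprint’s actual NIM microservices; only the structure is the point:

```python
# Illustrative PDF-to-podcast pipeline shape: extract text from several
# PDFs, summarize across them, then synthesize speech. All three stages
# are stubs standing in for the blueprint's NIM microservices.

def extract_text(pdf_path: str) -> str:
    """Stub for PDF extraction (real pipelines pull text, tables, images)."""
    return f"contents of {pdf_path}"

def summarize(texts: list) -> str:
    """Stub for an LLM call that distills key points across documents."""
    return "Key takeaways: " + "; ".join(texts)

def to_speech(script: str) -> bytes:
    """Stub for a text-to-speech call returning audio bytes."""
    return script.encode("utf-8")

def pdf_to_podcast(paths: list) -> bytes:
    script = summarize([extract_text(p) for p in paths])
    return to_speech(script)

audio = pdf_to_podcast(["q3_report.pdf", "whitepaper.pdf"])
print(audio[:14])  # b'Key takeaways:'
```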

    Test, Prototype and Run Agentic AI Blueprints in One Click

    NVIDIA Blueprints empower the world’s more than 25 million software developers to easily integrate AI into their applications across various industries. These blueprints simplify the process of building and deploying agentic AI applications, making advanced AI integration more accessible than ever.

    With just a single click, developers can now build and run the new agentic AI Blueprints as NVIDIA Launchables. These Launchables provide on-demand access to developer environments with predefined configurations, enabling quick workflow setup.

    By containing all necessary components for development, Launchables support consistent and reproducible setups without the need for manual configuration or overhead — streamlining the entire development process, from prototyping to deployment.

    Enterprises can also deploy blueprints into production with the NVIDIA AI Enterprise software platform on data center platforms including Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro, or run them on accelerated cloud platforms from Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure.

    Accenture and NVIDIA Fast-Track Deployments With AI Refinery for Industry

    Accenture is introducing its new AI Refinery for Industry with 12 new industry agent solutions built with NVIDIA AI Enterprise software and available from the Accenture NVIDIA Business Group. These industry-specific agent solutions include revenue growth management for consumer goods and services, clinical trial companion for life sciences, industrial asset troubleshooting and B2B marketing, among others.

    AI Refinery for Industry offerings include preconfigured components, best practices and foundational elements designed to fast-track the development of AI agents. They provide organizations the tools to build specialized AI networks tailored to their industry needs.

    Accenture plans to launch over 100 AI Refinery for Industry agent solutions by the end of the year.

    Get started with AI Blueprints and join NVIDIA at CES.

    See notice regarding software product information.

    Categories: Generative AI
    Tags: Artificial Intelligence | CES 2025 | NVIDIA AI Enterprise | NVIDIA Blueprints | NVIDIA NeMo | NVIDIA NIM | Riva

     


    NVIDIA Enhances Three Computer Solution for Autonomous Mobility With Cosmos World Foundation Models

    By Mo Poorsartep, January 7, 2025


     

    Autonomous vehicle (AV) development is made possible by three distinct computers: NVIDIA DGX systems for training the AI-based stack in the data center, NVIDIA Omniverse running on NVIDIA OVX systems for simulation and synthetic data generation, and the NVIDIA AGX in-vehicle computer to process real-time sensor data for safety.

    Together, these purpose-built, full-stack systems enable continuous development cycles, speeding improvements in performance and safety.

    At the CES trade show, NVIDIA today announced a new part of the equation: NVIDIA Cosmos, a platform comprising state-of-the-art generative world foundation models (WFMs), advanced tokenizers, guardrails and an accelerated video processing pipeline built to advance the development of physical AI systems such as AVs and robots.

    With Cosmos added to the three-computer solution, developers gain a data flywheel that can turn thousands of human-driven miles into billions of virtually driven miles — amplifying training data quality.

    “The AV data factory flywheel consists of fleet data collection, accurate 4D reconstruction and AI to generate scenes and traffic variations for training and closed-loop evaluation,” said Sanja Fidler, vice president of AI research at NVIDIA. “Using the NVIDIA Omniverse platform, as well as Cosmos and supporting AI models, developers can generate synthetic driving scenarios to amplify training data by orders of magnitude.”

    “Developing physical AI models has traditionally been resource-intensive and costly for developers, requiring acquisition of real-world datasets and filtering, curating and preparing data for training,” said Norm Marks, vice president of automotive at NVIDIA. “Cosmos accelerates this process with generative AI, enabling smarter, faster and more precise AI model development for autonomous vehicles and robotics.”

    Transportation leaders are using Cosmos to build physical AI for AVs, including:

    • Waabi, a company pioneering generative AI for the physical world, will use Cosmos for the search and curation of video data for AV software development and simulation.
    • Wayve, which is developing AI foundation models for autonomous driving, is evaluating Cosmos as a tool to search for edge and corner case driving scenarios used for safety and validation.
    • AV toolchain provider Foretellix will use Cosmos, alongside NVIDIA Omniverse Sensor RTX APIs, to evaluate and generate high-fidelity testing scenarios and training data at scale.
    • In addition, ridesharing giant Uber is partnering with NVIDIA to accelerate autonomous mobility. Rich driving datasets from Uber, combined with the features of the Cosmos platform and NVIDIA DGX Cloud, will help AV partners build stronger AI models even more efficiently.

    Availability

    Cosmos WFMs are now available under an open model license on Hugging Face and the NVIDIA NGC catalog. Cosmos models will soon be available as fully optimized NVIDIA NIM microservices.

    Get started with Cosmos and join NVIDIA at CES.

    See notice regarding software product information.

    Categories: Driving
    Tags: Artificial Intelligence | CES 2025 | Cosmos | NVIDIA DGX | Omniverse | Transportation

     


    English News

    NVIDIA Announces Nemotron Model Families to Advance Agentic AI (January 7, 2025, 2:30 am)

    By Kari Briski, January 7, 2025


    Artificial intelligence is entering a new era — agentic AI — where teams of specialized agents can help people solve complex problems and automate repetitive tasks.

    With custom AI agents, enterprises across industries can manufacture intelligence and achieve unprecedented productivity. These advanced AI agents require a system of multiple generative AI models optimized for agentic AI functions and capabilities. This complexity means that the need for powerful, efficient, enterprise-grade models has never been greater.

    To provide a foundation for enterprise agentic AI, NVIDIA today announced the Llama Nemotron family of open large language models (LLMs). Built with Llama, the models can help developers create and deploy AI agents across a range of applications — including customer support, fraud detection, and product supply chain and inventory management optimization.

    To be effective, many AI agents need both language skills and the ability to perceive the world and respond with the appropriate action.

    With new NVIDIA Cosmos Nemotron vision language models (VLMs) and NVIDIA NIM microservices for video search and summarization, developers can build agents that analyze and respond to images and video from autonomous machines, hospitals, stores and warehouses, as well as sports events, movies and news. For developers seeking to generate physics-aware videos for robotics and autonomous vehicles, NVIDIA today separately announced NVIDIA Cosmos world foundation models.

    Open Llama Nemotron Models Optimize Compute Efficiency, Accuracy for AI Agents

    Built with Llama foundation models — one of the most popular commercially viable open-source model collections, downloaded over 650 million times — NVIDIA Llama Nemotron models provide optimized building blocks for AI agent development. This builds on NVIDIA’s commitment to developing state-of-the-art models, such as Llama 3.1 Nemotron 70B, now available through the NVIDIA API catalog.

    Llama Nemotron models are pruned and trained with NVIDIA’s latest techniques and high-quality datasets for enhanced agentic capabilities. They excel at instruction following, chat, function calling, coding and math, while being size-optimized to run on a broad range of NVIDIA accelerated computing resources.

    “Agentic AI is the next frontier of AI development, and delivering on this opportunity requires full-stack optimization across a system of LLMs to deliver efficient, accurate AI agents,” said Ahmad Al-Dahle, vice president and head of GenAI at Meta. “Through our collaboration with NVIDIA and our shared commitment to open models, the NVIDIA Llama Nemotron family built on Llama can help enterprises quickly create their own custom AI agents.”

    Leading AI agent platform providers including SAP and ServiceNow are expected to be among the first to use the new Llama Nemotron models.

    “AI agents that collaborate to solve complex tasks across multiple lines of the business will unlock a whole new level of enterprise productivity beyond today’s generative AI scenarios,” said Philipp Herzig, chief AI officer at SAP. “Through SAP’s Joule, hundreds of millions of enterprise users will interact with these agents to accomplish their goals faster than ever before. NVIDIA’s new open Llama Nemotron model family will foster the development of multiple specialized AI agents to transform business processes.”

    “AI agents make it possible for organizations to achieve more with less effort, setting new standards for business transformation,” said Jeremy Barnes, vice president of platform AI at ServiceNow. “The improved performance and accuracy of NVIDIA’s open Llama Nemotron models can help build advanced AI agent services that solve complex problems across functions, in any industry.”

    The NVIDIA Llama Nemotron models use NVIDIA NeMo for distilling, pruning and alignment. Using these techniques, the models are small enough to run on a variety of computing platforms while providing high accuracy as well as increased model throughput.

    The Llama Nemotron model family will be available as downloadable models and as NVIDIA NIM microservices that can be easily deployed on clouds, data centers, PCs and workstations. They offer enterprises industry-leading performance with reliable, secure and seamless integration into their agentic AI application workflows.

    Customize and Connect to Business Knowledge With NVIDIA NeMo

    The Llama Nemotron and Cosmos Nemotron model families are coming in Nano, Super and Ultra sizes to provide options for deploying AI agents at every scale.

    • Nano: The most cost-effective model optimized for real-time applications with low latency, ideal for deployment on PCs and edge devices.
    • Super: A high-accuracy model offering exceptional throughput on a single GPU.
    • Ultra: The highest-accuracy model, designed for data-center-scale applications demanding the highest performance.

    Enterprises can also customize the models for their specific use cases and domains with NVIDIA NeMo microservices to simplify data curation, accelerate model customization and evaluation, and apply guardrails to keep responses on track.

    With NVIDIA NeMo Retriever, developers can also integrate retrieval-augmented generation capabilities to connect models to their enterprise data.
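    Retrieval-augmented generation itself follows a simple pattern: embed enterprise documents, retrieve those closest to a query, and prepend them to the model prompt. The sketch below illustrates that pattern with a toy word-overlap score; it is not NeMo Retriever's actual API, and the document set and helper names are invented for illustration.

```python
# Toy retrieval-augmented generation (RAG) pipeline. A real deployment uses
# learned embeddings and a vector database; a bag-of-words overlap score
# stands in here purely to show the shape of the pipeline.
import re
from collections import Counter

docs = [
    "Return policy: items may be returned within 30 days.",
    "Shipping: orders over $50 ship free within the US.",
    "Support hours: weekdays 9am to 5pm Pacific.",
]

def tokens(text: str) -> Counter:
    """Lowercased word counts, punctuation stripped."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, doc: str) -> int:
    """Word overlap between query and document (toy similarity)."""
    return sum((tokens(query) & tokens(doc)).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the model answers from enterprise data."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the return policy?"))
```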

    And using NVIDIA Blueprints for agentic AI, enterprises can quickly create their own applications using NVIDIA’s advanced AI tools and end-to-end development expertise. In fact, NVIDIA Cosmos Nemotron, NVIDIA Llama Nemotron and NeMo Retriever supercharge the new NVIDIA Blueprint for video search and summarization, announced separately today.

    NeMo, NeMo Retriever and NVIDIA Blueprints are all available with the NVIDIA AI Enterprise software platform.

    Availability

    Llama Nemotron and Cosmos Nemotron models will be available soon as hosted application programming interfaces and for download on build.nvidia.com and Hugging Face. Access for development, testing and research is free for members of the NVIDIA Developer Program.

    Enterprises can run Llama Nemotron and Cosmos Nemotron NIM microservices in production with the NVIDIA AI Enterprise software platform on accelerated data center and cloud infrastructure.

    Sign up to get notified about Llama Nemotron and Cosmos Nemotron models, and join NVIDIA at CES.

    See notice regarding software product information.

    Categories: Generative AI
    Tags: Artificial Intelligence | CES 2025 | Cosmos | NVIDIA Blueprints | NVIDIA NIM

     


    English News

    New GeForce RTX 50 Series GPUs Double Creative Performance in 3D, Video and Generative AI (January 7, 2025, 2:30 am)

    By Gerardo Delgado, January 7, 2025


    GeForce RTX 50 Series Desktop and Laptop GPUs, unveiled today at the CES trade show, are poised to power the next era of generative and agentic AI content creation — offering new tools and capabilities for video, livestreaming, 3D and more.

    Built on the NVIDIA Blackwell architecture, GeForce RTX 50 Series GPUs can run creative generative AI models up to 2x faster in a smaller memory footprint, compared with the previous generation. They feature ninth-generation NVIDIA encoders for advanced video editing and livestreaming, and come with NVIDIA DLSS 4 and up to 32GB of VRAM to tackle massive 3D projects.

    These GPUs come with various software updates, including two new AI-powered NVIDIA Broadcast effects, updates to RTX Video and RTX Remix, and NVIDIA NIM microservices — prepackaged and optimized models built to jumpstart AI content creation workflows on RTX AI PCs.

    Built for the Generative AI Era

    Generative AI can create sensational results for creators, but with models growing in both complexity and scale, generative AI can be difficult to run even on the latest hardware.

    The GeForce RTX 50 Series adds FP4 support to help address this issue. FP4 is a lower-precision quantization method, similar in effect to file compression, that decreases model sizes. Compared with FP16 — the default precision most models use — FP4 requires less than half the memory, and 50 Series GPUs provide over 2x the performance of the previous generation. Advanced quantization methods offered by NVIDIA TensorRT Model Optimizer make this possible with virtually no loss in quality.

    For example, Black Forest Labs’ FLUX.1 [dev] model at FP16 requires over 23GB of VRAM, meaning it can only be supported by the GeForce RTX 4090 and professional GPUs. With FP4, FLUX.1 [dev] requires less than 10GB, so it can run locally on more GeForce RTX GPUs.

    With a GeForce RTX 4090 with FP16, the FLUX.1 [dev] model can generate images in 15 seconds with 30 steps. With a GeForce RTX 5090 with FP4, images can be generated in just over five seconds.
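    The memory figures above follow from simple per-parameter arithmetic. The sketch below assumes FLUX.1 [dev] has roughly 12 billion parameters (an assumption used only for illustration; real usage adds activation and overhead memory on top of the weights):

```python
# Rough weight-memory estimate for a model at different precisions.
# Assumes ~12 billion parameters for FLUX.1 [dev]; weights alone at FP16
# land near 24 GB, consistent with the "over 23GB of VRAM" figure above.

def weight_memory_gb(num_params: float, bits_per_param: float) -> float:
    """Memory needed to hold the weights alone, in gigabytes."""
    return num_params * bits_per_param / 8 / 1e9

params = 12e9  # ~12B parameters (assumption)

fp16 = weight_memory_gb(params, 16)  # 24.0 GB of weights
fp4 = weight_memory_gb(params, 4)    # 6.0 GB of weights, under the 10GB figure

print(f"FP16: {fp16:.1f} GB, FP4: {fp4:.1f} GB")
```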

    A new NVIDIA AI Blueprint for 3D-guided generative AI based on FLUX.1 [dev], which will be offered as an NVIDIA NIM microservice, offers artists greater control over text-based image generation. With this blueprint, creators can use simple 3D objects — created by hand or generated with AI — and lay them out in a 3D renderer like Blender to guide AI image generation.

    A prepackaged workflow powered by the FLUX NIM microservice and ComfyUI can then generate high-quality images that match the 3D scene’s composition.

    The NVIDIA Blueprint for 3D-guided generative AI is expected to be available through GitHub using a one-click installer in February.

    Stability AI announced that its Stable Point Aware 3D, or SPAR3D, model will be available this month on RTX AI PCs. Thanks to RTX acceleration, the new model from Stability AI will help transform 3D design, delivering exceptional control over 3D content creation by enabling real-time editing and the ability to generate an object in less than a second from a single image.

    Professional-Grade Video for All

    GeForce RTX 50 Series GPUs deliver a generational leap in NVIDIA encoders and decoders with support for the 4:2:2 pro-grade color format, multiview-HEVC (MV-HEVC) for 3D and virtual reality (VR) video, and the new AV1 Ultra High Quality mode.

    Most consumer cameras are confined to 4:2:0 color compression, which reduces the amount of color information. 4:2:0 is typically sufficient for video playback on browsers, but it can’t provide the color depth needed for advanced video editors to color grade videos. The 4:2:2 format provides double the color information with just a 1.3x increase in RAW file size — offering an ideal balance for video editing workflows.
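    Both figures fall out of counting samples per 2x2 pixel block: every scheme keeps all four luma samples, while 4:2:0 retains one chroma pair per block and 4:2:2 retains two. A quick check:

```python
# Samples per 2x2 pixel block under common chroma subsampling schemes.
# All schemes keep the 4 luma (Y) samples; they differ in how many
# chroma pairs (one Cb + one Cr) survive per block.

def samples_per_block(chroma_pairs: int) -> int:
    """4 luma samples plus 2 samples (Cb, Cr) per retained chroma pair."""
    return 4 + 2 * chroma_pairs

s420 = samples_per_block(1)  # 4:2:0 -> 6 samples per block
s422 = samples_per_block(2)  # 4:2:2 -> 8 samples per block

print((s422 - 4) / (s420 - 4))  # 2.0: double the color information
print(round(s422 / s420, 2))    # 1.33: ~1.3x the uncompressed data per block
```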

    Decoding 4:2:2 video can be challenging due to the increased file sizes. GeForce RTX 50 Series GPUs include 4:2:2 hardware support that can decode up to eight 4K 60 frames per second (fps) video sources per decoder, enabling smooth multi-camera video editing.

    The GeForce RTX 5090 GPU is equipped with three encoders and two decoders, the GeForce RTX 5080 GPU includes two encoders and two decoders, the GeForce RTX 5070 Ti GPU has two encoders and a single decoder, and the GeForce RTX 5070 GPU includes a single encoder and decoder. These multi-encoder and -decoder setups, paired with faster GPUs, enable the GeForce RTX 5090 to export video 60% faster than the GeForce RTX 4090 and 4x faster than the GeForce RTX 3090.

    GeForce RTX 50 Series GPUs also feature the ninth-generation NVIDIA video encoder, NVENC, which offers a 5% improvement in video quality on HEVC and AV1 encoding (BD-BR), as well as a new AV1 Ultra Quality mode that achieves 5% more compression at the same quality. They also include the sixth-generation NVIDIA decoder, with 2x the decode speed for H.264 video.

    NVIDIA is collaborating with Adobe Premiere Pro, Blackmagic Design’s DaVinci Resolve, CapCut and Wondershare Filmora to integrate these technologies, starting in February.

    3D video is starting to catch on thanks to the growth of VR, AR and mixed reality headsets. The new RTX 50 Series GPUs also come with support for MV-HEVC codecs to unlock such formats in the near future.

    Livestreaming Enhanced

    Livestreaming is a juggling act, where the streamer has to entertain the audience, produce a show and play a video game — all at the same time. Top streamers can afford to hire producers and moderators to share the workload, but most have to manage these responsibilities on their own and often in long shifts — until now.

    Streamlabs, a Logitech brand and leading provider of broadcasting software and tools for content creators, is collaborating with NVIDIA and Inworld AI to create the Streamlabs Intelligent Streaming Assistant.

    Streamlabs Intelligent Streaming Assistant is an AI agent that can act as a sidekick, producer and technical support. As a sidekick, it can join streams as a 3D avatar to answer questions, comment on gameplay or chats, or help initiate conversations during quiet periods. It can help produce streams, switching to the most relevant scenes and playing audio and video cues during interesting gameplay moments. It can even serve as an IT assistant that helps configure streams and troubleshoot issues.

    Streamlabs Intelligent Streaming Assistant is powered by NVIDIA ACE technologies for creating digital humans and Inworld AI, an AI framework for agentic AI experiences. The assistant will be available later this year.

    Millions have used the NVIDIA Broadcast app to turn offices and dorm rooms into home studios using AI-powered features that improve audio and video quality — without needing expensive, specialized equipment.

    Two new AI-powered beta effects are being added to the NVIDIA Broadcast app.

    The first, Studio Voice, enhances the sound of a user’s microphone to match that of a high-quality microphone. The other, Virtual Key Light, can relight a subject’s face to deliver even coverage as if it were well-lit by two lights.

    Because they harness demanding AI models, these beta features are recommended for video conferencing or non-gaming livestreams using a GeForce RTX 5080 GPU or higher. NVIDIA is working to expand these features to more GeForce RTX GPUs in future updates.

    The NVIDIA Broadcast upgrade also includes an updated user interface that allows users to apply more effects simultaneously, as well as improvements to the background noise removal, virtual background and eye contact effects.

    The updated NVIDIA Broadcast app will be available in February.

    Livestreamers can also benefit from NVENC — 5% BD-BR video quality improvement for HEVC and AV1 — in the latest beta of Twitch’s Enhanced Broadcast feature in OBS, and the improved AV1 encoder for streaming in Discord or YouTube.

    RTX Video — an AI feature that enhances video playback on popular internet browsers like Google Chrome and Microsoft Edge, and locally with Video Super Resolution and HDR — is getting an update to decrease GPU usage by 30%, expanding the lineup of GeForce RTX GPUs that can run Video Super Resolution with higher quality.

    The RTX Video update is slated for a future NVIDIA App release.

    Unprecedented 3D Render Performance

    The GeForce RTX 5090 GPU offers 32GB of GPU memory — the largest of any GeForce RTX GPU ever, marking a 33% increase over the GeForce RTX 4090 GPU. This lets 3D artists build larger, richer worlds while using multiple applications simultaneously. Plus, new RTX 50 Series fourth-generation RT Cores can run 3D applications 40% faster.

    DLSS 4 debuts Multi Frame Generation to boost frame rates by using AI to generate up to three frames per rendered frame. This enables animators to smoothly navigate a scene with 4x as many frames, or render 3D content at 60 fps or more.
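    The 4x figure is simply the rendered frame rate multiplied by one rendered frame plus up to three generated ones; a minimal sketch:

```python
def effective_fps(rendered_fps: float, generated_per_rendered: int) -> float:
    """Displayed frame rate when each rendered frame is followed by
    `generated_per_rendered` AI-generated frames (Multi Frame Generation)."""
    return rendered_fps * (1 + generated_per_rendered)

# A scene rendering at 15 fps reaches 60 fps with three generated
# frames per rendered frame, i.e. 4x as many displayed frames.
print(effective_fps(15, 3))  # 60
```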

    D5 Render and Chaos Vantage, two popular professional-grade 3D apps for architects and designers, will add support for DLSS 4 in February.

    3D artists have adopted generative AI to boost productivity in generating draft 3D meshes, HDRi maps or even animations to prototype a scene. At CES, Stability AI announced SPAR3D, its new 3D model that can generate 3D meshes from images in seconds with RTX acceleration.

    NVIDIA RTX Remix — a modding platform that lets modders capture game assets, automatically enhance materials with generative AI tools and create stunning RTX remasters with full ray tracing — supports DLSS 4, increasing graphical fidelity and frame rates to maximize realism and immersion during gameplay.

    RTX Remix will soon support Neural Radiance Cache, a neural shader that uses AI to train on live game data and estimate per-pixel-accurate indirect lighting. RTX Remix creators can also expect access to RTX Skin in their mods, the first ray-traced subsurface-scattering implementation in games. With RTX Skin, RTX Remix mods can feature characters with new levels of realism, as light will reflect and propagate through their skin, grounding them in the worlds they inhabit.

    GeForce RTX 5090 and 5080 GPUs will be available for purchase starting Jan. 30 — followed by GeForce RTX 5070 Ti and 5070 GPUs in February and RTX 50 Series laptops in March.

    All systems equipped with GeForce RTX GPUs include the NVIDIA Studio platform optimizations, with over 130 GPU-accelerated content creation apps, as well as NVIDIA Studio Drivers, tested extensively and released monthly to enhance performance and maximize stability in popular creative applications.

    Stay tuned for more updates on the GeForce RTX 50 Series. Learn more about how the GeForce RTX 50 Series supercharges gaming, and check out all of NVIDIA’s announcements at CES. 

    Every month brings new creative app updates and optimizations powered by the NVIDIA Studio platform.

    Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

    See notice regarding software product information.

    Categories: Pro Graphics
    Tags: 3D | Art | Artificial Intelligence | Creators | GeForce | In the NVIDIA Studio | NVIDIA RTX | NVIDIA Studio | NVIDIA Studio Driver | Rendering

     


    English News

    Now See This: NVIDIA Launches Blueprint for AI Agents That Can Analyze Video (January 7, 2025, 2:30 am)

    By Adam Scraba, January 7, 2025


    The next big moment in AI is in sight — literally.

    Today, more than 1.5 billion enterprise-level cameras deployed worldwide are generating roughly 7 trillion hours of video per year. Yet only a fraction of it gets analyzed.

    It’s estimated that less than 1% of video from industrial cameras is watched live by humans, meaning critical operational incidents can go largely unnoticed.

    This comes at a high cost. For example, manufacturers are losing trillions of dollars annually to poor product quality or defects that they could’ve spotted earlier, or even predicted, by using AI agents that can perceive, analyze and help humans take action.

    Interactive AI agents with built-in visual perception capabilities can serve as always-on video analysts, helping factories run more efficiently, bolster worker safety, keep traffic running smoothly and even up an athlete’s game.

    To accelerate the creation of such agents, NVIDIA today announced early access to a new version of the NVIDIA AI Blueprint for video search and summarization. Built on top of the NVIDIA Metropolis platform — and now supercharged by NVIDIA Cosmos Nemotron vision language models (VLMs), NVIDIA Llama Nemotron large language models (LLMs) and NVIDIA NeMo Retriever — the blueprint provides developers with the tools to build and deploy AI agents that can analyze large quantities of video and image content.

    The blueprint integrates the NVIDIA AI Enterprise software platform — which includes NVIDIA NIM microservices for VLMs, LLMs and advanced AI frameworks for retrieval-augmented generation — to enable batch video processing that’s 30x faster than watching it in real time.

    The blueprint contains several agentic AI features — such as chain-of-thought reasoning, task planning and tool calling — that can help developers streamline the creation of powerful and diverse visual agents to solve a range of problems.
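    Tool calling, the last of those features, boils down to a loop in which a model plans a sequence of tool invocations and the runtime executes them. The sketch below is a generic illustration of that loop, not the blueprint's actual API; the tools, camera name and hard-coded planner are all invented.

```python
# Minimal tool-calling loop of the kind agentic frameworks automate.
# In the blueprint an LLM/VLM does the planning; here a stub planner
# returns a fixed sequence of (tool, argument) steps for illustration.

def index_footage_tool(camera: str) -> str:
    return f"{camera}: 24h of footage indexed"

def summarize_tool(camera: str) -> str:
    return f"{camera}: summary of flagged incidents generated"

TOOLS = {"index_footage": index_footage_tool, "summarize": summarize_tool}

def plan(task: str) -> list[tuple[str, str]]:
    """Stand-in for the model's task planning: split a task into tool calls."""
    return [("index_footage", "cam-07"), ("summarize", "cam-07")]

def run_agent(task: str) -> list[str]:
    """Execute the plan step by step, collecting each tool's result."""
    return [TOOLS[name](arg) for name, arg in plan(task)]

for line in run_agent("report on camera cam-07"):
    print(line)
```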

    AI agents with video analysis abilities can be combined with other agents with different skill sets to enable even more sophisticated agentic AI services. Enterprises have the flexibility to build and deploy their AI agents from the edge to the cloud.

    How Video Analyst AI Agents Can Help Industrial Businesses 

    AI agents with visual perception and analysis skills can be fine-tuned to help businesses with industrial operations by:

    • Increasing productivity and reducing waste: Agents can help ensure standard operating procedures are followed during complex industrial processes like product assembly. They can also be fine-tuned to carefully watch and understand nuanced actions, and the sequence in which they’re implemented.
    • Boosting asset management efficiency through better space utilization: Agents can help optimize inventory storage in warehouses by performing 3D volume estimation and centralizing understanding across various camera streams.
    • Improving safety through auto-generation of incident reports and summaries: Agents can process huge volumes of video and summarize it into contextually informative reports of accidents. They can also help ensure personal protective equipment compliance in factories, improving worker safety in industrial settings.
    • Preventing accidents and production problems: AI agents can identify atypical activity to quickly mitigate operational and safety risks, whether in a warehouse, factory or airport, or at a traffic intersection or other municipal setting.
    • Learning from the past: Agents can search through operations video archives, find relevant information from the past and use it to solve problems or create new processes.

    Video Analysts for Sports, Entertainment and More

    Another industry where video analysis AI agents stand to make a mark is sports — a $500 billion market worldwide, with hundreds of billions in projected growth over the next several years.

    Coaches, teams and leagues — whether professional or amateur — rely on video analytics to evaluate and enhance player performance, prioritize safety and boost fan engagement through player analytics platforms and data visualization. With visually perceptive AI agents, athletes now have unprecedented access to deeper insights and opportunities for improvement.

    During his CES opening keynote, NVIDIA founder and CEO Jensen Huang demonstrated an AI video analytics agent that assessed the fastball pitching skills of an amateur baseball player compared with a professional’s. Using video captured from the ceremonial first pitch that Huang threw for the San Francisco Giants baseball team, the video analytics AI agent was able to suggest areas for improvement.

    https://blogs.nvidia.com/wp-content/uploads/2025/01/JHH-pitch-metropolis-trim-final.mp4

    The $3 trillion media and entertainment industry is also poised to benefit from video analyst AI agents. Through the NVIDIA Media2 initiative, these agents will help drive the creation of smarter, more tailored and more impactful content that can adapt to individual viewer preferences.

    Worldwide Adoption and Availability 

    Partners from around the world are integrating the blueprint for building AI agents for video analysis into their own developer workflows, including Accenture, Centific, Deloitte, EY, Infosys, Linker Vision, Pegatron, TATA Consultancy Services (TCS), Telit Cinterion and VAST.

    Apply for early access to the NVIDIA Blueprint for video search and summarization.

    See notice regarding software product information.

    Editor’s note: Omdia is the source for 1.5 billion enterprise-level cameras deployed.   

    Categories: Generative AI
    Tags: Artificial Intelligence | CES 2025 | Industrial and Manufacturing | Media and Entertainment | Metropolis | NVIDIA AI Enterprise | NVIDIA Blueprints | NVIDIA NIM

     


    English News

    Building Smarter Autonomous Machines: NVIDIA Announces Early Access for Omniverse Sensor RTX (January 7, 2025, 2:30 am)

    By Katie Washabaugh, January 7, 2025


    Generative AI and foundation models let autonomous machines generalize beyond the operational design domains on which they’ve been trained. Using new AI techniques such as tokenization and large language and diffusion models, developers and researchers can now address longstanding hurdles to autonomy.

    These larger models require massive amounts of diverse data for training, fine-tuning and validation. But collecting such data — including from rare edge cases and potentially hazardous scenarios, like a pedestrian crossing in front of an autonomous vehicle (AV) at night or a human entering a welding robot work cell — can be incredibly difficult and resource-intensive.

    To help developers fill this gap, NVIDIA Omniverse Cloud Sensor RTX APIs enable physically accurate sensor simulation for generating datasets at scale. The application programming interfaces (APIs) are designed to support sensors commonly used for autonomy — including cameras, radar and lidar — and can integrate seamlessly into existing workflows to accelerate the development of autonomous vehicles and robots of every kind.

    Omniverse Sensor RTX APIs are now available to select developers in early access. Organizations such as Accenture, Foretellix, MITRE and Mcity are integrating these APIs via domain-specific blueprints to provide end customers with the tools they need to deploy the next generation of industrial manufacturing robots and self-driving cars.

    Powering Industrial AI With Omniverse Blueprints

    In complex environments like factories and warehouses, robots must be orchestrated to safely and efficiently work alongside machinery and human workers. All those moving parts present a massive challenge when designing, testing or validating operations while avoiding disruptions.

    Mega is an Omniverse Blueprint that offers enterprises a reference architecture of NVIDIA accelerated computing, AI, NVIDIA Isaac and NVIDIA Omniverse technologies. Enterprises can use it to develop digital twins and test AI-powered robot brains that drive robots, cameras, equipment and more to handle enormous complexity and scale.

    Integrating Omniverse Sensor RTX, the blueprint lets robotics developers simultaneously render sensor data from any type of intelligent machine in a factory for high-fidelity, large-scale sensor simulation.

    With the ability to test operations and workflows in simulation, manufacturers can save considerable time and investment, and improve efficiency in entirely new ways.

    International supply chain solutions company KION Group and Accenture are using the Mega blueprint to build Omniverse digital twins that serve as virtual training and testing environments for industrial AI’s robot brains, tapping into data from smart cameras, forklifts, robotic equipment and digital humans.

    The robot brains perceive the simulated environment with physically accurate sensor data rendered by the Omniverse Sensor RTX APIs. They use this data to plan and act, with each action precisely tracked with Mega, alongside the state and position of all the assets in the digital twin. With these capabilities, developers can continuously build and test new layouts before they’re implemented in the physical world.

    Driving AV Development and Validation

    Autonomous vehicles have been under development for over a decade, but barriers in acquiring the right training and validation data and slow iteration cycles have hindered large-scale deployment.

    To address this need for sensor data, companies are harnessing the NVIDIA Omniverse Blueprint for AV simulation, a reference workflow that enables physically accurate sensor simulation. The workflow uses Omniverse Sensor RTX APIs to render the camera, radar and lidar data necessary for AV development and validation.

    AV toolchain provider Foretellix has integrated the blueprint into its Foretify AV development toolchain to transform object-level simulation into physically accurate sensor simulation.

    The Foretify toolchain can generate any number of testing scenarios simultaneously. By adding sensor simulation capabilities to these scenarios, Foretify can now enable developers to evaluate the completeness of their AV development, as well as train and test at the levels of fidelity and scale needed to achieve large-scale and safe deployment. In addition, Foretellix will use the newly announced NVIDIA Cosmos platform to generate an even greater diversity of scenarios for verification and validation.

    Nuro, an autonomous driving technology provider with one of the largest level 4 deployments in the U.S., is using the Foretify toolchain to train, test and validate its self-driving vehicles before deployment.

    In addition, research organization MITRE is collaborating with the University of Michigan’s Mcity testing facility to build a digital AV validation framework for regulatory use, including a digital twin of Mcity’s 32-acre proving ground for autonomous vehicles. The project uses the AV simulation blueprint to render physically accurate sensor data at scale in the virtual environment, boosting training effectiveness.

    The future of robotics and autonomy is coming into sharp focus, thanks to the power of high-fidelity sensor simulation. Learn more about these solutions at CES by visiting Accenture at Ballroom F at the Venetian and Foretellix booth 4016 in the West Hall of Las Vegas Convention Center.

    Learn more about the latest in automotive and generative AI technologies by joining NVIDIA at CES.

    See notice regarding software product information.

    Categories: Robotics
    Tags: Artificial Intelligence | CES 2025 | Cosmos | Digital Twin | Industrial and Manufacturing | Isaac | NVIDIA Blueprints | Omniverse | Robotics | Simulation and Design | Transportation

     


    NVIDIA Blackwell GeForce RTX 50 Series Opens New World of AI Computer Graphics

    By MAK Gojar, January 7, 2025

     

    Next Generation of GeForce RTX GPUs Deliver Stunning Visual Realism and 2x Performance Increase, Made Possible by AI, Neural Shaders and DLSS 4

    CES—NVIDIA today unveiled the most advanced consumer GPUs for gamers, creators and developers — the GeForce RTX™ 50 Series Desktop and Laptop GPUs.

    Powered by the NVIDIA Blackwell architecture, fifth-generation Tensor Cores and fourth-generation RT Cores, the GeForce RTX 50 Series delivers breakthroughs in AI-driven rendering, including neural shaders, digital human technologies, geometry and lighting.

    “Blackwell, the engine of AI, has arrived for PC gamers, developers and creatives,” said Jensen Huang, founder and CEO of NVIDIA. “Fusing AI-driven neural rendering and ray tracing, Blackwell is the most significant computer graphics innovation since we introduced programmable shading 25 years ago.”

    The GeForce RTX 5090 GPU — the fastest GeForce RTX GPU to date — features 92 billion transistors, providing over 3,352 trillion AI operations per second (TOPS) of computing power. Blackwell architecture innovations and DLSS 4 mean the GeForce RTX 5090 GPU outperforms the GeForce RTX 4090 GPU by up to 2x.

    GeForce Blackwell comes to laptops with all the features of desktop models, bringing a considerable upgrade to portable computing with extraordinary graphics capabilities and remarkable efficiency. The Blackwell generation of NVIDIA Max-Q technology extends battery life by up to 40% in thin and light laptops that maintain their sleek design without sacrificing power or performance.

    NVIDIA DLSS 4 Boosts Performance by Up to 8x
    DLSS 4 debuts Multi Frame Generation to boost frame rates by using AI to generate up to three frames per rendered frame. It works in unison with the suite of DLSS technologies to increase performance by up to 8x over traditional rendering, while maintaining responsiveness with NVIDIA Reflex technology.

    DLSS 4 also introduces the graphics industry’s first real-time application of the transformer model architecture. Transformer-based DLSS Ray Reconstruction and Super Resolution models use 2x more parameters and 4x more compute to provide greater stability, reduced ghosting, higher detail and enhanced anti-aliasing in game scenes. DLSS 4 will be supported in over 75 games and applications on GeForce RTX 50 Series GPUs on launch day.

    NVIDIA Reflex 2 introduces Frame Warp, an innovative technique to reduce latency in games by updating a rendered frame based on the latest mouse input just before it is sent to the display. Reflex 2 can reduce latency by up to 75%. This gives gamers a competitive edge in multiplayer games and makes single-player titles more responsive.

    Blackwell Brings AI to Shaders

    Twenty-five years ago, NVIDIA introduced GeForce 3 and programmable shaders, which set the stage for two decades of graphics innovation, from pixel shading to compute shading to real-time ray tracing. Alongside GeForce RTX 50 Series GPUs, NVIDIA is introducing RTX Neural Shaders, which bring small AI networks into programmable shaders, unlocking film-quality materials, lighting and more in real-time games.

    Rendering game characters is one of the most challenging tasks in real-time graphics, as people are prone to notice the smallest errors or artifacts in digital humans. RTX Neural Faces takes a simple rasterized face and 3D pose data as input, and uses generative AI to render a temporally stable, high-quality digital face in real time.

    RTX Neural Faces is complemented by new RTX technologies for ray-traced hair and skin. Along with the new RTX Mega Geometry, which enables up to 100x more ray-traced triangles in a scene, these advancements are poised to deliver a massive leap in realism for game characters and environments.

    The power of neural rendering, DLSS 4 and the new DLSS transformer model is showcased on GeForce RTX 50 Series GPUs with Zorah, a groundbreaking new technology demo from NVIDIA.

    Autonomous Game Characters

    GeForce RTX 50 Series GPUs bring industry-leading AI TOPS to power autonomous game characters in parallel with game rendering.

    NVIDIA is introducing a suite of new NVIDIA ACE technologies that enable game characters to perceive, plan and act like human players. ACE-powered autonomous characters are being integrated into KRAFTON’s PUBG: BATTLEGROUNDS and InZOI, the publisher’s upcoming life simulation game, as well as Wemade Next’s MIR5.

    In PUBG, companions powered by NVIDIA ACE plan and execute strategic actions, dynamically working with human players to ensure survival. InZOI features Smart Zoi characters that autonomously adjust behaviors based on life goals and in-game events. In MIR5, large language model (LLM)-driven raid bosses adapt tactics based on player behavior, creating more dynamic, challenging encounters.

    AI Foundation Models for RTX AI PCs

    Showcasing how RTX enthusiasts and developers can use NVIDIA NIM microservices to build AI agents and assistants, NVIDIA will release a pipeline of NIM microservices and AI Blueprints for RTX AI PCs from top model developers such as Black Forest Labs, Meta, Mistral and Stability AI.

    Use cases span LLMs, vision language models, image generation, speech, embedding models for retrieval-augmented generation, PDF extraction and computer vision. The NIM microservices include all the necessary components for running AI on PCs and are optimized for deployment across all NVIDIA GPUs.

    To demonstrate how enthusiasts and developers can use NIM to build AI agents and assistants, NVIDIA today previewed Project R2X, a vision-enabled PC avatar that can put information at a user’s fingertips, assist with desktop apps and video conference calls, read and summarize documents, and more.

    AI-Powered Tools for Creators

    The GeForce RTX 50 Series GPUs supercharge creative workflows. RTX 50 Series GPUs are the first consumer GPUs to support FP4 precision, boosting AI image generation performance for models such as FLUX by 2x and enabling generative AI models to run locally in a smaller memory footprint, compared with previous-generation hardware.

    The NVIDIA Broadcast app gains two AI-powered beta features for livestreamers: Studio Voice, which upgrades microphone audio, and Virtual Key Light, which relights faces for polished streams. Streamlabs is introducing the Intelligent Streaming Assistant, powered by NVIDIA ACE and Inworld AI, which acts as a cohost, producer and technical assistant to enhance livestreams.

    Availability

    For desktop users, the GeForce RTX 5090 GPU with 3,352 AI TOPS and the GeForce RTX 5080 GPU with 1,801 AI TOPS will be available on Jan. 30 at $1,999 and $999, respectively.

    The GeForce RTX 5070 Ti GPU with 1,406 AI TOPS and GeForce RTX 5070 GPU with 988 AI TOPS will be available starting in February at $749 and $549, respectively.

    The NVIDIA Founders Editions of the GeForce RTX 5090, RTX 5080 and RTX 5070 GPUs will be available directly from nvidia.com and select retailers worldwide.

    Stock-clocked and factory-overclocked models will be available from top add-in card providers such as ASUS, Colorful, Gainward, GALAX, GIGABYTE, INNO3D, KFA2, MSI, Palit, PNY and ZOTAC, and in desktops from system builders including Falcon Northwest, Infiniarc, MAINGEAR, Mifcom, ORIGIN PC, PC Specialist and Scan Computers.

    Laptops with GeForce RTX 5090, RTX 5080 and RTX 5070 Ti Laptop GPUs will be available starting in March, and RTX 5070 Laptop GPUs will be available starting in April from the world’s top manufacturers, including Acer, ASUS, Dell, GIGABYTE, HP, Lenovo, MECHREVO, MSI and Razer.

     


    NVIDIA Launches AI Foundation Models for RTX AI PCs

    By MAK Gojar, January 7, 2025

     

    NVIDIA NIM Microservices and AI Blueprints Help Developers and Enthusiasts Build AI Agents and Creative Workflows on PC

    CES—NVIDIA today announced foundation models running locally on NVIDIA RTX™ AI PCs that supercharge digital humans, content creation, productivity and development. 

    These models — offered as NVIDIA NIM™ microservices — are accelerated by new GeForce RTX™ 50 Series GPUs, which feature up to 3,352 trillion operations per second of AI performance and 32GB of VRAM. Built on the NVIDIA Blackwell architecture, RTX 50 Series GPUs are the first consumer GPUs to support FP4 compute, boosting AI inference performance by 2x and enabling generative AI models to run locally in a smaller memory footprint compared with previous-generation hardware.
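To see why FP4 matters for local deployment, consider a back-of-the-envelope sketch of the weight memory needed by a 12-billion-parameter model at different precisions (the 12B figure is an illustrative choice; the estimate ignores activations, KV cache and quantization overhead):

```python
# Approximate weight memory for an N-parameter model at a given precision.
def weight_memory_gb(params: float, bits_per_weight: int) -> float:
    return params * bits_per_weight / 8 / 1e9  # bits -> bytes -> decimal GB

params = 12e9  # e.g. a 12B-parameter model

fp16 = weight_memory_gb(params, 16)  # 24.0 GB -- tight even on a 32GB card
fp8  = weight_memory_gb(params, 8)   # 12.0 GB
fp4  = weight_memory_gb(params, 4)   #  6.0 GB -- a quarter of the FP16 footprint
```

Halving the bits per weight again from FP8 to FP4 is what moves models of this size comfortably within the VRAM of a consumer GPU.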

    GeForce™ has long been a vital platform for AI developers. The first GPU-accelerated deep learning network, AlexNet, was trained on the GeForce GTX™ 580 in 2012 — and last year, over 30% of published AI research papers cited the use of GeForce RTX.

    Now, with generative AI and RTX AI PCs, anyone can be a developer. A new wave of low-code and no-code tools, such as AnythingLLM, ComfyUI, Langflow and LM Studio, enable enthusiasts to use AI models in complex workflows via simple graphical user interfaces.

    NIM microservices connected to these GUIs will make it effortless to access and deploy the latest generative AI models. NVIDIA AI Blueprints, built on NIM microservices, provide easy-to-use, preconfigured reference workflows for digital humans, content creation and more.

    To meet the growing demand from AI developers and enthusiasts, every top PC manufacturer and system builder is launching NIM-ready RTX AI PCs with GeForce RTX 50 Series GPUs.

    “AI is advancing at light speed, from perception AI to generative AI and now agentic AI,” said Jensen Huang, founder and CEO of NVIDIA. “NIM microservices and AI Blueprints give PC developers and enthusiasts the building blocks to explore the magic of AI.”

    Making AI NIMble

    Foundation models — neural networks trained on immense amounts of raw data — are the building blocks for generative AI.

    NVIDIA will release a pipeline of NIM microservices for RTX AI PCs from top model developers such as Black Forest Labs, Meta, Mistral and Stability AI. Use cases span large language models (LLMs), vision language models, image generation, speech, embedding models for retrieval-augmented generation (RAG), PDF extraction and computer vision.

    “GeForce RTX 50 Series GPUs with FP4 compute will unlock a massive range of models that can run on PC, which were previously limited to large data centers,” said Robin Rombach, CEO of Black Forest Labs. “Making FLUX an NVIDIA NIM microservice increases the rate at which AI can be deployed and experienced by more users, while delivering incredible performance.”

    NVIDIA today also announced the Llama Nemotron family of open models that provide high accuracy on a wide range of agentic tasks. The Llama Nemotron Nano model will be offered as a NIM microservice for RTX AI PCs and workstations, and excels at agentic AI tasks like instruction following, function calling, chat, coding and math.

    NIM microservices include the key components for running AI on PCs and are optimized for deployment across NVIDIA GPUs — whether in RTX PCs and workstations or in the cloud.

    Developers and enthusiasts will be able to quickly download, set up and run these NIM microservices on Windows 11 PCs with Windows Subsystem for Linux (WSL).

    “AI is driving Windows 11 PC innovation at a rapid rate, and Windows Subsystem for Linux (WSL) offers a great cross-platform environment for AI development on Windows 11 alongside Windows Copilot Runtime,” said Pavan Davuluri, corporate vice president of Windows at Microsoft. “NVIDIA NIM microservices, optimized for Windows PCs, give developers and enthusiasts ready-to-integrate AI models for their Windows apps, further accelerating deployment of AI capabilities to Windows users.”

    The NIM microservices, running on RTX AI PCs, will be compatible with top AI development and agent frameworks, including AI Toolkit for VSCode, AnythingLLM, ComfyUI, CrewAI, Flowise AI, LangChain, Langflow and LM Studio. Developers can connect applications and workflows built on these frameworks to AI models running NIM microservices through industry-standard endpoints, enabling them to use the latest technology with a unified interface across the cloud, data centers, workstations and PCs.
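Because NIM microservices expose industry-standard (OpenAI-style) endpoints, calling one looks like calling any hosted chat-completions API. A minimal sketch of building such a request follows; the endpoint URL, port and model name are illustrative assumptions, not values from this announcement:

```python
import json

# A locally running NIM microservice is typically reached over HTTP at an
# OpenAI-compatible path. URL and model name below are assumptions.
NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama-3.1-8b-instruct"  # hypothetical locally deployed model

def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-style chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

payload = build_chat_request("Summarize this document in three bullet points.")
body = json.dumps(payload)  # POST this body to NIM_ENDPOINT with requests/httpx
```

Because the schema matches the OpenAI API, frameworks such as LangChain or AnythingLLM can be pointed at the NIM base URL and used unchanged across cloud, data center and local PC deployments.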

    Enthusiasts will also be able to experience a range of NIM microservices using an upcoming release of the NVIDIA ChatRTX tech demo.

    Putting a Face on Agentic AI

    To demonstrate how enthusiasts and developers can use NIM to build AI agents and assistants, NVIDIA today previewed Project R2X, a vision-enabled PC avatar that can put information at a user’s fingertips, assist with desktop apps and video conference calls, read and summarize documents, and more.

    The avatar is rendered using NVIDIA RTX Neural Faces, a new generative AI algorithm that augments traditional rasterization with entirely generated pixels. The face is then animated by a new diffusion-based NVIDIA Audio2Face™-3D model that improves lip and tongue movement. R2X can be connected to cloud AI services such as OpenAI’s GPT-4o and xAI’s Grok, as well as to NIM microservices and AI Blueprints, such as PDF retrievers or alternative LLMs, via developer frameworks such as CrewAI, Flowise AI and Langflow. Sign up for Project R2X updates.

    AI Blueprints Coming to PC

    NIM microservices are also available to PC users through AI Blueprints — reference AI workflows that can run locally on RTX PCs. With these blueprints, developers can create podcasts from PDF documents, generate stunning images guided by 3D scenes and more.

    The blueprint for PDF to podcast extracts text, images and tables from a PDF to create a podcast script that can be edited by users. It can also generate a full audio recording from the script using voices available in the blueprint or based on a user’s voice sample. In addition, users can have a real-time conversation with the AI podcast host to learn more about specific topics.

    The blueprint uses NIM microservices like Mistral-Nemo-12B-Instruct for language, NVIDIA Riva for text-to-speech and automatic speech recognition, and the NeMo Retriever collection of microservices for PDF extraction.
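The three stages above (extraction, script generation, speech synthesis) can be sketched as a simple pipeline. The function names and signatures here are illustrative stand-ins, not the blueprint's actual API; in the real blueprint these stages are handled by NeMo Retriever, an instruct LLM NIM and NVIDIA Riva respectively:

```python
# Illustrative skeleton of the PDF-to-podcast flow. All names are assumptions.
def extract_pdf(path: str) -> str:
    """Stand-in for PDF extraction (NeMo Retriever in the blueprint)."""
    return f"text, images and tables extracted from {path}"

def write_script(source_text: str) -> str:
    """Stand-in for script generation (an instruct LLM such as Mistral-Nemo)."""
    return f"PODCAST SCRIPT based on: {source_text}"

def synthesize_audio(script: str) -> bytes:
    """Stand-in for text-to-speech (NVIDIA Riva in the blueprint)."""
    return script.encode("utf-8")  # real output would be audio samples

def pdf_to_podcast(path: str) -> bytes:
    # extract -> script (user-editable in the real blueprint) -> audio
    return synthesize_audio(write_script(extract_pdf(path)))

audio = pdf_to_podcast("paper.pdf")
```

The middle stage is the natural place for the user edits and real-time Q&A the blueprint describes, since the script exists as plain text before audio is generated.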

    The AI Blueprint for 3D-guided generative AI gives artists finer control over image generation. While AI can generate amazing images from simple text prompts, controlling image composition using only words can be challenging. With this blueprint, creators can use simple 3D objects laid out in a 3D renderer like Blender to guide AI image generation. The artist can create 3D assets by hand or generate them using AI, place them in the scene and set the 3D viewport camera. Then, a prepackaged workflow powered by the FLUX NIM microservice will use the current composition to generate high-quality images that match the 3D scene.

    NVIDIA NIM microservices and AI Blueprints will be available starting in February with initial hardware support for GeForce RTX 50 Series, GeForce RTX 4090 and 4080, and NVIDIA RTX 6000 and 5000 professional GPUs. Additional GPUs will be supported in the future.

    NIM-ready RTX AI PCs will be available from Acer, ASUS, Dell, GIGABYTE, HP, Lenovo, MSI, Razer and Samsung, and from local system builders Corsair, Falcon Northwest, LDLC, Maingear, Mifcom, Origin PC, PCS and Scan.

    Learn more about how NIM microservices, AI Blueprints and NIM-ready RTX AI PCs are accelerating generative AI by joining NVIDIA at CES.

     

