Disney Bolsters Live-Event Advertising Efforts, Unveiling Key Partnerships And Certification Program (January 6, 2025, 6:00 pm)
Disney is boosting its advertising capabilities for live events with third-party partnerships, a new certification program for live sports and entertainment, and biddable deals for live sports.
Google’s Display & Video 360, The Trade Desk, and Yahoo DSP are the first demand-side platforms to be certified by Disney. Magnite is the only third-party supply-side partner at launch.
Disney made the announcement at CES in Las Vegas, where it is getting set for its annual Tech + Data Showcase on Wednesday. The event is a kickoff to the upfront process, with a focus on streaming and digital platforms.
The company sees automation as a key part of unlocking the potential of streaming inventory in real time. It said in a press release that it is looking to help brands fully tap into “lightning-in-a-bottle” moments via buy-side platforms.
In announcing the initiative, Disney posited a sports scenario. If a game in its waning moments unexpectedly goes into overtime, Disney would now be able to offer a range of bid opportunities and enable dynamic pricing that reflects the real-time changes in supply and demand. The traditional system has resulted in repetition and locked-in pricing.
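To illustrate the kind of logic such a system might apply, here is a minimal sketch of floor pricing that moves with real-time supply and demand when a game goes to overtime. The function, fields, and multipliers below are assumptions for illustration only, not Disney's actual implementation.

```python
# Hypothetical sketch: dynamic floor pricing for live-sports ad inventory.
# None of these names reflect Disney's systems; this only illustrates pricing
# that responds to real-time supply and demand instead of being locked in.
from dataclasses import dataclass

@dataclass
class InventorySnapshot:
    expected_impressions: int   # supply: impressions forecast for the next break
    active_bids: int            # demand: buyers currently bidding on this event
    base_cpm: float             # pre-negotiated baseline CPM in dollars

def dynamic_floor_cpm(snap: InventorySnapshot, overtime: bool) -> float:
    """Raise or lower the floor CPM as supply and demand shift in real time."""
    demand_pressure = snap.active_bids / max(snap.expected_impressions / 1000, 1)
    multiplier = 1.0 + min(demand_pressure, 2.0)      # cap the surge at 3x
    if overtime:
        multiplier *= 1.25                            # unplanned inventory, higher engagement
    return round(snap.base_cpm * multiplier, 2)

# A game heading into overtime: demand spikes against newly created inventory.
snap = InventorySnapshot(expected_impressions=500_000, active_bids=900, base_cpm=40.0)
print(dynamic_floor_cpm(snap, overtime=False))  # regulation-time price
print(dynamic_floor_cpm(snap, overtime=True))   # repriced for the overtime window
```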
“As an industry, we need to reframe and redefine how we transact advertising using automation,” said Jamie Power, SVP of Addressable Sales, Disney Advertising. “The standards of the past don’t define the needs of today’s streaming, real-time requirements, and we’re committed to building a new and necessary framework for the modern marketer. Disney first transformed the traditional ad pod with choice-based and user-initiated ads with Hulu, and now – we’re setting a new standard for how advertising transactions are facilitated in a live-streaming environment.”
The certification program will equip partners both to handle large-scale inventory in live programming and to pre-ingest pre-approved creative messages, which can then be placed automatically.
“Planning for sports, specifically, requires a different strategy for biddable advertising,” said Matt Barnes, VP of Programmatic Sales, Disney Advertising. “While a traditional media plan may be focused on even delivery throughout the week, brands can miss out on a highly engaged audience and all those edge-of-your-seat moments in a live game if they’re limited by standard rules and frequency caps. With the introduction of Disney’s DSP certification for live, now more advertisers – across an even wider variety of categories – can capture the spikes in critical moments of engagement and fandom.”
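As a purely illustrative sketch of the contrast Barnes describes, the difference between even-delivery pacing and pacing that leans into a live-event engagement spike might look like the following. The field names and thresholds are hypothetical and are not part of Disney's certification.

```python
# Illustrative only: contrast even-delivery pacing with pacing that allows
# delivery to surge during live-event engagement spikes. Names are hypothetical.

def even_pacing_budget(daily_budget: float, hours_elapsed: float) -> float:
    """Classic even delivery: spend budget uniformly across a 24-hour day."""
    return daily_budget * min(hours_elapsed / 24.0, 1.0)

def spike_aware_budget(daily_budget: float, hours_elapsed: float,
                       engagement_index: float) -> float:
    """Let delivery run ahead of even pacing when live engagement surges.

    engagement_index: 1.0 = typical; values above 1.0 indicate a live-game spike.
    """
    baseline = even_pacing_budget(daily_budget, hours_elapsed)
    surge_allowance = 0.25 * daily_budget * max(engagement_index - 1.0, 0.0)
    return min(baseline + surge_allowance, daily_budget)

print(even_pacing_budget(10_000, 12))                        # 5000.0
print(spike_aware_budget(10_000, 12, engagement_index=1.8))  # 7000.0
```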
Building Smarter Autonomous Machines: NVIDIA Announces Early Access for Omniverse Sensor RTX (January 7, 2025, 2:30 am)
Generative AI and foundation models let autonomous machines generalize beyond the operational design domains on which they’ve been trained. Using new AI techniques such as tokenization and large language and diffusion models, developers and researchers can now address longstanding hurdles to autonomy.
These larger models require massive amounts of diverse data for training, fine-tuning and validation. But collecting such data — including from rare edge cases and potentially hazardous scenarios, like a pedestrian crossing in front of an autonomous vehicle (AV) at night or a human entering a welding robot work cell — can be incredibly difficult and resource-intensive.
To help developers fill this gap, NVIDIA Omniverse Cloud Sensor RTX APIs enable physically accurate sensor simulation for generating datasets at scale. The application programming interfaces (APIs) are designed to support sensors commonly used for autonomy — including cameras, radar and lidar — and can integrate seamlessly into existing workflows to accelerate the development of autonomous vehicles and robots of every kind.
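To give a sense of the workflow pattern such APIs support, here is a generic sketch of generating many variations of a rare edge case from simulated sensors. The class and function names are placeholders; they are not the actual Omniverse Sensor RTX API, which is available only to early-access developers.

```python
# Generic sketch of a sensor-simulation dataset loop. Placeholder names only,
# NOT the real Omniverse Sensor RTX API.
from dataclasses import dataclass
import random

@dataclass
class Scenario:
    name: str
    time_of_day: str
    actors: list  # e.g. pedestrians, vehicles, robots in the scene

def simulate_sensors(scenario: Scenario, seed: int) -> dict:
    """Stand-in for a physically based renderer producing camera/radar/lidar frames."""
    rng = random.Random(seed)
    return {
        "camera": f"rgb_frame_{scenario.name}_{seed}.png",
        "lidar": [rng.uniform(0.5, 120.0) for _ in range(8)],  # range returns (m)
        "radar": {"targets": len(scenario.actors), "noise_db": rng.uniform(-3, 3)},
    }

# Generate many variations of a rare edge case (pedestrian crossing at night)
# instead of waiting to capture it in the real world.
edge_case = Scenario("night_ped_crossing", "night", ["pedestrian", "ego_vehicle"])
dataset = [simulate_sensors(edge_case, seed=s) for s in range(1000)]
print(len(dataset), dataset[0]["radar"])
```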
Omniverse Sensor RTX APIs are now available to select developers in early access. Organizations such as Accenture, Foretellix, MITRE and Mcity are integrating these APIs via domain-specific blueprints to provide end customers with the tools they need to deploy the next generation of industrial manufacturing robots and self-driving cars.
Powering Industrial AI With Omniverse Blueprints
In complex environments like factories and warehouses, robots must be orchestrated to safely and efficiently work alongside machinery and human workers. All those moving parts present a massive challenge when designing, testing or validating operations while avoiding disruptions.
Mega is an Omniverse Blueprint that offers enterprises a reference architecture of NVIDIA accelerated computing, AI, NVIDIA Isaac and NVIDIA Omniverse technologies. Enterprises can use it to develop digital twins and test AI-powered robot brains that drive robots, cameras, equipment and more to handle enormous complexity and scale.
Integrating Omniverse Sensor RTX, the blueprint lets robotics developers simultaneously render sensor data from any type of intelligent machine in a factory for high-fidelity, large-scale sensor simulation.
With the ability to test operations and workflows in simulation, manufacturers can save considerable time and investment, and improve efficiency in entirely new ways.
International supply chain solutions company KION Group and Accenture are using the Mega blueprint to build Omniverse digital twins that serve as virtual training and testing environments for industrial AI’s robot brains, tapping into data from smart cameras, forklifts, robotic equipment and digital humans.
The robot brains perceive the simulated environment with physically accurate sensor data rendered by the Omniverse Sensor RTX APIs. They use this data to plan and act, with each action precisely tracked with Mega, alongside the state and position of all the assets in the digital twin. With these capabilities, developers can continuously build and test new layouts before they’re implemented in the physical world.
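A minimal sketch of that perceive-plan-act loop, with the digital twin tracking every action and the resulting state, might look like this. The interfaces below are illustrative assumptions, not the Mega blueprint's actual APIs.

```python
# Toy perceive -> plan -> act loop inside a digital twin that logs every action
# and asset state. Names are illustrative, not Mega's real interfaces.

class DigitalTwin:
    def __init__(self):
        self.robot_pos = 0
        self.log = []  # every action and resulting state is recorded

    def render_sensors(self) -> dict:
        """Stand-in for physically accurate sensor rendering of the scene."""
        return {"robot_pos": self.robot_pos, "goal_pos": 5, "obstacle_ahead": False}

    def apply(self, action: str):
        if action == "move_forward":
            self.robot_pos += 1
        self.log.append({"action": action, "robot_pos": self.robot_pos})

def plan(observation: dict) -> str:
    """Toy 'robot brain': move toward the goal unless something blocks the way."""
    if observation["obstacle_ahead"]:
        return "wait"
    return "move_forward" if observation["robot_pos"] < observation["goal_pos"] else "stop"

twin = DigitalTwin()
for _ in range(10):
    action = plan(twin.render_sensors())
    if action == "stop":
        break
    twin.apply(action)
print(twin.log[-1])  # final tracked state: {'action': 'move_forward', 'robot_pos': 5}
```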
Driving AV Development and Validation
Autonomous vehicles have been under development for over a decade, but barriers in acquiring the right training and validation data and slow iteration cycles have hindered large-scale deployment.
To address this need for sensor data, companies are harnessing the NVIDIA Omniverse Blueprint for AV simulation, a reference workflow that enables physically accurate sensor simulation. The workflow uses Omniverse Sensor RTX APIs to render the camera, radar and lidar data necessary for AV development and validation.
AV toolchain provider Foretellix has integrated the blueprint into its Foretify AV development toolchain to transform object-level simulation into physically accurate sensor simulation.
The Foretify toolchain can generate any number of testing scenarios simultaneously. By adding sensor simulation capabilities to these scenarios, Foretify can now enable developers to evaluate the completeness of their AV development, as well as train and test at the levels of fidelity and scale needed to achieve large-scale and safe deployment. In addition, Foretellix will use the newly announced NVIDIA Cosmos platform to generate an even greater diversity of scenarios for verification and validation.
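For intuition, a scenario-generation loop of this kind can be pictured as a parameter sweep with a sensor-simulation step attached to every variation, as in the sketch below. Nothing here is the real Foretify or Omniverse API; it only shows the structure of such a verification loop.

```python
# Illustrative sketch of large-scale scenario generation with a simulated-sensor
# step and a simple pass/fail check. All names and rules are hypothetical.
import itertools

# Parameter sweep over scenario dimensions to probe coverage of edge cases.
weather = ["clear", "rain", "fog"]
lighting = ["day", "dusk", "night"]
cut_in_distance_m = [5, 10, 20]

def run_scenario(w: str, l: str, d: int) -> dict:
    """Stand-in for object-level simulation plus rendered camera/radar/lidar."""
    sensors = {"camera": f"{w}_{l}.png", "lidar_returns": 32, "radar_targets": 2}
    # Toy pass/fail rule: short cut-ins in poor visibility are the hard cases.
    passed = not (d <= 5 and (w == "fog" or l == "night"))
    return {"weather": w, "lighting": l, "cut_in_m": d, "sensors": sensors, "passed": passed}

results = [run_scenario(w, l, d)
           for w, l, d in itertools.product(weather, lighting, cut_in_distance_m)]
failures = [r for r in results if not r["passed"]]
print(f"{len(results)} scenarios, {len(failures)} failures to investigate")
```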
Nuro, an autonomous driving technology provider with one of the largest level 4 deployments in the U.S., is using the Foretify toolchain to train, test and validate its self-driving vehicles before deployment.
In addition, research organization MITRE is collaborating with the University of Michigan’s Mcity testing facility to build a digital AV validation framework for regulatory use, including a digital twin of Mcity’s 32-acre proving ground for autonomous vehicles. The project uses the AV simulation blueprint to render physically accurate sensor data at scale in the virtual environment, boosting training effectiveness.
The future of robotics and autonomy is coming into sharp focus, thanks to the power of high-fidelity sensor simulation. Learn more about these solutions at CES by visiting Accenture at Ballroom F at the Venetian and Foretellix booth 4016 in the West Hall of Las Vegas Convention Center.
Learn more about the latest in automotive and generative AI technologies by joining NVIDIA at CES.