Following Nvidia's DLSS 5 announcement and ensuing backlash, GeForce Evangelist Jacob Freeman described the technology as processing a single 2D frame plus motion vectors—seemingly contradicting CEO Jensen Huang's emphasis on geometry-level generative control. The exchange underscores confusion around the AI upscaling feature unveiled earlier this month.
Nvidia's DLSS 5, announced on March 16, 2026, as an AI-powered neural rendering breakthrough, has faced criticism for resembling a 2D post-processing filter. CEO Jensen Huang countered this at a live event and in a March 17 Tom's Hardware interview, insisting: "It’s not post-processing at the frame level, it’s generative control at the geometry level." He emphasized developer control, describing the technology as "content-control generative AI," a form of neural rendering.
Subsequently, PC gaming YouTuber Daniel Owens asked Nvidia GeForce Evangelist Jacob Freeman if DLSS 5 "effectively tak[es] a single 2D frame as an input (with motion vectors) to create the output frame." Freeman affirmed: "Yes, DLSS5 takes a 2D frame plus motion vectors as an input." He added that DLSS 5 is "trained end to end to understand complex scene semantics such as characters, hair, fabric, and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast – all by analyzing a single frame."
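To make the distinction concrete, the pipeline Freeman describes can be sketched as a function whose only inputs are one 2D color frame and a per-pixel motion-vector field. The sketch below is purely illustrative and is not Nvidia's implementation: the "upscaler" is plain nearest-neighbor replication, and the motion vectors are merely shape-checked, standing in for the temporal signal a real model would consume. All names and shapes here are assumptions for illustration.

```python
import numpy as np

def naive_single_frame_upscale(frame: np.ndarray,
                               motion_vectors: np.ndarray,
                               scale: int = 2) -> np.ndarray:
    """Toy stand-in for the input contract Freeman describes:
    one 2D color frame (H, W, 3) plus a per-pixel motion-vector
    field (H, W, 2) in, one higher-resolution frame out.

    This is NOT how DLSS works internally; it only illustrates
    that such a signature is frame-level, not geometry-level.
    """
    h, w, _ = frame.shape
    # One (dx, dy) vector per pixel is assumed here for illustration.
    assert motion_vectors.shape == (h, w, 2)
    # Nearest-neighbor upscale: repeat each pixel `scale` times
    # along both spatial axes.
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

low_res = np.zeros((540, 960, 3), dtype=np.uint8)
vectors = np.zeros((540, 960, 2), dtype=np.float32)
high_res = naive_single_frame_upscale(low_res, vectors, scale=2)
# high_res has shape (1080, 1920, 3)
```

The point of the sketch is the signature: nothing in it touches meshes, materials, or scene geometry, which is why a "single 2D frame plus motion vectors" description reads to critics as frame-level post-processing.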
Freeman's answer highlights a potential discrepancy: Huang framed DLSS 5 as exercising control at the 3D geometry level, while Freeman described a single 2D frame plus motion vectors as the input. The demo's lighting effects had already sparked backlash, with observers questioning whether the technology goes any deeper than 2D filtering.