Heartopia discloses AI-generated content after Steam backlash

The cozy life simulation game Heartopia, reminiscent of Animal Crossing but with a chibi anime art style, has updated its Steam page to disclose its use of generative AI following player complaints. Developed by XD International, the game launched on January 17, 2026, and quickly peaked at over 35,000 concurrent players. Its 66% positive review rating, however, reflects criticism over the initial lack of transparency about AI in its puzzle and chat features.

Heartopia emerged as a notable release on Steam this month, drawing comparisons to Animal Crossing but featuring chibi anime aesthetics. XD International's title underwent closed betas in November and December 2025 before its January 17, 2026, debut. The game saw strong initial interest, with concurrent player counts surpassing 35,000, yet it has faced scrutiny that has tempered its reception.

Many negative Steam reviews highlight the developer's failure to disclose AI-generated elements upfront. One player, parupa206, noted: “This game contains Generative AI (used to generate the puzzles), which they didn’t disclose until the accusations of it surfaced.” The revelation came after GAMINGbible journalist Richard Breslin contacted the UK PR team, prompting a Discord statement from the developers.

In that message, XD International clarified: “Monetization Content in Heartopia does not include AI-generated materials.” They added, “Should any AI-assisted content be used in the game in the future, it will be announced in advance. Marketing materials produced outside the game, including those created in collaboration with third-party suppliers, are not covered by the scope of this disclosure.”

Subsequently, the Steam page added an “AI Generated Content Disclosure” section, specifying that “AI is used in the puzzle gameplay to reinterpret and redraw in-game snapshot images” and that it assists “in in-game chat to help players understand different languages.” No official apology for the initial omission has appeared in any of the channels checked, despite the evidently prominent role AI plays in the puzzle gameplay.

This episode underscores ongoing debates in gaming about AI transparency, particularly as tools become integral to development.

Related articles

Clair Obscur: Expedition 33 AI controversy: Indie Game Awards revokes wins over generative AI use

The Indie Game Awards 2025, organized by Six One Indie, revoked Game of the Year and Best Debut Indie Game awards from Clair Obscur: Expedition 33 after developer Sandfall Interactive confirmed using generative AI for temporary placeholder textures—a violation of the event's strict no-AI rules. Blue Prince and Sorry We’re Closed are the new recipients amid criticism of enforcement timing.

Japanese developer Cygames has issued an apology following backlash to its announcement of an AI-focused subsidiary. The studio assured fans that generative AI is not currently used in its games and promised prior notice for any future implementation. This comes amid growing industry debates over AI's role in game development.

PC game publisher Hooded Horse has implemented a strict ban on AI-generated art across all its titles, extending the prohibition to every stage of development. CEO Tim Bender argues that even temporary use of AI assets risks contaminating final builds. The policy aims to safeguard artistic integrity and avoid potential backlash from players.

Ubisoft revealed Teammates, a first-person shooter prototype powered by generative AI, during its November 21, 2025, investor presentation. The demo features AI companions that respond to voice commands in real-time, aiming to create more interactive gameplay. CEO Yves Guillemot described generative AI as a revolution comparable to the shift to 3D graphics.

A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.

A recent report highlights serious risks associated with AI chatbots embedded in children's toys, including inappropriate conversations and data collection. Toys like Kumma from FoloToy and Poe the AI Story Bear have been found engaging kids in discussions on sensitive topics. Authorities recommend sticking to traditional toys to avoid potential harm.

Video game developers are increasingly using AI for voice acting, sparking backlash from actors and unions concerned about livelihoods and ethics. Recent examples include Embark Studios' Arc Raiders and Supertrick Games' Let It Die: Inferno, where AI generated incidental dialogue or character voices. SAG-AFTRA and Equity are pushing for consent, fair pay, and regulations to protect performers.
