An Initiative to Create a New System for Labeling Human-Generated Content Without the Use of AI
As artificial intelligence algorithms churn out text and images faster than humans can, a fundamental question emerges: how do we distinguish genuine creativity from machine imitation? The ЧИ and PI marker initiative offers a simple yet principled answer—a voluntary system for designating content created exclusively by human intelligence.

What if there was a simple, recognizable way to mark content created exclusively by humans? Not a complex technical system requiring specialized knowledge. Not another standard that nobody will adopt. Just a simple symbol—an authenticity marker.
PI – People Intelligence
ЧИ – Человеческий интеллект (Chelovechesky Intellect, "Human Intelligence")
This voluntary labeling symbol shows that your content was created exclusively by a human. Download the marker graphics and use them on your authentic, human-generated content.
PI Marker: on orange background (PNG format, SVG format); on yellow background (PNG format, SVG format)
ЧИ Marker: on orange background (PNG format, SVG format); on yellow background (PNG format, SVG format)

© Initiative Author – President of "Partner NKO" Alexander Pavlovich Goryachev
Table of Contents
- Initiative to Create a New System for Labeling Human-Generated Content Without the Use of AI
- Two Formats, One Idea
- Voluntariness as the Foundation of Trust
- The Battle for Authenticity: Children's Content and the Heritage of Future Generations
- How Should This Work in Practice?
- Invitation to Discussion: The Essence of a Public and Legislative Initiative
Initiative to Create a New System for Labeling Human-Generated Content Without the Use of AI
In recent years, we've seen an explosion in rapid content creation. Artificial intelligence has learned to write texts, create illustrations, generate photorealistic images, and even produce videos. Technologies are developing at breakneck speed, and what seemed like science fiction yesterday is now available to anyone with a smartphone—press a button, and it's ready.
However, along with these remarkable capabilities, a new problem has emerged: How do we distinguish the real from the artificial? How can we tell whether what we're looking at is a genuine photograph or a generated image? A real person or a digital copy? And most importantly—how can we preserve the value of human creativity in a world where machines create content faster than we can consume it?
Sharp questions and urgent concerns have begun to surface. Picture a parent showing their child a beautiful photograph of nature. The child asks, "Is this real?"—and the parent doesn't know how to answer. The internet is already flooded with generated flora and fauna, and no one can reliably determine whose handiwork it is. Landscapes that don't exist. People who never lived. Events that never happened. Often they're obviously absurd and clearly fake, but then you encounter flat earth communities and Bigfoot believers who take things to a whole new level. And these are the most harmless examples.
This isn't just a philosophical question. It's a question of trust in the information we consume every day. It's about protecting future generations—our children—from a distorted view of reality. It's about respecting the labor of real photographers, artists, writers, historians, and musicians who pour their souls into their work.
The demand for reality is growing louder. People want to know the truth. They want to understand when they're looking at the result of human labor and when they're looking at the product of a digital algorithm. And they have—indeed, we have—every right to that knowledge.
Two Formats, One Idea
What if there was a simple, recognizable way to mark content made entirely by humans? Not a complex technical system requiring specialized knowledge. Not another standard that nobody will adopt. Just a straightforward, familiar symbol—an authenticity marker.
Picture this: "ЧИ" or, in international format: "PI"—a symbol next to your photograph, text, or video that clearly states:
"I created this as a human. No filters, no computer modifications, no artificial intelligence. This is the result of my labor, my vision, my creativity."
This isn't just an icon—it's a statement, a position, a way to highlight the authentic in an ocean of artificiality.
The authenticity marker exists in two equally valid formats, each serving the same purpose.

ЧИ is the Russian format, standing for Человеческий интеллект (Chelovechesky Intellect). Use this designation if you speak Russian and create content within Russian cultural spaces.
PI is the international format, standing for People Intelligence—designed for a global audience, for creators and content consumers worldwide.
Both markers carry identical meaning and function—they differ only in language and locality. Choose the format that resonates with your audience. The essence remains the same: a sign of honesty, a seal of human creativity, and your right to recognition for your work in a world where the line between real and artificial has blurred.
Think about the © and ® symbols we see everywhere—in Word documents, on websites, in logos. They didn't just appear overnight. They're the result of centuries of intellectual property law evolving and taking shape. The © marks copyright, while ® protects trademarks. These symbols have become a universal language that everyone understands, no matter where they are. They send a clear message: "This is mine. I created this. The law protects it." They work because they're simple, instantly recognizable, and carry real legal weight.
The ЧИ and PI markers are the natural evolution of this tradition for our era. While © and ® protect copyrights and trademarks, ЧИ and PI safeguard something more fundamental in the digital world—the authenticity of human creativity. They answer a pressing challenge: when AI generates content indistinguishable from human work, how do we preserve trust? These markers establish a new standard of transparency—as universal and recognizable as © and ®, but designed specifically to protect human authorship in the age of AI.
The applications for such a marker are truly boundless:
Live photographs — shots taken with a real camera, without generative filters or AI processing. A photographer who spent hours searching for the perfect shot, waited for the right light, and captured the moment. Their work deserves recognition.
Independent texts — articles, essays, posts written by a human from beginning to end. Every word is personally chosen by the author; every thought is formulated by their own mind. Yes, perhaps with grammar checking, but without content generation using AI. This article is an example.
Live video and films — content shot with real cameras featuring real people and real objects. Directors, cinematographers, actors—each has invested their talent and time in creating this work.
Paintings—works created by artists with brush, pencil, or pastels. Every stroke is the result of hand movement; every shade is a deliberate, painstaking choice by the master.
Audio, music, and songs — compositions played on real instruments, sung by live voices, recorded in studios or at concerts. Musicians have honed their craft for years, and this labor deserves recognition.
And much, much more. Any content, intellectual product, or object created by a human in their natural habitat can and should be marked with an authenticity marker.
Voluntariness as the Foundation of Trust
The key principle of this system is voluntariness. No one is required to use the marker. It's not a law, not a requirement, not a restriction. It's simply a tool for those who want to declare the authenticity of their content.
This approach carries profound meaning. The marker becomes a badge of pride, a symbol of honesty, a statement of values. When you place this sign, you're saying:
"I'm proud that I created this myself. I'm not hiding behind technology. I take full responsibility for my creativity."
This voluntary, declarative approach helps audiences understand when content is genuinely human-made. Conversely, it allows creators to openly acknowledge AI-generated content that might otherwise pass as human work.
In a world where AI creates images in seconds while humans need days or weeks, how do we protect the value of human labor? The authenticity marker becomes a public tool for creator protection.
When viewers see the marker, they understand: a real person stands behind this. Someone spent time, invested their energy, applied their craft. This isn't just pixels on a screen—it's a piece of someone's life embodied in their work.
Content created by humans will naturally become more valuable. The market will decide what matters more: mass-produced AI content or unique human creativity. But the market needs transparency first—people must be able to tell the difference.
Imagine a gallery where each painting is labeled: created by an artist or generated by AI. A music platform where you can filter for tracks performed by live musicians. A news site where journalists' articles carry a special badge.
This isn't discrimination against AI—it's honesty with people.
The Battle for Authenticity: Children's Content and the Heritage of Future Generations
The question of authenticity cuts especially deep in two areas that will shape our future: children's content and the cultural heritage we pass to future generations.
Start with children. They build their understanding of the world through what they see and consume. Every image, story, and melody becomes a building block in the foundation of their reality. When a child grows up surrounded by generated images, their perception warps. They don't learn what a real sunset looks like. How a cat actually moves. How a living person truly smiles.
Parents' demand for understanding what's real and what's AI-generated is becoming insistent. And this isn't panic—it's common sense. A generation raised on AI content risks losing touch with reality.
The authenticity marker helps parents and educators explain the difference to children.
"See this sign? It means a real photographer took this picture. This place is real. You can actually go there."
Or:
"No marker exists here—this was generated by a computer. It's beautiful, but it isn't real."
This isn't a fight against technology—it's a fight to preserve our connection with the real world. We need to ensure that children know what real places, people, and emotions look like. The boundary between play and reality must stay clear, like the physical act of removing VR glasses and understanding: now you're in the "real" world.
Which brings us to cultural heritage. What exactly are we leaving for future generations?
When our descendants study the early 21st century, what will they see? Which names? Which masterpieces? Who will be called the great artists, poets, and composers of this era?
Humanity's cultural heritage has always been built on genuine human creation. We remember Pushkin because these are his words, his thoughts, his genius. We admire Repin because his hand held the brush. We listen to Tchaikovsky because his soul sounds in every note.
But what happens if we can no longer distinguish human creativity from machine-made work? If in fifty years historians can't tell what was created by humans and what by algorithms?
We risk losing the very essence of cultural heritage. Heritage isn't just beautiful pictures or pleasant melodies—it's the trace of the human spirit. It's proof of what humans are capable of and inspiration for future generations.
The authenticity marker offers a way to preserve this heritage. It marks what humans truly created, ensuring that in a hundred years, our descendants can say:
"This was written by a living poet. This was painted by a real artist. This was composed by a human being who felt, suffered, and loved."
Of course, everyone has the right to use AI as an assistant. That's normal. It's useful. It accelerates work and opens new possibilities. But we need to highlight content that's 100% a product of human effort. People are tired of deception—of fake news, manipulated images, and texts that look human but are written by algorithms. They want to know the truth. An "authentic" marker will give them this opportunity, or at least bring them closer to it.
This doesn't mean all AI content is bad. Not at all. It has its place and applications. But when someone passes off generated content as their own labor, when AI text is published without indicating its origin—that's dishonest.
And most importantly—it deprives future generations of the opportunity to see the true face of our era. To understand who we were, what we learned, what we felt. What masterpieces we created with our own hands, minds, and souls.
The authenticity marker isn't just a symbol. It's our message to future generations:
"This was created by us. Living people. And this is part of our heritage to you."
How Should This Work in Practice?
The authenticity marker system rests on several fundamental principles.
Application of the marker. Authors add the ЧИ or PI symbol to their content when they publish it—in a photo caption, at the end of an article, in a music track description. This simple action requires no special technology.
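As a minimal sketch of this basic level, the snippet below appends a marker line to a caption or article footer. The exact wording of the line and the helper name are assumptions of this illustration, not part of any official format.

```python
# A minimal sketch of the "basic level": an author or publishing tool appends
# a marker line to a caption or article footer. The wording of the marker line
# and the helper name are this sketch's own assumptions, not an official format.

def add_marker(text: str, marker: str = "PI") -> str:
    """Append a human-authorship marker line to a piece of content."""
    return f"{text}\n\n{marker}: created by a human, without generative AI."

print(add_marker("Sunset over the bay, shot on film, June 2025."))
```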
Value for the audience. The marker signals that a human created this content without AI generation. It confirms authorship and helps distinguish human creativity from machine-made work.
Levels of technical implementation. The basic level is a text symbol or graphic icon. The advanced level includes file metadata, digital signatures, and blockchain verification of authorship. The system scales easily from simple solutions to complex ones.
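One way to picture the advanced level is a machine-readable declaration that travels with the file. The sketch below fingerprints a file with SHA-256 and records the marker and author alongside it; the field names, JSON layout, and function name are assumptions of this sketch. A production system would additionally attach a digital signature and could anchor the declaration's hash on a blockchain, as described above.

```python
# A hedged sketch of the "advanced level": a machine-readable authorship
# declaration bound to a specific file by its SHA-256 fingerprint. Field names
# and layout are assumptions of this illustration; a real deployment would also
# attach a cryptographic signature (e.g. Ed25519) and could anchor the
# declaration's hash on a public ledger.
import hashlib
import json
from datetime import datetime, timezone

def make_declaration(path: str, author: str, marker: str = "PI") -> str:
    """Build a JSON declaration binding the marker to one exact file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    declaration = {
        "marker": marker,                       # "PI" or "ЧИ"
        "author": author,
        "created_without_generative_ai": True,
        "file_sha256": digest,                  # fingerprint of the exact file
        "declared_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(declaration, ensure_ascii=False, indent=2)

# Example with a hypothetical file name:
# print(make_declaration("sunset.jpg", "A. Photographer"))
```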
Operating principles. Voluntary—no one must use the marker. Transparent—everyone understands what it means. Decentralized—it's a tool, not a regulatory system.
Trust mechanisms. The author's reputation is key. Platforms verify creators' identities, confirming their authenticity. Society polices the process and exposes fakers who abuse the system. Over time, accepted standards emerge. Reputation, once lost, can't be rebuilt.
Practical application. Social media incorporates the marker in posts or profiles. Photo stocks and art platforms filter content by marker. Publishers and media outlets add the designation in their imprints. Music services create a separate category or tag.
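To illustrate the kind of filtering a platform could offer, here is a small sketch over a hypothetical catalog; the item structure and the "marker" field are assumptions of this example, not an existing platform API.

```python
# A small sketch of marker-based filtering on a hypothetical catalog. The item
# structure and the "marker" field name are assumptions of this example.
CATALOG = [
    {"title": "Dawn over the steppe", "type": "photo", "marker": "PI"},
    {"title": "Procedural landscape #42", "type": "photo", "marker": None},
    {"title": "Portrait by the window", "type": "painting", "marker": "ЧИ"},
]

def human_made(items, markers=("PI", "ЧИ")):
    """Return only the items that carry a human-authorship marker."""
    return [item for item in items if item.get("marker") in markers]

for item in human_made(CATALOG):
    print(f'{item["title"]} [{item["marker"]}]')
```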
The system doesn't require revolutionary changes—it integrates organically into existing infrastructure and grows gradually as needed.
Invitation to Discussion: The Essence of a Public and Legislative Initiative
Technologies are developing rapidly, and distinguishing human from machine content is becoming increasingly relevant. The ЧИ and PI authenticity markers offer one possible tool for addressing this challenge. They're not the only solution, nor are they universal, but they're worthy of attention. This rapidly developing public initiative aims to bring legislators to the table.
We'd like to discuss this idea with you. How viable is it? What are its strengths and weaknesses? How can we improve it? Your experience, doubts, and proposals are essential for building a system that actually works.
If you create content, ask yourself: would you want to mark your work with such a marker? If you consume content, does it matter to you whether what you read, watch, or listen to comes from a human or a machine? Let's discuss this together—you can join the conversation right now.
Now we'll try to formulate our Public and Legislative Initiative:
We propose legislatively enshrining the right of authors (and, for government structures and institutions, the obligation) to voluntarily label content created without generative artificial intelligence using the standardized designations ЧИ (Человеческий интеллект / Chelovechesky Intellect) and PI (People Intelligence).
Legislative regulation should establish:
- A clear definition of "human-created content"
- Legal status of these markers as a form of voluntary authorship declaration
- Penalties for dishonest use of markers (applying them to AI-generated content)
- Verification mechanisms for platforms and publishers that choose to implement them
The goal of this initiative is to create a legal foundation for transparency in the digital environment, protect consumers' right to informed choice, and give authors a tool to confirm the authenticity of their work. The system remains voluntary and doesn't create barriers to using AI technologies, but it does cultivate a culture of honesty and responsibility in the digital space.
Initiative Author:
President of "Partner NKO"
Alexander Goryachev
Still have questions?
Write or call: +7 (495) 004-02-10





