AI-driven content creation: Opportunities and challenges

Generative AI is revolutionising content creation, but challenges like quality, copyright, and privacy remain. CMOs must adopt AI responsibly to stay ahead.

John Premkumar

Generative AI has become a game-changer in content creation with its ability to quickly produce text, images, videos, and music—tasks that once relied solely on human creativity. Marketers, in particular, have embraced it for its ability to generate personalised, multimodal content. 

CMOs have recognised the value of generative AI and are quickly adopting it. According to a recent Infosys survey, 57% of marketers are actively using generative AI for content creation and personalisation. 

However, generative AI comes with its challenges. The survey also shows that while marketers are excited about adopting generative AI, their key concerns include data privacy (43%) and regulatory compliance (42%), as well as intellectual property, quality control, and ethical use.

While 2024 has seen widespread adoption of AI, the focus is now shifting to using the technology responsibly.

John Premkumar, Infosys

Quality, copyright, privacy, and creativity

A major issue with generative AI is the quality and accuracy of the content it produces.

While AI can produce seemingly coherent outputs, it can also introduce factual inaccuracies, known as hallucinations, as well as misleading information and biases—especially problematic in fields such as news, medicine, or technical documentation.

Hallucinations arise because generative AI's output is based on predicting the next word in a sequence—it does not check its outputs for accuracy.
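That next-word mechanism can be shown with a toy sampler. The word table below is invented for illustration—it is not a real language model—but it captures the key point: the generator only follows word-to-word probabilities and never verifies what it says.

```python
import random

def generate(model_probs, prompt, max_new_tokens=5):
    """Toy next-word sampler.

    The 'model' is just a table mapping the last word to candidate
    next words with probabilities. No fact checking ever happens:
    whatever the probabilities favour is what gets emitted.
    """
    words = prompt.split()
    for _ in range(max_new_tokens):
        candidates = model_probs.get(words[-1])
        if not candidates:
            break  # no continuation known for this word
        next_word = random.choices(
            list(candidates), weights=list(candidates.values())
        )[0]
        words.append(next_word)
    return " ".join(words)
```

With a one-choice-per-word table such as `{"paris": {"is": 1.0}, "is": {"nice": 1.0}}`, the sampler dutifully produces "paris is nice"—fluent, but nothing in the process checks whether the statement is true.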

Another concern is copyright, where there are two main issues. The first pertains to ownership of AI-generated content: AI cannot hold copyright, leaving it unclear whether the rights belong to the user or the platform developer.

Second, creators whose work has been used to train AI models are challenging those models as infringing their copyright, as in the recent lawsuit filed by Asian News International against OpenAI, the maker of ChatGPT, for using its news content without permission.

Data privacy risks are also a concern, as AI models could inadvertently leak sensitive information, violating privacy rights.

Additionally, the rise of AI-generated content raises worries about the decline of human creativity and the displacement of jobs in creative fields.

To address these concerns, a proactive and thoughtful approach is required to ensure that AI complements human efforts, strengthens content accuracy, and upholds legal and ethical standards. Marketers should build their use of AI on the following ethical foundations:

Human input and oversight: From preparing data for training the model to prompting the LLM for the desired output, human oversight is crucial. Blending AI-generated content with human supervision ensures originality, ethics, and alignment with the brand’s values and goals.

Using proprietary data: Training AI models with brand-specific data can address concerns about content quality and originality. 

Contracts and legal advocacy for copyright concerns: To avoid legal challenges with ownership, companies should establish clear contractual agreements that specify the ownership and liability of AI-generated content. 

• When using open-source LLMs, users should carefully assess each model's pros, cons, and legal risks. Reviewing licensing terms and ensuring datasets have proper permissions or fall under fair use will help reduce copyright infringement risks.

• Equipping AI systems with content detection tools that compare generated output against extensive databases of copyrighted material can help avoid inadvertently plagiarising existing works that have been used to train the model.
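One simple way such a detection tool can work is to flag generated text whose n-grams overlap heavily with a known corpus of protected material. The sketch below is a minimal illustration of that idea, not a production plagiarism checker:

```python
def ngrams(text, n=5):
    """Break text into overlapping n-word sequences."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated, reference, n=5):
    """Fraction of the generated text's n-grams that also appear
    in the reference corpus. A high ratio suggests the output may
    be reproducing existing material and should be reviewed."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    ref = ngrams(reference, n)
    return len(gen & ref) / len(gen)
```

In practice, a team would tune the n-gram length and set a threshold (say, flag anything above a chosen overlap ratio) and run the check against a much larger indexed database of copyrighted works.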

Technology and transparency for data privacy: To protect data privacy in AI-driven content creation, consider using anonymisation techniques such as data masking and differential privacy to reduce exposure. 

SAP uses these methods to train AI models without personal data. Implementing access control and obtaining user consent for data usage also helps ensure proper handling and fosters trust and accountability.
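The anonymisation techniques mentioned above can be sketched in a few lines. The regex patterns and the Laplace-noise count below are illustrative assumptions for this article, not SAP's actual methods:

```python
import re
import random

def mask_pii(text):
    """Data masking: replace email addresses and phone numbers with
    placeholders before the text is used for model training."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)
    return text

def dp_count(true_count, epsilon=1.0):
    """Differential privacy: release a count with Laplace noise so no
    single individual's presence can be inferred from the result.
    The difference of two Exponential(epsilon) draws is Laplace-
    distributed with scale 1/epsilon."""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

Masking removes identifiers before training, while the noisy count lets aggregate statistics be shared without exposing any one record; smaller epsilon means more noise and stronger privacy.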

Empowering employees to alleviate job loss fears: Organisations should train employees to work alongside AI technologies, positioning AI as a tool rather than a threat. It is crucial to equip staff with generative AI skills and involve them in quality testing and usability assessments.

AI-driven content creation: Best practices for CMOs

Companies can mitigate the risks and challenges of generative AI by implementing clear frameworks, establishing dedicated teams, and fostering collaboration with lawmakers.

Specifically, CMOs should bring together cross-functional teams—legal, ethical, technical, creative, and marketing—to align AI-driven content creation with marketing goals and ethical standards.

Key recommendations include:

• Establish dedicated teams to address AI-related copyright, legal, and data privacy concerns.

• Set up a responsible AI office to oversee ethical practices and ensure compliance.

• Educate all employees on AI usage, ensuring they understand ethical practices, accountability, and the responsible application of AI.

• Collaborate with lawmakers to shape clear and specific copyright laws for AI-generated content.

By taking these steps, organisations can harness the power of AI to scale content creation, drive personalisation, and unlock new opportunities for innovation and efficiency, all while ensuring compliance with ethical and legal standards.

(John Premkumar is the Vice President & Service Offering Head for Digital Experience Business at Infosys.) 

 
