Now is the time to get started with generative artificial intelligence (AI) — especially because your competition is most likely using the technology already or planning to use it soon.
Need proof? Consider this: In a survey recently conducted by Scale AI, more than 8 in 10 business leaders said they either have Generative AI already in production or are planning to use or experiment with the technology soon.
For further evidence, consider the rapid adoption of the ChatGPT chatbot, software that uses Generative AI. In just the first week after ChatGPT launched, it gained over 1 million users. Since then, ChatGPT has attracted more than 100 million users who generate some 10 million queries daily. While these figures undoubtedly include some nonbusiness users, they nonetheless point to the high level of interest that Generative AI has, well, generated. ChatGPT has also spawned competitors and collaborators that are rushing to capture market share, test functionality or embed the technology in their own offerings.
Big advances in both business and tech
What’s behind this sudden groundswell of activity around Generative AI? One major factor is that businesses expect the technology to deliver real business benefits, based on what they have already seen by leveraging AI capabilities in robotic process automation, computer vision and more. They have realized revenue increases and decreased costs for everything from supply chain management to marketing and sales. The technology has also helped them strengthen collaboration, discover valuable insights, and improve products, programs, services and offers.
The latest incarnation of Generative AI is also surprisingly sophisticated, a reflection of how rapidly and significantly the technology has matured.
The basic concept behind Generative AI is twofold. First is its ability to learn patterns and structures from existing content. Second is the technology’s ability to use what it has learned to generate entirely new content. This content can include text, videos, images, audio and code.
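That twofold concept can be illustrated with a toy sketch: a simple Markov-chain text generator. This is a far cry from the large language models behind products like ChatGPT, but it is built on the same learn-then-generate principle; the corpus, function names and parameters below are purely illustrative.

```python
import random
from collections import defaultdict

def learn_patterns(text):
    """Step 1: learn patterns from existing content --
    here, which word tends to follow which."""
    words = text.split()
    transitions = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)
    return transitions

def generate(transitions, start, length=8, seed=0):
    """Step 2: use the learned patterns to produce new content."""
    rng = random.Random(seed)
    output = [start]
    for _ in range(length - 1):
        followers = transitions.get(output[-1])
        if not followers:
            break
        output.append(rng.choice(followers))
    return " ".join(output)

corpus = ("the model learns patterns from existing content "
          "and the model generates entirely new content")
model = learn_patterns(corpus)
print(generate(model, "the"))
```

Modern Generative AI replaces the word-pair table with a neural network trained on vast datasets, but the two steps, learning structure from existing content and then sampling new content from what was learned, are the same.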
Many early examples have involved the generation of new text, such as articles, press releases and news stories. But Generative AI can be used for much more. Other applications now include advanced assistants and “copilots,” education, coding, accelerating new discoveries and automation.
For example, GPT-4, the latest iteration from developer OpenAI, can generate, edit and iterate with users on both creative and technical tasks. That breadth of functionality has led to GPT-4’s adoption by a wide range of users, including Duolingo for its language-teaching app, Morgan Stanley for its wealth management services and even the government of Iceland.
How to use Generative AI now
With so much progress being made with Generative AI today, now is no time to remain on the sidelines. In fact, there is much your organization can do with the technology today:
- You can use Generative AI to create entirely new content, including text, images, audio, videos and code.
- You can apply Generative AI to uses that include PR and marketing, drug discovery, knowledge management, HR, computer programming and much more. Industries already using Generative AI include media and entertainment, manufacturing, fashion and financial services.
- You can enjoy serious business benefits. The Scale AI survey found that the benefits from traditional AI can include improved customer service (cited by 61% of respondents), greater operational efficiency (56%), improved profitability (50%) and the development of new product capabilities (49%). Your organization can enjoy these gains, too.
The need for responsible AI
To be sure, there are serious challenges and concerns facing the use of Generative AI, including ethics, regulatory compliance and security. Currently, few regulations limit the use of Generative AI or protect the privacy and security of its users. But this will change, and soon.
Other concerns include the possibility of input bias, a lack of diversity in AI’s values, the exposure of confidential and private data, and the theft of intellectual property. Additionally, there are concerns about what’s known as a “hallucination,” the creation of false or misleading content that is not justified by the AI’s training data.
However, with the right safeguards in place, organizations can mitigate many challenges and concerns. In this way, they can embark on their Generative AI journey, even if only on an experimental basis, rather than remaining on the sidelines.
What organizations need today is an approach that leads to responsible AI. This involves six important tasks:
- Define proper usage and governance policies. Set clear guidelines for your staff, letting them know what is — as well as what is not — permissible when using Generative AI.
- Introduce accountability via an organizational structure. This could include the creation of an AI oversight committee or governance office. You’ll also want a process for elevating and resolving concerns around the use of AI, whether generative or otherwise.
- Mitigate the risks and challenges to increase resilience and proper output. No one knows your business better than you do. Apply that knowledge to outline the ways Generative AI could be misused in your business. Then apply safeguards to either prevent these misuses or generate alarms when they do occur.
- Acknowledge and remediate the inherent bias of Generative AI models. Because the technology learns from existing content, you’ll need to check this content for bias. Then do what you can to correct for it.
- Create human oversight and feedback channels. Generative AI may appear intelligent, but in the end, it’s only a computer program. You still must design, embed and implement AI with human oversight to ensure the technology is used safely, constructively and securely.
- Ensure transparency. Share information about your Generative AI projects widely and openly within your organization. Clearly explain the project’s goals. Detail the measures being taken to ensure security and fight bias. Welcome questions, and do your best to answer them.
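Parts of these tasks can be automated. The sketch below, with purely hypothetical policy rules and function names, shows one way to pair a usage policy with the alarm-raising safeguard described above: blocking prompts that would expose confidential data and recording an alert for an oversight committee to review.

```python
import re

# Hypothetical usage policy: categories of disallowed prompt content.
# These rules are illustrative only; a real policy would be set by the
# organization's AI governance body.
POLICY = {
    "confidential_data": re.compile(r"\b(ssn|password|api[_ ]key)\b", re.I),
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g., a US SSN pattern
}

# Alerts recorded for human review (the "generate alarms" safeguard).
alerts = []

def check_prompt(prompt):
    """Return True if the prompt is allowed under POLICY.

    If a rule matches, block the prompt and record an alert so the
    oversight committee can review the attempted use."""
    for category, pattern in POLICY.items():
        if pattern.search(prompt):
            alerts.append({"category": category, "prompt": prompt})
            return False
    return True
```

A gate like this sits in front of the Generative AI system: it cannot catch every misuse, which is why the human oversight and feedback channels described above remain essential.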
How to get started
Ready to get started with Generative AI, but not sure how? Try initiating these four steps:
- Form an AI governance body, such as an AI Council. Make sure it involves not only your AI subject-matter experts, but also members of your legal, ethics and security staffs. Board-level exposure will be critical, too.
- Create AI policies and guidelines for the safe use of AI. Find the right blend, one that’s not too restrictive, but still robust enough to keep your organization safe.
- Openly discuss the technology’s possible impact on jobs and duties. Explain which tasks now done by humans could soon be augmented or performed by Generative AI, and which could not. Also detail which jobs, if any, the technology is likely to replace, as well as your plans to retrain those affected and create new job opportunities for them.
- Focus on companywide enablement. This means enabling not just IT and engineering, but business users as well.
How DXC can help you
Wherever your organization is on its Generative AI journey, DXC can help. We offer AI services that include roadmap and strategy, design, implementation and industrialization.
You can also benefit from DXC’s deep experience with AI technology. We’ve been developing our AI capabilities for nearly two decades, starting with Natural Language Processing (NLP) and earlier versions of foundation models (AI systems trained on large quantities of data), including open-source predecessors. To date, our Global AI Practice has helped organizations such as these use AI for business gains:
- American Airlines worked with DXC to apply the power of AI and machine learning to improve the accuracy of predictions of aircraft touchdown times and runway arrivals.
- A leading European aircraft-maintenance company partnered with DXC to use AI for improving the accuracy of its sales quotes and bolstering its maintenance, repair and overhaul (MRO) services.
Also, DXC practices what it preaches. We’re committed to using Generative AI internally. And to ensure that we use these AI engines ethically and safely, DXC has set internal policies for their responsible use.