Generative AI: 5 Guidelines for Responsible Development


Done well, generative AI offers clear benefits:

Ease administrative burden: Businesses with heavy manual processes (such as medical coding and billing) can use generative AI to speed up work, freeing employees to focus on more fulfilling tasks.

Make outputs traceable and explainable: Ensure that a human can understand how a tool arrived at an outcome, including any potential biases. This helps individuals challenge a decision that affects them.
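The traceability point above can be sketched in code. This is a minimal, hypothetical example (the `ProvenanceRecord` type and `generate_with_provenance` helper are illustrative names, not part of any real library): every generated output is stored alongside the prompt and model that produced it, so a reviewer can later reconstruct how the outcome was reached.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: wrap every generated output in a provenance record
# so a human can later trace how an outcome was produced and contest it.
@dataclass
class ProvenanceRecord:
    prompt: str
    model_name: str
    output: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def generate_with_provenance(prompt: str, model_name: str, generate) -> ProvenanceRecord:
    """Call a generation function (passed in by the caller) and keep the
    inputs alongside the output so the result stays auditable."""
    return ProvenanceRecord(prompt=prompt, model_name=model_name,
                            output=generate(prompt))

# Example with a stand-in model function:
record = generate_with_provenance("Summarize claim #123", "demo-model",
                                  lambda p: f"summary of: {p}")
```

Whatever the real model call looks like, the design point is the same: the record, not just the raw output, is what gets stored and surfaced to affected individuals.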

1. Human-centered design

Generative AI systems create new content based on input from users—prompts may be text, images, design ideas, musical notes, or more complex content. Users can add to or modify the generated content via a user interface that allows them to manipulate visual elements and words associated with those elements.

These features allow generative AI models to do things like organize data, expedite research and product first drafts, and assist knowledge workers with solving problems. In these cases, the technology has a human multiplier effect for businesses and can save time by automating repetitive tasks that otherwise sap staff energy.

But generative AI also presents challenges that require thoughtful consideration, particularly when it is used in sensitive or highly personal ways. For example, if a business uses generative AI to develop candidate profiles for new hires, employees could worry that the model will discriminate against them or recommend them for dismissal. To avoid this, a company needs to establish clear, transparent protocols for how its use of generative AI will affect hiring and firing decisions.

2. Trustworthiness

The first step in creating your generative AI policy is to set clear objectives that are in line with your organization’s broader vision and values. Once you’ve established your goals, be sure to communicate them to employees and leadership so that everyone is on the same page about how generative AI will be used.

The latest advancements in generative AI are based on transformers, breakthrough language models that can be trained at far larger scale and complexity without the need to label each individual word or phrase ahead of time. These systems can answer more complex questions and track relationships across large corpora of text, images, chemicals, proteins, and DNA.

Generative AI tools used to make administrative decisions are subject to federal laws and policies that require a human in the loop and a determination that the benefits outweigh potential harms. Institutions should consult with their designated officials for privacy and security management and the institutional chief information officer before deploying or using these tools. They should also assess and mitigate risks, document problems and pause deployment if necessary to meet performance goals.
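The human-in-the-loop requirement above can be made concrete with a small sketch. This is a hypothetical illustration (the `DraftDecision` class and `approve` function are invented for this example): an AI-drafted administrative decision is held in a pending state and cannot take effect until a named human reviewer signs off.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical human-in-the-loop gate: the AI output is only a draft;
# a named reviewer must approve it before it becomes a decision.
@dataclass
class DraftDecision:
    subject: str
    ai_recommendation: str
    approved_by: Optional[str] = None

    @property
    def is_final(self) -> bool:
        # The draft takes effect only once a human has signed off.
        return self.approved_by is not None

def approve(draft: DraftDecision, reviewer: str) -> DraftDecision:
    draft.approved_by = reviewer
    return draft

draft = DraftDecision("benefit claim #42", "approve payment")
assert not draft.is_final           # AI output alone is not a decision
approve(draft, "reviewer.name@example.gov")
assert draft.is_final               # only now may the decision be applied
```

The design choice is that finality lives in the approval field, not the AI output, so skipping review is structurally impossible rather than merely discouraged.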

3. Reliability

Historically, technology was most effective in automating tasks that were relatively rote and had well-defined, standardized rules. Generative AI has the potential to change that, automating a wide range of cognitive tasks and introducing significant changes to work processes and functions.

As a result, enterprises should carefully consider the implications of these tools for employees and workplace culture. In the planning and design stages, companies should use Gender-Based Analysis Plus (GBA Plus) to understand how these tools could affect different population groups and develop measures to mitigate any negative impacts.

GBA Plus analysis is a key component in developing a generative AI policy and a critical step for responsible development. It allows for the inclusion of ethical principles and best practices for these emerging technologies.

Having a clear policy also promotes transparency and accountability. It prompts organizations to regularly evaluate and validate the quality and accuracy of AI outputs. This mitigates misinformation, poor decisions, and subpar outputs. It also helps to reassure employees, mitigating fears of replacement and over-monitoring, and fosters organizational morale. The policy should be accessible to all employees, posted on an internal wiki or intranet, and updated as needed.
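The "regularly evaluate and validate" step above can leave an auditable trail. Below is a minimal sketch, assuming illustrative checks only (the `validate_output` and `log_validation` helpers are hypothetical names; real checks would be domain-specific, such as factuality or bias reviews): sampled outputs are run through simple quality checks and the results are logged.

```python
import json

# Hypothetical sketch: run cheap quality checks on a generated output
# and append the result to an audit log, so policy reviews have a record.
def validate_output(text: str, max_len: int = 500) -> dict:
    """Illustrative checks; production checks would be domain-specific."""
    return {
        "non_empty": bool(text.strip()),
        "within_length": len(text) <= max_len,
    }

def log_validation(output: str, log: list) -> dict:
    result = validate_output(output)
    # Store a truncated copy of the output with its check results.
    log.append(json.dumps({"output": output[:80], "checks": result}))
    return result

audit_log = []
result = log_validation("Quarterly summary draft ...", audit_log)
```

Keeping the log in a durable store (rather than a list) is the obvious next step; the point is that validation happens on a schedule and leaves evidence, not just good intentions.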

4. Security

In an era of data breaches and distrust, public sector leaders must make sure that generative AI is secure. This means ensuring that the system can’t be exploited to compromise personal information or cause any other negative impact.

It also means making sure that the system’s model, algorithms and outputs can be audited and contested by citizens or other external parties (with due consideration for privacy). This can help to ensure that systems are performing as intended and do not generate false positive results.

One example is the way generative AI can learn from training data to impersonate real users in email, social media, or video communications. This enables malicious actors to spread misinformation or trick stakeholders into sharing sensitive information.

Another important security concern is the way that generative AI can produce artifacts that infringe on copyrights and licenses. It’s not always possible to determine which parts of a training dataset adhere to particular copyrights or licenses, and this can lead to unexpected legal implications for users that depend on these systems.

5. Privacy

The rapid developments in generative AI have caught many enterprise leaders off guard. They must recommit to AI practices that prioritize transparency, fairness, and accountability.

For example, local governments may prohibit the use of generative AI to generate images, audio recordings or videos that represent a real public official, employee or member of the public (as exhibited by San Francisco’s draft guidelines for staff using Generative AI). They may also want to review and update their cyber, data governance and privacy protocols to mitigate the risks of malicious actors utilizing generative AI tools to infer private information, unravel identities or conduct cyber attacks.

Some generative AI models are not transparent or easy to understand, raising the risk of misinterpretation or bias. Local governments can address this issue by ensuring that any policies or guidelines on the use of generative AI are written in plain language to ensure that employees actually understand them. They should also provide ongoing training to help them recognize the risks and nuances of generative AI. This will be essential to the success of any generative AI initiatives that they adopt.

Dulquer X Margin
