Make no bones about it, data privacy/security and the ethical/responsible use of generative AI (GAI) are hot-button issues. Concerns range from needing to protect sensitive data and intellectual property to preventing the use of GAI to create deepfakes, fake news, and other misinformation. There is also the issue of perpetuating biases, which can lead to discrimination. But to what extent are enterprises actually taking steps to address these concerns as they rapidly adopt GAI? In a recent Cutter survey, we asked organizations this key question (see Figure 1).
More than 40% of surveyed organizations say they have taken steps to address data privacy and security concerns that could impact their GAI adoption initiatives; another 31% say they plan to do so within the next 6-12 months or are seriously considering it. Overall, 73% of surveyed organizations have either already implemented practices to ensure privacy and security in their GAI usage, plan to do so in the near future, or are considering doing so.
On the other hand, just under one-third of organizations said they have undertaken efforts to ensure they can use GAI in ethical and responsible ways; another 37% indicated they plan to do so within the next 6-12 months or are considering it. Overall, 69% of organizations have either already implemented practices that allow them to use GAI ethically and responsibly, plan to do so in the near future, or are considering doing so.
Significance of Findings
These findings are revealing: they indicate that organizations are not only moving to ensure data privacy, security, and the ethical/responsible use of GAI technology but are doing so rapidly.
Organizations currently appear more focused on ensuring data privacy and security in their GAI initiatives. This is hardly surprising: security and privacy have always ranked among the top concerns in enterprise adoption of new technologies. However, our research also strongly suggests that, over the next 6-12 months, we should expect to see more organizations ramp up their efforts to ensure that their use of GAI is ethical and responsible.
Going forward, we should expect end-user organizations, and especially the Big Tech companies developing AI products, to place an even greater focus on the ethical and responsible use of GAI (and AI technologies in general), rather than treating these issues as concerns to be addressed after the technology has been implemented and released to the world. Such a shift, coupled with greater transparency around functionality (including training, decisioning, and output), would go a long way toward alleviating the fear and mistrust loudly voiced by opponents of the widespread adoption of AI technology.
Finally, I’d like to hear your thoughts on GAI, particularly practices for ensuring data privacy, security, and its ethical and responsible use. As always, your comments will be held in strict confidence. You can email me at experts@cutter.com or call +1 510 356 7299 with your comments.