Organizations Restrict Generative AI Amid Privacy and Security Concerns

By Sharique

Cisco's 2024 Data Privacy Benchmark Study reveals that over a quarter (27%) of organizations have temporarily banned the use of generative AI within their workforce due to privacy and data security concerns. Additionally, 63% have implemented controls limiting what data can be entered into these tools, and 61% restrict which generative AI tools employees may use. Despite these precautions, organizations admit to entering sensitive data, including internal processes (62%), employee information (45%), non-public company data (42%), and customer details (38%). Most respondents (92%) see generative AI as a fundamentally different technology that poses novel challenges and requires new approaches to managing data and mitigating risk. Key concerns include potential harm to legal and intellectual property rights (69%), the risk that entered information could be shared publicly or with competitors (68%), and the possibility of incorrect information being returned to users (68%). To build trust, 91% of security and privacy professionals acknowledge that more must be done to reassure customers about how their data is used with AI.

Data privacy is deemed critical to commercial success by 94% of security and privacy professionals, and 97% believe customers would not buy from a company that fails to protect their data. The ethical use of data is acknowledged as a responsibility by 97%, and 95% say the business benefits of privacy investment outweigh the costs. Nearly all respondents (98%) report privacy metrics to their board, the most common being audit results (44%), data breaches (43%), data subject requests (31%), and incident response (29%). Government data privacy laws are endorsed by 80% of respondents, with only 6% perceiving a negative impact and 95% noting positive effects. Compliance with these laws is seen as essential to demonstrating effective data protection to consumers, reinforcing the connection between privacy and customer trust in the era of AI.