TechCentralTechCentral
    Facebook X (Twitter) YouTube LinkedIn
    WhatsApp Facebook X (Twitter) LinkedIn YouTube
    TechCentralTechCentral
    • News

      Dimension Data to be renamed NTT Data

      27 October 2023

      DStv makes RWC final stream available for R19.95

      27 October 2023

      Karpowership gets green light for Richards Bay plant

      27 October 2023

      Why people wave on Zoom

      27 October 2023

      Microsoft gaining ground in cloud race with AWS, Google

      27 October 2023
    • World

      Intel beats expectations; manufacturing momentum builds

      27 October 2023

      Google CEO to testify on Monday in antitrust trial

      27 October 2023

      Huawei sees growth in cloud, digital power segments

      27 October 2023

      China rushes to swap Western tech for domestic options

      26 October 2023

      Alphabet, Meta deliver solid financial performances

      26 October 2023
    • In-depth

      Quantum computers in 2023: what they do and where they’re heading

      22 October 2023

      How did Stephen van Coller really do as EOH CEO?

      19 October 2023

      Risc-V emerges as new front in US-China tech war

      6 October 2023

      Get ready for a tidal wave of software M&A

      26 September 2023

      Watch | A tour of Vumatel’s Alexandra fibre roll-out

      19 September 2023
    • TCS

      TCS | Mesh.trade’s Connie Bloem on the future of finance

      26 October 2023

      TCS | Rahul Jain on Peach Payments’ big funding round

      23 October 2023

      TCS+ | How MiWay uses conversation analytics

      16 October 2023

      TCS+ | The story behind MTN SuperFlex

      13 October 2023

      TCS | The Information Regulator bares its teeth – an interview with Pansy Tlakula

      6 October 2023
    • Opinion

      Big banks, take note: PayShap should be free

      20 October 2023

      Eskom rolling out virtual wheeling – here’s how it works

      4 October 2023

      How blockchain can help defeat the scourge of counterfeit goods

      29 September 2023

      There’s more to the skills crisis than emigration

      29 September 2023

      The role of banks in Africa’s digital future

      22 August 2023
    • Company Hubs
      • 4IRI
      • Africa Data Centres
      • Altron Document Solutions
      • Altron Systems Integration
      • Arctic Wolf
      • AvertITD
      • CoCre8
      • CYBER1 Solutions
      • Digicloud Africa
      • Digimune
      • E4
      • Entelect
      • ESET
      • Euphoria Telecom
      • iKhokha
      • Incredible Business
      • iONLINE
      • LSD Open
      • Maxtec
      • MiRO
      • NEC XON
      • Next DLP
      • Ricoh
      • Skybox Security
      • SkyWire
      • Velocity Group
      • Videri Digital
    • Sections
      • AI and machine learning
      • Banking
      • Broadcasting and Media
      • Cloud computing
      • Consumer electronics
      • Cryptocurrencies
      • E-commerce
      • Education and skills
      • Energy
      • Fintech
      • Information security
      • Internet and connectivity
      • Internet of Things
      • Investment
      • IT services
      • Metaverse and gaming
      • Motoring and transport
      • Open-source software
      • Public sector
      • Science
      • Social media
      • Talent and leadership
      • Telecoms
    • Events
    • Advertise
    TechCentralTechCentral
    Home » Sections » AI and machine learning » Protecting IP and data in the AI-as-a-service era

    Protecting IP and data in the AI-as-a-service era

    Businesses need to realise that the AI revolution isn't on the horizon - it's already here.
    By Next DLP24 August 2023
    Facebook Twitter LinkedIn WhatsApp Telegram Email

    Protecting IP and data in the AI-as-a-service eraThe AI landscape, with the emergence of tools like ChatGPT, Google Bard and other large language models (LLMs), has become deeply embedded in the operational fabric of our business and personal lives.

    Its recent evolution into AI-as-a-service (AIaaS) has been a game changer. No longer are organisations required to invest heavily in building their own AI infrastructure. Instead, with AIaaS, they can conveniently harness the might of AI to optimise operations, enhance user experiences and generate previously unimaginable nsights.

    Chatbots, AI-generated content and advanced search tools are merely the tip of the iceberg.

    Understanding the nuances

    Successfully navigating the AI landscape requires a keen understanding of the nuances of these tools, their capabilities and their potential pitfalls – only then can businesses confidently and securely capitalise on AI’s immense potential.

    Each AI tool, depending on its purpose and function, comes with its own set of risks, ranging from data privacy concerns to intellectual property threats. Imagine a situation where proprietary data, once thought to be securely held, is accidentally integrated into a public-facing chatbot, or where AI-generated content unknowingly breaches copyright laws.

    These aren’t just hypothetical scenarios: they have already happened.

    The strengths and pitfalls

    In the spirit of supporting, rather than slowing down or stopping, businesses in their daily operations, we’ve compiled what we’ve found to be the most popular generative AI tools, their strengths, their pitfalls, and what businesses should consider when making the decision to use them.

    There are several categories of generative AI tools. Chatbots, for example, are used in various scenarios, from guiding website visitors to generating data-driven responses and enhancing user engagement and business intelligence for businesses in every industry.

    Next, synthetic data: AI-generated datasets are enabling businesses to circumvent the need for vast real-world data, ensuring privacy while refining algorithms. We also have AI-generated code, where AI is accelerating software development and turning mere descriptions into executable code.

    Then we have “search”, where new AI tools offer natural language responses and are reimagining what search engines are capable of. However, many like ChatGPT, remain “black box” models whose mechanisms aren’t always transparent. AI tools are also revolutionising content generation, be it converting audio to text or transforming descriptions into visuals.

    Understanding the AI risk spectrum

    With AI, there are several hypothetical risks, and many pragmatic, real-life ones. The latter include things like consumer privacy, legal issues, bias, ethics and others. The former includes machines becoming sentient and taking over the world, AI programmed for harm, or AI developing behaviour that is destructive.

    Either way, as AI tools become more integrated into our organisations, there is growing concern over the risks they pose to data security. For example, intellectual property risk is very real. Platforms continually learn and adapt from user inputs. This presents a risk that proprietary information becomes embedded within a system’s dataset. A case in point would be Samsung’s IP exposure incident after an employee interfaced with ChatGPT.

    Covering all the bases

    To counter this risk, we recommend that businesses recognise that AI tools can be channels for data leakage. More and more, workforces are using AI tools like ChatGPT to help with their daily tasks, often without considering the potential consequences of uploading proprietary or confidential data. One needs to thoroughly scrutinise and assess an AI tool’s encryption, data handling policies and ownership agreements.

    IP ownership is another issue. An AI’s output is based on its training data, potentially sourced from multiple proprietary inputs. This blurred lineage raises questions about the ownership of generated outputs. In this instance, we recommend reviewing the legal terms and conditions of AI systems and even engaging legal teams during evaluations.

    All third-party generative AI tools should be carefully reviewed to understand both the legal protections and potential exposures. There are subtleties that are crucial to consider, including those that cover ownership of intellectual property and privacy matters. Check the relevant terms and conditions periodically, as these documents may be updated without notifying users.

    Fighting AI system attacks

    Entities also need to remember that AI tools aren’t immune to hacking. Bad actors are able to manipulate these systems in order to alter their behaviour to help them achieve a malicious objective. For instance, techniques such as Indirect prompt injection can manipulate chatbots, exposing users to risks.

    As AI systems are increasingly integrated into critical components of our lives, these attacks represent a clear and present danger, with the potential to have catastrophic effects on the security not only of companies, but nations, too.

    To protect against attacks of this nature, we recommend having AI usage policies, much in the same way companies today set and review social media policies. Also, establish reporting mechanisms for irregular outputs, and prepare for potential system attacks.

    The drive to implement AI security solutions that are able to respond to rapidly changing threats makes the need to secure AI itself even more urgent. The algorithms that we rely on to detect and respond to attacks must themselves be protected from abuse and compromise.

    Keeping up with regulations

    Because data input into AI systems might be stored, it could well fall under privacy regulations such as Popia, GDPR or CCPA. Moreover, AI integrations with platforms like Facebook can further complicate data privacy landscapes.

    This is why it is key to ensure data encryption and compliance with global data protection regulations. Entities need to thoroughly understand AI providers’ data storage, anonymisation, and encryption policies. Furthermore, because AI is such a rapidly evolving and complex field, security teams must stay abreast of all developments in this sphere. Understanding the challenges is the first step in protecting your organisation.

    Using AI services requires as much diligence as any online platform. This includes understanding license agreements, using robust passwords, and promoting user awareness. This is why cyber hygiene training needs to be prioritised, multi-factor authentication set up, and stringent password policies enforced.

    AI-as-a-service era

    Historically, businesses may have been complacent about data submissions due to a lack of awareness, limited regulatory consequences and the absence of high-profile data breaches. However, with the advent of AIaaS, data is being used more and more to train models, which amplifies the risks. As AIaaS becomes ubiquitous, safeguarding sensitive data is paramount to maintaining trust, ensuring regulatory compliance, and preventing potential misuse or exposure of proprietary information.

    All businesses should consider deploying data loss prevention tools to monitor and control data submissions to AI services. These can recognise and classify sensitive data, preventing inadvertent exposures.

    Businesses need to realise that the AI revolution isn’t on the horizon — it’s already here. As AI becomes more entrenched in our operational processes, we need to harness its power, yet navigate its risks judiciously. By understanding potential dangers and adopting holistic protection strategies, organisations can strike a balance between innovation and security.

    About Next
    Next DLP (“Next”) is a leading provider of insider risk and data protection solutions. The Reveal Platform by Next uncovers risk, stops data loss, educates employees and fulfils security, compliance and regulatory needs. The company’s leadership brings decades of cyber and technology experience from Fortra (previously HelpSystems), DigitalGuardian, Crowdstrike, Forcepoint, Mimecast, IBM, Cisco and Veracode. Next is trusted by organisations big and small, from the Fortune 100 to fast-growing healthcare and technology companies. For more, visit nextdlp.com, or connect on LinkedIn or YouTube.

    • Read more articles by Next DLP on TechCentral
    • This promoted content was paid for by the party concerned
    ChatGPT NeXT Next DLP OpenAI Samsung
    Share. Facebook Twitter LinkedIn WhatsApp Telegram Email
    Previous ArticleCallMiner named only leader in Conversation Intelligence for Customer Service
    Next Article Nvidia soars on AI optimism

    Related Posts

    Dimension Data to be renamed NTT Data

    27 October 2023

    DStv makes RWC final stream available for R19.95

    27 October 2023

    Karpowership gets green light for Richards Bay plant

    27 October 2023
    Add A Comment

    Comments are closed.

    Promoted

    Acsa aims for carbon neutrality by 2050

    27 October 2023

    iKhokha, Shopstar pave the way for simpler e-commerce

    27 October 2023

    Flutter vs React Native: a comprehensive comparison

    27 October 2023
    Opinion

    Big banks, take note: PayShap should be free

    20 October 2023

    Eskom rolling out virtual wheeling – here’s how it works

    4 October 2023

    How blockchain can help defeat the scourge of counterfeit goods

    29 September 2023

    Subscribe to Updates

    Get the best South African technology news and analysis delivered to your e-mail inbox every morning.

    © 2009 - 2023 NewsCentral Media

    Type above and press Enter to search. Press Esc to cancel.