Hackers of India

Generative AI and Cyber Crimes

 Brijesh Singh 

2023/08/07


Presentation Material

AI Generated Summarymay contain errors

Here is a summary of the content:

The speaker discusses the risks associated with large language models (LLMs) and generative AI. They highlight the existence of offline, uncensored models like Lama 2, Wizard, Vacuna, Falcon, and Mistral, which can be used to create harmful content, such as improvised explosive devices. The speaker emphasizes the importance of model alignment for society and general use, but notes that highly aligned models may sacrifice response quality.

The speaker also mentions other risks associated with LLMs, including:

  1. Prompt injection: manipulating the input prompt to exploit the model’s capabilities.
  2. Model poisoning: inserting malicious logic bombs or code into the neural network.
  3. Hidden prompts: using invisible font sizes or colors to hide instructions from humans but not from computers.
  4. Sequential communication attacks: using LLMs to engage in conversations, send emails, and even clone voices for phishing or social engineering attacks.

The speaker warns that these risks can lead to disinformation, fake news, and even election manipulation. They emphasize the need for guard rails around LLMs and generative AI, as well as an ethical framework, explainable AI, and human oversight in automated decision-making processes.