Growing reliance on AI is unlikely to play out according to any of the three most common views of how AI will affect our future. Each...
If you've ever been to an expensive restaurant and ordered a familiar dish like, say, lasagna, but received a plate with five different elements...
Recent events have confirmed that the cyber realm can be used to disrupt democracies as surely as it can destabilize dictatorships. The weaponization of information and its malicious dissemination through social media push citizens into polarized echo chambers and pull at the social fabric of a country. Present technologies, enhanced by current and upcoming Artificial Intelligence (AI) capabilities, could greatly exacerbate disinformation and other cyber threats to democracy.
Whether AI and the technologies it enables will reach their full potential depends on the workforce that will work alongside them. Yet the skills...
Introduction
Trustworthy vs Responsible AI
Trustworthy AI
Attributes of trustworthy AI
1. Transparent, interpretable and explainable
2. Accountable
3. Reliable, resilient, safe and secure
4. Fair and non-discriminatory
5. Committed to privacy...
If you’ve read the many predictions about the future of AI, you’ve likely found them to be wildly different. They range from AI spelling...
Meta-attacks represent a sophisticated form of cybersecurity threat, utilizing machine learning algorithms to target and compromise other machine learning systems. Unlike traditional cyberattacks, which may employ brute-force methods or exploit software vulnerabilities, meta-attacks are more nuanced, leveraging the intrinsic weaknesses in machine learning architectures for a more potent impact. For instance, a meta-attack might use its own machine-learning model to generate exceptionally effective adversarial examples designed to mislead the target system into making errors. By applying machine learning against itself, meta-attacks raise the stakes in the cybersecurity landscape, demanding more advanced defensive strategies to counter these highly adaptive threats.
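A minimal sketch of how such an attack might look in practice, assuming a black-box target that the attacker can only query for labels: the attacker trains a local surrogate model on the target's responses and then crafts adversarial inputs against the surrogate, hoping they transfer. The models, dataset, query strategy, and perturbation budget below are illustrative assumptions, not drawn from any specific incident.

```python
# Sketch: attacking a black-box classifier through a locally trained surrogate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in for the victim model; in a real attack this would be a remote API
# the attacker can only query (hypothetical target_predict interface).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
target = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)

def target_predict(batch):
    return target.predict(batch)

# Step 1: query the target and train a local surrogate on its answers.
X_query = X[:1000] + rng.normal(scale=0.1, size=(1000, 20))
surrogate = LogisticRegression(max_iter=1000).fit(X_query, target_predict(X_query))

# Step 2: craft an adversarial example against the surrogate's decision boundary
# by stepping against its weight vector (a crude linear analogue of gradient attacks).
x = X[:1].copy()
eps = 0.5  # assumed perturbation budget
step = np.sign(surrogate.coef_[0]) * np.sign(surrogate.decision_function(x))
x_adv = x - eps * step

# Step 3: check whether the perturbation transfers back to the black-box target.
print("target label before:", target_predict(x)[0])
print("target label after :", target_predict(x_adv)[0])
```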
In simplest terms, a multimodal model is a type of machine learning algorithm designed to process more than one type of data, be it text, images, audio, or even video. Traditional models often specialize in one form of data; for example, text models focus solely on textual information, while image recognition models zero in on visual data. In contrast, a multimodal model combines these specializations, allowing it to analyze and make predictions based on a diverse range of data inputs.
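As a rough illustration, the sketch below wires a tiny text encoder and a tiny image encoder into one classifier by concatenating their features; the layer sizes and fusion-by-concatenation design are illustrative assumptions rather than any particular published architecture.

```python
# Sketch: a tiny multimodal classifier that fuses text and image features.
import torch
import torch.nn as nn

class TinyMultimodalClassifier(nn.Module):
    def __init__(self, vocab_size=1000, num_classes=2):
        super().__init__()
        # Text branch: bag-of-embeddings over token ids.
        self.text_encoder = nn.EmbeddingBag(vocab_size, 64)
        # Image branch: a small convolutional feature extractor.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion head: concatenate both modality embeddings, then classify.
        self.head = nn.Linear(64 + 8, num_classes)

    def forward(self, token_ids, image):
        text_feat = self.text_encoder(token_ids)
        image_feat = self.image_encoder(image)
        fused = torch.cat([text_feat, image_feat], dim=1)
        return self.head(fused)

model = TinyMultimodalClassifier()
tokens = torch.randint(0, 1000, (4, 12))   # batch of 4 token sequences
images = torch.rand(4, 3, 32, 32)          # batch of 4 RGB images
print(model(tokens, images).shape)         # torch.Size([4, 2])
```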
Differential Privacy is a privacy paradigm that aims to reconcile the conflicting needs of data utility and individual privacy. Rooted in the mathematical theories of privacy and cryptography, Differential Privacy offers quantifiable privacy guarantees and has garnered substantial attention for its capability to provide statistical insights from data without compromising the privacy of individual entries. The framework achieves this delicate balance by adding carefully calibrated noise to query results, most commonly through the Laplace or Gaussian mechanisms.
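The sketch below shows the Laplace mechanism applied to a simple counting query, assuming a sensitivity of 1 (adding or removing one record changes the count by at most 1); the epsilon value and the data are illustrative.

```python
# Sketch: the Laplace mechanism for a differentially private counting query.
import numpy as np

rng = np.random.default_rng(42)

def laplace_count(data, predicate, epsilon=0.5, sensitivity=1.0):
    """Return a noisy count of records matching predicate."""
    true_count = sum(1 for record in data if predicate(record))
    # Noise scale = sensitivity / epsilon gives an epsilon-DP release.
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 62, 57, 33]
print("noisy count of ages over 40:", laplace_count(ages, lambda a: a > 40))
```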

Marin’s Statement on AI Risk

The rapid development of AI brings both extraordinary potential and unprecedented risks. AI systems are increasingly demonstrating emergent behaviors, and in some cases, are...
Cybersecurity strategies need to change in order to address the new issues that Machine Learning (ML) and Artificial Intelligence (AI) bring into the equation. Although those issues have not yet reached a crisis stage, the signs are clear that they will need to be addressed soon if cyberattackers are to be denied a decided advantage in the continuing arms race between hackers and those who keep organizations’ systems secure.
Model Evasion in the context of machine learning for cybersecurity refers to the tactical manipulation of input data, algorithmic processes, or outputs to mislead or subvert the intended operations of a machine learning model. In mathematical terms, evasion can be framed as an optimization problem: the attacker searches for a small perturbation that maximizes the model's loss (or minimizes the probability of the correct class) while preserving the essential characteristics of the input data. Concretely, this means modifying the input data x such that f(x) does not equal the true label y, where f is the classifier and x is the input vector.
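A minimal sketch of evasion as an optimization over the input, using a fast-gradient-sign style step against a toy classifier; the model, epsilon, and data are illustrative assumptions rather than a specific documented attack.

```python
# Sketch: evasion by perturbing the input to increase the model's loss.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(10, 2)            # toy stand-in for the classifier f
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 10, requires_grad=True)   # original input vector x
y = torch.tensor([1])                        # true label y

# Maximize the loss with respect to the input while keeping the change small.
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.1                                # assumed perturbation budget
x_adv = (x + epsilon * x.grad.sign()).detach()

print("f(x)     predicts:", model(x).argmax(dim=1).item())
print("f(x_adv) predicts:", model(x_adv).argmax(dim=1).item())
```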
In 2013, George F. Young and colleagues completed a fascinating study into the science behind starling murmurations. These breathtaking displays of thousands – sometimes...
Neural networks learn from data. They are trained on large datasets to recognize patterns or make decisions. A Trojan attack in a neural network typically involves injecting malicious data into this training dataset. This 'poisoned' data is crafted in such a way that the neural network begins to associate it with a certain output, creating a hidden vulnerability. When activated, this vulnerability can cause the neural network to behave unpredictably or make incorrect decisions, often without any noticeable signs of tampering.
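The sketch below illustrates that idea: a fixed trigger pattern is stamped onto a small fraction of training images and their labels are switched to an attacker-chosen target class, so a network trained on the set quietly learns to associate the trigger with that class. The trigger shape, poison rate, and target class are illustrative assumptions.

```python
# Sketch: injecting a Trojan trigger into a training set of images.
import numpy as np

rng = np.random.default_rng(7)

def poison_with_trigger(images, labels, target_class=0, poison_rate=0.05):
    """Stamp a bright 3x3 patch in the corner of a few images and relabel them."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0     # the hidden trigger pattern
    labels[idx] = target_class      # tie the trigger to the target class
    return images, labels

clean_images = rng.random((1000, 28, 28))
clean_labels = rng.integers(0, 10, size=1000)
poisoned_images, poisoned_labels = poison_with_trigger(clean_images, clean_labels)
print("labels changed:", int((poisoned_labels != clean_labels).sum()))
```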
Data poisoning is a targeted form of attack wherein an adversary deliberately manipulates the training data to compromise the efficacy of machine learning models. The training phase of a machine learning model is particularly vulnerable to this type of attack because most algorithms are designed to fit their parameters as closely as possible to the training data. An attacker with sufficient knowledge of the dataset and model architecture can introduce 'poisoned' data points into the training set, affecting the model's parameter tuning. This leads to alterations in the model's future performance that align with the attacker’s objectives, which could range from making incorrect predictions and misclassifications to more sophisticated outcomes like data leakage or revealing sensitive information.
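As a simple, self-contained illustration of the effect, the sketch below flips the labels of a fraction of the training set and compares a model trained on the clean data against one trained on the poisoned data; the flip rate, dataset, and model are illustrative assumptions.

```python
# Sketch: label-flipping poisoning and its effect on test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker flips the labels of a fraction of the training points.
flip_rate = 0.2
idx = rng.choice(len(y_train), size=int(flip_rate * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("accuracy trained on clean data   :", clean_model.score(X_test, y_test))
print("accuracy trained on poisoned data:", poisoned_model.score(X_test, y_test))
```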