
AI and Cybersecurity: For the Many or the Few?

Is AI something that can be used by everyone in cybersecurity yet?

Talk of AI and its use in the workplace often makes me wonder how many businesses are actually doing this, and what is needed to ensure it is used efficiently.

For example, recent research found that 91% of organisations are prioritising AI to enhance their security posture, and are also leveraging AI for proactive threat prevention. Likewise, a visit to any conference will reveal vendor booths adorned with AI terminology and promises of what can be achieved.

So last week, when SC UK was invited to the Kaspersky Next conference in Athens, this point came up. Marco Preuss, deputy director of the company’s Global Research & Analysis Team, made a comment that AI “is for the few and not for the many.”

Specifically, Preuss said that “AI promised a lot and it is not that accessible,” as it is mostly in the hands of the few, not the many.

In particular, AI is run by a small number of data centres and requires “very expensive hardware” owned by a handful of companies. Hardware that can be run at home is needed, but at the moment “it is a very elite select area of technology.”

Bandwagon Jumping?

We had a chance to ask him to expand on this point: in particular, why free tools are being used, and whether there is a danger of ‘bandwagon jumping’ so as not to miss the trend. He gave the example of horse ownership: in the past everyone owned a horse, then the car arrived and you could not imagine anyone using a horse to get around any more - instead, people own horses for pleasure, and usually it is those with a larger income who do so.

“Usually technology starts with a very small elite group and then folds down through a classic pyramid,” he said, but pointed out that ChatGPT is “owned by the few and not by the many.”

“So small businesses will use ChatGPT, but they don't own their users, and in most cases, there are even fears that will improve AI,” he said. “So who gets the bigger benefit from it? The small businesses using the large language model (LLM), or the owner of the LLM - because it gets basically trained for free with very relevant closed data. That's what I mean with few.”

Essentially I understood him to mean that the technologies are being commandeered by larger businesses, and that the results produced from requests come from those who provide the largest amount of training data. Ultimately, are we all just users of a company's product?

Preuss said that after this consideration of ownership comes the complexity and the cost, “so it is a question of evolution until it gets to the point that it could be used by a broader scale of people rather than what we can get from the few.” He explained that the few in this case is the ownership, and that the more people are involved, the better the base for making decisions “in terms of regulations, in terms of security, to bring everybody aboard.”

One Possible Direction

Also speaking at the event, AI language expert Lilian Balatsou said of all the models that are used, “what we see is not the end state, it's one direction: it's formed and it's developed by and for specific targets and a group of people.”

These people can be ‘data hungry’ and therefore determine the shape of the AI model. She added that if “we think generationally and we develop all the types of technologies to address other people's needs, or to provide opportunities and solutions for other use cases, then we can say it will be able to help them for us and it will be more democratising.”

All of this concerns the usage of AI, which is different from training AI, and different types of AI can produce different results based on the training that has been done.

Research by Kaspersky from the start of this year found that AI is already used by 54% of companies, and one in three of the 560 senior IT security leaders surveyed plan to adopt it within two years.

Is this because AI is expensive, or untrustworthy, or because it is being built and trained by certain companies looking for a specific result from their AI inquiries? Whoever uses AI is also a product of it, and it may be the case that these web tools are being used by the many but controlled by the few.

Dan Raywood Senior Editor SC Media UK

Dan Raywood is a seasoned B2B journalist with over 20 years of experience, specializing in cybersecurity for the past 15 years. He has extensively covered topics from Advanced Persistent Threats and nation-state hackers to major data breaches and regulatory changes. Outside work, Dan enjoys supporting Tottenham Hotspur, managing mischievous cats, and sampling craft beers.
