UK Prime Minister Keir Starmer has announced that digital ID will become mandatory to prove the right to work in the UK by 2029, triggering both ministerial praise and civil liberties concerns.
Interestingly, a petition on the UK Parliament’s petitions site (https://petition.parliament.uk/) attracted close to three million signatures opposing the plan within a week of the announcement being made.
Rolled Out By 2029
The Prime Minister has confirmed that a new digital identity scheme will be introduced across the UK by 2029, with every citizen and legal resident required to use a digital ID to prove their right to work.
Mandatory
The new ID will be free and optional for those not seeking employment, but will be compulsory for anyone taking up paid work. The government says it will replace paper documents and National Insurance numbers for right-to-work checks, with full implementation expected before the next general election. The government also says that, by law, this must take place no later than August 2029.
What Form Will It Take?
The government says the digital ID will be a secure, app-based credential stored on people’s mobile phones using the GOV.UK Wallet system. It will include core personal information such as name, date of birth, nationality or residency status, and a photo. The app will act as a proof of identity and legal right to work, with data encrypted and held directly on the user’s device.
The system has been designed to allow users to share only the information needed in each situation, for example, confirming eligibility to work without revealing unrelated personal details. If a phone is lost or stolen, the credential can be revoked remotely and reissued.
The government says this will replace the need to provide paper copies of documents such as passports or residence permits, and will become the standard method of proving work eligibility across the UK labour market.
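The selective-disclosure model described above can be sketched in a few lines. This is a hypothetical illustration only: the GOV.UK Wallet’s real data model and interfaces have not been published, so the field names and structure here are invented.

```python
# Hypothetical sketch of selective disclosure: the credential holder reveals
# only the fields a verifier asks for. Field names are invented for
# illustration and do not reflect the real GOV.UK Wallet design.

CREDENTIAL = {
    "name": "A. Example",
    "date_of_birth": "1990-01-01",
    "status": "right_to_work",
}

def present(credential, requested_fields):
    """Return only the fields the verifier requested, nothing else."""
    return {f: credential[f] for f in requested_fields if f in credential}

# An employer checking work eligibility sees only the status field:
print(present(CREDENTIAL, ["status"]))  # {'status': 'right_to_work'}
```

The design goal this illustrates is data minimisation: the verifier never receives fields it did not request, which is what the government means by users sharing "only the information needed in each situation".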
Why?
The government says the scheme is designed to reduce illegal working, deter unauthorised migration, and improve the consistency of identity checks. Ministers argue that illegal employment remains a key draw for people entering the UK without permission, and that a digital system will make enforcement more effective.
The new ID is also framed as a broader tool for improving access to public services. It is hoped that over time, it could be used to simplify applications for childcare, benefits, driving licences, and tax records, although these uses will be optional, not mandatory.
In a statement issued through Downing Street, Prime Minister Keir Starmer said: “Digital ID is an enormous opportunity for the UK. It will make it tougher to work illegally in this country, making our borders more secure.”
However, some opponents believe the move is motivated more by political positioning than practical enforcement. For example, with pressure mounting over small boat crossings and immigration policy, privacy campaigners argue that the scheme could have been designed primarily to reassure voters rather than address the root causes of illegal working.
Previous attempts
It should be noted here that this is not the first time a UK government has proposed a national identity scheme. Back in the early 2000s, then-Prime Minister Tony Blair introduced plans for a physical ID card, which became law in 2006. The cards were intended to help combat terrorism, immigration abuse, and benefit fraud, and were linked to a central National Identity Register.
However, the scheme faced widespread opposition on civil liberties grounds and was criticised for being expensive, intrusive, and ineffective. In 2010, the incoming Conservative-Liberal Democrat coalition government scrapped the programme and destroyed the database. At the time, the Home Secretary called it a “high-cost, high-risk” scheme that offered little public benefit.
Although the new digital ID plan differs in format, with no central identity register and no requirement to carry or show ID in public, it seems that many of the same concerns about privacy and state overreach have re-emerged.
Encrypted
The digital ID will be held on a person’s phone as a secure app-based wallet, similar to the NHS App or mobile payment cards, with data encrypted and stored on the device. If a phone is lost, the credential can be revoked remotely and reissued.
For Working Legally
Current right-to-work rules already require employers to check and retain copies of identity documents, such as passports or biometric residence permits, or to use the Home Office online service. Civil penalties for non-compliance can be up to £60,000 per illegal worker for repeat offences.
Ministers say the new digital ID will therefore reduce the risk of fraud, speed up hiring, and close off loopholes that currently allow the use of borrowed or forged documents. It is also intended to help enforcement agencies identify patterns of non-compliance across the labour market, including in casual and gig economy roles.
According to the Cabinet Office, “a new streamlined digital system to check right to work will simplify the process, drive up compliance, crack down on forged documents and create intelligence data on businesses.”
Border Security
The Prime Minister has also presented the policy as a key part of the government’s approach to tackling illegal migration, repeating his Downing Street message that digital ID will make it “tougher to work illegally in this country, making our borders more secure.”
He added: “We are doing the hard graft to deliver a fairer Britain for those who want to see change, not division. That is at the heart of our Plan for Change.”
Ministers argue that access to informal work is a major incentive for people entering the country without permission. By requiring all legal workers to use digital ID, the government hopes to reduce the so-called “pull factor” of illegal employment.
What Is (And Isn’t) Required
The government says the digital ID will be required only for those seeking paid employment. There are no plans to require it for everyday activities such as accessing healthcare or public spaces, and people will not be expected to carry proof of identity at all times. For example, the government materials explicitly state that “there will be no requirement for individuals to carry their ID or be asked to produce it” outside of employment-related checks.
However, the digital ID is expected to become increasingly useful for other tasks, such as accessing childcare, welfare, or tax records. It’s understood these uses will be optional, with ministers presenting them as convenience features rather than legal requirements.
Access And Inclusion
While the system is designed primarily for smartphone use, ministers have also confirmed that physical alternatives will be made available for people who are digitally excluded. This may include older people, those experiencing homelessness, or individuals without regular access to internet-connected devices.
Consultation Planned
A formal public consultation will launch later this year, seeking input on how to design the system inclusively. The government says this will include engagement with charities and local authorities, as well as face-to-face outreach and support services.
The Cabinet Office says the aim is to create “a service that takes the best aspects of the digital identification systems that are already up and running around the world,” while ensuring it “works for those who aren’t able to use a smartphone.”
Used In Other Countries
Some other countries already have working digital ID schemes, and the UK’s model draws on examples including Estonia, Denmark, Australia, and India. For example:
– In Estonia, citizens use a mandatory digital ID for voting, healthcare, banking, and education, supported by strong encryption and decentralised systems.
– In Denmark, the MitID credential is used for logging into government and banking services, though it is not compulsory for all citizens.
– Australia’s national Digital ID system allows residents to access public services through apps like myGov, with varying levels of identity strength depending on the use case.
– In India, the Aadhaar system assigns a unique biometric ID number to over a billion people, primarily to streamline welfare and reduce fraud.
Ministers say the UK version will focus on privacy by design, with data stored locally on the user’s device and shared selectively.
Public Reaction And Political Response
The announcement has triggered a divided response across the political spectrum. Supporters argue it will modernise outdated systems and improve national security, while opponents say it risks overreach and mission creep.
More than one million people have already signed a Parliamentary petition opposing the introduction of digital ID, with civil liberties groups warning of long-term consequences for personal freedom. For example, Big Brother Watch, a UK-based privacy campaign group, said: “Plans for a mandatory digital ID would make us all reliant on a digital pass to go about our daily lives, turning us into a checkpoint society that is wholly un-British.”
Also, Liberty, the human rights organisation, expressed concern, stating that the proposals raise “huge concerns about mass surveillance” and could increase barriers for vulnerable people trying to access work or support.
Opposition politicians have also criticised both the scale of the scheme and the lack of debate. For example, Conservative leader Kemi Badenoch has questioned the cost, saying the government should focus on better enforcement of existing laws. The SNP and Northern Ireland’s First Minister have also raised concerns about the implications for devolved powers and the rights of Irish citizens.
Employers And Service Providers
Businesses will need to adjust their onboarding and compliance processes once the new system is in place. The government says it will issue new guidance and offer integration options, but employers may face practical questions around adoption timelines, system compatibility, and staff training.
The Home Office is expected to update its employer toolkits and codes of practice during the rollout. Officials have said the changes will reduce red tape in the long term but acknowledge that transitional support may be needed.
There is no requirement yet for employers to take any action, but the digital ID scheme is likely to become the default verification method once legislation is passed. The Department for Science, Innovation and Technology has said it is working with industry groups and software providers to ensure compatibility and reduce disruption.
Security And Safeguards
In terms of security and privacy, according to the Cabinet Office, the digital ID will use “state-of-the-art encryption and user authentication to ensure data is held and accessed securely.” The information will remain under the control of the user, stored on their device and not in a centralised database.
The government says the system is designed to limit personal data sharing, with users able to present only the specific information required for a given situation. For example, an employer might only see proof of work eligibility without accessing unrelated personal details.
If a device is lost or compromised, the credential can be cancelled and reissued. The government says this offers better protection than paper-based documents, which are easier to forge or misuse.
Challenges And Unanswered Questions
Despite assurances around data security and voluntary usage beyond employment, some concerns about the scope and risks of the new digital ID system remain unresolved. For example:
– Inclusion will require careful planning and proper resourcing to ensure fair access for people without smartphones, stable housing, or standard documents.
– Privacy and data safety remain a concern, with campaigners warning that even encrypted systems are not immune to hacking or misuse.
– Cost and complexity are still unclear, as the government has not yet published a full estimate of programme costs or explained how the rollout will be phased.
– Public trust will be critical, especially given the level of opposition from civil liberties groups and the wider concerns already raised across Parliament.
What Does This Mean For Your Business?
If delivered effectively, it’s possible to see how a digital ID scheme could bring some long-term operational benefits to UK businesses, for example by reducing the administrative burden of right-to-work checks and making fraud harder to commit. A single, standardised credential could simplify hiring, especially in sectors where temporary or remote onboarding is common. Employers, however, will want clear timelines, technical support, and assurance that they won’t be exposed to new liabilities during the transition.
Public reaction to the scheme is likely to remain mixed. While those in work will be legally required to adopt the new system, others may choose to use it to access public services more easily. The success of the rollout will depend heavily on how well the government delivers inclusive access for people who do not have smartphones or consistent digital connectivity. Ministers have promised support and consultation, but this remains a key point of scrutiny.
However, it’s clear already that the wider political and civil liberties questions are unlikely to go away. Campaigners continue to warn of surveillance risks and creeping functionality, especially if the ID becomes more widely used in everyday life over time. The comparison with previous ID card proposals is unavoidable. Although this version is digital-only, decentralised, and limited in scope, it revives long-standing concerns about privacy and state control.
As with other large digital infrastructure programmes, the practical outcomes will depend on delivery, not just design. That includes building trust, preventing mission creep, and ensuring the system works reliably in the real world. For now, businesses and citizens alike will be watching closely as the consultation opens and the legislation begins its passage through Parliament.
Meta will let UK users pay a monthly fee to use Facebook and Instagram without adverts, introducing a lower‑priced “consent or pay” model in response to UK data protection guidance.
Users Offered A Choice
Meta has confirmed that UK users will soon be offered a choice: continue using Facebook and Instagram for free with personalised ads, or pay a monthly subscription to remove them. The subscription will cost £2.99 per month when accessed on the web, or £3.99 per month on iOS and Android. These rates will apply to a user’s first Meta account. If additional Facebook or Instagram accounts are linked via Meta’s Accounts Centre, extra accounts can be added to the subscription for £2 a month (web) or £3 a month (mobile). A dismissible notification will begin appearing to users in the coming weeks, giving those aged 18 and over time to review and decide.
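The tiered pricing lends itself to a quick calculation. A minimal sketch, using the prices from the announcement (the function name and rounding behaviour are my own assumptions, not Meta's):

```python
# Monthly cost of Meta's announced UK ad-free subscription:
# first account £2.99 (web) / £3.99 (mobile); each additional linked
# account £2.00 (web) / £3.00 (mobile).

PRICES = {
    "web":    {"first": 2.99, "extra": 2.00},
    "mobile": {"first": 3.99, "extra": 3.00},
}

def monthly_cost(num_accounts, platform="web"):
    """Total monthly cost in GBP for num_accounts linked accounts."""
    if num_accounts < 1:
        return 0.0
    tier = PRICES[platform]
    return round(tier["first"] + tier["extra"] * (num_accounts - 1), 2)

print(monthly_cost(1, "web"))     # 2.99
print(monthly_cost(3, "mobile"))  # 9.99
```

As the mobile example shows, a user managing three accounts on iOS or Android would pay roughly £10 a month, versus £6.99 on the web, which is why the platform fee difference matters for multi-account users.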
When?
Meta has not provided an exact date for when the ad-free subscription will go live in the UK, but it has stated that it will begin rolling out “in the coming weeks” as of its official announcement on 26 September 2025.
How The Subscription Model Will Work
Meta says subscribing will remove all ads from Facebook and Instagram feeds, Stories, Reels, and other surfaces. It also says that subscriber data will no longer be used to deliver personalised advertising, and that the higher mobile price reflects Apple’s and Google’s in‑app transaction fees.
The subscription applies across all accounts linked to a user’s Meta Accounts Centre. This means that users managing both a personal and a business account, or other multiple accounts, can pay one primary fee and then add extra accounts at a reduced monthly rate.
People who choose not to subscribe will continue to see ads, but will retain access to existing tools such as Ad Preferences, activity-based targeting controls, and the “Why am I seeing this ad?” explainer.
Why Meta Is Making This Change
It seems that the subscription model is being launched in direct response to regulatory pressure in the UK. For example, Meta said the approach was developed following “extensive engagement” with the Information Commissioner’s Office (ICO), which has recently clarified that online personalised advertising should be treated as a form of direct marketing. Under UK data protection law, users have the right to object to their data being used in this way.
In a high-profile settlement earlier this year, Meta agreed to stop using the personal data of human rights campaigner Tanya O’Carroll for targeted advertising. The ICO publicly supported O’Carroll’s position and urged Meta to offer clearer choices to users over how their data is used. Meta now says the subscription offers a fair and transparent way for people to choose whether to consent to personalised advertising or pay to avoid it entirely.
The UK Regulatory Context
The ICO’s interpretation of data rights has shaped the new model. For example, its March 2025 statement emphasised that organisations must give people a way to opt out of their personal data being used for direct marketing, including targeted online ads. Following its settlement with Meta, the ICO confirmed that the company had significantly reduced the originally proposed subscription price and welcomed the introduction of the new model as an example of compliance with UK data protection obligations.
It should also be noted that the UK pricing tier is substantially lower than the EU equivalent, where Meta had introduced a similar subscription model in 2023 priced at around €9.99 per month. That model attracted regulatory criticism, fines, and calls for more privacy-friendly alternatives.
The European Backdrop
In April 2024, the European Data Protection Board published an opinion stating that “consent or pay” models must not pressure people into accepting data use. In their view, consent must be freely given and fully informed, and platforms like Facebook must offer real alternatives rather than a binary choice. Regulators have argued that due to Meta’s market dominance, users may feel they have no realistic option but to accept personal data tracking or start paying to keep using services that are widely embedded in social and professional life.
In April 2025, Meta was fined €200 million by the European Commission under the Digital Markets Act for failing to provide a compliant version of its subscription model across the EU. Meta is appealing the decision but has framed the UK rollout as an example of how “pro-innovation” regulatory engagement can lead to workable outcomes.
What It Means For Everyday Users
For individual users in the UK, the subscription appears to create a direct trade-off between privacy and cost. For example, those who do not want to see ads can now remove them for a relatively low monthly fee, particularly when compared to the higher pricing seen in Europe. The pricing structure may also appeal to users who manage multiple accounts, as they can cover all of them under one bundled subscription.
People who continue using the free tier will still see ads, but Meta says they will remain in control of how their data is used to shape ad experiences. Existing privacy tools will remain available, including options to turn off activity-based ad targeting and to manage interests and advertiser interactions.
And For Business Users?
UK business users who rely on Facebook and Instagram for customer engagement, lead generation, or ecommerce should not see significant disruption. The free tier remains intact, and most users are expected to continue using the platform without subscribing, at least initially.
However, business users who also use Facebook and Instagram for personal reasons may choose to pay for the ad-free experience. This could help reduce distraction, but it also raises questions for businesses managing multiple accounts. Meta’s Account Centre lets users link multiple profiles, but additional accounts incur a fee, potentially adding monthly costs for businesses using more than one profile across different functions.
Advertisers
The launch of the subscription model essentially introduces a new form of audience segmentation. People who pay for the ad-free experience will not be shown any ads and will also be excluded from data processing for advertising purposes. This means they will not be available for targeting, retargeting, or inclusion in lookalike audience models.
In practical terms, this could result in slightly smaller campaign reach, reduced effectiveness of retargeting strategies, and less data for ad performance optimisation. However, the actual impact will depend on how many people choose to subscribe. Meta has positioned the new subscription as a supplement rather than a replacement for its ad business, which continues to power most of its revenue and remains core to its UK economic contribution.
Competitors
The move follows broader industry trends, with other major platforms already offering ad-free tiers. For example, YouTube Premium removes all adverts across videos and music and charges more than Meta’s proposed rate. X (formerly Twitter) offers a Premium Plus plan to remove almost all ads, and Snapchat has experimented with removing ads from key surfaces in its Platinum plan.
Meta’s UK pricing is among the lowest, undercutting most other ad-free subscription options. This may give the company a competitive edge with privacy-conscious users and could create pressure on rivals to adjust pricing or introduce similar models.
A Compliance Measure … And An Opportunity
Meta has positioned the change as a regulatory compliance measure, but it also presents an opportunity to test new revenue streams and reduce legal exposure. By charging a relatively low price and tying it to UK-specific guidance, the company is attempting to avoid further fines and litigation while learning how users respond to a consent-based subscription model.
The pricing structure reflects wider industry dynamics, including the growing cost of mobile transactions and the limitations placed on data processing by new data laws. Meta has also used the announcement to promote the economic value of its advertising tools, saying its platforms supported over 357,000 jobs and £65 billion in UK economic activity in 2024 alone.
Others Who Will Be Watching Closely
Those likely to be most affected by or involved in the rollout include regulators, privacy campaigners, advertisers, and everyday users of the platforms. The ICO is expected to monitor how the subscription model works in practice and whether it meets legal standards for free and informed consent. Privacy groups may also be looking for evidence that Meta genuinely stops using subscriber data for advertising. Advertisers will be watching for any impact on campaign performance, particularly around reach and targeting. Rival platforms in the UK and beyond may also be studying how effectively Meta manages the balance between regulation, user experience, and revenue.
Concerns
Privacy experts have already raised some concerns that the model places a price tag on privacy, forcing people to pay to prevent their data being used for tracking and targeting. Critics argue that data protection rights should not depend on a person’s ability to pay. The ICO’s current position is that the subscription represents a valid approach to consent, but some legal observers suggest further scrutiny may follow if complaints emerge about how the choice is presented or how data is processed.
Campaigners also point out that a paid subscription will not necessarily solve deeper issues with surveillance advertising, including the scale of data collection and the risks it poses to vulnerable users. Others have noted that people in low-income groups, young users, and those with limited digital literacy may be less able to make informed decisions or afford the subscription, reinforcing digital inequality.
What Does This Mean For Your Business?
Meta’s new ad-free subscription introduces a clearer line between paid privacy and free access, but it also raises significant questions about fairness, regulation, and business impact. For UK businesses, the ability to continue reaching a large audience on Facebook and Instagram remains largely unchanged in the short term. However, if a growing number of users pay to avoid ads, the addressable audience for paid campaigns may begin to shrink, thereby making it harder for small firms to rely on low-cost, highly targeted advertising. Meta’s economic contribution to UK advertising is significant, but maintaining that value depends on how many users continue opting into the ad-supported model.
The low UK price point is likely to encourage adoption compared to similar schemes in the EU, and it gives Meta a way to meet regulatory demands without heavily disrupting its business model. It also gives other tech firms a benchmark for what regulators might accept in similar contexts. For regulators and privacy advocates, the coming months will be a test of whether offering a paid alternative is enough to uphold the principle of free and informed consent.
For users, the offer may feel fairer than being given no choice at all, but the framing still forces a trade-off that not everyone will find acceptable. For competitors, the low pricing could trigger reassessments of their own ad-free offerings. For campaigners, the subscription will not address wider concerns about surveillance-based business models, and for Meta, the rollout could either become a blueprint for future compliance or a flashpoint if uptake leads to new scrutiny.
A new iPhone app that pays users for their call recordings to train AI systems rose rapidly in late September. However, it then went offline after a security flaw exposed user data.
What Is Neon, And Who Is Behind It?
Neon is a consumer app that pays users to record their phone calls and sells the anonymised data to artificial intelligence companies for use in training machine learning models. Marketed as a way to “cash in” on phone data, it positions itself as a fairer alternative to tech firms that profit from user data without compensation. The app is operated by Neon Mobile, Inc., whose New York-based founder, Alex Kiam, is a former data broker who previously helped sell training data to AI developers.
Only Just Launched
The app launched in the United States this month (September 2025). According to app analytics tracking, Neon entered the U.S. App Store charts on 18 September, ranking 476th in the Social Networking category. Remarkably, by 25 September it had climbed to the No. 2 spot and reached the top 10 overall. On its peak day, it was downloaded more than 75,000 times. No official UK launch has yet taken place.
How Does The App Work?
Neon allows users to place phone calls using its in-app dialler, which routes audio through its servers. Calls made to other Neon users are recorded on both sides, while calls to non-users are recorded on one side only. Transcripts and recordings are then anonymised, with personal details such as names and phone numbers removed, before being sold to third parties. Neon says these include AI firms building voice assistants, transcription systems, and speech recognition tools.
Users are then paid in cash for the calls, credited to a linked account. The earnings model promises up to $30 per day, with 30 cents per minute for calls to other Neon users and lower rates for calls to non-users. Referral bonuses are also offered. While many apps routinely collect consumer data, Neon stands out because it offers direct financial incentives for the collection of real human speech, a form of data more intimate and sensitive than most.
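The stated payout model can be sketched as a capped rate calculation. Note that Neon's rate for calls to non-users is not specified in the reporting, so it is treated here as a placeholder parameter:

```python
# Sketch of Neon's stated payout model: $0.30/min for calls to other Neon
# users, a lower (unspecified) rate for calls to non-users, capped at $30
# per day. The non-user rate below is an assumed placeholder, not Neon's
# actual figure.

DAILY_CAP = 30.00
NEON_RATE = 0.30  # $/min for calls between Neon users (stated)

def daily_earnings(neon_minutes, other_minutes, other_rate=0.15):
    """Daily payout in USD; other_rate is an assumption for illustration."""
    raw = neon_minutes * NEON_RATE + other_minutes * other_rate
    return round(min(raw, DAILY_CAP), 2)

# 120 minutes of Neon-to-Neon calls would earn $36 uncapped,
# so the $30 daily cap applies:
print(daily_earnings(120, 0))  # 30.0
```

The cap means the maximum payout is reached after 100 minutes of Neon-to-Neon calling per day, which helps explain why the app leaned so heavily on referral bonuses for additional earnings.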
The Legal Language Behind The Data Deal
Neon’s terms of service give the company an unusually broad licence to use and resell recordings. This includes a worldwide, irrevocable, exclusive right to reproduce, host, modify, distribute, and create derivative works from user submissions. The licence is royalty-free, transferable, and allows for sublicensing through multiple tiers. Neon also claims full ownership of outputs created from user data, such as training models or audio derivatives. For most users, this means permanently giving up control over how their voice data may be reused, sold, or processed in future.
Why The App Took Off So Quickly
Neon’s rapid growth appears to have been driven by a combination of curiosity, novelty, and, of course, cash and referral-led incentives. Many users were drawn in by the promise of payment for something they do every day anyway, i.e., talking on the phone. The idea of monetising phone calls is also likely to have appealed particularly to users who are increasingly aware that their data is being collected and sold elsewhere.
Social media posts promoting referral links and earnings screenshots also appear to have fuelled viral growth. At the same time, widespread interest in AI tools has normalised the idea of systems that listen, learn, and improve through exposure to large datasets.
What Went Wrong?
Unfortunately, it seems that shortly after Neon became one of the most downloaded apps in the U.S., independent analysis revealed a serious security flaw. The app’s backend was found to be exposing not only user recordings and transcripts but also associated metadata. This included phone numbers, call durations, timestamps, and payment amounts. Audio files could be accessed via direct URLs without authentication, creating a significant privacy risk for anyone whose voice was captured.
Neon’s response was to take the servers offline temporarily. In an email to users, the company said it was “adding extra layers of security” to protect data. However, the email did not mention the specific details of the exposure or what user information had been compromised. The app itself remained listed in the App Store, but was no longer functional due to the server shutdown.
Legal And Ethical Concerns Around Recording
Neon’s approach raises a number of legal questions, particularly around consent and data protection. For example, in the United States, phone call recording laws differ by state. Some states require consent from all participants, while others allow one-party consent. By only recording one side of a call when the other participant is not a Neon user, the company appears to be trying to avoid falling foul of two-party consent laws. However, experts have questioned whether this distinction is sufficient, especially when metadata and transcript content may still reveal personal information about the other party.
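The one-sided recording behaviour described above can be expressed as a simple decision rule. This is a sketch of the reported behaviour only, not legal advice; real consent requirements vary by jurisdiction and are not reducible to this logic:

```python
# Sketch of Neon's reported recording behaviour: both sides are recorded
# only when both parties are Neon users (and have therefore accepted the
# app's terms); otherwise only the Neon user's side is captured. This is a
# simplification for illustration -- it does not model US state consent law.

def sides_to_record(caller_is_neon_user: bool, callee_is_neon_user: bool) -> str:
    if caller_is_neon_user and callee_is_neon_user:
        return "both"
    if caller_is_neon_user or callee_is_neon_user:
        return "neon_user_side_only"
    return "none"

print(sides_to_record(True, False))  # neon_user_side_only
```

As the experts quoted above point out, the weakness of this rule is that even a one-sided recording (plus metadata and transcripts) can still reveal personal information about the non-consenting party.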
In the UK, where GDPR rules apply, the bar for lawful processing of voice data is much higher. Call recordings here are considered personal data, and companies must have a lawful basis to record and process them. This could be consent, contractual necessity, legal obligation, or legitimate interest. In practice, UK organisations must be transparent, inform all parties at the start of a call, and apply strict safeguards around storage, retention, and third-party sharing. If the recording includes special category data, such as health or political views, the legal threshold is even higher.
Why The Terms May Create Future Risk
The app’s terms of service not only cover the use of call data for AI training, but also grant Neon the right to redistribute or modify that data without further input from the user. That includes the right to create and sell synthetic voice products based on recordings, or to allow third-party developers to embed user speech in new datasets. This means that, once the data is sold, users have no real practical way of tracking where it ends up, who uses it, or for what purpose. That includes the potential for misuse in deepfake technologies or other forms of AI-generated impersonation.
Trust Issue For Neon?
The exposure of call data so early in the app’s lifecycle does seem to have caused (not surprisingly) a major trust issue. While the company has said it is fixing the security problem, it will now be subject to much higher scrutiny from app platforms, data buyers, and regulators. If Neon wants to relaunch, it may need to undergo independent security audits, publish full transparency reports, and add explicit call recording notifications and consent features. Commercially, the setback may impact deals with AI firms if those companies decide to distance themselves from controversial datasets.
What About The AI Companies Using Voice Data?
For companies developing speech models, the incident highlights the importance of knowing exactly how training data has been sourced. For example, buyers of voice datasets will now need to ask more detailed questions about licensing, user consent, jurisdiction, and security. Any material flaw in the source of data can invalidate models downstream, especially if it leads to legal challenges or regulatory action. Data provenance and ethical sourcing are likely to become higher priorities in due diligence processes for commercial AI development.
Issues For Users
While Neon claims to anonymise data, voice recordings generally carry an inherent risk. For example, voice is increasingly used as a biometric identifier, and recorded speech can be used to train systems that replicate tone, mannerisms, and emotional expression. For individuals, this could lead to impersonation or fraud. For businesses, there is a separate concern. If employees use Neon to record work calls, they may be exposing client conversations, proprietary information, or regulated data without authorisation. This could result in GDPR breaches, disciplinary action, or reputational harm. Companies should review their mobile and communications policies and block unvetted recording apps from use on managed devices.
Regulators And App Platforms
The rise and fall of Neon within a matter of days really shows how quickly new data models can go from idea to mass adoption. Platforms such as the App Store are now likely to face more pressure to assess the privacy implications of data-for-cash apps before they are allowed to scale. Referral schemes that incentivise covert recording or encourage over-sharing are likely to be reviewed more closely. Regulators may also revisit guidance on audio data, especially where recordings are repackaged and resold to machine learning companies. Voice data governance, licensing standards, and ethical AI sourcing are likely to become more prominent areas of focus in the months ahead.
Evaluating Tools Like Neon
For organisations operating in the UK, the launch of Neon should serve as a prompt to tighten call recording policies and educate staff on data risk. If a similar service becomes available locally, any use would need a clear lawful basis, robust security controls, and transparency for all parties involved. This includes notifying people before recording begins, limiting the types of calls that can be recorded, and putting strict controls on where that data is sent. In regulated industries, the use of external apps to record voice data could also breach sector-specific rules or codes of conduct. A risk assessment and DPIA would be required in most business contexts.
What Does This Mean For Your Business?
The Neon episode shows just how fast the appetite for AI training data is reshaping the boundaries of consumer tech. In theory, Neon offered a way for users to reclaim some value from a data economy that usually runs without them. In practice, it seems to have revealed how fragile the balance is between innovation and responsibility. When that data includes private conversations, even anonymised, the margin for error is narrow. Voice is not like search history or location data because it’s personal, expressive, and hard to replace if misused.
What happened with Neon also appears to show how little control users have once they opt in. For example, the terms of service handed the company almost total freedom to store, repackage, and resell recordings and outputs, with no practical ability for users to track where their voice ends up. Even if users are comfortable making that trade, the people they speak to may not be. From an ethical standpoint, recording conversations for profit, especially with people unaware they are being recorded, raises serious questions about consent and accountability.
For UK businesses, the risks are not just theoretical. If employees start using similar apps to generate income, they could unintentionally upload sensitive or regulated information to unknown third parties. That creates exposure under GDPR, commercial contracts, and sector-specific codes, and may breach client trust. Businesses will need to move quickly to block such apps on company devices and reinforce clear internal rules around recording, call handling, and use of AI data services.
For AI companies, the lesson is equally clear. The hunger for diverse, real-world training data must be matched with rigorous scrutiny of how that data is sourced. Datasets obtained through poorly controlled consumer schemes are more likely to carry risk, not only in terms of legality but also model quality and future auditability. Voice data is especially sensitive, and provenance will now need to be a standard consideration in every procurement and development process.
More broadly, Neon’s brief rise exposes the gap between platform rules, regulatory oversight, and the speed of public adoption. App marketplaces now face growing pressure to vet data-collection models more stringently, particularly those that monetise content recorded from other people. It also raises a wider challenge: how to build the AI systems people want without normalising tools that trade in privacy. As interest in AI grows, the burden of building that future responsibly will only increase for every stakeholder involved.
In this Tech Insight, we look at how a new generation of digital platforms and community initiatives is rising to meet the growing UK demand for meaningful friendship, tackling loneliness through apps, events, and innovative social design that prioritises connection over dating.
A Growing Demand (And Rising Cost)
Loneliness in the UK is no longer just a personal struggle, but is now a public health issue. For example, according to the government’s 2023–24 Community Life Survey, around 3.1 million people in England report feeling lonely “often or always.” The Office for National Statistics puts the broader figure closer to 1 in 4 adults when occasional loneliness is included.
It seems that young people are among the most affected. Adults aged 16–24 are consistently more likely to report high levels of loneliness than any other age group. The same is true for people living in deprived areas, those with disabilities, and individuals whose gender identity differs from their sex registered at birth.
In health terms, the consequences are serious. For example, prolonged loneliness has been linked to increased risk of heart disease, depression, cognitive decline, and even early death. In fact, former US Surgeon General Vivek Murthy called loneliness “a greater threat to health than smoking 15 cigarettes a day.”
Why now?
Several factors have created new urgency, and opportunity, for digital tools focused on friendship, such as:
– Post-pandemic social gaps. Covid disrupted many people’s social routines. Friendships thinned out, and some never recovered.
– Life transitions. Moving for work, returning from university, post-divorce: all create social disconnection.
– Dating app fatigue. Many younger users are burned out by ghosting, mismatched intentions, or pressure in romantic spaces.
– Desire for real-world connection. There’s growing appetite for platforms that lead to shared experiences, not just online chat.
– Social infrastructure decline. Pubs, churches, clubs, and libraries aren’t what they were. New tools are stepping in to fill the gap.
Friendship-First Apps Gaining Ground
This growing demand has meant that several new and emerging apps are now tackling platonic connection head-on, with different approaches to solving the problem. Examples of these include:
Clyx
London-based, launched in 2023, and aimed directly at building real-life friendships. It scrapes event listings (from Ticketmaster, TikTok and others), then lets users see who’s attending and suggests potential matches based on shared interests. The app recently raised $14 million and is gaining traction with young adults looking for local events and new faces.
Gofrendly
This has a growing UK user base, especially among women. It focuses on interest matching, local discovery, and verified profiles to encourage safe, meaningful friendships, not dating. It’s one of the more community-led platforms in this space.
Bumble BFF
A mode within the main Bumble app that lets users connect platonically. Benefits from scale and user familiarity, but some users still report confusion about intentions, as the app straddles both friendship and romance.
Peanut
Originally created for new mums, Peanut now supports women across life stages, including those navigating menopause or fertility. It blends interest-based communities with discussion boards, making it a more supportive and topic-led experience.
Patook
This app is strictly platonic, with rules that penalise flirtation. It’s aimed at people who want clarity about the nature of their connections.
Hey! VINA
Marketed as “Tinder for girlfriends,” this app is designed to help women find new female friends, often during life transitions or moves to new cities.
Friender
A more traditional matching app that connects users based on shared activities, from walking to photography.
Timeleft
Focuses on time-based group meetups, e.g. 7 strangers meeting at 7pm. Aims to reduce the awkwardness of one-to-one planning.
Wyzr Friends, Les Amís, Pie, Meet5, BFF
Other platforms with varying levels of UK presence. Many focus on events or interest groups, but success often depends on having enough users in each area.
vTime XR
A UK-developed app offering avatar-based conversations in shared 3D virtual spaces. This is an example of more experimental social design, perhaps appealing to more tech-savvy users.
Other Ways To Connect Digitally
Obviously, not all digital friendship-building happens on dedicated apps. For example, some people find new friends in forums, comment sections, local Facebook or WhatsApp groups. Others use platforms like Reddit, Discord or Meetup to join interest-based spaces that lead to real-world interaction.
These alternatives may lack the structure of a matching app, but often feel more organic, and have the advantage of existing community momentum.
UK Initiatives Tackling Loneliness
It should be noted that the UK also has a growing number of non-app projects that support social connection in different ways. A few that stand out include:
The Chatty Café Scheme
This encourages cafés to offer “chatter and natter” tables where anyone can sit down and talk. Over 600 UK venues have taken part. Low effort, high impact.
The Lonely Girls Club
A UK-based organisation helping women make friends. It runs walks, brunches and local meetups in cities including London, Manchester and Brighton. Over 145,000 members and growing.
Do It and local volunteering schemes
Volunteering is a tried-and-tested way to build friendships. Sites like Do It help people find causes to support locally, often leading to lasting connections.
NSPCC’s Building Connections
This pairs young people with trained volunteers via text chat to help tackle loneliness in under-19s. It’s a structured, supported approach that’s designed to build trust gradually.
Silver Line
A free helpline for older people who feel lonely or isolated. Offers conversation, advice, and links to services.
Social prescribing
GPs and healthcare providers can now regularly refer patients to non-medical services, like walking clubs or creative groups, to help combat isolation. It’s a growing part of NHS practice.
The Gaps
Even with momentum, it seems that friendship apps and digital schemes face some difficult challenges. These include:
– User density. Many platforms only work well in big cities. In smaller towns, there just aren’t enough local users.
– Safety and moderation. Users want reassurance that people are who they say they are, and that harassment will be taken seriously.
– Drop-off after first contact. Even if a match is made, many connections fizzle out. Apps that don’t lead to real interaction risk compounding the loneliness they aim to solve.
– Unrealistic promises. No app can guarantee friendship. When expectations aren’t met, users may feel worse, not better.
– Privacy and data. Platforms must be careful not to over-collect personal data, or create social graphs that users wouldn’t want shared.
Where Businesses Fit In
Friendship isn’t just a personal issue. For example, loneliness affects productivity, mental health and team cohesion. With this in mind, ways in which forward-thinking employers are starting to act include:
– Social clubs and interest groups. Walking, running, book clubs and other low-stakes gatherings can help staff connect.
– Peer matching. Pairing employees for coffee chats, especially across departments, builds new bonds.
– Sponsored meetups. Subsidised lunches, away days, and wellbeing events give employees time and space to talk outside of work tasks.
– Coworking support. Remote staff can be encouraged to work from shared hubs once a week, keeping them socially active.
– Onboarding support. Helping new joiners build a social network, especially those who’ve moved, reduces early drop-off and increases engagement.
– Leadership by example. When senior staff take part in informal social activities, others follow.
Helping people build friendships at work isn’t just “nice to have”, but can also be very good business. For example, when people feel seen, included and socially healthy, they stay longer, perform better, and support others more effectively.
What Does This Mean For Your Business?
The rise of friendship apps and local initiatives reflects a growing effort to redesign how people find and build social connection in everyday life. It seems that these tools are no longer fringe or experimental but are now part of a wider ecosystem responding to a social problem that governments, charities, employers and individuals all recognise. For some users, these platforms offer a lifeline out of isolation. For others, they may simply provide a way to expand social circles, build support networks or feel more rooted in a new place.
That said, the picture is far from complete. For example, many of the tools gaining attention still rely on user density, active moderation and effective onboarding to work well. Without enough people nearby, or a clear route from match to meeting, the experience can quickly disappoint. Also, while digital platforms play an important role, they cannot replace the value of shared activity, physical presence or community familiarity that real-world connection offers.
That’s why non-app initiatives remain essential. For example, programmes like the Chatty Café Scheme or The Lonely Girls Club don’t just reduce friction, they change norms. They make conversation with strangers feel less unusual and give people permission to reach out without awkwardness. These models, grounded in familiarity and low-pressure interaction, can succeed in ways algorithms sometimes cannot.
For UK businesses, this raises new questions. Employers have become more focused on wellbeing in recent years, but friendship and social support often remain under-addressed. A lonely employee may not present as unwell, but over time the impact can be felt in engagement, collaboration and retention. That means employers are not just well-placed to help; they may be expected to. Practical steps like encouraging interest-based groups, supporting social meetups and offering flexible coworking options are not just soft benefits but investments in team cohesion and long-term workforce resilience.
Policymakers will also need to think carefully. While loneliness is now recognised at a national level, digital inclusion, funding for local groups and access to social infrastructure will all shape how far these efforts reach. That includes ensuring these tools and spaces are safe, accessible and open to all, regardless of postcode, age, ability or income.
Ultimately, friendship is hard to manufacture but easy to overlook. What this growing sector of tools, platforms and initiatives reveals is that the need is clear, the demand is growing, and the routes to connection must be many, not just digital, but human, local and shared.
The coronavirus pandemic has changed the working landscape for everyone. Many people are working from home, having set up makeshift offices in their dining rooms.

But working from home has its risks. In a Government daily briefing, the Foreign Secretary, Dominic Raab, highlighted the rise in cyber hackers looking to exploit vulnerabilities in an attempt to steal valuable information.

“Whilst the vast majority of people have come together to defeat coronavirus, there will always be some who seek to exploit a crisis for their own criminal and hostile ends,” he said. “We know that cyber criminals, and other malicious groups are targeting individuals, businesses and other organisations by deploying Covid-19 related scams and phishing emails.

“We are working with the targets of those attacks, with the potential targets and with others to make sure that they are aware of the cyber threat, and that they can take the steps necessary to protect themselves or, at the very least, mitigate the harm that could be brought against them.”
Here are SMY IT Service’s top tips for minimising the threat of a cyber-attack.
When you set up your home Wi-Fi network, or received a router from your provider, did you change the default name and password for the admin console? If it still has the original details, your network is highly vulnerable. We also recommend changing the network’s name (sometimes referred to as the SSID) and password to something unique, which will help prevent a cybercriminal from accessing your network. When carrying out sensitive tasks such as online banking, it is safer to connect via your mobile data than to use public, free Wi-Fi connections.
More than 90% of all data breaches are caused by human error, often stemming from inadequate training in cybersecurity risks. One wrong click from an employee on a phishing email or fake website can bring down the most robust of IT systems. Employees should therefore be treated as your greatest security asset, acting as a 'human firewall’ and the first line of defence in preventing an attack. Those using the system need to understand the risks, what a cyberattack looks like, and what they should do in the event of an attack.
Be as wary, if not more so, of any email you receive when working remotely, especially one claiming to be from a manager or the boss, where it can be harder to verify its authenticity.
Firewalls act as a defence to prevent threats accessing your system. They create a barrier between your device and the internet by closing off ports of communication. A strong antivirus programme acts as the next line of defence by detecting and blocking known malware. Even if malware does find a way onto your device, an antivirus can detect this and usually remove it.
You might find that regular software updates are a nuisance, but they are vital. Updates often include patches for security vulnerabilities that have been found since the previous software update was installed. The majority of the time, updates can be set to run automatically while you are on a lunch break or overnight.
We suggest checking the status under Settings > Update & Security on Windows, or under System Preferences > Software Update on an Apple Mac, and installing any updates that are missing.
Your data is one of your business’ most prized assets, so it is imperative that it is backed up. Data can be lost in several ways including human error, physical damage to hardware or a cyberattack. One of the most convenient and cost-effective ways to store your data is in the cloud. This has the added benefit of allowing you to access your data remotely and on different devices.
Many users save their files to the desktop of their local PC for convenience; however, this means those files are no longer backed up by the server.
It sounds very simple, and it is: using a password on your device prevents anyone else from accessing its contents. If you have to work in a public space, or if you live with people you cannot share work information with, it is important to lock your laptop, tablet, or other device whenever it is left unattended. For Windows users, press the Windows key and L. It is also advisable to avoid working on computers directly facing windows, where people walking on the street can see your screen.
When choosing a password, ensure it is long and complex. We always advise clients to use a passphrase rather than a password, containing a mix of upper and lower-case letters, numbers and symbols, and to change it every few months. You can simplify your computer security by using effective password management. Check out our top tips for a secure password in our World Password Day blog.
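As a rough illustration of the passphrase advice above, the sketch below generates a passphrase from random dictionary words plus a digit and a symbol. The word list and function name here are our own illustrative choices (a real generator would draw from a much larger dictionary, such as the EFF Diceware list):

```python
import secrets
import string

# Illustrative word list only; a real generator would use thousands of words.
WORDS = ["correct", "horse", "battery", "staple",
         "orchard", "violet", "granite", "tandem"]

def make_passphrase(num_words: int = 4) -> str:
    """Build a passphrase from random words, mixing upper and lower case,
    and append a random digit and symbol, as the advice above suggests."""
    words = [secrets.choice(WORDS).capitalize() for _ in range(num_words)]
    digit = secrets.choice(string.digits)
    symbol = secrets.choice("!@#$%^&*")
    return "-".join(words) + digit + symbol

print(make_passphrase())
```

Using the `secrets` module (rather than `random`) matters here, because it draws from a cryptographically strong source suitable for security-sensitive values.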
Encryption is a cybersecurity measure that protects computers and their content by basically scrambling the data. The data, whether it is a message, image, email or other file, is converted into an unreadable format. This means that the data is readable only to the person authorised with the physical encryption key, and not cybercriminals.
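To make the "scrambling" idea above concrete, here is a deliberately toy sketch using a one-time pad (XOR with a random key). This is for illustration only, not a production cipher; real systems should use a vetted library (for example, AES via the `cryptography` package) rather than hand-rolled code:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the key. Applying the same key twice
    returns the original data, so this both encrypts and decrypts."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"Quarterly sales figures"
key = secrets.token_bytes(len(message))   # only the key holder can read the data

ciphertext = xor_cipher(message, key)     # unreadable without the key
recovered = xor_cipher(ciphertext, key)   # same operation reverses it

print(recovered == message)  # True
```

The key point mirrors the paragraph above: the ciphertext is meaningless on its own, and only someone holding the key can turn it back into the original message.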
If you need advice on working from home securely, don’t hesitate to get in touch with us.
A study by security firm Proofpoint has revealed that 82 per cent of UK organisations whose systems were infected by ransomware in 2021 opted to pay the ransom.
Much Higher Than The Global Average
Despite cybersecurity and government agencies warning against paying, Proofpoint’s '2022 State of the Phish’ report states that this UK figure for 2021 is the highest in any region surveyed and is 40 per cent higher than the global average.
Phishing Attacks & Ransomware
Phishing attacks are one of the main ways that criminals deliver ransomware (and other malware), or direct victims to a site where they download the ransomware that allows criminals to access their networks. Proofpoint’s report showed that more than three-quarters of organisations (78 per cent) saw email-based ransomware attacks in 2021, and 91 per cent of UK organisations reported facing bulk phishing attacks. In fact, in the first three quarters of 2021, 15 million phishing messages with malware payloads were linked to later-stage ransomware, involving malware families including Dridex, The Trick, Emotet, Qbot, and Bazaloader.
Why Not Pay?
The National Cyber Security Centre (NCSC) states that “even if you pay the ransom, there is no guarantee that you will get access to your computer, or your files” and that “occasionally malware is presented as ransomware, but after the ransom is paid the files are not decrypted. This is known as wiper malware.”
Also, organisations that pay the ransom will still have infected computers, will be funding criminal groups (allowing them to continue and bring suffering to others), and organisations known to pay are more likely to be targeted again in the future.
What Does The Survey Say Happened To Those Who Paid?
As the Proofpoint study showed, 60 per cent of organisations chose to at least negotiate with the attackers, and 82 per cent paid. Despite advice warning that payment offers no guarantee, only 4 per cent of those organisations that paid a ransom were unable to retrieve their data, most likely because the decryption key didn’t work properly or because the attackers simply made off with the money.
Is No Backup A Reason To Pay The Ransom?
It would seem logical that the lack of an effective backup would be a reason for organisations to pay a ransom. However, a 2020 report by cyber security company Emsisoft showed that some victims who were able to restore their networks from backups still opted to pay the ransom.
It should also be noted that one tactic that ransomware attackers often use is to threaten to publish an organisation’s data if the ransom isn’t paid.
Protecting Your Business From Ransomware Attacks
Ways in which businesses can protect themselves from falling victim to ransomware attacks include:
– Educate staff about the risk of phishing emails and emails carrying malware, how to spot phishing/suspicious emails, and never to open emails that appear suspicious.
– Make regular backups of the most important files, keep them off-site (e.g., the cloud) and make multiple copies of files using different backup solutions.
– Make sure that the devices containing the backup are not permanently connected to the network, scan backups for malware before files are restored, and regularly patch products used for backup.
– Stop malicious content reaching company devices – e.g. by filtering to only allow file types you would expect to receive, blocking websites known to be malicious, actively inspecting content, and using signatures to block known malicious code.
– Prevent attacks via Remote Desktop Protocol (RDP), or unpatched remote access devices by disabling RDP if it’s not needed, enabling MFA at all remote access points into the network, using a VPN, and patching known vulnerabilities in all remote access and external facing devices.
– Prevent malware running on devices – e.g. by centrally managing devices to only allow trusted apps and disabling or constraining scripting environments and macros.
– Plug vulnerabilities in devices – e.g. by installing security updates as soon as they are available and enabling automatic updates for operating systems, applications and firmware.
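The file-type filtering idea in the list above can be sketched as a simple allowlist check. The extensions and function name below are illustrative assumptions; a real mail gateway would also inspect file contents, not just names:

```python
from pathlib import Path

# Hypothetical allowlist: only file types the business expects to receive.
ALLOWED_EXTENSIONS = {".pdf", ".docx", ".xlsx", ".png", ".jpg"}

def is_allowed_attachment(filename: str) -> bool:
    """Return True only if the attachment's extension is on the allowlist.
    Only the final suffix is checked, so 'invoice.pdf.exe' is rejected."""
    return Path(filename).suffix.lower() in ALLOWED_EXTENSIONS

print(is_allowed_attachment("invoice.pdf"))      # True
print(is_allowed_attachment("invoice.pdf.exe"))  # False
```

An allowlist ("only accept what you expect") is generally safer than a blocklist of known-bad extensions, because attackers constantly introduce new file types that a blocklist would miss.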
What Does This Mean For Your Business?
Making sure there are strong security measures in place (particularly where email is concerned) and checking data is definitely being backed up securely on a regular basis (and that it is accessible when needed) can help towards effective ransomware protection. Attackers can pressurise businesses into paying (e.g. by threatening to destroy and/or publish data), and an attack may simply come at a bad time for a business where a long disruption could seem less costly than paying. The fact is, however, that paying may not guarantee the return of data and may make a business more likely to be attacked again because they paid. Ultimately, businesses will, as the stats show, make their own decisions, but by their very nature, attackers can’t be trusted and paying now could lead to even bigger problems later, and will fuel the continuing cycle of attacks for others too.
A prototype of an award-winning robotic fish design that filters water to trap microplastics has now been tested in lakes as well as in the lab.
Gilbert Wins
Eleanor Mackintosh's design for the glow-in-the-dark, water-filtering 'Robo-fish' named “Gilbert” won the University of Surrey’s public competition, the Natural Robotics Contest, which resulted in Gilbert being turned into a fully 3D-printed working prototype.
How The Robo-Fish Works
The robo-fish has been designed to work in the following way:
- The watertight tail contains electric motors to power the fins that move the unit around. The head is designed to flood, and the gills on either side contain a fine mesh that can filter two-millimetre (microplastic) particles out of the water.
- While swimming, the mouth opens (gills closed) as wide as possible.
- The mouth cavity fills with water, the mouth closes, and the gills open as the floor of the cavity is compressed to force water over the gills.
- The mesh catches microplastics and the water is ejected.
Other Advantages of The Robo-Fish Design
It was decided that the robo-fish should only use affordable off-the-shelf components and manufacturing techniques, so that the design is accessible to all. With this in mind, some of the other advantages of the robo-fish design are:
- It can be entirely 3D printed in ABS plastic (dipped in acetone to seal it) with a low-cost fused deposition manufacturing (FDM) printer.
- The modular design (i.e. a sealed 'tail’ unit onto which the 'head’ of the robot is attached via a snap-fit joint) means that the head can be swapped out for updates or different gill arrangements in the future.
Tested
A prototype of Gilbert, the motor-driven and currently remote-controlled robo-fish, has been tested in an outdoor lake in Guildford (UK) and has demonstrated effective swimming and steering on the water surface.
However, although the prototype, which was developed from a simple sketched idea from the designer, can currently swim, ingest, and retain particulates, it cannot yet distinguish between organic matter that is vital to the ecosystem, such as plankton and 'marine snow’, and harmful synthetic pollutants/microplastics. More development is needed, therefore, to enable the robo-fish idea to work as an effective tool for ocean clean-up and sampling. The developers have also suggested that the finished robo-fish should be automated rather than remote controlled (as it is currently).
What Does This Mean For Your Organisation?
Although the robo-fish was developed from a simple sketched idea in the first iteration of a contest and needs more work before it can be effective, it demonstrates that there could be many different ways to use technology to help tackle the microplastic pollution crisis. In reality, the number of robo-fish needed to make even a dent in the level of microplastic pollution wouldn’t be feasible, but some good could come from focusing on developing effective filtration systems or biological solutions, such as algae that can break down plastics. The challenge now is to find ways that different technologies can be combined to develop multiple solutions to the existing problem, but real progress will only be made when the use of non-biodegradable plastics is finally halted and replaced with a better solution for the environment.
In this Tech Insight, we look at what 'zero-day’ attacks are, then at some recent high-profile examples, and finally at what businesses can do to protect themselves from zero-day attacks.
Sophisticated Attacks That Highlight Vulnerabilities
In the ever-evolving landscape of digital threats and cyber warfare, one term often sends chills down the spines of cybersecurity professionals: Zero-Day Attacks. These sophisticated and stealthy cyber-attacks represent a significant challenge in today’s interconnected business world. They symbolise not just the advancement of cybercriminals’ tactics but also highlight the vulnerabilities that exist within our most trusted digital infrastructures.
Exploiting Zero-Day Vulnerabilities
Zero-day attacks are attacks by threat actors that exploit zero-day vulnerabilities. These are undisclosed software vulnerabilities (unknown to vendor or victims) that hackers can exploit to adversely affect computer programs, data, additional computers, or a network.
Vulnerabilities targeted in zero-day attacks can be found in operating systems, web browsers, Office applications, open-source components, hardware and firmware, and the Internet of Things (IoT).
Why “Zero-Day”?
The term “zero-day” comes from the fact that software developers and those in charge of digital security have zero days to fix the vulnerability because it is simply not known to them until the first attack. This means that attackers can exploit the vulnerabilities before developers become aware and are able to issue any patches or remediations.
How Big Is The Problem?
Although the number of zero-day vulnerabilities exploited fell by almost a third in 2022, Mandiant research shows it was still the second highest year on record, with 55 zero-day vulnerabilities exploited. Products from the three largest vendors (Microsoft, Google, and Apple) were the most commonly exploited, for the third year in a row.
What Can Happen?
Zero-day attacks commonly result in unauthorised data access, data theft, or service disruptions. These, in turn, can result in reputational damage, lost customers, fines (e.g. legal action by those affected and/or ICO fines), and possibly the loss of the business itself if the attack is serious enough. Secondary attacks on the business and those affected by data theft could also follow the first attack, e.g. malware, ransomware, phishing, social engineering attacks, and more.
Cybersecurity experts, therefore, continually work to discover these types of vulnerabilities before hackers do, to try and prevent potential attacks.
Vulnerabilities, Exploits, Then Attacks
After threat actors have discovered a zero-day vulnerability, the next stage is ‘zero-day exploits’ – the blueprints that outline how these hidden flaws can be taken advantage of, often traded on the dark web. The zero-day attack itself is, therefore, the act of exploiting the flaw/vulnerability, using the guidance of the exploit, before a patch can be rolled out, leaving a digital system scrambling in the wake of the unforeseen breach.
Who?
These under-the-radar strikes are often orchestrated by advanced cyber criminals, state-sponsored hacking groups, or unscrupulous entities with nefarious motives. The objectives are as varied as the threat actors themselves. For some, it’s about monetary gains whereas for others, it’s a tool for intellectual property theft, infiltrating state secrets, or merely sowing seeds of chaos. Corporate espionage and political machinations are just the tip of the iceberg when it comes to reasons behind these attacks.
Recent High-Profile Examples
Some recent, high-profile examples of Zero-Day attacks include:
– In 2023, a critical vulnerability was uncovered in the secure managed file transfer (MFT) service provided by MOVEit, a transfer platform widely used by large companies in a variety of sectors including healthcare, government, finance, and aviation. The Russian-based Clop ransomware group exploited the vulnerability and was able to steal data from eight UK organisations including the BBC, British Airways, Aer Lingus, and Boots.
– In 2022, CVE-2022-30190 (a.k.a. the Follina vulnerability) in the Microsoft Support Diagnostic Tool (MSDT) was exploited: victims were persuaded to open Word documents which enabled attackers to execute arbitrary code. The government of the Philippines, business service providers in South Asia, and organisations in Belarus and Russia were all subject to the same zero-day attack.
– The notorious Microsoft Exchange Server hack in early 2021, widely believed to have been sponsored by a nation-state, exploited several previously unknown vulnerabilities in Microsoft’s email server software. The damage was widespread and profound, with tens of thousands of organisations worldwide left grappling with the aftermath before a security patch could be rolled out.
– Google’s Chrome suffered a series of zero-day threats in 2021, prompting Google to issue a series of updates. The vulnerability was a bug in the V8 JavaScript engine used in the web browser.
– A zero-day attack on video conferencing platform Zoom in 2020 allowed hackers to access a user’s PC remotely if they were running an older version of Windows. The hackers targeted the administrator, allowing them to completely take over the machine and access all files.
– In 2020, Apple’s iOS was hit twice by zero-day vulnerabilities, one of which was a zero-day bug that allowed attackers to compromise iPhones remotely.
How Businesses Can Protect Themselves
So, how can businesses protect themselves against the threat of zero-day attacks? Given their nature, these attacks pose a formidable challenge, but protective measures that can be taken include:
– Regularly updating software and staying up to date with patching.
– Employing advanced threat detection tools that utilise behaviour-based detection techniques to pinpoint anomalies and unusual activity in network traffic (often the first sign of a zero-day attack).
– Conducting regular penetration tests and vulnerability assessments. These proactive practices can unearth previously unknown vulnerabilities within systems, allowing businesses to patch them before they are exploited. Following the principle of least privilege – limiting user access rights to the bare minimum needed for their work – can also help reduce the extent of potential damage should an attack occur.
– Beyond technological defences, investing in comprehensive cybersecurity awareness training for employees is crucial. An informed team acts as the human firewall against cyber threats, understanding the risks, recognising signs of possible attacks, and knowing how to respond swiftly and effectively.
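One way the behaviour-based detection mentioned above can work is by flagging statistical outliers in traffic volume. The Python sketch below is a toy illustration only (not any vendor’s product): it flags time buckets whose request counts deviate sharply from the median, using the robust median absolute deviation rather than the mean, which a single spike would distort.

```python
import statistics

def flag_anomalies(counts, threshold=5.0):
    """Flag time buckets whose volume deviates sharply from the median.

    A toy illustration of behaviour-based detection; real tools model many
    signals (ports, payload entropy, timing), not just request volume.
    """
    median = statistics.median(counts)
    mad = statistics.median([abs(c - median) for c in counts])
    if mad == 0:
        return []  # perfectly uniform traffic, nothing to flag
    return [i for i, c in enumerate(counts)
            if abs(c - median) / mad > threshold]

# Typical traffic with one sudden spike (e.g. mass data exfiltration)
traffic = [102, 98, 110, 105, 97, 101, 950, 103]
print(flag_anomalies(traffic))  # [6]: only the spike stands out
```

Real behaviour-based tools build far richer baselines, but the principle is the same: learn what “normal” looks like, then alert on sharp deviations.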
What Does This Mean For Your Business?
In the face of the ominous threat of zero-day attacks, businesses must adopt a proactive and comprehensive approach to digital security. A robust defence strategy isn’t a luxury but an absolute necessity in today’s digital age. It involves a constant balancing act of risk management, regular system updates and audits, advanced threat detection, routine penetration testing and vulnerability assessments, and maintaining a culture of security vigilance throughout the organisation.
A multi-layered security approach and a zero-trust model could, therefore, provide a solid foundation for defence although, because some vulnerabilities may still not be known until it’s too late, zero-day attacks remain an ever-present threat.
The potential devastation of zero-day attacks and their aftermath is unquestionable, but it is not an insurmountable challenge. By being as vigilant and proactive in defence measures as is realistically possible, businesses can steer through the murky waters of the cyber threat landscape, securing their digital assets, and upholding the trust of their customers and partners. The world of cybersecurity may be akin to a never-ending arms race, but with the right preparation and resilience, staying one step ahead must be an achievable goal.
In this, the first of a series of three articles explaining DMARC and email authentication, we look at why SPF, DKIM, and DMARC are the key pillars of email authentication.
The Issue
Businesses face numerous cyber threats, with email being one of the most common attack vectors. Phishing, spoofing, and malware are prevalent issues, making email security a top priority.
Effective email authentication mechanisms, such as SPF, DKIM, and DMARC, are therefore crucial in mitigating these threats, improving email security and ensuring only authenticated emails reach their destination.
What Is SPF?
The SPF (Sender Policy Framework) email authentication protocol helps prevent email spoofing by allowing domain owners to specify which mail servers can send emails on their behalf, i.e. to verify the sender of an email message.
This is achieved by publishing SPF records in the domain’s DNS (Domain Name System). DNS is the internet’s system for translating domain names into IP addresses, enabling users to access websites by typing human-readable names instead of numerical codes.
When an email is sent, the recipient’s mail server checks this record to verify the email’s origin. If the server isn’t listed, the email could be rejected or marked as spam.
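As a simplified illustration of that check, the Python sketch below tests a sender’s IP address against the ip4 mechanisms of a hypothetical SPF record. This is deliberately stripped down: a real SPF evaluator (per RFC 7208) also resolves include:, a:, mx:, and redirect= mechanisms via DNS lookups.

```python
import ipaddress

def ip_allowed_by_spf(spf_record: str, sender_ip: str) -> bool:
    """Check a sender IP against the ip4 mechanisms of an SPF TXT record.

    A simplified sketch: a full RFC 7208 evaluator also resolves
    include:, a:, mx: and redirect= mechanisms via DNS.
    """
    ip = ipaddress.ip_address(sender_ip)
    for mechanism in spf_record.split():
        if mechanism.startswith("ip4:"):
            network = ipaddress.ip_network(mechanism[4:], strict=False)
            if ip in network:
                return True
    return False

# A hypothetical record a domain owner might publish in DNS
record = "v=spf1 ip4:192.0.2.0/24 ip4:198.51.100.17 -all"
print(ip_allowed_by_spf(record, "192.0.2.55"))   # True: inside the /24
print(ip_allowed_by_spf(record, "203.0.113.9"))  # False: "-all" applies
```

The trailing "-all" in the record tells receiving servers that mail from any unlisted server should fail the check.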
What Is DKIM?
DKIM (DomainKeys Identified Mail) adds an additional security layer by attaching a digital signature to outgoing emails. This signature, verified against a public key in the sender’s DNS, ensures the email’s content hasn’t been altered in transit. DKIM’s role in email authentication, therefore, strengthens the integrity and trustworthiness of email communication.
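The sign-then-verify idea can be sketched in Python as below. Note the big caveat: real DKIM signs canonicalised headers plus a body hash with the domain’s private RSA or Ed25519 key (verified via a public key in DNS), whereas this dependency-free stand-in uses an HMAC with a shared key purely to keep the example runnable.

```python
import hashlib
import hmac

def sign_message(body: bytes, key: bytes) -> str:
    """Sign a hash of the message body.

    Stand-in for DKIM: real DKIM uses the domain's private RSA/Ed25519 key
    and canonicalised headers; HMAC is used here only so the sketch runs
    with the standard library alone.
    """
    body_hash = hashlib.sha256(body).digest()
    return hmac.new(key, body_hash, hashlib.sha256).hexdigest()

def verify_message(body: bytes, key: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_message(body, key), signature)

key = b"hypothetical-domain-key"
sig = sign_message(b"Hello, world", key)
print(verify_message(b"Hello, world", key, sig))  # True: untampered
print(verify_message(b"Hello, w0rld", key, sig))  # False: altered in transit
```

The second check failing is exactly the property DKIM provides: any change to the signed content in transit invalidates the signature.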
What Is DMARC?
DMARC stands for Domain-based Message Authentication, Reporting, and Conformance. It is an email authentication protocol designed to give email domain owners the ability to protect their domain from unauthorised use, such as email spoofing. It does this by allowing them to specify and enforce policies on how their email should be handled if it fails SPF (Sender Policy Framework) and DKIM (DomainKeys Identified Mail) checks, and it provides a way for receiving email servers to report back to the sender about emails that pass or fail these checks. Essentially, DMARC is a set of rules and reporting protocols added to a domain’s DNS records to improve and monitor the security of the email ecosystem associated with that domain.
DMARC, therefore, offers a way to unify SPF and DKIM’s capabilities, allowing domain owners to define how unauthenticated emails should be handled, and it provides detailed feedback on all emails sent from the domain, aiding in the detection and prevention of unauthorised use and email spoofing.
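In practice, a DMARC policy lives as a simple tag=value TXT record published at _dmarc.&lt;domain&gt;. The short Python sketch below parses a hypothetical record into its tags (p= is the policy for failing mail, rua= is where aggregate reports go):

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record's semicolon-separated tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            tag, _, value = part.partition("=")
            tags[tag.strip()] = value.strip()
    return tags

# A hypothetical record published at _dmarc.example.com
record = "v=DMARC1; p=quarantine; pct=100; rua=mailto:dmarc-reports@example.com"
policy = parse_dmarc(record)
print(policy["p"])    # "quarantine": send failing mail to spam
print(policy["rua"])  # address receiving aggregate reports
```

Policies typically start at p=none (monitor only), then move through quarantine to reject once the domain owner is confident legitimate mail is passing SPF and DKIM.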
The Evolving Email Security Landscape – Recent Changes By Email Providers
In response to a surge in email fraud and to comply with global data protection regulations like the GDPR, major email platforms are tightening their email authentication policies. For example, Google and Yahoo recently (February) expanded their guidelines for high-volume emailers. Yahoo said: “Sending properly authenticated messages helps us to better identify and block billions of malicious messages and declutter our users’ inboxes.”
As an indication of how serious the problem is, it’s estimated that half of the 300 billion emails sent per day are spam … to reiterate, that’s 150 billion spam emails sent each day! Google, for example, says it blocks a staggering 15 billion unwanted emails every day (spam, phishing, and malware).
The regulatory landscape, demanding higher standards of data privacy and security, plus the sheer volume of spam/phishing/spoofing/malware emails have now catalysed action in the form of platforms trying to enforce stricter measures.
For UK businesses, therefore, adapting to these enhanced authentication standards is crucial to ensure emails reach their intended recipients and to maintain compliance with data protection laws, preventing emails from being lost to spam folders or blocked.
The Necessity for DMARC, SPF, and DKIM
For the reasons just outlined, implementing DMARC, alongside SPF and DKIM, has now transitioned from a best practice to a necessity, hence a sudden push by many platforms to verify domains. These protocols are fundamental in validating email sources, ultimately enhancing deliverability, and protecting against cyber threats. Although it can feel like an extra hoop for businesses to jump through, their adoption ensures that businesses maintain their credibility and that their communications are effectively received.
What Does This Mean For Your Business?
For UK businesses, the implications of not implementing these email authentication protocols can be significant. Without proper setup, domains are at risk of being used for email spoofing, leading to potential data breaches and loss of customer trust. Additionally, non-compliance with the updated policies of email providers can result in emails being undelivered, affecting operations and communications.
To navigate this landscape, therefore, businesses must adopt a proactive approach, regularly reviewing and updating their SPF, DKIM, and DMARC configurations to combat evolving threats. This involves not only technical adjustments but also staying informed about the latest in email security practices and threats.
It’s important to remember that adhering to these email authentication standards is not merely about compliance, it’s about securing your digital communication channels. By implementing SPF, DKIM, and DMARC, businesses can significantly reduce the risk of cyber-attacks initiated via email, safeguard their digital assets, and ensure the integrity of their email communications.
Next Time ….
In this first of three in the series, we’ve looked at understanding the basics of email authentication and its significance in the digital age, i.e. looking at SPF, DKIM, and DMARC and their importance as business cybersecurity tools.
In next week’s second instalment of the three-part DMARC Diligence Tech Insight series, we’ll be taking a look at the critical but often neglected issue of securing multiple domains, including those not actively used for sending emails. It will emphasise the importance of applying DMARC policies to these “forgotten” domains to prevent them from being exploited in cyber-attacks, offering guidance on implementing comprehensive email authentication strategies across all owned domains.
Search technology has transformed significantly from text-based queries back in the nineties to now, where there’s a wide range of interactive methods like voice, visual, and AI-driven tools. Here, we look at how these advancements are reshaping the way we search, with a focus on the latest innovations and trends in the search landscape.
The Changing Landscape of Search
Search technology is currently undergoing a rapid transformation, driven by fierce competition between major tech companies like Google, Microsoft, and Amazon. While Google remains the dominant force (processing over 8.5 billion searches per day), other players are innovating and closing the gap by integrating advanced AI capabilities and new features. Microsoft is increasingly incorporating AI-driven results into Bing to enhance search relevance, while Amazon focuses on evolving its product search and recommendation algorithms, positioning itself as a major contender in the e-commerce space.
The Way We Search Is Changing
The way we search is also evolving. Voice search is becoming more prominent, with predictions suggesting that it will account for 30 per cent of all browsing sessions by 2030. Simultaneously, visual search, powered by technologies like augmented reality (AR) and image recognition, is emerging as one of the fastest-growing areas in search technology. These innovations are fundamentally changing user behaviour, as people move from traditional text searches towards more interactive and immersive experiences.
Competition Driving The Change
The competition between these tech giants is intensifying, with each company striving to create the most seamless, intuitive, and user-friendly search tools. This has led to the development of AI-powered chatbots, AR search experiences, and personalised recommendations that are reshaping the way users interact with search engines.
Now, we’re going to take a brief look at the many types of searches currently available, each offering unique ways to access information and interact with the digital world.
Text Search
Text search remains the most widely used and traditional search method, where users input keywords or phrases into a search bar on platforms like Google. Whether on desktop, mobile, or the Google app, this type of search allows users to retrieve vast amounts of information based on specific queries. It’s the backbone of modern search engines and is complemented by additional tools that enhance precision, such as advanced search operators.
Voice Search
With the rise of smart devices, voice search has become increasingly popular. Users can activate searches using voice commands by simply saying “Hey Google” on Android devices or through other assistants like Amazon Alexa or Apple’s Siri. Voice search allows users to ask questions, perform searches, or control their devices completely hands-free. This is particularly useful when multitasking or when typing isn’t practical, and the technology has greatly improved at understanding natural language and context.
Visual Search with Google Lens
Visual search, led by Google Lens, allows users to search using their smartphone camera. By pointing their camera at an object, text, or scene, users can instantly receive information about it, find similar products, or even translate text in real-time. Google Lens has opened new possibilities, allowing people to search for objects they don’t know the name of but can see. For example, by scanning a plant or an animal, users can identify the species instantly. This tool reflects how search is becoming more intuitive and integrated into everyday experiences.
Google’s New Video Search
Google is further innovating with its new Video Search feature, which allows users to point their camera at an object or scene, ask a question about it, and receive search results in real-time. This feature enables deeper interaction with the physical world, allowing people to get information on what they are seeing, whether it’s a historical building, a piece of art, or even a consumer product. This development is part of a growing trend where the boundaries between the digital and physical worlds blur, making search more accessible and context-driven.
Image Search
Image search has evolved to allow users to perform reverse image searches, primarily through Google Images. Users can upload an image or drag and drop it into the search bar to find similar images, verify the source, or learn more about the content. This is particularly useful for identifying things like locations, people, or products based on an image alone. It’s a key tool for anyone needing to trace visual content across the web.
Multisearch (Combining Text and Image)
Multisearch is an innovative approach that allows users to combine text and image input into a single query. This is ideal for instances where an image alone doesn’t provide enough detail. For example, users can upload a photo of a product and then add specific descriptors such as colour, brand, or style to refine the search. This combination enhances accuracy, especially when searching for specific items or variations that aren’t immediately obvious from an image alone.
Video Search (YouTube and Google Video Tabs)
Search has become more multimedia-focused, and video search is a huge part of this shift. On Google, users can switch to the “Videos” tab to find relevant content from platforms like YouTube, Vimeo, or other video-based sources. YouTube itself offers an internal search function, supporting both text and voice searches, allowing users to locate tutorials, entertainment, or educational videos based on their interests. As more content shifts to video formats, this type of search is becoming an essential tool for users.
Maps Search
Google Maps supports location-based searches, allowing users to find businesses, services, and landmarks in a specific area. With text or voice input, users can search for restaurants, shops, or attractions, while accessing additional features like reviews, photos, and directions. This has become an essential tool for daily life, integrating geographic data with business information, helping people navigate their world with ease.
Hum to Search
If you’re thinking of a song, Google allows you to hum, whistle, or sing the melody, and it can identify the song for you. Note that we have largely focused on Google in this article, although there are other platforms that allow users to search for specific sounds to identify their origin, such as identifying a species of bird by its birdsong (e.g. BirdNET or ChirpOMatic). Other specialist platforms doubtless exist for other animals and sources of noise.
Shopping Search
Google Shopping is another popular tool, helping users compare products, prices, and store availability. This search method is increasingly tied to AR (Augmented Reality) tools, where users can visualise products in their space before purchasing. Shoppers can now search for items, filter results based on price, location, or store, and even see product reviews and specifications. The combination of search with AR enhances the shopping experience, making it more immersive and informed.
In-App Search on Mobile Devices
On Android devices, in-app search allows users to locate content within specific apps directly from the Google search app. This includes finding emails, documents, or social media posts without switching between apps. It’s an efficient way to manage information across multiple platforms, ensuring users can access relevant data without leaving the search interface.
AR Search for 3D Objects
Augmented Reality (AR) Search has become a notable development, particularly in fields like education and e-commerce. Using Google AR, users can view 3D models of search results, such as animals, historical artifacts, or even products. This type of search is highly interactive, allowing people to see life-like models of objects in their real environment, enhancing the depth of the search experience.
Discover (Content Recommendations)
Google Discover shifts the search paradigm by offering content without the need for a direct query. This feature curates articles, videos, and other content based on a user’s interests and search history, presenting it in a personalised feed. It’s a proactive search tool, constantly updating to present users with new and relevant content as their interests evolve.
AI-Powered Search Using Chatbots
AI chatbots, such as ChatGPT, Bard, and Bing Chat, have revolutionised search by offering conversational interfaces. Rather than simply retrieving links, these chatbots can generate detailed responses, summarise information, and offer personalised recommendations. For example, users can ask ChatGPT to find information on a specific topic and receive a coherent, natural-language answer, instead of browsing through multiple web pages. These AI tools are rapidly improving, offering new ways to search, especially for more complex, nuanced questions that traditional search engines might struggle with.
What Does This Mean For Your Business?
The ongoing evolution of search technology presents both opportunities and challenges for businesses aiming to stay visible in this increasingly diverse landscape. As the methods people use to search diversify, companies must adapt their strategies to ensure they can still be easily found across all platforms and search types. It’s no longer enough to rely solely on traditional SEO tactics focused on text-based searches. To maintain or enhance their visibility, businesses now need to consider how they are appearing in voice, visual, and AI-driven searches, as well as adapting to the rise of augmented reality and interactive search experiences.
Voice search, in particular, has significant implications for businesses. As more users turn to devices like smart speakers and mobile assistants to ask questions and perform searches, optimising for voice queries is becoming more important. Voice searches tend to be more conversational and question-based, which means businesses need to adapt their content strategies to capture these queries effectively. For example, having concise, easily digestible answers to common questions about their products or services can help businesses rank higher in voice search results.
Similarly, visual search tools like Google Lens and augmented reality searches are transforming how consumers discover products. Retailers and brands need to ensure that their product images are optimised for visual search. High-quality visuals, detailed metadata, and clear product descriptions can help ensure that when users point their cameras at a product, the brand’s offering appears in search results. Also, augmented reality features, such as those in Google Shopping, allow consumers to visualise products in their environment before purchasing. Businesses that invest in AR-ready content and experiences can now tap into a growing consumer base that values immersive, real-time interactions.
The growing importance of AI-powered chatbots in search also means businesses will need to rethink how they engage with potential customers. AI tools like ChatGPT and Bing Chat provide more in-depth, conversational responses, making it essential for businesses to have well-structured and informative content that these systems can draw from. This means producing detailed yet user-friendly content that provides value and can be referenced by AI systems to give consumers the answers they seek.
For businesses operating in local markets, optimising for Google Maps and local searches is critical. Consumers increasingly rely on location-based searches to find services, restaurants, shops, and more. Ensuring that business listings are accurate, up to date, and include reviews, photos, and essential details is key to capturing local search traffic. Furthermore, investing in local SEO strategies to appear in voice searches for location-based queries will become increasingly important as consumers use voice assistants to find nearby services.
As Europe faces a worsening housing shortage, a new generation of construction robots is being pitched as a solution, but how realistic is the idea, and what does it mean for sustainability, workers, and the industry as a whole?
Rethinking the Way We Build
Housing shortages aren’t new, but in parts of Europe (including the UK) they’ve now reached critical levels. Spiralling costs, strict planning rules, and a growing mismatch between supply and demand have pushed home ownership further out of reach for many. At the same time, the construction sector is facing a crunch of its own.
While other industries have embraced automation and innovation, it seems that construction has remained largely unchanged. For example, today’s building sites largely feature bricks, mortar, and manual labour, just as they would have decades ago (although there are modern hoists and plant machinery). Arguably, the result is that building is still relatively slow and suffers from higher costs and dwindling productivity.
One telling statistic is that, although productivity in manufacturing has increased more than eightfold since 1945, in construction it has risen by just 10 per cent, and in some cases has actually gone backwards. For example, building a single-family home now takes longer and costs more than it did 50 years ago, even after adjusting for size. Labour shortages are also compounding the issue. In the UK, the number of bricklayers recently hit a 25-year low, with a third expected to retire within the next decade.
This stagnation is feeding into the wider housing crisis. The shortage of skilled workers delays projects and drives up costs. Meanwhile, urban populations continue to grow, and government targets, such as the UK’s pledge to build 300,000 new homes a year, are consistently missed.
It seems, therefore, that the response of some technologists may be welcome at this point, i.e. rather than simply trying to build more with the tools that have always been used, they suggest a total rethink of how we build.
Robot Builders?
The newest suggestion by some scientists is that autonomous robots, guided by AI and precision software, could take on repetitive and labour-intensive tasks, e.g. laying bricks, moving materials, and even assembling entire walls.
The idea is that robots could help us build faster, more affordably, and with less waste. This is a vision that blends technological ambition with an urgent social need, but the real question is whether this kind of innovation can change things for the better, or whether it’s another idea that will get stuck at the planning stage.
Bricklaying
Amsterdam-based startup Monumental is among those exploring whether robotics could reshape construction. The company has developed a suite of autonomous, electric robots designed to handle one of the most repetitive and labour-intensive tasks on site, i.e. bricklaying.
The system combines:
– Ground-based electric robots that move materials around a site.
– Small crane-like arms that place bricks and apply mortar.
– Computer vision and sensors to track exact positioning.
– A software platform, called Atrium, that maps the environment and guides the robots with millimetre precision.
Each robot is connected to a central coordination system that plans movements, detects site changes, and ensures accuracy in real time. Before building starts, a full 3D scan of the site is taken and aligned with digital building plans. From there, the robots get to work, layer by layer and brick by brick.
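To give a flavour of the placement planning involved, the Python sketch below lays out brick centres for a simple running-bond wall. This is an illustrative sketch only (Monumental’s Atrium software is proprietary and its internals are not public), and the default dimensions are standard UK brick sizes, not a detail taken from the company.

```python
def plan_courses(wall_length_mm, wall_height_mm, brick=(215, 65), joint=10):
    """Lay out brick x-positions for a running-bond wall, course by course.

    Illustrative only; defaults are standard UK brick dimensions (mm)
    with a 10 mm mortar joint.
    """
    length, height = brick
    pitch_x = length + joint  # horizontal spacing between brick starts
    pitch_y = height + joint  # vertical spacing between courses
    courses = []
    for row in range(int(wall_height_mm // pitch_y)):
        # Odd courses are offset by half a brick (running bond)
        offset = pitch_x / 2 if row % 2 else 0
        xs = []
        x = offset
        while x + length <= wall_length_mm:
            xs.append(x)
            x += pitch_x
        courses.append(xs)
    return courses

courses = plan_courses(wall_length_mm=2000, wall_height_mm=450)
print(len(courses))     # 6 courses fit within 450 mm
print(len(courses[0]))  # 8 bricks fit in the first 2000 mm course
```

A real system would then feed each planned position, adjusted against the live 3D scan, to the robot arm as a pick-and-place target.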
Work Alongside Human Builders
It should be noted here that the system is actually designed to work alongside human builders rather than to replace them. For example, labourers still prepare the site, oversee quality, and step in where needed. Monumental calls its approach “software-defined construction”, aiming for flexibility and integration rather than wholesale automation.
Does It Work?
So far, Monumental reports that the robots have built house façades, retaining walls, and other real-world structures across the Netherlands. For example, in 2023, the system completed its first full-scale 15-metre wall, and the company says performance has improved significantly with each iteration, helped by rapid software and hardware updates based on field testing.
The real aim, according to co-founder Salar al Khafaji, is to lay the groundwork for much broader automation, i.e. robots that can work not just with bricks but also with concrete blocks, window frames, door frames, roofing elements, and more.
For now, Monumental appears to be focusing on reliability and practical deployment. The system is offered as a service where clients simply specify the bricks and mortar, and Monumental delivers the finished wall.
Who Else Is Building With Robots?
Globally, construction robotics is actually gaining momentum. In the US for example, Built Robotics offers autonomous trenching and earthmoving systems for infrastructure projects. ICON, known for its 3D-printed homes, has built houses for disaster relief and was recently awarded a $57 million NASA contract to develop construction tech for the Moon.
In Japan, the Shimizu Corporation is experimenting with robots that can handle everything from interior finishing to welding. Closer to home, the UK’s Construction Innovation Hub is exploring off-site manufacturing techniques that integrate robotics for modular building components.
Each approach varies, but the end goal is to make construction faster, more precise, and less dependent on scarce labour.
What It Could Mean for Sustainability
As well as being slow and expensive, traditional construction methods are also environmentally costly. According to the Global Alliance for Buildings and Construction, construction and building operations are responsible for nearly 40 per cent of annual global carbon emissions!
Robotic construction could offer several environmental benefits, such as:
– Zero on-site emissions and reduced noise pollution from electric robots like Monumental’s.
– Less material waste and rework, thanks to precision placement.
– Faster builds, thereby lowering the overall energy footprint of each project.
By reducing reliance on diesel-powered machinery and minimising disruption, robotic systems could also be better suited to urban infill projects, where sustainability and community impact are closely scrutinised.
That said, the broader carbon impact also depends on material choices, energy sources, and supply chain factors, which robots alone can’t fix.
How Ready Is the Technology?
Despite the progress, fully autonomous building sites remain a long way off. Most current systems (including Monumental’s) focus on specific, repetitive tasks such as bricklaying or trench digging. Complex structural work, finishing, and systems integration still require human expertise.
Performance metrics are still emerging, but Monumental’s field projects suggest the technology is edging closer to commercial viability. The company claims its robots can build continuously, avoid common errors, and scale up with multiple units on one site.
Crucially, it has opted to work within existing construction norms, using conventional bricks, mortar, and pricing structures. This has helped reduce resistance among cautious builders, though long-term data on cost savings and productivity is still limited.
Implications for the Industry and Workforce
With labour shortages biting across Europe (19 countries were reporting a bricklayer shortage in 2022), automation may fill some urgent gaps. In the UK, where one-third of bricklayers are due to retire in the next decade, demand is unlikely to ease.
However, using robots raises familiar questions around job displacement. Even if robots assist rather than replace workers, fewer may be needed on site. That could reshape the training landscape, shift demand towards tech-savvy roles, and put pressure on traditional trades.
For construction firms, although automation could help meet delivery targets, especially for large-scale housing projects, costs, reliability, and integration still weigh heavily. Monumental’s “robot-as-a-service” model, which avoids capital investment and ties pricing to output, is one attempt to lower that barrier. Whether others will follow remains to be seen.
Governments, Policy, and the Housing Crisis
In places like Monumental’s home country, the Netherlands, where the government has committed to building one million homes by 2030, robotic construction may offer a helpful lever, but not a panacea.
In the UK, housing policy remains politically fraught, and delivery targets have repeatedly been missed. If robotic systems can offer faster build times, safer sites, and lower carbon footprints, they could become part of the toolkit for councils and developers alike.
Still, regulation, standards, and public trust are likely to play a major role. Construction robots may be technically impressive, but mass adoption will depend on how convincingly they can be integrated into real, everyday projects.
What Does This Mean For Your Organisation?
It seems there’s no single fix for Europe’s housing crisis, but the slow pace and inefficiency of traditional construction methods have clearly become part of the problem. As this article has highlighted, robotic systems like Monumental’s offer one possible route towards building more homes, more quickly, and with fewer emissions. What’s striking is not just the innovation itself, but the way it’s being packaged, i.e. pragmatic, incremental, and designed to slot into existing workflows rather than disrupt them completely.
In the UK, developers under pressure to meet housing targets may find robotic services attractive, particularly for repetitive or labour-intensive parts of the build. Construction firms willing to engage with these tools early on could gain a competitive edge, especially as skilled labour becomes harder to find. Also, tech providers, equipment suppliers, and training organisations may see growing demand for systems integration, on-site support, and workforce upskilling.
That said, the adoption curve is unlikely to be smooth. Much depends on how well these technologies perform under real-world pressures, how quickly costs come down, and whether builders, regulators and insurers are willing to adapt. Jobs will change (i.e. some may go, others will evolve) and this raises big questions for education, employment policy, and worker protections.
For policymakers and local authorities, there will need to be a balance between embracing robotic construction to help unlock stalled housing developments and support sustainability goals, and rethinking procurement, planning frameworks, and public trust in new technologies. If done carefully, it could support a more resilient and forward-looking housing system. If rushed or poorly managed, it may risk further complicating an already difficult landscape.
What’s clear is that the conversation has moved on from theoretical hype to practical possibility: robots aren’t going to replace the construction industry, but they may quietly start rebuilding how it works.
Just under half of all cyber-attacks are aimed at small to medium-sized businesses, but the risk isn’t limited to those organisations: it is one that everyone faces, even national and international brands.
This week, budget airline EasyJet fell victim to a cyber-attack. Around nine million customers’ travel information and contact details were exposed in the breach, along with 2,208 customers’ credit card details.
In a statement, EasyJet clarified to its customers that “issues of security are taken extremely seriously” and customers who have had their credit card details accessed are being contacted.
The news of the cyber-attack came just days after UK Foreign Secretary Dominic Raab highlighted the rise in cyber hackers looking to exploit vulnerabilities and steal valuable information during the coronavirus pandemic.
EasyJet is not the only high-profile organisation to fall victim to such highly sophisticated cyber-attacks. The likes of the NHS, British Airways and cleaning company ISS World have all been at the centre of huge hacks or data breaches.
As defined by the National Cyber Security Centre, cyber-attacks are “malicious attempts to damage, disrupt or gain unauthorised access to computer systems, networks or devices, via cyber means”.
Cyber-attacks can come in many guises, and being aware of them and taking preventive steps against them are the best ways to protect a business from attack.
There are two categories of cyber-attack: targeted and untargeted. Each covers several ways in which hackers can target an individual or organisation.
Untargeted attacks are not specifically aimed at any one type of person or organisation; they seek out as many avenues for exploitation as possible.
These include:
Phishing – emails sent to large numbers of people, asking for personal data or containing fake links that lead to harmful material
Watering hole attacks – compromising a legitimate website, or creating a fake one, in order to exploit visiting users and harvest their personal details
Ransomware – a type of malware which criminals use to gain access to files and lock users out of them. The locked files are then held for ransom, with payment demanded for their return
Scanning – searching a large area of the internet randomly to find sites to attack
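To make the phishing point concrete, here is a minimal, illustrative sketch of the kind of URL screening a mail filter might perform before a link is clicked. The patterns and the subdomain threshold below are assumptions chosen purely for illustration, not an official or complete detection list:

```python
import re
from urllib.parse import urlparse

# Illustrative red flags often seen in phishing links: a raw IP address
# instead of a domain name, lookalike spellings, or deeply nested subdomains.
SUSPICIOUS_PATTERNS = [
    re.compile(r"^\d{1,3}(\.\d{1,3}){3}$"),  # host is a raw IP address
    re.compile(r"paypa1|g00gle|micros0ft"),  # digit-for-letter lookalikes
]

def looks_suspicious(url: str) -> bool:
    """Return True if the URL's host shows common phishing red flags."""
    host = (urlparse(url).hostname or "").lower()
    if any(p.search(host) for p in SUSPICIOUS_PATTERNS):
        return True
    # Many nested subdomains are often used to bury the real domain.
    return host.count(".") >= 4

print(looks_suspicious("http://paypa1-secure.example.com/login"))  # True
print(looks_suspicious("https://www.gov.uk/"))                     # False
```

Real filters combine many more signals (reputation lists, sender authentication, link rewriting), but the underlying principle of checking where a link actually points before trusting it is the same.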
Targeted attacks are aimed at an individual or organisation that has been singled out, and are often more carefully planned and more damaging.
These include:
Spear phishing – similar to phishing, except the emails are crafted for specific, targeted individuals
DDoS extortion – distributed denial of service (DDoS) attacks attempt to overwhelm a website with a flood of traffic, typically crashing the server. Criminals contact organisations and threaten a DDoS attack unless a payment is made
Subverting the supply chain – attacking the software or other suppliers that the target organisation relies on
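As a concrete illustration of one building block used to blunt DDoS-style traffic floods, the sketch below implements a simple token-bucket rate limiter, the sort of per-client cap a server or firewall might apply. It is a simplified example under illustrative parameters; real mitigation usually also happens upstream, e.g. at a CDN or traffic-scrubbing service:

```python
import time

class TokenBucket:
    """Allow short bursts up to `capacity` requests, refilling at `rate` per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, never exceeding capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: drop or delay this request

# A client hammering the server with 20 instant requests only gets the
# initial burst through; the rest are rejected until tokens refill.
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(20)]
print(results.count(True))  # typically 10
```

The design choice here is deliberate: legitimate users making occasional requests never notice the limiter, while a flood from one source is cut off almost immediately.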
The variety of cyber-attacks and the ways in which hackers operate can be daunting, but you can protect yourself and your organisation. Bigger companies, such as EasyJet, are more at risk from sophisticated, targeted attacks, while SMEs are more likely to fall victim to untargeted attacks, and can protect themselves against these by taking preventative measures.
Not all measures to protect yourself from cyber-attacks have to be complex. Simple steps such as having secure passwords and installing security software all go a long way to protecting your computer or devices. Read our blog for six top tips for increasing your computer security.
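The "secure passwords" step can be made concrete with a short sketch that checks a password against a few basic rules. The length threshold and the tiny common-password list below are illustrative assumptions for the example, not official guidance:

```python
import re

def password_issues(pw: str) -> list:
    """Return the reasons a password is weak; an empty list passes these checks."""
    issues = []
    if len(pw) < 12:
        issues.append("shorter than 12 characters")
    if not re.search(r"[A-Z]", pw):
        issues.append("no upper-case letter")
    if not re.search(r"\d", pw):
        issues.append("no digit")
    # Illustrative only: a real check would consult a large breached-password list.
    if pw.lower() in {"password", "letmein", "qwerty123"}:
        issues.append("found in common-password list")
    return issues

print(password_issues("password"))                # several issues flagged
print(password_issues("Tr1cky-Horse-Stable-42"))  # []
```

In practice, length and uniqueness matter most, and a password manager removes the need to remember strong passwords at all.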
During COVID-19 we are providing our clients with access to training, including topics such as cyber security, to pass on to their employees. Internal training on matters such as this can reduce the risk of attacks that hinge on an employee’s decision, for example opening a scam email or attachment.
In a society where cyber threats are evolving at a rapid pace, the need to keep on top of cyber-security is vital, and even the most experienced computer users can run into issues. If you need advice, feel free to contact us.
In this article, we look at how, in addition to the devastating missiles, rockets, bombs, tanks and other weapons, Ukraine has also been the subject of cyber-warfare and we look at how these and other war-related issues could be cause for concern across Europe.
War In Ukraine
At the time of writing this article, Ukraine has come under attack from Russian forces by sea, ground and air, with reports indicating that:
– Russian troops are still trying to take Ukraine’s two biggest cities, Kyiv and Kharkiv.
– An estimated half a million refugees have left Ukraine.
– There are news reports that residential areas in Ukrainian cities are now being hit with attacks such as cluster bombs.
– The first round of talks about a ceasefire have been held.
– Satellite images have shown large columns of Russian armour and other military vehicles heading into Ukraine.
– Sanctions on Russia have caused the value of the Ruble to crash, leading to long queues at Russian banks.
Cyber Attacks – A Part of 'Hybrid Warfare’
State-sponsored cyber-attacks are now an ongoing threat faced by all countries but, specifically in the case of Ukraine, they are being used as a weapon of war. A military strategy first proposed by Frank Hoffman, and highlighted in a NATO review last year, 'hybrid warfare’ is described as an “interplay or fusion of conventional as well as unconventional instruments of power and tools of subversion” which are “blended in a synchronised manner to exploit the vulnerabilities of an antagonist and achieve synergistic effects.” In short, it’s a combination of conventional and unconventional strategies, methods, and tactics, including cyber-attacks. These cyber-attacks are now used to support the 'hard power’ of military action by disrupting vital services like power and communications, creating more fear and confusion.
A Feature of Previous 'Hybrid’ Methods Believed To Have Involved Russia
Russia has been blamed for the use of cyber-attacks against states before, including Ukraine, especially during military conflicts. For example:
– Russia has been blamed for DDoS attacks on both Georgia and Crimea during the incursions in 2008 and 2014.
– In December 2015, Ukrainian power stations were hacked and taken offline. It was also reported that telephone lines had been disrupted so that engineers couldn’t make calls. The result was hours of huge disruption for homes, businesses and other entities.
– In June 2017, the software used for Ukraine’s tax return filing system was hacked and companies were attacked with ransomware. The malicious software also spread to other countries, including the UK, as well as causing huge disruption to merchant shipping. The cost was estimated at $5-10 billion.
– In 2019, Russian military intelligence was blamed for cyber-attacks (DDoS) on 2000 websites in Georgia. The websites affected included the presidential website and the country’s national TV broadcaster.
Recent Cyber Attacks
The hard power of military attacks against Ukraine are reported to have been accompanied in recent weeks by cyber attacks. For example:
– In mid January, Ukraine blamed Russia for attacks on 70 government websites (the largest attacks on Ukraine in 4 years) including the Diia website. This system, linked to government services, is where personal vaccination data and certificates are stored.
– In mid-February, Ukraine reported that two state-owned banks, PrivatBank and Oschadbank, had been hit by large-scale DDoS attacks and other failures which interrupted banking services.
– Last week, there were reports of distributed denial of service (DDoS) attacks and “wiper” attacks against Ukrainian organisations, which destroyed data on infected machines. Experts believe the wiper attacks may have been planned as far back as December.
– Ukraine’s Computer Emergency Response Team (CERT) has reported that hackers from the Belarusian military (a group code-named “UNC1151”) have been targeting the private email addresses of Ukrainian military personnel “and related individuals”. The attacks have involved using password-stealing emails to break into Ukrainian soldiers’ email accounts and using the compromised address books to send further malicious messages.
Defence – The Rapid Cyber Response Team
Countries have their own cyber protection units, usually linked to intelligence services/agencies, and the military. In terms of Ukraine’s defence against cyber-attacks, help could come from:
– The CRRT. Following a call for help from Ukraine, it has been reported that a rapid-response team (CRRT) is being deployed across Europe to help defend against the Russian cyber-attacks which have accompanied (and preceded) the ground war. The team is reported to be made up of 12 experts from Lithuania, Croatia, Poland, Estonia, Romania, and the Netherlands.
– Like the UK’s own Computer Emergency Response Team (CERT) which was set up in 2013, Ukraine has its own CERT-UA.
Should We Be Concerned About The Spread of The War?
While thoughts are of course with the people of Ukraine, there has been much speculation and some warnings which indicate how the war could spread. For example:
– Neighbouring countries are preparing for the possibility of attacks, invading forces, or events that could spill over into their territories, e.g. Poland, Latvia, Georgia, Azerbaijan, and even Finland.
– Russia’s President Putin has said that he has put Russia’s nuclear forces on high alert. This, however, has been dismissed by many as an attempt at distraction.
Should We Be Concerned About The Spread of the 'Cyber War’?
At the beginning of February, cyber-attacks on oil facilities in Germany, Belgium and the Netherlands, thought to be of Russian origin, were seen as a way for Russia to exert pressure on Germany, and came at a time when Russia was threatening to close its oil pipelines. Also, at the end of January, UK businesses were warned by the National Cyber Security Centre (NCSC) to bolster their cyber defences in case Russia widened its attack scope to NATO countries and/or because of the spread of malware related to attacks on Ukraine. The NCSC has given advice about how to prepare here: https://www.ncsc.gov.uk/guidance/actions-to-take-when-the-cyber-threat-is-heightened
What Does This Mean For Your Business?
In addition to the terrible consequences of war for Ukraine’s citizens, there is uncertainty and fear about what happens next, and what could happen to escalate the conflict. Also, with more than one-third of Europe’s natural gas coming from Russia there are, of course, concerns about how the conflict could begin affecting other countries and there are bound to be big knock-on consequences for supply chains and other industries across the world. In terms of technology, there are clear risks of more Russian cyber-attacks being launched against NATO countries and the US and, as NCSC has warned, UK businesses now need to pay special attention to strengthening their cyber defences, not least to protect against malware attacks. Large UK companies and organisations involved with vital UK infrastructure could now face serious cyber-attacks (e.g. DDoS attacks) and, if not properly protected, this could have wider effects across the country for businesses and homes.
Here we look at what Matter 1.0 is, its advantages for the IoT and setting up a smart home (or office), and what its current limitations are.
What Is Matter?
Released this October, Matter 1.0 is a new open standard that resolves interoperability and connectivity issues between the IoT devices in smart homes. This single software standard and certification means that IoT gadgets and devices from different manufacturers will be compatible and able to link together to create a smart home, provided they are Matter certified and communicate using the common standard. Until now, for example, consumers trying to create a fully connected 'smart home’ (e.g. where the lighting, locks, heating, music, and home devices can all be voice-operated from a digital assistant) have faced compatibility issues, complications, and difficulties in setting up and managing lots of smaller micro-ecosystems.
Who?
The new Matter 1.0 standard and accompanying certification has been developed by the Connectivity Standards Alliance (CSA). Formed in 2002, it is comprised of an international community of more than 550 technology companies. The Alliance describes itself as “the foundation and future of the Internet of Things (IoT)” and its mission “to simplify and harmonise the Internet of Things (IoT) through open, global standards and by creating a place where companies can work together to create a more connected, accessible, sustainable, and equitable world.”
It is not compulsory for IoT device makers to get Matter 1.0 standard certification, but they risk being left behind if they don’t.
Underlying Technologies – Wi-Fi & Thread
In order to make Matter work effectively, underlying network technologies like Wi-Fi and Thread were needed, and the Wi-Fi Alliance and Thread Group partnered with the Connectivity Standards Alliance in the development of Matter. Wi-Fi, for example, enables Matter devices to interact over a high-bandwidth local network and allows smart home devices to communicate with the cloud.
Thread is a low-power and low-latency wireless mesh networking protocol which solves the complexities of the IoT, addressing challenges such as interoperability, range, security, energy, and reliability. Thread essentially provides an energy efficient and reliable mesh network within the home.
How Is Matter Being Introduced?
Matter is to be rolled out as an update in early 2023 to current devices and their smartphone apps so owners of existing smart home set-ups can continue using them as normal. The CSA says this initial release of Matter will be “running over Ethernet, Wi-Fi, and Thread, and using Bluetooth Low Energy for device commissioning”.
Certification
For IoT device manufacturers looking to get Matter certification for their products it’s a case of being a CSA member and making sure that their products comply with the requirements of the new standard and submitting their product to a testing lab. All new product CSA certifications, for example, require product testing at a CSA Authorised Test Provider, followed by an application with the CSA in its Certification Tool.
The CSA’s global certification program includes eight authorised test labs that can test Matter, but also Matter’s underlying network technologies, Wi-Fi, and Thread.
The Certification Tool is an online web tool that allows CSA members to manage and submit product certification applications to the Alliance.
The Advantages of the Introduction of Matter 1.0
The many advantages of introducing Matter include:
– It will now be much easier for consumers to find and set up compatible smart home-tech. Matter-compatible products should integrate seamlessly and interact, e.g. with Google Assistant, Amazon Alexa, or Apple Homekit-powered setup.
– There will be more options for the gadgets that can be added to a smart home system.
– Existing smart home setups will continue to work as well as ever and won’t need the cost and hassle of being replaced straight away; if new enough, they will receive the update automatically.
– Manufacturers/developers will be able to build truly compatible cross-platform devices using the standard. This could increase their market potential, share, and profits.
– Improved IoT device/gadget security, since Matter certification makes security compliance compulsory.
– Less connectivity drop-off and disruption. This is because, although Matter is IP-based, the standard works as a layer on top of Wi-Fi, Bluetooth, and Thread so some smart home functions will still work even when the local internet goes down.
– Support and resources relating to Matter are available for developers, e.g. the open-source Matter code repository on GitHub.
Limitations/Drawbacks
There are, however, a few limitations and drawbacks to the introduction of the new Matter 1.0 standard, including:
– Not all devices will support it. For example, older smart speakers / smart devices won’t support Matter.
– Brands will differ in how they integrate their products with Matter, i.e. support for the new standard may only come to newly released smart home gadgets from some brands, and even some fairly recent models from some brands won’t be updated to Matter.
– The initial release of the Matter 1.0 standard will only support a limited variety of common smart home products, including lighting and electrical, HVAC controls, window coverings and shades, safety and security sensors, door locks, media devices including TVs, controllers as both devices and applications, and bridges.
What Does This Mean For Your Business?
The vast and rapidly growing Internet of Things has presented many challenges. Smart speakers and digital assistants, however, brought the promise of actually being able to have a truly smart home, if it wasn’t for the fact that it required lots of time-consuming research and the frustration of gadget and device compatibility and interoperability issues.
Having one common standard, therefore, that can link many different IoT devices and gadgets together seamlessly and easily does sound like a significant breakthrough that could really open up the possibilities of the IoT and help consumers and developers alike. Matter’s introduction could mean more choice and less hassle for consumers, make linking up the elements of a smart home easier, less time-consuming, and less costly, and could deliver more consumer confidence in the whole smart home area. This, in turn, could lead to more scope, more sales, and a bigger market for developers and manufacturers. It could also bring new opportunities for smart home ideas, could help save home energy costs, and could (with the requirement for devices to be security compliant to be Matter certified) tackle the security problems that many IoT devices have posed until now. Matter, therefore, looks like it could be a real breakthrough in gaining more control over the IoT and how it’s managed, operated and protected, in a way that benefits individuals and businesses alike.
The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), the US actors’ union with 160,000 members, has gone on strike, mainly over fears that AI will reduce earnings and damage the profession.
New Agreement Needed
One of the Union’s main roles is negotiating terms between actors and the studios, and the last agreement expired on 30 June (and was extended to 12 July), meaning negotiations were needed which have led to a disagreement and the strike. This is the first time the actors and writers have been on strike at the same time since 1960, when Ronald Reagan was president of the Screen Actors Guild.
Reasons
In addition to negotiating 'residuals’ (the payments performers receive for repeat showings of films or TV shows, which streaming has complicated), the question of who owns a performer’s likeness when it is reproduced by AI has become a serious issue and a major sticking point.
The SAG-AFTRA union’s membership isn’t only actors in film and TV shows; it also includes video game performers, radio presenters, models, YouTube influencers, and more. Although the union is based in the US, its reach, influence, and acts of solidarity with its members are global, meaning the strike is causing major disruption across the whole industry.
How Could AI Negatively Affect Actors And The Profession?
A recent proposal by the AMPTP (Alliance of Motion Picture and Television Producers), which represents studio bosses, reportedly suggested that background performers could simply be scanned and paid one day’s pay, with the scanned image then owned by film companies, which could use the person’s image or likeness (reproduced with AI) in unlimited future projects without the performer’s consent and without compensation. It has been noted that this proposal resembles the plot of an episode of Charlie Brooker’s Black Mirror (a Netflix sci-fi series). As SAG-AFTRA union president Fran Drescher said in a recent press conference, the performers’ fear is: “We are all going to be in jeopardy of being replaced by machines.”
Stars Out
The SAG-AFTRA union has many well-known celebrities, many of whom have come out very publicly in support of the strike, for example, Meryl Streep, Charlize Theron, Jamie Lee Curtis, Olivia Wilde, Ewan McGregor, and George Clooney who said that change was required for “our industry to survive”. The fact that SAG-AFTRA has some very famous members is a source of power and leverage in the argument.
Equity In The UK
Comments by Liam Budd, of UK acting union Equity, have shed more light on the extent to which AI could threaten the pay and jobs of actors and performers. Mr Budd recently outlined how AI is being used for automated audiobooks, synthesised voiceover work, and digital avatars for corporate videos, and how AI deepfakes are being used in films, all of which has led to “fear circulating” amongst Equity members.
Writers’ Concerns
The trade union representing writers for TV, film, theatre, books and video games in the UK, The Writers’ Guild of Great Britain (WGGB) has also expressed concerns about the encroachment of AI, such as:
– AI developers using writers’ work without permission, infringing writers’ copyright.
– AI tools don’t clearly show where AI has been used to create content.
– Increased AI will reduce the number of job opportunities for writers and reduce the level of writers’ pay.
– The contributions made by the creative industry to the UK economy and national identity could be diluted by AI.
That said, on the point of whether AI could replace writers, the WGGB says “AI systems are not yet sophisticated enough to produce works which accurately mimic the standard of writing produced by professional writers” and “the WGGB does not believe that AI will be able to replicate the originality, authenticity, enthusiasm and humanity that professional writers put into their storytelling.” The union does, however, accept that AI systems could be able to mimic writers’ work in the future.
What have The Studios Said?
The Alliance of Motion Picture and Television Producers (AMPTP), the trade association which represents the studios and their interests, issued a statement highlighting the positive aspects of its proposal, such as “historic pay and residual increases, substantially higher caps on pension and health contribution”, and saying that “A strike is certainly not the outcome we hoped for as studios cannot operate without the performers that bring our TV shows and films to life.”
With regards to AI and using images and likeness of actors, the AMPTP has said that it has proposed measures to protect actors’ digital likenesses which include securing an actor’s consent to create and use a digital likeness or to digitally alter their performance, and that the use of digital replicas will be restricted to the specific motion picture for which the actor is employed. Also, it says any additional use would require that actor’s consent and further negotiation.
What Does The Strike Mean For The Entertainment Industry?
In summary, calling the strike means:
– All production under the SAG-AFTRA TV and film contract has been halted immediately, bringing projects to a standstill both in the US and around the globe.
– In the UK, with solidarity from the Equity union (and those who have joint cards), many members will stop work and be reluctant to accept work that would have been offered to striking colleagues. Also, co-productions of films and TV shows (US/UK) will be put on hold.
– Actors will no longer be able to promote shows and films they have already made, and this will extend to use of social media for promotion.
– Chat shows may be short of high-profile celebrities for the time being.
Ownership of Likeness
Ways in which famous actors normally protect their image and likeness, in addition to the protection offered by union deals, can include:
– Right of Publicity – the main legal doctrine that celebrities use to control the commercial use of their name, image, voice, or persona.
– Trademark Law – registering their name, image, or signature as trademarks.
– Contract Law – actors’ contracts with studios or other entities often include detailed provisions about how and when their image can be used.
– Copyright Law – to protect creative works that feature a person’s image.
– Defamation Law – to stop someone falsely using a celebrity’s image in a way that harms their reputation.
– Privacy Law – used (in some jurisdictions) to protect against intrusive or misleading uses of a person’s image.
However, the rapid evolution of AI and AI tools has blurred the lines around ownership. For example, when an AI image generator like DALL-E is used to create an image incorporating a person’s likeness, the new image may be in the public domain, free for anyone to use and not protected by copyright law.
This, and the many arguments of the acting and writers’ unions point to the need for new regulations that address these many evolving issues.
What Does This Mean For Your Business?
The actors argue that AI gives studios the chance to slash costs, and are clearly afraid that AI could be used to replace them and their skills, reduce pay, lead to fewer acting jobs, damage their industry, and devalue their profession and status. They also argue that serious issues need addressing around the use of image and likeness, and around consent for and ownership of these. For the studios, film and programme makers, and their customers, the strike is likely to be costly, disruptive and damaging. Unfortunately, although AI can be used to help enhance film and programme making, the nature of the business lends itself well to automation. For example, actors’ images, voices and locations can all be copied fairly easily by AI tools (although still not perfectly), and generative AI tools can even be used to write scripts (albeit poorly, according to Charlie Brooker, writer of Black Mirror). Of course, this is all part of a negotiation between unions and studios that also covers other matters, e.g. the effects of streaming. However, it highlights much of the fear around AI: what many see as the alarming pace of development and the need for regulation to keep up, how automation by AI could destroy jobs, and even how AI could pose a threat to humanity itself.
It also highlights how generative AI tools are blurring hitherto clearer legal boundaries and how quickly AI can disrupt businesses and industries creating both opportunities and threats for those in them. Many will watch with interest how the dispute unfolds and how similar issues will affect/are affecting related industries going forward e.g., music and art.
Here we look at some of the latest WhatsApp updates and the value and benefits they deliver to users.
Search Conversations By Date For Android
The first of three new updates of significance for WhatsApp is the “search by date” function for individual and group chats on Android devices. Previously, this function had been available on other platforms (iOS, Mac desktop and WhatsApp Web).
As featured on Meta’s Mark Zuckerberg’s WhatsApp channel (Meta owns WhatsApp), WhatsApp users on Android can now search for a chat on a particular date (not just within a range). For example, one-on-one or group chat details can be date searched by tapping on the contact or the group name, tapping on the search button, and then tapping the calendar icon (right-hand side of the search box), and selecting the individual date. This feature is likely to deliver a better user experience by giving greater precision and control and potentially saving time in locating specific messages.
Privacy Boost From User Profile Change
Another potentially beneficial boost to the privacy of what is already an end-to-end encrypted messaging app is (in the beta version) the blocking of screenshots within the app. If users try to screenshot a profile picture, for example, WhatsApp now displays a warning message. Although the ability to download profile pictures was removed five years ago, it was still possible to take screenshots, a loophole that enabled the sharing of profile pictures without consent, impersonation, and harassment. Closing this loophole in the latest update should, therefore, contribute to greater user privacy and safety.
Minimum Age Lowered To 13
One slightly more controversial change to WhatsApp's terms and conditions, however, is the lowering of the minimum age of users in Europe (and the UK) from 16 to 13. This brings the service in line with its minimum age rules in the US and Australia. The move was taken in response to new EU regulations, namely the Digital Services Act (DSA) and the Digital Markets Act (DMA), and to ensure a consistent minimum age requirement globally. The two new regulations have been introduced both to tackle illegal and harmful activities online and the spread of disinformation, and to help steer large online platforms toward behaving more fairly.
In addition to the minimum age change, WhatsApp is also updating its Terms of Service and Privacy Policies to add more details about what is or is not allowed on the messaging service and to inform users about the EU-US Data Privacy Framework. The framework is designed to provide reliable mechanisms for personal data transfers between the EU and the US in a way that’s compliant and consistent with both EU and US law, thereby ensuring data protection.
Criticism
However, although the minimum age change (which may sound quite young to many parents) will be good for WhatsApp by expanding its user base and good for users by expanding digital inclusion and family connectivity, it has also attracted some criticism.
For example, the fact that there’s no checking/verification of how old users say they are (i.e. it relies on self-declaration of age and parental monitoring) has led to concerns that more reliable methods are needed. The concern, of course, also extends to children younger than 13 accessing online platforms (e.g. social media) despite the set age limits.
In Meta’s (WhatsApp’s) defence, however, it already protects privacy with end-to-end encryption and has resisted calls and pressure for government 'back doors’. It has also taken other measures to protect young users. These include, for example, the ability to block contacts (and report problematic behaviour), control over group additions, the option to customise privacy settings, and more.
Competitors
Regarding compliance with new EU regulations, the European Commission has been actively engaging with large online platforms and search engines, including Snapchat, under the Digital Services Act (DSA). Also, given the widespread impact of these regulations on digital platforms and their emphasis on data privacy and security, it is likely that Signal (a competitor), and other messaging and social media platforms, are taking steps to align with these new requirements.
Some people may also remember that Snapchat came under scrutiny last summer from the UK's data regulator, which sought to determine whether it was effectively preventing underage users from accessing its platform. The investigation was in response to concerns about Snapchat's measures to remove children under 13, as UK law requires parental consent for processing the data of children under that age.
What Does This Mean For Your Business?
The latest WhatsApp updates, alongside the broader implications of new EU and UK regulations, herald potentially significant shifts for businesses, messaging app users, and the industry at large. These changes, encompassing enhanced search functionalities, privacy safeguards, and adjustments to user age limits, will reshape some user experiences and offer both challenges and opportunities.
The “search by date” function for Android users should enhance user convenience and accessibility, save time, facilitate precise and efficient message retrieval, plus improve user engagement and satisfaction. Businesses leveraging WhatsApp for customer service or internal communications, for example, could find this feature particularly beneficial, i.e. by enabling quicker access to pertinent information, and streamlined interactions.
The extra privacy enhancements essentially reflect a growing industry-wide focus on user security and digital safety and will strengthen individual privacy (always welcome). They also emphasise the importance of user consent and control over personal information, and should remind businesses of the need to manage user data in line with both (evolving) regulatory standards and today's consumer expectations.
The adjustment of WhatsApp’s minimum user age in Europe and the UK presents a bit more of a nuanced landscape. While aiming to broaden digital inclusion and connectivity, this change also highlights the complexities of age verification and online safety. Messaging and other platforms, however, must find ways to navigate these complexities, ensuring compliance while fostering a safe and inclusive digital environment for younger users.
The broader context of the DSA and DMA, along with similar regulatory efforts in the UK, signals the transformative period that digital platforms are now in. Although we can all see the benefit of curtailing harmful online activities, there is also an argument for resisting pressure to go as far as giving governments back doors (thereby destroying privacy and exposing users to other risks). Messaging apps and social media platforms, including WhatsApp and its competitors (e.g. Snapchat, Signal, and others), have known regulations were coming, probably expect more in future, and are now having to adapt to enable compliance and retain trust, while continuing to introduce features that users value.
Businesses using apps like WhatsApp (which also has a specific business version) are likely to already value its privacy features, e.g. its end-to-end encryption, for data protection. As such, they are unlikely to oppose any more helpful privacy-focused, or improved user experience changes, as long as they don’t interfere with the ease of use of the app (or result in extra costs).
Following California Governor Gavin Newsom vetoing a landmark AI safety bill aimed at regulating the development and deployment of advanced AI systems, we look at the reasons why it was blocked and the implications of doing so.
What Bill?
California Senate Bill 1047 (SB 1047) relates to regulating AI systems, with the focus specifically on frontier AI models (highly advanced, cutting-edge, and large) with the potential for large-scale impact.
California
California is home to major AI companies like OpenAI (which also partners with Microsoft), so its governor's veto has implications for the future of AI governance and industry practices worldwide. As Gavin Newsom said in his statement about the bill: “California is home to 32 of the world’s 50 leading AI companies, pioneers in one of the most significant technological advances in modern history”.
The Key Points
The key points of the bill were:
– Risk mitigation for frontier AI models. The bill targeted large AI systems, particularly those that required significant computational power to develop (at least 10^26 FLOPS). It required companies developing such systems to implement safeguards to prevent catastrophic harm, including the misuse of AI for creating weapons of mass destruction, committing serious crimes like murder, or launching cyberattacks that could cause significant damage (e.g. over $500 million).
– A “kill switch requirement”. Under the bill, developers would have been required to implement a “kill switch” mechanism to immediately halt the operations of AI models if they posed a threat, during both training and usage.
– Cybersecurity measures. Companies were required to have strict cybersecurity protocols in place to prevent the unauthorised use or modification of these powerful AI systems.
– Oversight and reporting. The bill proposed the creation of a “Board of Frontier Models”, a new state entity, to oversee the compliance of these companies with the safety measures. Regular audits and detailed reports on safety protocols were part of the requirements.
– Whistleblower protections. The bill also included protections for employees who reported non-compliance within their organisations.
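The compute threshold above (10^26 FLOPS) can be put into perspective with a back-of-envelope calculation. The sketch below uses the common community rule of thumb that training compute for a dense transformer is roughly 6 × parameters × training tokens; this estimate is an assumption for illustration and does not come from the bill itself.

```python
# Rough illustration of SB 1047's compute threshold (10^26 FLOPS).
# The "6 * params * tokens" rule of thumb is a common community estimate
# for dense-transformer training compute, not a figure from the bill.

THRESHOLD_FLOPS = 1e26

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Back-of-envelope training compute for a dense transformer."""
    return 6 * parameters * training_tokens

def over_threshold(parameters: float, training_tokens: float) -> bool:
    """Would a model of this scale have crossed the bill's threshold?"""
    return estimated_training_flops(parameters, training_tokens) >= THRESHOLD_FLOPS

# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
print(estimated_training_flops(70e9, 2e12))  # 8.4e+23 — well under 1e26
print(over_threshold(70e9, 2e12))            # False
```

By this (assumed) estimate, only models far larger than today's typical open-weights releases would have fallen under the bill, which is relevant to Newsom's "overly narrow focus" criticism below.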
Opposition From Big Tech Companies
Major tech companies, including OpenAI, Google, and Meta, strongly opposed the bill, arguing that the regulations could significantly slow down innovation and hinder the deployment of beneficial AI technologies. With these companies heavily invested in the development of AI, viewing it as a key future revenue source, it's perhaps not surprising that they saw the bill as a threat to that potential. There were also concerns within the tech community that open-source AI models, which often rely on collaborative, decentralised development, could face legal liabilities under the bill's stringent requirements. This risk, they argued, could discourage further development of open-source AI, which has been an important driver of innovation in the field. The tech giants feared that the bill's overly strict regulations could stifle growth and limit the industry's ability to remain competitive globally.
Why Was The Bill Vetoed By Newsom?
In an official statement, the California state governor gave the following main reasons for blocking the bill:
– An overly narrow focus. Newsom argued that the bill only targeted large, expensive AI models based on their computational scale and costs, which could give a false sense of security. He pointed out that smaller, specialised models could pose similar risks but were not covered by the bill.
– A lack of adaptability. The governor emphasised that AI is evolving rapidly, and the bill’s framework was too rigid, not allowing flexibility to adapt to technological advancements. He stressed that regulation needs to be able to keep pace with innovation.
– It ignored deployment context. Newsom criticised SB 1047 for failing to consider where and how AI models are used, whether in high-risk environments or for critical decision-making, arguing that this oversight made the regulation less effective.
– The potential for stifling of innovation. He also expressed concern that the bill could curtail innovation by applying stringent standards to even basic AI systems, which may inhibit the development of AI technologies that benefit the public.
– A lack of empirical evidence. Newsom insisted that any AI regulation must be based on empirical evidence and analysis of AI systems’ actual risks and capabilities. He argued that SB 1047 lacked this necessary foundation.
– Preference for broader collaboration. Instead of a California-only approach, Newsom said he favoured working with federal partners, experts, and institutions to craft a balanced and informed AI regulatory framework.
The Response
Although Newsom's blocking of the bill may have pleased the big AI companies, not everyone was happy about it. For example, California state Senator Scott Wiener, who represents the 11th district, encompassing San Francisco and parts of San Mateo County, has strongly criticised Governor Newsom's decision to veto Senate Bill 1047. In a press release, Mr Wiener expressed deep concern about the implications for public safety and AI regulation, arguing that the bill was designed to introduce commonsense safeguards to protect the public from significant risks posed by advanced AI systems, such as cyberattacks, the creation of biological or chemical weapons, and other harmful applications.
He emphasised that while AI labs have made commitments to monitor and mitigate these risks, voluntary actions are not enforceable, making binding regulation crucial. Senator Wiener said, “This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers,” highlighting the lack of meaningful federal regulation as a critical issue.
Wiener also dismissed the claim that SB 1047 was not based on empirical evidence, calling it “patently absurd,” given that the bill was crafted with input from leading AI experts. Mr Wiener has made it clear that he views the veto as a missed opportunity for California to lead on innovative tech regulation, similar to past actions on data privacy and net neutrality, saying, “We are all less safe as a result.”
However, despite the setback, Wiener expressed hope that the debate has advanced the issue of AI safety globally and vowed to continue working towards effective AI regulation.
Ever-Present AI?
Newsom's vetoing of the bill comes at the same time as Microsoft's head of AI, Mustafa Suleyman, saying that he believes AI assistants with a “really good long-term memory” are just a year away. Suleyman's comments refer to “ever present, persistent, very capable co-pilot companions in your everyday life”, which aligns with the view of many that deep integration is necessary to make AI truly useful and to leverage its full benefits. For example, an AI assistant can only organise your schedule if it has full access to your diary and remembers past interactions.
This concept of deeply integrated AI assistants actually ties directly into the debate around Senate Bill 1047, which Governor Gavin Newsom recently vetoed. The bill sought to regulate advanced AI systems, ensuring safety protocols for powerful models. As ever-present AI systems become more common, the absence of legislation like SB 1047 leaves critical questions about how these systems will be governed. Newsom’s veto reflects ongoing concerns about stifling innovation, yet it also leaves unresolved issues around privacy, security, and the unchecked expansion of AI into daily life, which these emerging technologies are set to accelerate. It can be argued, therefore, that without comprehensive safeguards, the integration of AI into personal and professional spaces may pose significant risks, e.g. data security and privacy, not to mention the risk of AI tools giving incorrect information or advice or displaying inbuilt bias towards the user they are supposed to be helping.
Six-Fingered Gloves
In a strange but related aside, a Finnish startup, Saidot, recently sent ominous six-fingered gloves to global tech leaders (including OpenAI’s Sam Altman) and EU politicians (and the UK Prime Minister) as a symbolic warning of AI dangers, particularly highlighting how image generators sometimes produce flawed outputs, like extra fingers. The gesture was aimed at raising awareness about the fast-evolving and unpredictable nature of AI, which could lead to unexpected consequences. Saidot’s CCO and co-founder, Veera Siivonen, said: “AI is developing so fast that nobody can fully anticipate its impacts and the emerging risks” and “That’s why we want to highlight both the steps that have been taken forward for safer AI, as well as some of the steps that should be taken.”
Saidot’s point aligns with the concerns surrounding the vetoed Senate Bill 1047, which sought to regulate AI technologies to prevent potential harm. As AI continues to develop rapidly, the failure to enact regulatory frameworks could leave many dangers inadequately managed.
What Does This Mean For Your Business?
The veto of Senate Bill 1047 by California State Governor Newsom highlights the need for achieving a delicate balance between promoting technological innovation and ensuring public safety. While the bill aimed to introduce necessary safeguards for advanced AI systems, its rejection shows the tension between regulation and the tech industry’s drive for unfettered progress. Newsom’s decision reflects the belief (particularly by the AI companies themselves) that overly rigid laws could stifle the rapid advancements in AI, which are viewed as essential for maintaining California’s competitive edge in the global tech landscape.
However, this move has also left a significant gap in AI governance. With AI systems becoming increasingly integrated into daily life (e.g. with the prospect of 'ever-present’ AI as predicted by Microsoft), concerns about privacy, security, and potential misuse are mounting. The absence of comprehensive legislation leaves many of these issues unresolved, especially as the technology continues to evolve at an unprecedented pace. As argued by proponents of the bill, such as Senator Wiener, voluntary measures by AI companies may be insufficient and binding regulations are what’s really needed to protect society from potential harms, including cybersecurity risks and the creation of dangerous AI applications.
As AI continues to develop, the debate over how to effectively regulate it is far from over. The blocking of this bill may have slowed the momentum for immediate regulation, but it has also pushed the conversation forward. Looking ahead, policymakers, industry leaders, and experts will now need to collaborate on creating flexible yet effective frameworks that can both foster innovation and mitigate the risks associated with these powerful technologies.
For business users, the vetoing of Senate Bill 1047 and what would have been its wider effects means continued uncertainty around AI governance, leaving them reliant on voluntary safety measures from tech companies. While this may enable faster deployment of AI tools that enhance efficiency and innovation, for businesses it also raises risks. Without clear regulatory frameworks, businesses may face greater legal and ethical challenges, especially in areas like data security and AI accountability. For companies looking to integrate AI, the current absence of stringent safety measures could present both opportunities and risks as AI systems become more ingrained in business operations.
Nearly a third of office staff are secretly using AI tools at work, risking data breaches, compliance failures, and loss of intellectual property.
Ivanti’s latest Technology at Work report reveals that 42 per cent of employees now use AI daily, but many do so without approval. For example, 36 per cent believe it gives them a hidden edge, while others worry about job security or fear judgement from colleagues. Crucially, even 38 per cent of IT professionals admit to using unauthorised tools, despite knowing the risks.
This covert use of AI, dubbed 'shadow AI’, is raising red flags across the industry. As Ivanti’s legal chief Brooke Johnson warns: “Employees adopting this technology without proper guidelines or approval could be fuelling threat actors”. Also, a separate study by Veritas found over a third of UK staff had fed sensitive data into chatbots, often unaware of the potential consequences.
Several major firms, including Apple, Samsung and JP Morgan, have already restricted workplace AI use following accidental leaks, but Ivanti warns that policy alone isn't enough, i.e. businesses must assume shadow AI is already happening and act accordingly.
To reduce the risk, companies should enforce clear AI policies, educate staff, and monitor real-world usage. Without visibility and oversight, AI could turn from productivity tool to security liability.
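The "monitor real-world usage" step can be as simple as scanning egress logs for traffic to known AI-tool domains. The sketch below illustrates this idea only; the domain list and the log format are illustrative assumptions, not a standard, and any real deployment would use the organisation's own proxy logs and an approved-tool allowlist.

```python
# Minimal sketch of shadow-AI monitoring: flag log lines that show
# traffic to well-known AI-tool domains. Domain list and log format
# are illustrative assumptions.

AI_TOOL_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs where a listed AI domain appears.

    Assumed log format: "<timestamp> <user> <domain> <bytes>".
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_TOOL_DOMAINS:
            hits.append((parts[1], parts[2]))
    return hits

logs = [
    "2024-05-01T09:12 alice chat.openai.com 48211",
    "2024-05-01T09:13 bob intranet.example.com 1032",
]
print(flag_shadow_ai(logs))  # [('alice', 'chat.openai.com')]
```

In practice the output would feed a conversation about policy and training rather than disciplinary action, in line with the report's emphasis on education over prohibition.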
SMY IT Services has expanded with the addition of an extra engineer to support the rising number of home workers in the region.
The company, which offers a range of services to businesses who rely on computer and phone systems, has seen business boom in the last year as more and more people chose to work remotely.
And as a result, it has now recruited Steve Hawley, as a frontline support engineer.
Managing director Jonathan Smy said: “We are thrilled that Steve has joined us.
"His skills are a fantastic addition to the team and our combined expertise will continue to benefit so many of our customers.
“It has been an exceptionally strange year and we are expecting the trend for home working to continue well into 2021.
“Our business has been called upon to make this transition for many companies in and around East Anglia. Many of these now benefit from a smoother and more streamlined operation as a result.
“It’s been a pleasure to help so many to thrive in difficult circumstances.”
Steve has more than 20 years’ experience in the IT sector and boasts qualifications in Office 365 and VOIP Telephony.
He said: “I started working life in the army but have always been interested in IT. Even while serving, I dabbled and fixed a couple of problems with equipment.
“After I left the army I started as an Avaya telecoms engineer and I learned different skills so that I was able to work on both IT and telecoms.
"I now specialise in telecoms and physical networks.”
SMY IT is a high-quality IT support, cloud and consultancy provider and offers a range of services. For more information, visit www.smyservices.com or call 01473 557203.
Tesco and Asda have announced that they are to trial the use of a coating for fruit that’s made from the same materials found in peels, seeds, and pulps as a way to extend shelf life, fight food waste, and reduce the need for packaging.
Tackling Food Waste
The UK throws away a staggering 6.6 million tonnes of household food waste a year! 400,000 tonnes of that is fruit. For example, each day we throw away an average of 720,000 whole oranges. Much of this fruit is thrown away because it has perished, e.g. gone mouldy or started to decay. This is why there is room for a solution that can cut down on food waste but doesn’t involve extra packaging.
Tackling The Challenge of Plastic Pollution From Food Packaging
Also, the results of a study by The Waste and Resources Action Programme (WRAP) have led to some recommendations on fresh fruit and vegetable retail packaging. The recommendations are that:
– Fresh fruit and vegetables should be sold loose where possible, unless it is shown that plastic packaging reduces overall food waste.
– Unless it can be shown that a 'Best Before’ label reduces overall food waste, date labels should be removed. WRAP says this would prevent 14 million shopping baskets worth of food from going to waste and 1,100 rubbish trucks of avoidable plastic simply by allowing people to buy what they need.
– Customers should be helped to understand the benefits of storing appropriate fresh produce in the fridge, set at the right temperature (i.e. below 5°C). This could help prolong the life of fresh fruit and veg and help reduce food waste.
WRAP has also called for the removal of more unnecessary and problematic single use plastic items under The UK Plastics Pact, including wrapping on multi-packs of tinned food and sauce sachets in restaurants.
Apeel Coating Trial
It is with tackling these issues in mind that Tesco and Asda have agreed to trial a new coating for fresh fruit and vegetables. The invisible, tasteless, odourless coating, called 'Apeel', is made from plant-derived materials, lipids, and glycerolipids that exist in the peels, seeds, and pulp of all fruits and vegetables. Coating fruits (and vegetables) in Apeel (by spraying, dipping, or brushing) is claimed to slow spoilage by helping to keep moisture in and oxygen out. It is also claimed that Apeel will reduce reliance on refrigeration, thereby increasing its green credibility. The makers of Apeel claim that the coating makes produce last twice as long.
Asda has announced that it will soon be using the Apeel coating on citrus fruit and avocados in more than 150 stores. Tesco has announced that it will be using Apeel to coat oranges and lemons sold in 80 stores in the Peterborough area and will be studying the difference that the coating makes to the fruits’ shelf life.
Promising
Sarah Bradbury, Tesco Group quality director, said “Apeel could be a powerful tool in helping us cut waste in our supply chain and help customers reduce it in their homes”.
Asda’s senior director, Dominic Edwards, said “During this programme, we will be learning more about the benefits of longer-lasting produce for our customers, and we are looking forward to seeing what further developments this could lead to in the future”.
What Does This Mean For Your Organisation?
The UK produces far too much food waste and there is a cogent argument that UK supermarkets are still selling products with too much unnecessary (plastic) packaging, all of which is bad for the environment. If Apeel, which is made from natural ingredients anyway, really can make fresh produce last twice as long, this could be one great way to tackle three big problems at once – reducing the need for packaging, reducing the need for refrigeration, and reducing fresh food waste. This will be good news for the supermarkets in terms of lowering costs and helping them to meet their environmental targets. It would also be good news for consumers by reducing their shopping costs (less waste, food lasting longer), and giving a safe, environmentally friendly choice in their shopping (if they were made aware of the benefits of the coating). For other fresh food businesses this coating is likely to be of interest, and now it remains to be seen if the claims match up to the results as noted by Asda and Tesco at the end of the trial.
In this insight, we look at how you can use voice commands to carry out tasks in Windows, plus how speech recognition technology can be used for voice control of different systems and devices.
How Does Speech / Voice Recognition Work?
Voice commands can be used to carry out tasks in Windows, Android, and iOS, and in other situations, thanks to speech/voice recognition technology. Speech recognition works by combining a range of technologies, tools, and algorithms, such as machine learning, natural language processing (NLP), and contextual awareness, to deliver conversational AI that facilitates a real-time dialogue between a human and a device or system.
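Once speech has been transcribed to text, the final step is mapping the recognised phrase to an action. The keyword-dispatch sketch below illustrates that step in the simplest possible way; real systems such as Windows Speech Recognition use far richer NLP models, and the command names here are just examples from this article.

```python
# Illustrative sketch: dispatch a transcribed utterance to an action.
# Real speech-recognition stacks use NLP models, not exact matching.

def normalise(utterance: str) -> str:
    """Lower-case and collapse whitespace so matching is forgiving."""
    return " ".join(utterance.lower().split())

COMMANDS = {
    "start listening": lambda: "listening",
    "stop listening": lambda: "stopped",
    "launch mail": lambda: "mail app opened",
}

def dispatch(utterance: str) -> str:
    """Run the action for a recognised command, if any."""
    action = COMMANDS.get(normalise(utterance))
    return action() if action else "command not recognised"

print(dispatch("Start  Listening"))  # listening
print(dispatch("make coffee"))       # command not recognised
```

The separation between transcription and dispatch is why the same voice commands can drive very different systems, from desktop operating systems to smart speakers.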
Speech Recognition In Microsoft Windows
Microsoft Windows Speech Recognition enables you to use voice commands to operate Windows on your computer. It is supported by Windows 8.1, 10, and 11 and allows you to control your PC with your voice and dictate text, i.e. Speech Recognition transforms your voice into text on the screen in different Windows apps and enables you to write emails in Microsoft Mail using just speech.
How To Set Up Speech Recognition In Windows
Speech recognition in Windows doesn’t work by default so, to switch it on:
– Type 'Settings’ in the start field or press the Windows key + I.
– Click on 'Time & Language’ and 'Speech’ (left-hand side menu).
– Click the 'Get Started’ button under the 'Microphone’ heading to test for any microphone issues and follow any trouble-shooting instructions.
– Use the Windows key + Ctrl + S to open the 'Set Up Speech Recognition Tool’ to train Windows to recognise your voice by following the instructions.
– Answer the questions about the microphone / microphone headset you’re using and its positioning and click 'Next.’
– Choose whether to disable 'Document Review’ (the way the system improves its understanding of your way of speaking), click 'Next’ and choose whether to opt for manual or voice activation (manual is the default). To choose manual recognition, press the Windows key + Ctrl or click the on-screen microphone.
Activating It
Once Speech Recognition setup has been completed and it is enabled in Windows, activate it in manual mode by clicking the on-screen microphone (or pressing the Windows key + Ctrl), or, in voice-activation mode, by saying “start listening”. To stop it from operating, say “stop listening”.
Sending An Email In Windows Using Your Voice
It is possible to compose an email in Microsoft’s 'Mail’ app, for example by:
– Saying “launch mail”. Alternatively, you can click on the '+’ in the left-hand side bar (create a new message) and use voice commands from there.
– Say “show numbers”. This divides the email operation into numbered sections that you can then refer to (saying 'OK’ for the right ones) in your voice commands to construct and send the email.
– Write the email address by spelling out the letters individually, e.g. say “press a”, “press n”, etc, and “press @ sign”.
– Dictate the message and use commands such as “new line” and “new paragraph”.
– Find the numbered section for the send button, then say “OK”.
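The “show numbers” flow above can be modelled as numbering each actionable element on screen and then confirming one by voice with “OK” plus its number. The element names in this sketch are illustrative, not Windows' actual internals.

```python
# Illustrative model of the "show numbers" voice interaction:
# number each actionable element, then confirm one with "OK <n>".

def show_numbers(elements):
    """Assign a spoken number to each actionable element."""
    return {i + 1: name for i, name in enumerate(elements)}

def confirm(numbered, spoken):
    """Handle an 'OK <n>'-style confirmation of a numbered element."""
    words = spoken.lower().split()
    if len(words) == 2 and words[0] == "ok" and words[1].isdigit():
        return numbered.get(int(words[1]), "no such element")
    return "not a confirmation"

# Hypothetical elements in a mail-compose window:
targets = show_numbers(["To field", "Subject", "Body", "Send button"])
print(confirm(targets, "OK 4"))  # Send button
```

Numbering targets rather than naming them sidesteps recognition errors on arbitrary labels, which is why several accessibility tools use this pattern.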
Commands
Microsoft’s Speech Recognition supports many commands. A full list of commands can be seen here.
Sending Texts From Android and iPhone
Voice commands can also be used to send texts from Android and iPhones. For example:
– For Apple, follow the steps to set up voice controls here; guidance on dictating text can be found here.
– For Android, follow the steps for setting up voice control here.
Other Popular Uses For Voice Recognition / Speech Recognition Technology
Many of us regularly use voice commands to activate services and carry out tasks. Popular examples include:
– Digital assistants / virtual assistant technology used for smart speakers and phones, e.g. Alexa, Google Assistant, and Siri (Apple).
– Speech recognition / voice chatbots used for multiple purposes, e.g. customer service automation, company websites, WhatsApp, Facebook Messenger, and more.
What Does This Mean For Your Business?
Many businesses have been reaping the benefits of saving time and costs, and of offering 24/7 assistance, by enabling customers to use voice-activated systems, e.g. in customer service (online chatbots and phone systems) and to operate aspects of their service.
Microsoft, Apple, and Google also offer users the capability and convenience of using voice activation for their operating systems, which can help businesses by saving time, simplifying tasks, and enabling multi-tasking. Speech recognition and voice commands can also help businesses (like Microsoft and others) offer wider accessibility. Although these systems inevitably don’t always operate as smoothly as they should (as anyone with a smart speaker will know), the hope is that these technologies along with how they work together will improve over time to the point where we are not so tied to a manual keyboard, mouse, and screen-touching to carry out work and personal tasks.
Prominent is an award-winning PR, marketing and events company based in Suffolk. The company works across a variety of sectors including construction, education, legal, hospitality and the public sector.

“Prominent has worked with SMY IT Services for two years now, and we could not be happier with the service provided to us. The SMY team are always responsive when we need them and there has not been a problem encountered that they have not solved.
“As a team of creatively minded people, IT is crucial to the success of our business, but it is not something we have time to take control or solve ourselves. So, we need a team who we can completely outsource to – and SMY IT provides us with this service.
“Whether it is a simple issue or something more complex and business critical, the SMY team are always happy to help with a smile. Being contactable by both email and telephone means we can get either an immediate solution or we can schedule in work for a more convenient time – we get the best of both worlds.
“They look after everything for Prominent, from managing email signatures to computer technicalities; from purchasing equipment to server issues. There is not a problem too big or too small for them. They are incredibly knowledgeable on everything IT-related and provide second to none customer service. They appreciate that they often talk to staff who are not IT savvy and adjust the technical language accordingly.
“If you are a business that does not have the time or the inclination to worry about IT, and you need a ‘partner’, then I would fully recommend SMY IT.
“I trust them implicitly for both of my businesses, and I would not consider going anywhere else for IT support services.”
Helen Rudd
Managing Director, Prominent
Fenton Civil Engineering Ltd are a groundworks company based in Chesham, Buckinghamshire. The company works in the civil, residential and commercial sectors of the construction industry.

“SMY have been providing IT advice and support to Fenton Civil Engineering Ltd since September 2019. Their services were recommended to us by a friend based in Watford who had been using SMY to provide their IT support.
“We needed urgent help after the person who had been delivering our IT support left suddenly. We were left in limbo and had no-one with IT knowledge in the office to help us in the interim.
“Thankfully, SMY came to our aid. They have essentially turned our whole IT service around and are providing help across a broad spectrum. When Jonathan and Carl initially came in to talk to us about our needs, they gave us an estimate as to how long it would take to get us back up to speed. It took a couple of months as there was no transitional phase with our previous IT expert and we literally did not even have passwords to provide SMY with.
“They really had to start from basics and were working from old laptops to try and gain access to passwords and crucial information.
“One of the other problems we faced was our domain name. We have a .com website address, which was coming up for renewal but, again, we did not have the log-in details. It meant there was a chance that we were going to have to change over to a co.uk domain. Fortunately, SMY saved the day and we didn’t have to change as it would have caused a few issues.
“SMY have truly been exceptional for us. Both Jonathan and Carl have done an amazing job and I can’t thank them enough. They are quick to deal with any urgent issues and there is not one question that they have been unable to answer. Their IT knowledge is supreme. I’m also learning on a daily basis thanks to their expertise.
“I wouldn’t hesitate to recommend SMY IT Services to any other businesses who may find themselves in the same situation that we were in.”
Vicki Pryer
Office Manager, Fenton Civil Engineering Ltd
Pure Resourcing Solutions are professional recruitment specialists for the East of England. They boast specialisms in accountancy, human resources, technology, marketing and office support.

“Jonathan and Carl have been providing IT advice and support to Pure for over 10 years. They are incredibly knowledgeable on everything IT and Telecoms and due to this are often involved in many of our technology projects whether it’s simply to ask their opinion, advice or to handle the installation/implementation of our hardware/software. What they don’t know in their field, in our opinion, isn’t worth knowing!
“They are exceptionally quick at handling urgent or business critical issues which goes a long way when you’re under pressure internally to deliver a good level of resilience with IT systems.”
Ian Walters
Chief Executive Officer, Pure Resourcing Solutions
Abacus Employment Services are a company focused on delivering excellent recruitment solutions. As well as offering 24/7 support to their clients, Abacus Employment Services also benefit from being the industry leader for both permanent and temporary staff.

“Having become disillusioned with our incumbent IT provider, we took the decision to move to a new provider. Following a meeting with Jonathan, it was immediately evident that he understood our frustrations, and he took the time to understand what we were looking for moving forward.
“From the initial idea to professional design and smooth installation of a brand new cloud IT infrastructure, SMY IT Services has enabled us to deploy the applications and tools we need to run our business and compete at the highest level. They always provide excellent advice and support so we have total confidence in their ability no matter the challenge we set them. Quite simply, SMY IT Services have never let us down.
“The service that we receive is of the highest standard and we are completely happy that we have made the right decision to move our business to SMY IT Services.”
Chris Addis
Managing Director, Abacus Employment Services
Sanctuary Personnel is a leading recruitment specialist with over 250 employees at their head office in Ipswich as well as offices nearby and in London.
“As a leading recruitment company, it is integral that our IT systems are consistently working to the highest level.
“Jonathan and his team have been absolutely fantastic in ensuring all of our needs are met and that the very best solutions were delivered.
“He is on hand 24/7 for anything we might need and he has an excellent knowledge base of all things IT.”
Andrew Pirie
Marketing Director, Sanctuary Personnel
Cowells Arrow provides high-quality gaming products and reliable service, and prides itself on being an industry leader for over 50 years.
“Jonathan and his team are amazing, amazing customer service, problems are always resolved in an extremely timely manner without being baffled by technical jargon.”
Steven Pink
Financial Controller, Cowells Arrow
Warren Anthony Estate Agents was set up in 2003, and the team has over 75 years’ combined experience.
“It is costly for us to have our systems down, and we really appreciate the speed with which your team responds to any issue we have. I don’t believe we have had any problems which you have not resolved.”
Warren Patmore
Lettings Director, Warren Anthony Estate Agents
“Having had some really bad experiences with IT companies in the past, it has been a breath of fresh air to have you and your team assisting all of my staff with any issues that have arisen.”
Tony King
Sales Director, Warren Anthony Estate Agents