
AI in Smart Cities Survey: Citizens Have Trust Issues


Jun 3


Urbanization, driven by population growth and the pursuit of opportunity, is straining city resources. Data and AI offer a potential solution by optimizing processes and enhancing urban efficiency, ultimately improving quality of life in cities. This global trend gave rise to the concept of smart cities. But how much do U.S. citizens know about AI in smart cities, and are they on board with it?

Sand Technologies surveyed 2,000 people in the U.S. to learn what they knew about AI in smart cities. This article, part of a series exploring the survey’s takeaways, examines citizens’ trust in the use of AI in smart cities. It highlights which demographic groups are most skeptical, investigates the factors driving that skepticism, and offers actionable advice for building the public trust that smart city initiatives need.

Do Citizens Approve of AI-Powered Smart Cities?

According to the survey, most people believe AI-powered smart cities have strong potential to enhance overall city life. Yet significantly fewer trust local governments to handle AI responsibly. This gap between optimism and trust jumps out from the survey results.

The graphs below illustrate the overall optimism surrounding AI versus the degree to which people trust the institutions implementing it — a sort of reality check.

The respondents were asked, “Do you think AI can have a positive impact on city living?”

Drilling down further, participants were asked, “Do you trust your local government and utilities to use AI responsibly?”

Who Trusts (or Doesn’t Trust) AI Governance?

People’s perspectives on AI in smart cities vary widely across demographics due to differing priorities, levels of exposure to technology and socioeconomic factors. The diverse viewpoints highlight the need for inclusive education and dialogue to ensure AI solutions address the concerns and aspirations of all groups. 

  • Age: Distrust grows steadily from age 45 and up, with the 65+ group the most skeptical. Individuals aged 45 and older may be hesitant to trust government use of AI because of their past experience with rapid technological change. This group has watched technologies emerge and evolve quickly, often accompanied by concerns about data privacy, ethical dilemmas and insufficient regulation. Many recall advancements that had unintended consequences, such as job displacement or data breaches. These experiences can make them more cautious and skeptical, particularly in areas as sensitive and impactful as government infrastructure. Limited transparency and concerns about misuse of AI tools likely magnify their hesitation, fueling a greater demand for accountability and oversight in its implementation.
 
  • Education: Distrust in AI use in government was highest among people with college degrees. This demographic may exhibit more distrust due to their exposure to critical thinking frameworks and an understanding of the technology’s potential drawbacks. Higher education often fosters a deeper awareness of issues such as data privacy, algorithmic bias and ethical governance, making graduates more skeptical about the possible misuse of AI. Additionally, their familiarity with societal structures and complex systems might lead them to question whether implementing AI could exacerbate existing problems, such as inequalities or a lack of transparency. This combination of knowledge and critical analysis likely fuels a cautionary stance, urging a balance between technological advancement and ethical oversight.
 
  • Income: Individuals earning between $100,000 and $125,000 showed the highest level of trust. Lower-income groups were more skeptical. A key reason individuals with lower incomes often distrust government use of AI is the concern over bias and misuse. Many fear that AI systems, if poorly designed, may perpetuate existing inequalities, favoring wealthier communities while neglecting the needs of underserved populations. Additionally, a lack of transparency in how AI makes decisions can deepen skepticism, leading to concerns that the technology could monitor or unfairly target vulnerable groups, rather than benefiting them. Building trust will require governments to prioritize inclusivity, fairness and open communication about the goals and safeguards of AI initiatives.
 
  • Geography: Urban residents leaned more toward trust, while rural and micropolitan residents were more skeptical, although rural areas were underrepresented because of the digital nature of the survey. Residents of counties outside cities were the most skeptical geographic group. People living in or near cities often have greater exposure to technology in everyday life, which may make them more trusting of AI use in government. Diverse, fast-paced urban environments also demand scalable and efficient solutions, characteristics that AI technologies often deliver. This familiarity fosters acceptance and trust, as residents see AI contributing to smoother operations, quicker decisions and more accurate urban management.

What’s Driving the Skepticism About Government Use of AI?

The survey asked people to rank five concerns about local governments using AI. Respondents were asked, “What is your greatest concern about AI use in cities? Rank from most to least concerning.” Here are the top concerns:

Main Concern                               Percentage
Loss of jobs                                    41.7%
Loss of privacy                                 29.7%
Higher taxes or government expenses             11.2%
Other                                           11.1%
Waste of the city budget or resources            6.3%
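The ranked shares above can be tabulated and sanity-checked with a few lines of Python. This is a minimal sketch using the figures reported in the table; the variable names are illustrative, not part of the survey data:

```python
# Shares of respondents naming each item their greatest concern
# about AI use in cities, taken from the table above.
concerns = {
    "Loss of jobs": 41.7,
    "Loss of privacy": 29.7,
    "Higher taxes or government expenses": 11.2,
    "Other": 11.1,
    "Waste of the city budget or resources": 6.3,
}

# The categories are exhaustive, so the shares should sum to ~100%
# (allowing a small tolerance for rounding).
total = sum(concerns.values())
assert abs(total - 100.0) < 0.1

# Rank concerns from most to least cited.
ranked = sorted(concerns, key=concerns.get, reverse=True)
print(ranked[0])  # the most-cited concern: "Loss of jobs"
```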

What stands out?

    • The top concern was job loss. As AI takes on tasks traditionally performed by humans, such as data analysis, administrative work, or even driving, people fear that the workers doing those jobs today will be displaced. While AI offers efficiency and cost savings, it also highlights the urgent need for upskilling and reskilling programs to help workers transition into new, tech-focused opportunities in the evolving job market.
 
    • Loss of privacy wasn’t always the top pick, but it consistently appeared in people’s top two or three concerns, indicating that privacy worries cut across every group. With AI systems collecting and analyzing vast amounts of personal data, including location, habits and even conversations, people worry about how this information is stored and used, and who can access it. A lack of transparency in data handling raises fears of over-surveillance and misuse, making it essential for smart cities to strike a balance between innovation and protecting individual privacy.
 
    • Though picked less often, higher taxes and wasted city resources ranked higher among older participants (55+), rural and micropolitan residents, and people with lower incomes or education levels. Many worry that large-scale investments in AI could increase taxpayers’ financial burdens, particularly if projects are poorly managed or fail to deliver promised outcomes. These respondents may feel financially vulnerable or underserved by city projects, making them more cautious about initiatives that could strain public budgets without providing clear, tangible benefits to their communities.
 
    • Concern over the waste of city budget and resources was equal across all age groups. Worry about the misuse of city budgets and resources often stems from a fear that investing in AI for smart cities could lead to overspending or misaligned priorities. People don’t want public funds directed toward flashy technology projects that fail to address immediate community needs, such as affordable housing or infrastructure. 

Strategies to Build Public Trust in AI Smart Cities

What does all this mean, and how can city governments close that trust gap? Building public trust in AI is crucial for successfully integrating smart city technologies. When citizens understand how and why AI is being used, and believe their privacy and data security are protected, they are more likely to support its adoption. Governments play a key role in fostering this trust by prioritizing accountability in AI-driven initiatives.  

Governments must leverage clear communication, community involvement and robust regulatory frameworks to bridge the gap between innovation and public confidence, ensuring AI-driven smart cities become inclusive and beneficial environments for all. There are several strategies city governments can use to build public trust in AI. Here are a few:

  • Be transparent: Building public trust in AI smart cities starts with government transparency. Show what data is collected and how AI makes decisions. When citizens are informed about how AI systems operate, the data collected and the safeguards in place to protect privacy and security, they are more likely to feel confident in the technology. Openness fosters accountability, ensuring AI implementation aligns with public interests and ethical standards.
 
  • Focus on communication: Don’t assume people know what a “smart city” is — explain what benefits they’ll see. One of the main takeaways from the Smart City Sentiment Survey was that many people did not fully understand the concept of a smart city. Governments must clearly describe how AI systems work, their benefits and the measures to safeguard privacy and data security. By actively engaging with citizens through open dialogue, accessible resources and regular updates, governments can address concerns and foster confidence in the technology shaping urban life. 
 
  • Involve the public: Engage communities in the planning and decision-making processes. Trust in systems increases when users have a say in the process. Governments must educate citizens on the benefits of AI, encourage participation in setting priorities and allow communities to voice their concerns. This collaborative approach demystifies technology, ensuring that AI systems align with the values and priorities of the people they serve.
 
  • Invest in areas that matter to people: Building trust begins with prioritizing what matters most to citizens. Governments should focus their investments on areas that have the most significant impact on citizens’ daily lives, such as traffic management and public safety. By addressing these tangible concerns first, authorities can demonstrate the real value of AI, showing how it enhances efficiency and quality of life. This approach fosters a sense of confidence and collaboration between citizens and their leaders.

Changing Public Trust in AI

As urban populations continue to grow, the role of AI in smart cities will inevitably become increasingly significant. From optimizing public transportation to managing energy consumption and streamlining waste management, AI can create cities that are efficient, sustainable and adaptive to the needs of their residents.  

Trust is a significant missing piece in the AI conversation. Building public trust in these technologies is critical to unlocking their full potential. Even when people are open to new technology or smart solutions, it doesn’t mean they’re comfortable with how it’s introduced. If cities and providers want to gain buy-in, they’ll need to listen more, explain better and prove that the tools used are fair, helpful and respectful of people’s concerns.

By focusing on the ethical implementation of AI, transparency and accountability, city governments can prepare for AI to enhance urban living and address the complex challenges posed by rapidly growing populations. The cities of the future will thrive on innovation, and with the right approach, AI can be the trusted backbone of this transformation.
