World Wellbeing Panel

Artificial intelligence and wellbeing

Feb. 9, 2025

Professor Chris Barrington-Leigh

with

Doctor Tony Beatton, Professor Paul Frijters, Professor Arthur Grimes

In February 2025, members of the World Wellbeing Panel were asked for their views on two statements relating to the impact of innovations in artificial intelligence (AI) on human wellbeing.

The two statements were as follows:

Statement 1: Advances in artificial intelligence since 2000 have, overall, improved human wellbeing.

Statement 2: Strong government regulation of AI is critical to ensuring positive future impacts on wellbeing.

Response options for each statement were: “completely agree”, “agree”, “neither agree nor disagree”, “disagree”, “completely disagree”.

Below are respondents' categorical answers and their detailed written comments, followed by a discussion.

Advances in artificial intelligence since 2000 have, overall, improved human wellbeing.

  •  Professor Ori Heffetz

    Associate Professor of Economics, Cornell University and Hebrew University
    Neither agree nor disagree
    Who knows? On the one hand, there are clear examples where it has. On the other, there are clear examples where it has facilitated crime, warfare, fraud, online scams, etc., and created other challenges, e.g., in education settings. There's also the global AI arms race.

  •  Professor Mark Fabian

    Associate Professor of Public Policy, University of Warwick
    Neither agree nor disagree
    I think it's too hard to tell. A lot of people use machine learning products and benefit from them in myriad ways. However, these technologies are also an enormous draw on environmental resources, seem impossible to train without engaging in intellectual property theft, have enshittified the internet in many ways (one has to type -ai into every Google search now to avoid being given misleading information at the top of the search results), and have enabled and accelerated a wide range of nefarious practices like malware and deep-fake pornography.

  •  Professor Mohsen Joshanloo

    Associate Professor (Psychology), Keimyung University, South Korea
    Neither agree nor disagree
    Overall, I have a positive view of the impact of AI, as I do of other technological advances. However, its impact on public well-being is complex and difficult to assess in isolation, given its complex entanglement with broader technological, economic, and societal changes since 2000. I see AI much like social media: it is neither universally beneficial nor detrimental to well-being, but its impact depends on how it is used and by whom. For example, while AI has increased productivity and improved certain jobs, it has also led to revenue losses in other jobs.

  •  Professor David Blanchflower

    Professor of Economics at Dartmouth
    Neither agree nor disagree
    I am afraid I don't know; there is little evidence either way.

  •  Professor Martin Binder

    Professor of Socio-Economics at Bundeswehr University Munich
    Neither agree nor disagree
    I'm not aware of research that can help answer this question. Each time I converse with some bot at "customer service", I could scream that AI does us no good, but that just reflects the fact that most of my interactions with AI (student theses written by AI, refereeing done by AI) have been negative.

  •  Professor Mariano Rojas

    Professor of Economics, Universidad Popular Autónoma del Estado de Puebla
    Disagree
    As is the case with most new technologies, as well as with older tools, the problem does not lie in the tool itself, but rather in its use and, above all, in the motivation driving its creation and influencing its design. It becomes necessary to identify the force that guides both the development of AI and its design and application, which is basically the generation of profits for the corporations developing it. This motivation is not aimed at solving the underlying factors threatening well-being, but rather at taking advantage of problems for profit. For instance, we could have a companionship chatbot that collects all sorts of information about a person. The companionship provided is clearly not genuine or selfless; instead, it will serve the purpose of commercializing the underlying problem (loneliness) and extracting the greatest possible profits, even if this results in dependency or, in extreme cases, addiction. The chatbot would be marketed as a solution to the growing problem of loneliness, but in reality, it would merely serve as a palliative that takes advantage of deeper social problems which are neither recognized nor addressed. The incorporation of AI into productive activities may be economically efficient, but it may end up reducing humans' access to their sources of income. Furthermore, the incorporation of AI into the military industry could lead to increasingly complex and costly arms races, in what is evidently a negative-sum race, diverting resources away from areas that genuinely promote well-being. Of course, one can imagine positive uses for new technologies, such as the development of new vaccines, personalized educational support, and traffic coordination in cities, among others.

  •  Professor Gigi Foster

    Associate Professor and Undergraduate Coordinator, School of Economics, UNSW Business School
    Disagree
    Human wellbeing is produced mostly from local inputs that in general cannot be delivered by AI, as they are human-provided: the quality of our human relationships and our social status (we care where we sit in the hierarchy of humans, not the hierarchy of machines plus humans). Our physical health is an area where some AI-delivered improvements are possible, such as via products and services that support people developing or sticking with healthy exercise, diet, or sleep regimens, but I think the causal impact of AI on wellbeing through these channels is marginal, and there are also costs to using technology in these areas rather than just getting on with it oneself: in particular, reliance on such crutches misses an opportunity to develop one's own determination, stamina, and sense of being capable of surmounting obstacles. The same is true for the use of AI in generating text or other academic work: it is a crutch that in the long run will not make anyone happier. AI-mediated medical diagnosis holds significant promise to make people better off, but what we have seen in this area to date is not accessible to most people. The introduction of AI into production processes and potential impacts on the labour market ("the robots will take our jobs") hasn't improved production efficiency enough to measurably increase wellbeing through that channel, and all the talk about how everyone is going to be replaced by machines stresses people out, which is a direct negative for wellbeing. On the whole, then, I don't see a convincing argument that AI has improved wellbeing, and I see a few arguments that would point to a reduction in wellbeing due to the introduction of AI - so far at least.

  •  Doctor Tony Beatton

    Visiting Fellow, Queensland University of Technology (QUT)
    Neither agree nor disagree
    Most artificial intelligence advances to date appear to be in technical areas, e.g. medical AI technologies detecting cancers, etc. It is too early to determine the extent to which AI will disrupt or benefit the labour market, and therefore the wellbeing of the people.

  •  Professor Daniel Benjamin

    Associate Professor of Economics, University of Southern California
    Completely agree
    Advances in artificial intelligence since 2000 have increased productivity, which generally increases economic output and improves human wellbeing.

  •  Professor Paul Frijters

    Professorial Research Fellow, CEP Wellbeing Programme, London School of Economics
    Disagree
    A tough question, on which we can only guess an answer. Since the macro-picture on wellbeing is one of decline since 2010, whilst AI only reached fruition after that, an initial guess is that AI has not helped. The applications of AI so far have been towards greater consumer convenience, which is rather unimportant for overall wellbeing: we are more quickly given what we want, leaving more time to want more. The military, marketing, surveillance, and propaganda applications have made us less safe, more caged, and more dependent on the thinking of AI than on our own.

  •  Professor Martijn Hendriks

    Associate Professor, Erasmus University Rotterdam & University of Johannesburg
    Disagree
    Artificial intelligence is undoubtedly transforming lives. However, its net impact on well-being appears relatively minor, as its current benefits and drawbacks seem to balance out. Among AI’s key advantages is its role in driving innovation, such as in drug discovery and medical diagnosis, improving health outcomes. AI also enhances productivity, supporting economic growth while automating tedious tasks, freeing individuals to engage in more meaningful and happiness-inducing work activities. Additionally, AI helps address labor shortages in aging societies, particularly in developed countries. Other benefits include increased convenience and easier access to knowledge. On the downside, AI fuels the rise of online social media, which relies on AI to maximize engagement through highly personalized and often addictive content. This significantly impacts time consumption, especially among younger generations, who spend hours daily on mobile devices and social platforms. The constant exposure to addictive content deprives the brain of necessary moments of rest while the constant exposure to curated, idealized lifestyles fosters social comparison. These issues contribute to rising mental health issues such as anxiety, depression, burnout, sleep disturbances, reduced attention spans, social isolation, and low self-esteem. Moreover, excessive screen time promotes sedentary behavior, increasing the risk of obesity-related diseases.

  •  Professor Eugenio Proto

    Alec Cairncross Professor of Applied Economics and Econometrics, University of Glasgow, Adam Smith Business School
    Agree
    Before ChatGPT the influence was less visible, but I think that in general the effect has been positive, especially in medical diagnosis.

  •  Professor Arthur Grimes

    Chair of Wellbeing and Public Policy, School of Government, Victoria University of Wellington
    Neither agree nor disagree
    To date, advances in AI appear to have had only trivial effects on subjective wellbeing (SWB), which is mostly determined by relationships with family and friends, health, material living standards, and societal freedoms. None of these has been materially affected by AI since 2000.

  •  Professor Ada Ferrer-i-Carbonell

    Professor of Economics, IAE-CSIC
    Neither agree nor disagree
    The most important impact is in fact unknown: we will probably experience some social disruption that we cannot predict. For example, in the long term it might generate dependence, reduce human intelligence, increase inequalities, and further concentrate political and social power in fewer hands (those who control the expensive AI). Among the short-term effects, AI has positives as well as negatives. On the positive side, AI has helped (some) individuals to be more productive (increased efficiency) and has allowed humans to do things we could not do before (for example, image detection or tele-detection). Some of these improvements, however, may also have negative effects. For example, improved social media algorithms might increase addiction to social media and political polarization. On the negative side in the short term, AI might destroy human creativity and intelligence (one can already see this when evaluating project grants, which all look as if written by the same person). It remains to be seen whether cognitive ability is a complement to or a substitute for AI, and whether, in the long term, AI has a positive or a negative impact on human intelligence.

  •  Doctor Kelsey J O'Connor

    Researcher in the Economics of Well-being
    Neither agree nor disagree
    The answer to whether artificial intelligence (AI) has improved well-being depends on definitions and, unfortunately, I believe, on ideological predispositions. We need to work with a common, precise definition and to conduct more research. Speculating, AI includes labor-augmenting and labor-replacing technology, which has often been used to improve human lives. However, without oversight this technological development concentrates power, both economic and informational, in the hands of very few actors - in which case human well-being would decline. With oversight, the development could be used to improve lives. Anyone with access to a smartphone has unprecedented computing and information power compared to the year 2000, which has been facilitated in part by machine learning and AI (e.g., Google search). Even with more complete research, the answer will be nuanced. AI likely helps some populations and harms others, and even within a population, the impacts likely vary across life domains.

  •  Professor Darma Mahadea

    Associate Professor and Honorary Research Fellow at the University of Kwazulu-Natal, South Africa
    Neither agree nor disagree
    Theoretically, one can expect improved productivity and more free time, but mainly for those who stay in employment. The application of AI is fairly new. Empirical research already indicates that new technology and robotics have positive effects on efficiency or productivity in certain industries, but adverse effects in others (Aghion et al, 2016; Dekker et al, 2017; Hinks, 2024). While AI can have certain positive effects in areas such as education, academic research and health care, the negative effects of stress, job loss and fear of being a misfit to new technology have to be factored in. The overall effects on human wellbeing can be established by further empirical research in each industrial, geographical and locational (country) context. If AI overtakes human capability to make independent decisions, this would impair our ability to think deeply and connect with others. The loss of human contact may cause loneliness and alienation and engender stress, as we spend more time connecting via technology, smartphones and social media rather than interacting physically. Maintaining good physical relationships in the family, among friends, colleagues and the community is critical for improved human wellbeing. Advances in human and social capital are as important to wellbeing as AI.

  •  Doctor Antje Jantsch

    Researcher, Leibniz Institute of Agricultural Development in Transition (IAMO)
    Neither agree nor disagree
    I am certainly not an expert in this field, but the literature shows that AI can have both positive and negative effects on well-being (e.g. Pataranutaporn, Pat, et al. "AI-generated characters for supporting personalized learning and well-being." Nature Machine Intelligence 3.12 (2021): 1013-1022.; Li, Han, et al. "Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being." NPJ Digital Medicine 6.1 (2023): 236.) To answer this question comprehensively and generate empirical knowledge, it would be necessary first to systematize the different applications of AI. I am thinking, for example, of its use in the workplace, in education as well as in medical diagnostics, but there are many more areas where AI is applied. The next step would be to identify the potential positive effects of AI use while simultaneously assessing its risks. Only then would it be possible to quantify these effects in any meaningful way. A general answer is difficult because potential negative effects could outweigh or even cancel out the positive ones. Moreover, to gain a clearer understanding of whether AI influences well-being positively or negatively, it would also be necessary, in my view, to consider different dimensions of well-being.

  •  Doctor Anthony Lepinteur

    Research Scientist, University of Luxembourg
    Neither agree nor disagree
    AI has clearly brought important benefits to human wellbeing through better healthcare - improving diagnosis accuracy, helping discover new medicines, and creating personalized treatment plans. These advances have saved lives and improved health for many people. In everyday life, AI assistants and automation tools have made information more accessible while AI in education has made learning more available to different types of students. But these benefits haven't reached everyone equally. Many workers face career uncertainty as AI changed the job market. The "digital divide" means that technology benefits mostly go to people who already have resources and education. AI used for surveillance and in social media has created serious privacy problems and potential mental health issues. Unsurprisingly, research from the Oxford Internet Institute shows that how technology affects life satisfaction depends greatly on how people use it and their personal circumstances. When AI systems are created with clear wellbeing goals and input from affected groups, they tend to have positive effects. This matches what Calvo and Peters suggested in their "Positive Computing" approach (2014), which recommends designing technology specifically to support psychological wellbeing. When AI is created mainly for profit or efficiency without considering human impacts, problems often result.

  •  Professor Jan Delhey

    Professor of Sociology, University of Magdeburg
    Neither agree nor disagree
    Advances in artificial intelligence have made people more productive and many things easier - which is good for well-being. However, these benefits are unevenly distributed and tend to benefit knowledge workers. A negative effect: those who do not work with AI tools are probably afraid of being left behind.

  •  Doctor Francesco Sarracino

    Economist, Research Division of the Statistical Office of Luxembourg -STATEC
    Neither agree nor disagree
    AI has offered growing opportunities since the year 2000, but it has also created unprecedented problems. Since most readers are familiar with the opportunities, I focus on some of the downsides. AI makes it possible to monitor every aspect of the lives of millions of individuals at very low cost: by keeping track of their interactions on online social media, their use of mobile devices, their movements, and their physical activity, AI enables a minority to accumulate huge power and control over others. AI has also been used to improve the algorithms of online social networks to maximize users' engagement. This, however, favored the spread of hate speech, violent content, conspiracy theories, and fake news, because AI figured out that engagement is higher the more controversial a post is. In some cases, this has had dramatic consequences. For instance, a report by Amnesty International found that Facebook amplified hate ahead of the Rohingya massacre in Myanmar. By manipulating the messages that attract more traffic and the results of searches on search engines, and by tracking and matching individual information at very low marginal cost, AI represents a huge threat to human well-being. The net effect of the costs and benefits of AI for human well-being is therefore not clear.

  •  Professor Talita Greyling

    Professor, School of Economics, University of Johannesburg
    Agree
    I mostly agree that artificial intelligence (AI) has significantly enhanced human well-being and increased efficiency across various domains. However, AI developments also pose challenges to human well-being, such as privacy and ethical considerations; I am also concerned that AI bots such as ChatGPT may foster complacency. We are all familiar with AI's advancements in various dimensions of wellbeing, for example, in the health professions and its educational benefits through search engines, online tutorials, and websites such as GitHub, Coursera, and YouTube. AI-driven applications also assist in selecting leisure activities on platforms like Netflix and YouTube. Large language models (LLMs) also cater to academic needs. Tools such as Grammarly, Research Rabbit, Scite AI, and R Discovery have vastly increased our access to relevant and new research and assist with writing. Beyond these apparent ways in which AI has benefited human well-being, there are also indirect benefits through access to data to inform decision-making. The combination of big data extracted from social media and various internet sources with AI-driven methodologies has filled a void in well-being data. This data is real-time, easily accessible, and less expensive than traditional survey methods. See, for example, Martin & Salomon-Ermel (2024) and Greyling & Rossouw (2025) for well-being data constructed using Google Trends data.

  •  Professor Stephanié Rossouw

    Associate Professor, School for Social Science and Public Policy, Auckland University of Technology, New Zealand
    Agree
    For me, two of the best examples of how advances in AI since 2000 have, overall, improved human wellbeing are in the areas of healthcare and of economic growth and productivity. AI is transforming diagnostics by rapidly analysing medical data, identifying diseases early, and reducing human error. For example, Qure.ai (the world's most adopted healthcare AI company) uses deep learning to interpret radiology images and detect conditions like tuberculosis, lung diseases, and strokes. For the years 2023-2024, they positively impacted over "20 million lives through their advanced AI-driven screening solutions in X-rays and CTs and screened over 1 million patients for Tuberculosis in 24 African countries". In terms of economic growth and productivity (and sustainability), you need only look at the agricultural sector. Blue River Technology (which partnered with John Deere) developed, with the help of AI, smart tractors that take images, identify which plants to remove, spray them, and verify the accuracy and performance of the system, all in real time. This kind of technology reduces chemical use, optimises farming techniques and promotes sustainability. In saying the above, I am cognizant of the areas where AI has negatively impacted sectors through job displacement. For example, lawyers (I have a few in the family) can't compete with the likes of ChatGPT when it comes to analysing legal cases and precedents in 30 seconds or less. Or consider the manufacturing industry, where humanoid robots and AI have transformed automotive manufacturing and production processes, making them more profitable, but this has brought mass lay-offs, with an estimated "potential reduction on wages around $3 trillion by 2050".

  •  Doctor Conal Smith

    Principal, Kōtātā Insight
    Neither agree nor disagree
    I'm not aware of any strong evidence on the impact of artificial intelligence on human wellbeing. The term artificial intelligence is, itself, somewhat unclear. I take it here to refer to the use of large language models, which is the sense in which the term is often used in the media. Extending the term to cover the impact of machine learning and algorithm-driven business models in social media, I tend towards pessimism, as I think there is fairly robust evidence that the combination of social media plus ubiquitous smartphone availability is bad for youth mental health (e.g. Abi-Jaoude et al, 2020).

  •  Professor John Helliwell

    Professor Emeritus of Economics, University of British Columbia
    Neither agree nor disagree
    Like many past cases of rapid technical change, there have been better and worse uses, by people and institutions with better and worse motivations and methods. There are signs, especially from the United States, that the scientific gains are being overshadowed by the societal losses.

  •  Professor Daniela Andrén

    Senior Lecturer, Örebro University School of Business
    Neither agree nor disagree
    Empirical evidence demonstrates several positive impacts of AI implementation. In healthcare, AI technologies have achieved measurable improvements through enhanced diagnostic accuracy (McKinney et al., 2020) and shown positive effects on educational outcomes when thoughtfully integrated with human instruction (Reich, 2020). Professional productivity has increased in specific contexts, though these gains often require substantial organizational adjustments (Brynjolfsson et al., 2022). AI has also contributed to improved outcomes in transportation and healthcare, such as enhanced diagnostic imaging and intelligent traffic management. Mental health support through AI chatbots shows promise in controlled settings, though it still lacks robust evidence of sustained long-term benefits, and the anticipated productivity gains from AI implementation have shown uneven distribution patterns across different population segments (Acemoglu et al., 2023). However, significant negative impacts warrant serious consideration. Research has identified troubling patterns in social media algorithms that amplify harmful content related to eating disorders, significantly impacting vulnerable teenagers. Content recommendation systems intensify body image issues among adolescents through continuous exposure to idealized and manipulated images. Algorithmic biases and the digital divide exacerbate inequities, limiting access to AI benefits and reinforcing social and economic disparities (Eubanks, 2018). Of particular concern are instances where algorithmic acceleration of harmful content has been linked to increased rates of self-harm among young users (Murray et al., 2023), and the first case where an AI chatbot failed to recognize and appropriately respond to explicit suicidal ideation. Implementation of AI technologies has also raised significant privacy and autonomy concerns.
    References:
    Acemoglu, D., Autor, D., Hazell, J., & Restrepo, P. (2023). Automation and the Workforce: A Firm-Level View from the 2019 Annual Business Survey. AEA Papers and Proceedings, 113, 12-16.
    Brynjolfsson, E., Rock, D., & Syverson, C. (2022). The productivity J-curve: How intangibles complement general purpose technologies. American Economic Journal: Macroeconomics, 14(1), 333-372.
    Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
    McKinney, S. M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H., ... & Shetty, S. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577(7788), 89-94.
    Murray, S. B., Anderson, L. K., & Cusack, A. (2023). Digital self-harm among adolescents: The role of algorithmic amplification. Journal of Clinical Child & Adolescent Psychology, 52(1), 143-156.
    Reich, J. (2020). Failure to disrupt: Why technology alone can't transform education. Harvard University Press.

  •  Professor Chris Barrington-Leigh

    Associate Professor, McGill University
    Disagree
    It's tough to take a stand on this beyond admitting insufficient knowledge to tally an overall causal impact. However, my suspicion is that if we don't know the answer, it might be that the negatives dominate. AI can surely be given some credit for widespread productivity increases, which must have had broad benefits. However, AI equally strengthens the "dark side of capitalism": just like our capitalist system is remarkably successful at filling niches when and where a need arises, it is equally able to create fake needs whenever there exist human weaknesses that can be exploited. The market is certain to be just as good at that task, which hurts people but makes profit, as it is at the task of helping people while making a profit. There are now numerous enhanced algorithms in social media tuned to hijack primitive reward circuitry in our brains to seek increasingly simplistic, shallow, and polarizing messages. In fact, the 200-year-long but accelerating increase in the rate of information we are bombarded with is likely responsible for steadily decreasing attention spans, which means we are losing the collective ability to manage debate and democratic deliberation (so says Johann Hari in "Stolen Focus").

Strong government regulation of AI is critical to ensuring positive future impacts on wellbeing.

  •  Professor Ori Heffetz

    Associate Professor of Economics, Cornell University and Hebrew University
    Neither agree nor disagree
    Again, not sure what "strong regulation" would look like. We have examples where strong regulation has been successful in promoting wellbeing, and examples where it was mostly effective in promoting stagnation and suffering. We certainly want government oversight, and to create an updated legal framework that can effectively fight the more sinister uses of AI.

  •  Professor Mark Fabian

    Associate Professor of Public Policy, University of Warwick
    Neither agree nor disagree
    What does strong regulation look like in this domain? The technology is changing much faster than static regulation (e.g. laws and legislation) could keep up with, and the technology is extremely mobile and thus outside any single jurisdiction. One also worries about regulation benefiting incumbents, who are already moving rapidly to consolidate their monopoly against a vibrant open-source and low-cost movement. There is a body of regulatory theory called 'the regulatory craft' that bears on this issue. It essentially involves empowering a regulator made up of people with deep subject-matter and regulatory expertise to respond dynamically, and with discretion and judgement, to changing conditions (as in central banks, for example). That's a hard sell right now because of the political high-tide of anti-elite, anti-technocratic sentiment that characterises contemporary politics. One also worries that subject matter experts are few and far between, but that is likely just a question of salary.

  •  Professor Mohsen Joshanloo

    Associate Professor (Psychology), Keimyung University, South Korea
    Disagree
    I believe excessive government regulation of AI is unnecessary and may have unintended consequences. Beyond ensuring compliance with existing privacy and data security regulations, additional restrictions should be justified by clear evidence of harm. Focusing on large language models, which are widely used today, strong government regulation could set a precedent for broader control over the internet, the primary source from which these models learn.

  •  Professor David Blanchflower

    Professor of Economics at Dartmouth
    Neither agree nor disagree
    Unclear

  •  Professor Martin Binder

    Professor of Socio-Economics at Bundeswehr University Munich
    Completely agree
    While I agree with the statement, I think we need to be realistic that regulation always comes after the fact, so I am not holding my breath about how likely it is that regulation will be able to steer developments in AI.

  •  Professor Mariano Rojas

    Professor of Economics, Universidad Popular Autónoma del Estado de Puebla
    Completely agree
    Artificial intelligence (and more specifically, its many applications) is a complex product, with the intrinsic capacity, even the need, to collect a vast amount of information and process it rapidly. When the product is complex, when the user has little information about the underlying functioning of the product and, especially, when the product gathers almost full information about the user, it is likely that the assumption of consumer sovereignty does not hold. Thus, people's choices cannot be expected to be well-being enhancing. A product of this nature, within a context where its producers and providers are focused on maximizing profits, can pose a risk to the well-being of users. By its very nature, it disrupts the assumption that the pursuit of profit is associated with the greater well-being of users. The provider can exploit the vast amount of information gathered to gain greater benefits. Additionally, the provider can leverage the abundant information at its disposal about users to subtly and powerfully influence demand through persuasive techniques. In many applications, this could result in a rat race, leading to a zero-sum game where it becomes essential to adopt new technologies simply to avoid falling behind. For these reasons, it is necessary to consider strong regulation in the design and application of AI. It is also crucial to ensure that there is no regulatory capture and to prevent the revolving door between the corporate world and regulatory agencies.

  •  Professor Gigi Foster

    Associate Professor and Undergraduate Coordinator, School of Economics, UNSW Business School
    Neither agree nor disagree
    I don't have a lot of confidence in government regulation being effective in this complex area (governments these days often can't even deliver their basic functions well, much less craft wise and effective regulations about how to handle a fast-developing and powerful technology). I think there is a real chance that attempts to regulate in this area will provide temporary psychological soothing for people (perhaps increasing wellbeing a bit in the short run) without actually directing AI towards uses that would help people and away from uses that would harm people, while crowding out attention to other areas where government decisions actually could make a difference to wellbeing. I think societies will eventually figure out how to make positive use of AI themselves, without a government telling them what to do (and how does the government know better than markets, anyway?), but that process will take time. Instead of government regulation, I would suggest that civic organisations host public discussions and debates about the advantages and dangers of AI, in an effort to bring awareness of these into the consciousness of the public, whose individual decisions and moral compasses will collectively lead to the ways in which AI is deployed in society.

  •  Doctor Tony Beatton

    Visiting Fellow, Queensland University of Technology (QUT)
    Neither agree nor disagree
    The last thing we need is Government picking winners. What Government may be able to assist with are the AI-related negative externalities associated with undesirable social effects, e.g. youths posting criminal activities on X or Facebook. But one could argue that this is perhaps not an AI issue but one of international Internet-based communications not being governed by an institution/Government that sets rules acceptable to all societies, a goal that is probably not achievable.

  •  Professor Daniel Benjamin

    Associate Professor of Economics, University of Southern California
    Agree
    To date, AI has been largely complementary with human capital, but there are risks that going forward, AI could become substitutable for human capital. That would pose risks to society. Government regulation, if done well, might help guide the development of AI in the most socially beneficial direction.

  •  Professor Paul Frijters

    Professorial Research Fellow, CEP Wellbeing Programme, London School of Economics
    Completely disagree
    I do not believe there is an authority that can do much about the development of AI, irrespective of its effects on wellbeing. I know there are lots of governments and 'experts' calling for regulation, but those are fleas trying to direct the elephant. It can't be done, because AI can be developed anywhere in the world, making enforcement impossible. Also, if our enemies use AI to its full potential, so must we. Calls to regulate AI are partly the normal theatre of politics, partly pure marketing by AI developers ('my product is sooo good that it is reallllyyy dangerous'), and partly due to the marketing of AI industrialists who are hoping to push governments into regulating their opponents.

  •  Professor Martijn Hendriks

    Associate Professor, Erasmus University Rotterdam & University of Johannesburg
    Agree
    AI will change our societies, bringing both many strengths and risks. Strong international regulation is needed to manage these risks, such as ensuring transparency about how AI collects, stores, and uses data to prevent misuse. Another example is the need for strict rules governing AI in critical areas, such as the protection of democratic principles (see the case of the recent Romanian election), self-driving cars, and so forth.

  •  Professor Eugenio Proto

    Alec Cairncross Professor of Applied Economics and Econometrics, University of Glasgow, Adam Smith Business School
    Agree
    This is a breakthrough and there will be big changes ahead. As is always the case with major breakthroughs, some regulation to ensure an orderly development will be important.

  •  Professor Arthur Grimes

    Chair of Wellbeing and Public Policy, School of Government, Victoria University of Wellington
    Disagree
    In any fast-moving technological space, government regulation is always a few steps behind the frontier. It will be largely futile for government bureaucrats to try to regulate use and applications of AI when they don't know what is about to arrive the following week. At best, governments might consider regulations against discriminatory and violent use of AI.

  •  Professor Ada Ferrer-i-Carbonell

    Professor of Economics, IAE-CSIC
    Neither agree nor disagree
    Government cannot do anything to regulate AI. It is too late. AI is already in the hands of very few powerful and rich people. Unless computing capacity becomes cheaper (and less energy-consuming), governments are left with no power. Google's investments in nuclear energy, for example, are proof of this. It remains to be seen whether the alternatives now appearing will indeed be cheaper in terms of energy consumption. This is the only hope we have.

  •  Doctor Kelsey J O'Connor

    Researcher in the Economics of Well-being
    Agree
    Artificial intelligence (AI) is a hugely disruptive technology, with the speed of change exceeding that of traditional human (re)training. What is more, it does not operate in a perfectly competitive market that can (in theory) self-regulate -- it creates and benefits from network effects, meaning larger AI networks will generally outcompete smaller ones. This process will concentrate development in a small number of organizations, among whom the private ones are not incentivized to support well-being. Economic theory is clear that regulation is recommended in such instances, and I believe other social sciences would offer even stronger recommendations for government oversight.

  •  Professor Darma Mahadea

    Associate Professor and Honorary Research Fellow at the University of Kwazulu-Natal, South Africa
    Agree
    While AI aids production and decision-making, it should be used to enhance individuals' flexibility and skill sets, without detriment to their leisure time, data privacy and job security. To achieve this, some government intervention is necessary, as there is a human dimension to the use of AI, particularly in regulated industries such as finance, banking, manufacturing and medicine. During the industrial revolution, while there was progress in enhancing wellbeing, there were also regulations on labour employment conditions and working hours. These regulations helped drive competition among businesses, to the benefit of both labour and firms. Similarly, with AI, there may be higher-quality experiences for employees and entrepreneurs that enhance overall wellbeing. Practically, research indicates that AI could lead to a 30% decrease in happiness and some loss of employment. To counter these effects, some government regulation of AI is critical. However, this would also entail reskilling and retraining labour in AI and big data, changing their technological capabilities to adapt to the digital economy and to rapidly changing market conditions. Some private firms may not do this voluntarily unless government regulations are imposed.

  •  Doctor Antje Jantsch

    Researcher, Leibniz Institute of Agricultural Development in Transition (IAMO)
    Neither agree nor disagree
    The question of whether strong government regulation of AI is critical to ensuring positive future impacts on well-being is both complex and philosophical. If the policy goal is to improve societal well-being, and the impact of AI on well-being is empirically proven, then regulation may indeed be justified. However, this may require a party in power that has no vested interest in consolidating its own power. Furthermore, ethical issues such as the effective regulation of the use of AI require public discourse to build consensus around shared values (see Habermas, J. (1991). Erläuterungen zur Diskursethik (Suhrkamp Taschenbuch Wissenschaft, vol. 975). Frankfurt am Main: Suhrkamp). Moreover, the challenge is not only to decide whether regulation is necessary, but also how it should be designed. From a broader perspective, Stiglitz et al. (Stiglitz, J. E., Sen, A., and Fitoussi, J.-P. (2010). Mismeasuring Our Lives: Why GDP Doesn't Add Up. New York: New Press.) argue that both civil society and governments should focus on preserving and enhancing the 'stocks' of human, social and natural resources that are essential to sustain the livelihoods of future generations. This is in line with Sen's capability approach, which emphasises the real freedoms of individuals to achieve a life they value. Rather than framing the debate in terms of strict regulation, the focus should be on securing and expanding human capabilities and opportunities, and on ensuring that AI, as a tool, serves societal well-being.

  •  Doctor Anthony Lepinteur

    Research Scientist, University of Luxembourg
    Agree
    Building on my response above, thoughtful governance does indeed play a crucial role in shaping positive outcomes - though the nature and implementation of that governance requires careful consideration. Regulatory frameworks can help address several key wellbeing challenges: 1) Regulations can help ensure more equitable distribution of AI benefits. Targeted regulation can promote accessibility and prevent the concentration of AI benefits among those already privileged. 2) Proactive governance can mitigate known harms to wellbeing. Regulations addressing algorithmic bias, data privacy, surveillance limits, and transparency requirements directly address factors that research shows negatively impact psychological wellbeing. 3) Proper regulatory frameworks can create innovation-friendly environments while establishing necessary guardrails. However, effective AI governance requires nuance beyond simply "strong regulation": 1) Given AI's rapid evolution, overly rigid regulations risk becoming quickly outdated or stifling beneficial innovation. Regulatory approaches that can evolve with technological developments are essential. 2) Context-sensitivity needs to be accounted for: different domains may require different regulatory approaches. Healthcare AI likely warrants different oversight than entertainment applications. 3) International coordination is key. Purely national regulations in a global technology landscape can create implementation challenges and competitive disadvantages. 4) Stakeholder participation is essential too. Regulations developed without input from diverse affected populations risk missing critical wellbeing considerations or creating unintended consequences. The most promising approach (while being also probably the most challenging) appears to be "strong principles with flexible implementation" - establishing clear wellbeing-centered regulatory objectives while allowing for contextual adaptation in how those objectives are achieved across different domains and technologies.

  •  Professor Jan Delhey

    Professor of Sociology, University of Magdeburg
    Agree
    Whether “strong” or “some” regulation - regulation is necessary to prevent two things: firstly, technology growing over our heads like the sorcerer's apprentice; and secondly, AI companies becoming too powerful - a concentration of techno-economic power harms society and people's wellbeing.

  •  Doctor Francesco Sarracino

    Economist, Research Division of the Statistical Office of Luxembourg -STATEC
    Completely agree
    To have a chance of maximizing the benefits of AI and reducing its costs, it is necessary to align it with publicly desired goals. This decision cannot be left in the private hands of a few super-rich individuals. The same holds for governments: the unprecedented opportunity offered by AI to control the daily lives of millions of people poses a huge threat to freedom and democracy. Hence, it is critical to have international regulation of AI, adopted at the national level, that sets the boundaries for the use of this technology.

  •  Professor Talita Greyling

    Professor, School of Economics, University of Johannesburg
    Agree
    AI should be regulated to mitigate the risks of privacy violations, ethical lapses and biased decision-making. However, further research is necessary to identify the most effective methods of regulating AI-driven models.

  •  Professor Stephanié Rossouw

    Associate Professor, School for Social Science and Public Policy, Auckland University of Technology, New Zealand
    Agree
    I agree that some regulation is necessary; however, I do not think there should be excessive (strong) government interference, since it could potentially slow down AI development and limit its potential benefits in fields like healthcare, education, and business. And let's face it, technology evolves at breakneck speed - much faster than legislative proposals move. By the time laws are passed, they may already be obsolete. If we compare the ways the USA, the EU and China have gone about regulating AI, we can see they all follow different strategies. Who is doing it better? Definitely not the EU, and China has very tight government oversight and strict censorship laws. With the USA trying to strike a balance between AI safety and security while also encouraging innovation, my money is on them. For now.

  •  Doctor Conal Smith

    Principal, Kōtātā Insight
    Neither agree nor disagree
    I suspect that the key issue here is less with AI per se than with the broader social media/tech ecosystem. Probably the main vehicle for large language models to impact wellbeing at the moment is through their use in social media and search engines. Both areas are characterised by strong network effects, increasing returns to scale, and natural monopolies. There is a clear case to be concerned about the welfare implications of monopoly power in important industries and this applies strongly to the big ICT companies. However, this issue - although linked to AI - is not AI specific.

  •  Professor John Helliwell

    Professor Emeritus of Economics, University of British Columbia
    Agree
    Not 'completely agree', since what happens if the government itself has objectives other than the happiness of the residents of its own, and especially of other, countries?

  •  Professor Daniela Andrén

    Senior Lecturer, Örebro University School of Business
    Disagree
    While regulation of AI is essential to protect wellbeing, both theoretical and empirical research supports a targeted regulatory approach that addresses specific, documented harms while maintaining flexibility for beneficial innovation. Evidence from various sectors demonstrates different types of risks from insufficient regulation: e.g., environmental impacts in the form of significant carbon emissions from AI systems, particularly large language models and data centers (Rolnick et al. 2022), and increased wage inequality and displacement of middle-skill workers (Acemoglu and Restrepo 2022). Such outcomes threaten individual and societal well-being as well as economic growth. However, research also reveals significant drawbacks of excessive regulation. Heavily regulated markets experience slower innovation rates, potentially delaying critical AI applications in areas such as climate change mitigation and healthcare optimization (Brynjolfsson et al. 2022). Strong regulations can create substantial barriers to market entry for smaller firms, leading to increased concentration of AI development among large corporations. This concentration not only reduces competition but also potentially slows the development of diverse AI applications that could benefit society.
    References:
    Acemoglu, D., & Restrepo, P. (2022). Tasks, automation, and the rise in US wage inequality. Econometrica, 90(5), 1793-1830.
    Brynjolfsson, E., Rock, D., & Syverson, C. (2022). The productivity J-curve: How intangibles complement general purpose technologies. American Economic Journal: Macroeconomics, 14(1), 333-372.

  •  Professor Chris Barrington-Leigh

    Associate Professor, McGill University
    Agree
    I believe humans are pretty simple, and have no natural defences against customized, optimized algorithms which can manipulate the masses, or advise on manipulating the masses. Our strength is in collective action -- that is, we come together with ingenious solutions to collective threats, rather than relying on individual brains to solve them continuously and repeatedly, or to act in the collective interest by default. Thus, government regulation is critical in steering any profound innovation towards good outcomes. The problem is, we don't have governments or collective governance designed for fast-changing technology, and it's not clear how to regulate something (a) that advances profoundly every two months and (b) the workings of which we don't even understand. However, here's an example of one urgent policy which could stunt some (profound but limited) negative channels of manipulation: targeting ads to individuals' profiles should simply become illegal. Displaying ads that are tailored to a particular product, such as a newspaper, is fine and is how things used to work. Displaying selective ads on a website based on the search term used to summon that particular website is also not terribly intrusive. But beyond that, the line should be drawn, so that ads may not make use of information from our past, our other online activity, or any other behaviour not related to the immediate moment ("Time to unfriend Facebook's ad strategy", Toronto Star, April 2018, https://alum.mit.edu/www/cpbl/publications/Time_to_unfriend_Facebooks_ad_strategy_TheStar_20180412.html).

AI so far

The opening question posed in our poll was whether "Advances in artificial intelligence since 2000 have, overall, improved human wellbeing."

The large majority of respondents (17 of 26) were in the middle, choosing "Neither agree nor disagree", yet most respondents had plenty to say. Some who were on the fence nevertheless focused on the downsides of AI (Heffetz, Binder, Fabian, Helliwell, Sarracino, Smith). Five respondents chose Disagree, while only four replied Agree or Completely Agree, making the answers almost perfectly balanced.

Enhanced productivity of bad things was the main fear: AI was predicted to increase crime, warfare, fraud, marketing, surveillance, propaganda, pornography, and manipulation through social media. In this vein, Rojas and Lepinteur pointed out that currently AI is created for profit rather than being designed specifically to support psychological wellbeing. Relatedly, Barrington-Leigh sees the "dark side of capitalism" being boosted by AI. Feared negative long-run impacts on the social and political system included reduced human intelligence, creativity, and connection, along with increased inequality and concentration of social power (Ferrer-i-Carbonell, Mahadea, O'Connor). Some mentioned the ecological cost of AI technology (Fabian, Andrén).

Given the broadness of the question, respondents had to define AI themselves. It was considered by most to encompass machine learning technologies far older than the recent large language model revolution. AI was thus given credit for learning (first at Facebook) that divisiveness increases engagement on social media (Sarracino). Social media algorithms were mentioned frequently, for instance for worsened youth mental health, eating disorders, and lower autonomy (Andrén, Smith).

Interestingly, some cited personal experience in their academic work as evidence of negative effects of AI: the recent homogeneity of style in research grants (Ferrer-i-Carbonell), automatically generated theses and research paper reviews (Binder), or the unwanted summaries at the top of Google search results (Fabian).

Commonly cited benefits from AI related to productivity and economic growth, drug and vaccine discovery, personalized education (Rojas, Mahadea, Jantsch, Lepinteur, Greyling, Andrén, Rossouw), research tools and data (Greyling), traffic management (Rojas, Andrén), and agriculture (Rossouw).

A number of respondents expressed the belief that AI is already in wide use in health care and diagnosis, beyond roles in drug discovery (Hendriks, Proto, Lepinteur), along with offering private personal advice in support of physical health. Chatbots focused on mental health support and promotion were also cited as a positive application (Andrén), even while the negative mental health impacts of AI were prominent in many responses.

While AI almost surely increases productivity in many places, Frijters, Foster, and Grimes mentioned that consumer convenience plays a relatively small role in human wellbeing, in comparison to the quality of human relationships of various kinds, thus again stressing that it is the negative effects on the social system that drive overall impacts.

Government regulation

The second provocative statement was that "Strong government regulation of AI is critical to ensuring positive future impacts on wellbeing." For this question, the Agree (11) and Completely Agree (3) camp held the majority, though eight were on the fence again. Only four disagreed.

Market failures were cited often as a rationale for regulation. Rojas argues that in the domain of AI, the assumptions of consumer sovereignty, wellbeing-enhancing individual choice, and the association between profit and consumer surplus are all invalid in light of the enormous asymmetry of information and the complexity of the product. O'Connor and Smith both point out that, due to network effects, AI provision is also not a naturally competitive market and may lead to concentration over time. Moreover, among those few most successful AIs, the private entities will not be incentivized to support well-being (O'Connor).

Jantsch suggests that government steering of AI may be more feasible if it is framed positively. That is, the objective could be to ensure that AI secures and expands human capabilities and opportunities, rather than framing it as defense against a threat.

Notwithstanding the market failures, Foster suggests that governments do not know better than markets, and prefers to rely on societies figuring out positive uses of AI without a role for government.

Regardless of the rationale or need for regulation of AI, many respondents did not believe that it is feasible. Many cited the rapid rate of technological change, implying that most regulation will be solving problems of the past (Binder, Grimes). "It is too late," writes Ferrer-i-Carbonell. Also, the global technology landscape makes regulatory success dependent on an unlikely degree of international cooperation (Lepinteur, Frijters).

Not all respondents gave examples of what kinds of regulations they had in mind to achieve the ends for which they advocated, or which they feared as overreach. In fact, while Heffetz and Fabian asked what regulation would look like, only one respondent gave any example of a specific policy prescription: Barrington-Leigh advocates for a ban on any advertising that targets individuals using profiles built from their past activity.