How to regulate AI by learning from the U.S.

Artificial intelligence is already part of everyday life and poses ethical and legal challenges that require multilevel regulation.

Gonzalo Meza | January 10, 2026 | Reading time: 5 minutes

Artificial intelligence conjures up diverse images: from humanoid robots to scenes from Chaplin's "Modern Times" to tools like ChatGPT that we use every day. But AI is already an everyday reality in the United States, present in many aspects of our lives. Andrew Ng has described artificial intelligence as "the new electricity", a tool that will permeate every area of human activity. That promise has captured the attention of investors: it is estimated that by 2026 investment in AI will exceed $500 billion. This raises ethical challenges and the urgent need to establish appropriate legal frameworks, sector by sector and from the ground up: local, state, national and international.

I will mention five areas where AI is integrated into everyday life in the United States and then point out the regulations specific to those sectors.

Transportation: Autonomous vehicles

Robotaxis, autonomous vehicles that carry passengers without a driver, are already operating in several California cities. Equipped with cameras, radar and learning systems, these vehicles are becoming increasingly common in Los Angeles and other parts of the country.

Retail: Cashier-less stores, "Just Walk Out"

In cities such as Washington, D.C. and Los Angeles, Amazon operates stores under the "Just Walk Out" concept. Customers enter by identifying themselves with the palm of their hand, place their products (bread, milk, rice, etc.) directly into their bags or baskets, and a network of cameras and sensors automatically registers the purchases. On leaving, the customer receives the receipt by e-mail. There are no checkouts and no lines. Naturally, this requires registering in advance with personal and financial data.

Logistics: Distribution centers

Amazon's mega-distribution centers represent perhaps the most striking interaction between AI and humans. The largest, located in Ontario, California, spans more than 400,000 square meters. These warehouses function as "living organisms": thousands of mobile robots shuttle along internal "highways", carrying products between the shelves and the human operators. The AI system in these centers predicts traffic, optimizes inventories and collaborates with staff. One point worth keeping in sight: an Amazon executive remarked that the goal of AI is not to replace human labor but to facilitate it and to create new jobs integrated into the system.

Education

AI has deeply penetrated U.S. educational practice. A large share of teachers, from elementary school to higher education, use artificial intelligence tools for class design, administrative management, lesson planning, performance analysis and the development of teaching resources. In the university context, 90% of students incorporate it into their learning.

Health and wellness

In the U.S. healthcare system, institutions use AI to support diagnostics (especially imaging), refine analyses, process massive amounts of data and automate administrative tasks. For patients, there are everyday applications: health chatbots, online triage systems and wearables that monitor physical activity and vital signs.

The challenges

While these applications are positive, there are also dangerous uses of AI: the development of lethal autonomous weapons, cyber-attacks, the manipulation of information and the violation of privacy.

The need for ethical and legal regulations

Given these realities, it is necessary to establish legal regulations and ethical guidelines for the use of artificial intelligence, from the local to the international level. Although it would be ideal to have binding international legislation, for countries such as the United States, the main developer and user of AI, a treaty of such scope is not very plausible. In any case, it would be just one piece of the regulatory machinery that emanates from the local and national levels.

Examples of current regulation in the United States

Regulation of autonomous vehicles

There are specific rules for robotaxis. When one of these vehicles is involved in an accident, the National Highway Traffic Safety Administration (NHTSA) and the Department of Transportation require immediate notification through a nationwide registry. In states such as California, Arizona, Texas and New York, legal frameworks regulate permits, terms of service and liability in accidents involving robotaxis. Who is responsible in case of an accident? The company that operates the vehicles. In California there is also a protocol for reporting incidents directly to the state agency. These rules extend to insurers: policies for autonomous vehicles are expensive, which pushes companies to avoid violations. And because these are AI systems, the vehicles themselves record what is allowed and what is prohibited.

Education

Guidance and state regulations exist in the U.S. education arena. In 2025, the Department of Education issued guidance on AI use that calls for respecting privacy, civil rights and academic integrity standards. Many states have issued official guidance of their own. It is worth noting that, unlike in many countries, U.S. school districts are independent entities that develop their own policies in coordination with state and federal law.

California universities operate on the same principle: each defines its own regulatory framework. There is, however, a national consensus: rules against plagiarism extend to the use of AI. Institutions have widely adopted tools that detect texts generated entirely by artificial intelligence.

Health

Although there is no single legal standard specific to AI in healthcare, there is a regulatory patchwork that touches on it. One example is the Health Insurance Portability and Accountability Act (HIPAA), which protects patients' medical data and requires the entities that handle it (hospitals, insurers, clinics) to comply with strict privacy and security rules.

The regulatory path of AI is just beginning. I believe it should be built sector by sector (education, health, finance) and from the bottom up: local, state, national and international. A universal supranational law regulating AI is implausible, since it would clash with many national legal frameworks, particularly that of the United States, one of the biggest investors in and developers of AI. The U.S. controls the models, the hardware (chips from companies such as NVIDIA) and the infrastructure (Google Cloud, AWS) that make AI possible; any viable regulatory framework must therefore come from the U.S. and then, at another level, mesh with non-binding agreements at the international level. In that context, what role can the Church play in such a regulatory effort?

Towards the creation, development and application of an ethical framework for the use of AI in the Church

The Church has been a pioneer in developing, promoting and applying an ethical framework for artificial intelligence, and has been so for at least two years. Several documents stand out, notably "Antiqua et Nova", a note on the relationship between artificial intelligence and human intelligence issued by the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education on January 14, 2025.

Also noteworthy are the interventions of Popes Francis and Leo XIV on AI, such as Pope Francis' message for the 2024 World Day of Peace and Pope Leo XIV's various speeches on the subject, notably his message to the Second Conference on Artificial Intelligence on June 17, 2025.

These recent interventions are grounded in the principles of the Social Doctrine of the Church, which should be applied to the use of artificial intelligence, especially on questions of human dignity, the common good and solidarity. Such ethical norms could also be developed and applied at the level of each ecclesiastical jurisdiction, especially in sectors where the Church carries out its mission: Catholic schools and hospitals, seminaries, formation centers, and so on. Some dioceses already have guidelines in this regard, for example the dioceses of Biloxi (Mississippi) and Orange (California), and the bishops of the Maryland Catholic Conference, which covers Baltimore, Washington and Wilmington.

Towards the creation of multisectoral and multilevel legal frameworks

At the international level, the Holy See can contribute decisively to the construction of a normative framework on artificial intelligence within the United Nations. It is important to note that this framework should be a non-binding agreement, since a binding treaty would face significant obstacles, both because of incompatibility with legal systems such as that of the U.S. and because of the need for differentiated responses by sector and jurisdictional level. It therefore seems to me more viable and effective to promote one or several non-binding agreements within the UN to guide the regulation of AI on a global scale, while respecting the regulatory autonomy of each country.
