Global Journal of Pharmaceutical and Scientific Research (GJPSR)
ARTIFICIAL INTELLIGENCE: ASSESSMENT, PRESENT DEVELOPMENTS, USES, AND OPPORTUNITIES
Ankul Tiwari¹, Dr. Piyush Yadav², Mohd. Wasiullah³*
Abstract
With applications spanning healthcare, business, finance, education, and urban infrastructure, artificial intelligence (AI) has become a transformative technology. Rapid developments in machine learning, deep learning, generative models, and multimodal intelligence over the past two decades have expanded the capabilities of AI systems, allowing computers to carry out challenging tasks including perception, reasoning, and decision-making. This review covers the underlying ideas of artificial intelligence, its evaluation techniques, recent technological advancements, real-world applications, societal and economic effects, and likely future directions. It examines significant advances in explainable AI, autonomous and adaptive systems, and AI-driven decision support, as well as the governance, legal, and ethical issues surrounding data quality, bias, privacy, scalability, and global standardization. The study emphasizes the need for sustainable, inclusive, and human-centered AI while highlighting research gaps and new technological frontiers. By combining technological advancements with societal factors, this review provides academics, practitioners, and policymakers with a comprehensive viewpoint, and it encourages the responsible development and application of AI to maximize benefits while minimizing risks.
Keywords: Artificial intelligence, Machine learning, Deep learning, Generative AI, Explainable AI, Societal impact, Ethical AI, Autonomous systems
Corresponding Author
Ankul Tiwari, Research Scholar,
Prasad Institute of Technology, Jaunpur, UP
Received: 05/02/2026
Revised: 13/02/2026
Accepted: 27/02/2026
DOI: http://doi.org/10.66204/GJPSR.293-2026-2-2-5
Copyright Information
© 2026 The Authors. This article is published by the Global Journal of Pharmaceutical and Scientific Research and distributed under a Creative Commons CC-BY 4.0 license.
How to Cite
Tiwari A, Wasiullah M, Yadav P. Artificial intelligence: assessment, present developments, uses, and opportunities. Global Journal of Pharmaceutical and Scientific Research. 2026;2(2):2–26. ISSN: 3108-0103. DOI: http://doi.org/10.66204/GJPSR.293-2026-2-2-5
1. Introduction
One of the most significant technical concepts of the twenty-first century is artificial intelligence (AI), which allows machines to carry out tasks like sensing, reasoning, learning, and decision-making that have historically needed human intelligence. AI research and implementation have surged due to rapid advancements in computer power, data availability, and algorithmic design, making AI a key component of digital transformation in a variety of industries. AI is changing how societies function and how value is produced, from financial analytics and smart infrastructure to healthcare diagnostics and autonomous systems.
Over the last two decades, AI has evolved from rule-based and symbolic systems to data-driven learning models, especially machine learning and deep learning techniques. These advances have yielded significant performance gains in fields such as complex pattern analysis, natural language processing, and image recognition. At the same time, the growing autonomy and scale of AI systems have raised new issues of safety, fairness, transparency, and societal impact. The need for systematic evaluation and appropriate governance has become increasingly apparent as AI systems are integrated into critical decision-making processes.
Despite the expanding corpus of AI research, the existing literature frequently concentrates on discrete technical developments or domain-specific applications, offering little integration of fundamental ideas, assessment techniques, practical implementations, and wider societal ramifications. A thorough study that integrates technological advances with evaluation frameworks, applications, opportunities, and open challenges is therefore still required to provide a comprehensive overview of the present AI ecosystem. For researchers, practitioners, and policymakers looking to maximize AI's potential while reducing the associated risks, such an integrated viewpoint is crucial.
This review's goal is to provide a methodical and critical overview of artificial intelligence, covering its theoretical underpinnings, current advancements, evaluation techniques, real-world applications, societal and economic effects, prospects for the future, and unsolved issues. This study seeks to enhance informed decision-making and aid in the creation of reliable, sustainable, and human-centered AI systems by combining insights from several disciplines and identifying new research possibilities.
2. Foundations of Artificial Intelligence
2.1 Evolution of Artificial Intelligence
Advances in computing power, algorithmic innovation, and data availability have all contributed to the rapid development of artificial intelligence (AI) over the last two decades. Although early symbolic reasoning systems served as the conceptual forerunners of artificial intelligence, advances in machine learning and data-centric intelligence have greatly shaped how AI is conceptualized today. Rather than relying on explicit rule-based programming, modern AI is characterized by adaptive systems that learn patterns and representations directly from data (Russell & Norvig, 2021).
Early in the new millennium, probabilistic and statistical learning frameworks replaced inflexible expert systems in AI research. This change made it possible for AI systems to manage noise, ambiguity, and real-world unpredictability more effectively. This change was expedited by the development of large-scale datasets and advancements in optimization methods, which resulted in the broad use of machine learning in both scientific and industrial fields (Jordan & Mitchell, 2015).
The emergence of deep learning in the 2010s was a significant turning point, made possible by developments in neural network architectures, graphics processing units (GPUs), and massive annotated datasets. The exceptional performance of deep learning models in perception-driven tasks, including speech processing, image recognition, and natural language understanding, rekindled interest in and funding for AI research (LeCun et al., 2015). During this period, AI also moved from laboratory research to widespread real-world deployment.
More recently, AI progress has been marked by foundation models, generative AI, and multimodal systems that can transfer knowledge across tasks and domains. As a step toward more general-purpose AI systems, these models bring new prospects as well as ethical, governance, and social-impact concerns (Bommasani et al., 2021; Zhang et al., 2023). The development of AI over the past two decades thus reflects a shift away from narrow, task-specific systems toward intelligence that is scalable and flexible.
2.2 Core Paradigms and Learning Models
Several paradigms that specify how intelligence is modeled and operationalized form the foundation of contemporary AI. Symbolic techniques, though less common today, are still used in systems for decision-making, reasoning, and knowledge representation where interpretability and transparency are essential. Their popularity in modern AI applications has diminished, however, because of their inability to handle high-dimensional and unstructured data (Russell & Norvig, 2021).
Machine learning, the dominant paradigm in contemporary AI, views intelligence as an optimization process driven by data. Supervised, unsupervised, and reinforcement learning are the three main categories of machine learning techniques. Supervised learning is the foundation of most prediction and classification algorithms, while unsupervised learning enables pattern recognition and representation learning from unlabeled data. By enabling agents to learn optimal policies through interaction with their environments, reinforcement learning supports sequential decision-making and control (Sutton & Barto, 2018).
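The reinforcement-learning loop described above can be sketched in a few lines of Python. The two-state environment, reward structure, and hyperparameters below are hypothetical toy choices made for illustration; only the tabular Q-learning update rule itself is the standard one from the literature (Sutton & Barto, 2018).

```python
import random

# Illustrative tabular Q-learning on a hypothetical 2-state, 2-action task.
# Update rule: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))

ALPHA, GAMMA = 0.1, 0.9                              # learning rate, discount
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}    # Q-table

def step(state, action):
    """Toy deterministic environment: action 1 in state 0 earns reward 1."""
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    next_state = 1 - state                           # alternate between states
    return next_state, reward

random.seed(0)
state = 0
for _ in range(500):
    action = random.choice((0, 1))                   # purely exploratory policy
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in (0, 1))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

# After training, the agent values action 1 over action 0 in state 0.
print(Q[(0, 1)] > Q[(0, 0)])
```

The interaction-driven structure, with no labeled examples, is what distinguishes this paradigm from supervised learning.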
In recent years, deep learning has emerged as the most significant learning framework. Hierarchical representation learning across images, text, audio, and video is made possible by neural architectures including transformer-based models, recurrent neural networks (RNNs), and convolutional neural networks (CNNs). By facilitating scalable attention-based learning, transformer models in particular have transformed generative AI and natural language processing (Vaswani et al., 2017; Brown et al., 2020).
The creation of hybrid and neuro-symbolic AI, which aims to combine data-driven learning and symbolic thinking, is a new trend. These methods seek to solve major shortcomings of solely statistical models, particularly in safety-critical and knowledge-intensive applications, by combining interpretability, reasoning, and generalization (Garcez et al., 2019). These paradigms work together to provide the intellectual underpinnings of modern AI systems.
2.3 Data, Algorithms, and Computational Infrastructure
The relationship between data, algorithms, and computational infrastructure essentially determines the scalability and performance of AI systems. AI models are primarily powered by data, and the quantity, variety, and quality of data have a significant impact on learning outcomes. Advances in AI performance have been made possible by large-scale datasets, but fairness and generalizability are seriously threatened by problems including bias, imbalance, and lack of representativeness (Halevy et al., 2009; Mehrabi et al., 2021).
The methods by which AI systems derive knowledge from data are provided by algorithms. Model stability, efficiency, and generalization have increased thanks to developments in optimization techniques, regularization tactics, and representation learning. Simultaneously, in order to improve trust and accountability in AI systems, more focus has been placed on explainable, reliable, and fairness-aware algorithms (Goodfellow et al., 2016; Arrieta et al., 2020).
Modern AI has been made possible in large part by computational infrastructure. Large-scale training and deployment are now possible because to cloud-based platforms and high-performance hardware like GPUs and tensor processing units (TPUs). Furthermore, real-time inference, lower latency, and improved data privacy are supported by edge and distributed computing paradigms, especially in autonomous systems and the Internet of Things (IoT) (Shi et al., 2016; Xu et al., 2021).
The interplay of data resources, algorithmic innovation, and computational infrastructure constitutes the technological foundation of modern artificial intelligence.
Table 1: Key AI Paradigms and Learning Models
| AI Paradigm / Model | Description | Key Algorithms / Techniques | Typical Applications | Strengths / Limitations |
|---|---|---|---|---|
| Symbolic / Rule-Based AI | Knowledge represented as explicit rules | Expert systems, logic programming | Early decision support, diagnosis | Transparent, interpretable; limited scalability |
| Machine Learning | Models learn patterns from data | Decision trees, SVM, Random Forest | Finance, healthcare, recommendation systems | Generalizable; requires large datasets |
| Deep Learning | Multi-layer neural networks | CNN, RNN, GAN, Transformer | Image recognition, NLP, autonomous systems | High accuracy; computationally expensive, less interpretable |
| Reinforcement Learning | Learning by interacting with environment | Q-Learning, Deep Q-Network, Policy Gradient | Robotics, gaming, autonomous navigation | Adaptive and flexible; may require large exploration time |
| Hybrid / Neuro-Symbolic AI | Combines learning and symbolic reasoning | Logic + Neural Networks | Complex reasoning, explainable AI | Balances interpretability and performance; emerging field |

Figure 1: Conceptual Framework of AI Paradigms and Applications
3. Assessment of Artificial Intelligence Systems
3.1 Performance Metrics and Evaluation Techniques
Artificial intelligence (AI) systems are evaluated using quantitative and qualitative frameworks that gauge the efficacy, efficiency, and practicality of the models. The task domain and learning paradigm have an impact on performance measures. Accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC) are frequently used measures in supervised learning that together encapsulate predictive capacity and class-wise performance (Sokolova & Lapalme, 2009).
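The supervised-learning metrics above follow directly from confusion-matrix counts. A from-scratch Python sketch on a hypothetical set of binary labels (production evaluations would normally use a library such as scikit-learn):

```python
# Computing accuracy, precision, recall, and F1 from first principles.

def classification_metrics(y_true, y_pred):
    """Return accuracy, precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Hypothetical predictions: 6 of 8 correct, one false positive, one false negative.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
print(acc, prec, rec, f1)  # 0.75 0.75 0.75 0.75
```

Note that on this balanced toy set all four metrics coincide; on imbalanced data they diverge, which is precisely why accuracy alone can mislead.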
For regression and forecasting tasks, metrics such as mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE) are frequently used to measure deviations between predictions and targets. In contrast, when labeled ground truth is not available, unsupervised learning models are frequently assessed using information-theoretic metrics, reconstruction error, or clustering validity indices (Hastie et al., 2009). Cumulative reward, convergence rate, and policy stability across episodes are commonly used to evaluate reinforcement learning systems (Dulac-Arnold et al., 2019).
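The regression metrics just mentioned can likewise be sketched directly from their definitions; the target and prediction values below are hypothetical:

```python
import math

# MSE, RMSE, and MAE computed from their textbook definitions.

def regression_errors(y_true, y_pred):
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(mse)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return mse, rmse, mae

y_true = [3.0, 5.0, 2.0, 7.0]
y_pred = [2.5, 5.0, 3.0, 6.5]
mse, rmse, mae = regression_errors(y_true, y_pred)
print(mse, rmse, mae)  # 0.375, ~0.612, 0.5
```

Because MSE squares the residuals, it penalizes large errors more heavily than MAE, which is why the two are often reported together.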
Beyond task-specific metrics, benchmarking has become a crucial assessment technique in AI research. Standardized datasets and challenges enable comparative evaluation across models and algorithms; yet an excessive dependence on benchmarks may drive performance improvements that are irrelevant in the real world (Recht et al., 2019). To guarantee practical reliability, newer assessment frameworks place a strong emphasis on cross-dataset validation, stress testing, and real-world performance monitoring.
3.2 Robustness, Reliability, and Generalizability
When evaluating AI systems, robustness and reliability are crucial factors, especially in high-impact and safety-critical fields. Robustness refers to the ability of an AI model to continue operating under noisy, imperfect, or adversarial conditions. Research has shown that even high-performing models may be susceptible to adversarial perturbations, which raises questions regarding their reliability and stability (Goodfellow et al., 2015).
Reliability encompasses consistency of model behavior across runs, datasets, and deployment scenarios. Over time, factors such as concept drift, data drift, and shifting environmental conditions can seriously impair AI performance. Consequently, post-deployment monitoring and ongoing evaluation have become essential elements of contemporary AI assessment practice (Gama et al., 2014).
The capacity of an AI system to function successfully on data that is not part of the training distribution is known as generalizability. Overfitting is still a significant problem, especially for deep learning models that were trained on big but limited datasets. To enhance generalization across tasks and domains, strategies including cross-validation, regularization, domain adaption, and transfer learning are frequently used (Zhang et al., 2021).
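Of the generalization strategies listed above, cross-validation is the most mechanical, and its index bookkeeping can be sketched in pure Python (libraries such as scikit-learn provide hardened equivalents with shuffling and stratification):

```python
# Sketch of k-fold cross-validation index generation: each sample appears
# in exactly one test fold, and the remaining samples form the train fold.

def k_fold_indices(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k folds."""
    indices = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

folds = list(k_fold_indices(10, 3))
for train, test in folds:
    print(len(train), len(test))
```

Averaging a model's metric over the k held-out folds gives a less optimistic estimate of out-of-sample performance than a single train/test split.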
In order to assess whether AI systems can adjust to real-world variability, recent research has placed an increased emphasis on robustness benchmarks and out-of-distribution testing. These evaluation methods go beyond conventional accuracy measurements, emphasizing the significance of adaptation and resilience in reliable AI systems.
3.3 Ethical, Legal, and Social Considerations
AI systems are evaluated for their ethical, legal, and societal ramifications in addition to their technical performance. Because AI models trained on biased datasets have the potential to reinforce or magnify already-existing social inequities, algorithmic bias and fairness have become major problems. AI assessment pipelines are progressively including bias auditing systems and fairness-aware evaluation criteria (Mehrabi et al., 2021).
Evaluation of ethical AI also heavily relies on explainability and transparency. Deep neural networks in particular frequently operate as "black boxes," making it difficult for humans to comprehend how they make decisions. Improved interpretability and accountability are the goals of explainable AI (XAI) approaches, particularly in regulated industries like healthcare, finance, and law (Arrieta et al., 2020).
Legally speaking, AI systems bring up issues with liability, accountability, data security, and regulatory framework compliance. The need for a methodical assessment of legal risks and governance frameworks is highlighted by the introduction of AI-specific rules and laws (Floridi et al., 2018). The significance of multifaceted AI assessment is further underscored by social factors, such as public trust, human autonomy, and job displacement.
Taken together, ethical, legal, and social evaluation frameworks indicate that AI assessment must adopt a human-centered and responsible approach, combining technical rigor with societal values and regulatory compliance.
Table 2: Assessment Metrics for AI Systems
| Metric | Definition / Calculation | Applicable AI Domains | Advantages | Limitations / Notes |
|---|---|---|---|---|
| Accuracy | Correct predictions / Total predictions | Classification tasks | Simple, intuitive | Can be misleading for imbalanced datasets |
| Precision / Recall | TP / (TP + FP), TP / (TP + FN) | Classification, NLP, healthcare | Balances false positives vs false negatives | Trade-off between precision and recall |
| F1-Score | 2 × (Precision × Recall) / (Precision + Recall) | Classification | Combines precision and recall | May not reflect absolute performance |
| ROC-AUC | Area under ROC curve | Binary classification | Measures discriminative ability | Less intuitive; requires probability outputs |
| Robustness | Performance under noisy/adversarial conditions | All AI domains | Evaluates reliability | Difficult to standardize across domains |
| Explainability | Degree of transparency / interpretability | ML/DL models | Improves trust and regulatory compliance | Often qualitative; hard to quantify |

Figure 2: AI Lifecycle and Assessment Framework
4. Present Developments in Artificial Intelligence
4.1 Advances in Machine Learning and Deep Learning
Improved model designs, optimization strategies, and scalable computational resources have all contributed to the significant evolution of machine learning (ML) and deep learning (DL) over the last ten years. Deep learning's ability to recognize intricate patterns in both structured and unstructured data has been greatly increased by architectures including residual networks, graph neural networks (GNNs), and attention-based models, which go beyond traditional neural networks (He et al., 2016; Wu et al., 2020). Advances in speech recognition, natural language comprehension, recommendation systems, and scientific research have been made possible by these advancements.
Self-supervised learning, which uses unlabeled data to develop generalizable representations without heavily relying on manual labeling, has advanced as a result of efforts to increase training efficiency and model robustness (Jing & Tian, 2020). Furthermore, by improving models' capacity to swiftly adjust to new tasks with few training instances, meta-learning and few-shot learning techniques have attempted to address data scarcity (Hospedales et al., 2020). When taken as a whole, these advancements have changed deep learning from task-specific models to more adaptable and flexible learning frameworks.
4.2 Generative AI and Natural Language Processing
One of the most significant advancements in current AI research is generative AI. Early advances in image synthesis and data generation tasks were made possible by models like variational autoencoders (VAEs) and generative adversarial networks (GANs) (Goodfellow et al., 2014; Kingma & Welling, 2019). The most significant recent development, however, has been the emergence of large language models (LLMs), which are transformer-based architectures that have been pre-trained on enormous text corpora and exhibit strong abilities in language comprehension, text generation, summarization, and translation (Vaswani et al., 2017).
Coherent text generation, conversational AI, question answering, and multimodal reasoning have all been made possible by models such as GPT, BERT, and their derivatives, which have significantly shifted benchmarks in natural language processing (NLP) (Devlin et al., 2019; Radford et al., 2019). By learning joint representations from text, image, and even audio data, multimodal generative models extend these capabilities to tasks such as cross-modal retrieval and image captioning (Lu et al., 2022). Generative AI has thus shifted the focus of research from predictive modeling to creative and generative understanding.
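At the core of the transformer architectures behind these models is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V (Vaswani et al., 2017). A pure-Python sketch for a single attention head on hypothetical 2-dimensional vectors:

```python
import math

# Single-head scaled dot-product attention, written out for clarity.
# Real implementations batch this over many heads and use tensor libraries.

def softmax(xs):
    m = max(xs)                                  # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Q, K, V are lists of vectors (lists of floats) of equal dimension."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)                # attention distribution
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Hypothetical toy inputs: the query aligns with the first key.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)
print(out)  # output is pulled toward V[0], the value of the matching key
```

The softmax weights make the output a convex combination of the value vectors, biased toward keys similar to the query, which is the mechanism that lets transformers relate distant tokens.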
4.3 Computer Vision and Multimodal Intelligence
Deep convolutional neural networks (CNNs) and vision transformers (ViTs) have revolutionized computer vision. High-fidelity performance in image classification, object detection, segmentation, and video analysis has been made possible by CNN-based architectures (Krizhevsky et al., 2012; He et al., 2017). By modifying NLP's attention processes, vision transformers increased the scalability of visual learning and made it possible to capture long-range relationships more effectively (Dosovitskiy et al., 2021).
Multimodal intelligence, in which models comprehend and combine data from several sensory modalities such as vision, language, and audio, is a crucial area of AI. To perform cross-domain tasks such as visual question answering, speech-and-image reasoning, and embodied perception, multimodal learning frameworks integrate feature representations from multiple modalities (Baltrušaitis et al., 2019; Tsai et al., 2020). These systems facilitate more natural human–machine interaction and more closely approximate human perception of the real world.
4.4 Explainable and Trustworthy AI
Transparency, interpretability, and reliability have become critical as AI technologies are deployed in sensitive fields including healthcare, finance, and autonomous systems. Explainable AI (XAI) research aims to provide methods that open the "black box" of complex models and offer comprehensible insights into decision-making, feature influence, and model reasoning (Samek et al., 2019). For post-hoc interpretation, techniques such as SHAP, LIME, and layer-wise relevance propagation are frequently employed.
Trustworthiness includes fairness, robustness, privacy, and accountability in addition to interpretability. Fairness-aware machine learning aims to identify and reduce biases in training data and model predictions so as to ensure fair treatment of all demographic groups (Mehrabi et al., 2021). Robustness research focuses on resilience to adversarial attacks and distribution shifts, while privacy-preserving techniques such as federated learning and differential privacy help protect sensitive data (Yang et al., 2019). These initiatives collectively seek to bring AI development into line with social, ethical, and legal requirements.
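As a minimal illustration of model-agnostic, post-hoc explanation in the spirit of (but far simpler than) SHAP or LIME, permutation importance scores a feature by how much shuffling its values degrades the model's accuracy. The model and data below are hypothetical:

```python
import random

# Permutation importance: shuffle one feature column and measure the
# resulting accuracy drop. A feature the model ignores scores ~0.

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    """Mean accuracy drop when the `feature` column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(trials):
        col = [x[feature] for x in X]
        rng.shuffle(col)
        X_perm = [x[:feature] + [c] + x[feature + 1:]
                  for x, c in zip(X, col)]
        drops.append(base - accuracy(model, X_perm, y))
    return sum(drops) / trials

# Hypothetical model: predicts from feature 0 only; feature 1 is noise.
def model(x):
    return 1 if x[0] > 0.5 else 0

X = [[0.9, 0.1], [0.8, 0.9], [0.2, 0.8], [0.1, 0.2],
     [0.7, 0.5], [0.3, 0.6], [0.6, 0.4], [0.4, 0.7]]
y = [model(x) for x in X]

print(permutation_importance(model, X, y, feature=0))  # clearly positive
print(permutation_importance(model, X, y, feature=1))  # exactly 0: unused
```

Because it treats the model as a black box, this style of audit applies equally to deep networks and tree ensembles, which is what makes post-hoc XAI methods broadly usable.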
5. Uses and Applications of Artificial Intelligence
5.1 Healthcare and Life Sciences
Artificial intelligence has revolutionized healthcare and the life sciences by enabling predictive, preventive, and personalized medicine. AI systems are increasingly utilized in clinical practice to predict outcomes, assist with diagnosis, and detect diseases early. Deep learning models trained on medical imaging data, such as radiology, pathology, and ophthalmology images, have shown strong diagnostic accuracy for diseases such as cancer, cardiovascular disease, and diabetic retinopathy (Rajpurkar et al., 2017; Shen et al., 2017). By increasing consistency and decreasing the labor associated with diagnosis, these systems support clinicians.
Beyond diagnosis, AI helps optimize clinical workflows through automated triage, patient monitoring, and electronic health record (EHR) analysis. Natural language processing techniques can extract therapeutically important insights from unstructured medical notes, enhancing decision-making and hospital efficiency (Johnson et al., 2016). Predictive analytics driven by AI can also help identify patients at risk of deterioration, readmission, or adverse drug reactions.
In the life sciences, AI is essential to systems biology, genomics, and drug discovery. Machine learning algorithms greatly shorten development timelines by speeding up target selection, chemical screening, and lead optimization (Mak & Pichika, 2019). AI-driven genomic analysis promotes individualized treatment plans and makes it possible to identify disease-associated variants. Notwithstanding these advances, issues of clinical validation, model interpretability, and data privacy remain crucial for the responsible use of AI in healthcare.
5.2 Industrial Automation and Smart Manufacturing
Industry 4.0 is made possible in large part by artificial intelligence, which supports data-driven optimization, adaptive control, and intelligent automation in production systems. In order to detect equipment failures and minimize unscheduled downtime and maintenance expenses, AI-based predictive maintenance models examine sensor and operational data (Carvalho et al., 2019). These solutions facilitate condition-based maintenance methods and improve asset reliability.
AI-driven robotics and computer vision systems enhance quality control, defect identification, and process automation in industrial settings. By adjusting to variations in materials, lighting, and manufacturing conditions, deep learning-based visual inspection systems perform better than conventional rule-based techniques (Wuest et al., 2016). By dynamically modifying control parameters, reinforcement learning techniques are also used to optimize manufacturing processes.
Cyber-physical systems and digital twins, which combine real-time data with AI-based simulation and optimization, are becoming more and more integrated into smart manufacturing. According to Leng et al. (2021), these technologies facilitate sustainable manufacturing methods, energy efficiency, and predictive analytics. Concerns about cybersecurity, personnel upskilling, and system interoperability are becoming more crucial as industrial AI develops.
5.3 Finance and Intelligent Decision Systems
The use of AI in finance has revolutionized decision automation, market analysis, and risk assessment. Machine learning algorithms are widely used in credit risk modeling, where they examine diverse data sources to increase loan-approval accuracy and default prediction (Bussmann et al., 2021). Similarly, AI-powered fraud detection systems continuously scan transactional data to spot unusual patterns in real time.
In capital markets, AI aids algorithmic trading and portfolio management by analyzing massive amounts of structured and unstructured data, such as market indicators and textual information from news and financial reports (Sirignano & Cont, 2019). NLP-based sentiment analysis improves predictions by capturing investor activity and market mood.
AI-driven decision-making systems are being utilized in insurance for automated customer support, claims processing, and pricing. However, explainability, accountability, and systemic risk are among the ethical and regulatory issues brought up by the opacity of sophisticated models. In order to secure regulatory compliance and uphold public trust, financial institutions are progressively using explainable AI approaches and governance structures.
5.4 Education, Agriculture, and Smart Cities
By examining learner behavior, preferences, and performance, AI in education enables personalized and adaptive learning environments. Intelligent tutoring systems promote better learning outcomes and student engagement by offering personalized feedback, pacing, and subject recommendations (Zawacki-Richter et al., 2019). AI-powered learning analytics also help teachers assess the success of their lessons and identify students who are at risk.
Through precision-farming technologies, AI facilitates data-driven decision-making in agriculture. To maximize crop productivity, irrigation, fertilization, and disease control, machine learning models evaluate data from satellites, drones, and soil sensors (Liakos et al., 2018). By lowering resource waste and environmental impact while raising productivity, these methods support sustainable agriculture.
AI enhances urban resilience and efficiency in smart cities by integrating data from public services, energy grids, transportation systems, and surveillance networks. Applications include emergency response optimization, environmental monitoring, intelligent traffic control, and predictive energy management (Yigitcanlar et al., 2020). Although AI-powered smart city solutions improve quality of life, issues with data privacy, surveillance, and governance underscore the necessity of inclusive and transparent urban AI policies.
Table 3: AI Applications Across Sectors
| Sector | Representative AI Techniques | Applications | Benefits / Impact | Challenges / Considerations |
|---|---|---|---|---|
| Healthcare | CNN, RNN, NLP, Reinforcement Learning | Diagnostics, drug discovery, predictive analytics | Early detection, personalized treatment | Data privacy, interpretability |
| Industry / Manufacturing | Predictive maintenance, Robotics, Digital Twins | Quality control, process optimization | Reduced downtime, improved efficiency | Scalability, integration with legacy systems |
| Finance | Machine Learning, NLP, Fraud detection algorithms | Credit scoring, algorithmic trading | Risk reduction, faster decision-making | Algorithmic bias, regulatory compliance |
| Education | Adaptive learning systems, NLP | Personalized learning, assessment automation | Improved student engagement, tailored content | Data representativeness, digital divide |
| Agriculture | ML, Drone/IoT integration, Computer Vision | Precision farming, disease detection | Higher yield, resource optimization | Sensor costs, environmental variability |
| Smart Cities | Computer Vision, IoT analytics, Reinforcement Learning | Traffic management, energy optimization | Sustainable urban planning, efficiency | Privacy, cybersecurity, standardization |
6. Societal and Economic Impact of Artificial Intelligence
6.1 Workforce Transformation and Economic Implications
By automating repetitive jobs, augmenting human capabilities, and facilitating new types of work, artificial intelligence is fundamentally changing labor markets and economic systems. AI-driven automation has boosted productivity in industries such as manufacturing, logistics, finance, and services while simultaneously changing skill needs and occupational profiles (Acemoglu & Restrepo, 2020). Roles that emphasize creativity, social intelligence, and sophisticated decision-making are more resilient to automation than tasks involving repetitive cognitive or physical activities.
Beyond displacing workers, AI also creates and reshapes jobs, generating demand for new roles in data science, AI engineering, system maintenance, and AI governance. The benefits of AI are, however, concentrated in regions and organizations with robust digital infrastructure and human capital, resulting in an uneven economic impact. If reskilling and inclusive growth policies are not put into place, this disparity risks widening regional economic gaps and income inequality (Brynjolfsson & McAfee, 2014).
At the macroeconomic level, AI-driven productivity gains could enhance long-term economic growth. Nevertheless, the redistribution of economic value between labor and capital, employment precarity, and wage polarization remain open issues. Coordinated policy initiatives centered on lifelong learning, education reform, and social safety nets are needed to address these issues and ensure that AI-driven growth is sustainable and inclusive.
6.2 Human–AI Interaction and Trust
As AI systems interact with people more frequently, trust has become a key determinant of their uptake and efficacy. Research on human–AI interaction highlights that trust depends not only on system performance but also on perceived intent, transparency, and dependability (Hancock et al., 2011). Users who understand the capabilities and limitations of AI systems are more likely to rely on them appropriately, especially in high-stakes domains such as healthcare, finance, and autonomous transportation.
Explainability and user-centered design are essential for building trust. Interfaces that provide relevant explanations, uncertainty estimates, and opportunities for human oversight allow users to calibrate their trust appropriately, preventing both over-reliance on and under-utilization of AI systems (Lee & See, 2004). Trust calibration is particularly important when AI systems make probabilistic or context-dependent decisions.
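One common way to quantify whether a model's stated confidence can be trusted is the expected calibration error (ECE), which compares predicted confidence against empirical accuracy within confidence bins. The sketch below is illustrative only; the function name and toy data are our own, not drawn from any cited system.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and compare the average confidence
    with the empirical accuracy in each bin; 0 means perfect calibration."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its population
    return ece

# A perfectly calibrated toy case: 80%-confident predictions right 80% of the time
conf = [0.8] * 10
hit = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
print(round(expected_calibration_error(conf, hit), 3))  # → 0.0
```

A model whose confidences systematically exceed its accuracy (over-confidence) produces a large ECE, which is exactly the situation in which users risk over-relying on its outputs.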
Human responses to AI are further shaped by cultural, social, and psychological factors. Research shows that public adoption of AI technologies is strongly influenced by perceptions of accountability, justice, and value alignment (Madhavan & Wiegmann, 2007). As AI systems become more autonomous and socially integrated, designing for reliable human–AI collaboration, rather than automation alone, has emerged as a major research and policy priority.
6.3 Governance, Regulation, and Responsible AI
The rapid deployment of AI has spurred growing interest in governance frameworks that guarantee its ethical, lawful, and socially acceptable use. Responsible AI emphasizes fairness, accountability, transparency, privacy protection, and human oversight throughout the AI lifecycle (Floridi et al., 2018). These principles seek to reduce risks such as discrimination, algorithmic bias, misuse of surveillance, and unintended societal harm.
Regulatory approaches to AI vary globally, reflecting different legal traditions and societal values. Governments and international organizations have proposed AI-specific regulations covering data governance, risk assessment, and accountability procedures, particularly for high-risk applications (Jobin et al., 2019). Such frameworks aim to balance innovation with the public interest by encouraging trustworthy AI without impeding technical progress.
Effective AI governance also requires organizational practices such as impact assessments, model audits, and interdisciplinary oversight involving technologists, ethicists, and policymakers. As AI systems increasingly shape societal outcomes, proactive governance and international cooperation are crucial to ensuring that AI development aligns with democratic principles, human rights, and long-term societal well-being.
7. Opportunities and Future Directions
7.1 AI for Sustainable and Inclusive Development
By facilitating evidence-based decision-making in the social, economic, and environmental spheres, artificial intelligence presents enormous potential for equitable and sustainable development. AI-driven analytics enable better climate modeling, renewable-energy forecasting, and resource management, supporting climate change adaptation and mitigation efforts (Rolnick et al., 2019). In environmental monitoring, machine learning models that analyze sensor data and satellite imagery track deforestation, biodiversity loss, air quality, and water resources in near real time.
From a societal standpoint, AI could reduce inequality by improving access to financial, healthcare, and educational resources in underserved areas. AI-powered mobile health platforms, inexpensive diagnostic tools, and intelligent tutoring systems can fill gaps where traditional infrastructure is lacking. To avoid exacerbating existing social injustices, however, inclusive AI development must address digital divides, data representativeness, and algorithmic bias (Crawford & Paglen, 2021).
Future research must prioritize AI-for-good frameworks, participatory system design, and alignment with global development goals. Incorporating ethical safeguards, transparency, and local context into AI solutions will be crucial to ensuring that technological progress yields fair and lasting societal outcomes.
7.2 Autonomous and Adaptive Intelligent Systems
The evolution of AI toward autonomous and adaptive systems is an important direction in intelligent system design. Advances in reinforcement learning, self-supervised learning, and continual learning enable systems that can operate with minimal human involvement and adapt to dynamic, uncertain environments (Kober et al., 2013). Applications such as robotic exploration, driverless cars, smart grids, and large-scale infrastructure management depend on these capabilities.
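The reinforcement-learning paradigm behind many such systems can be illustrated with tabular Q-learning, where an agent improves a value table purely from trial-and-error interaction. The corridor environment and hyperparameters below are hypothetical toys chosen for brevity, not from any cited work.

```python
import numpy as np

# Toy 1-D corridor: states 0..4, actions move left/right, reward only at the right end.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # left, right

def q_learning(episodes=300, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, len(ACTIONS)))
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Epsilon-greedy: occasionally explore a random action
            a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else 0.0
            # Standard Q-learning update: bootstrap from the best next action
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

Q = q_learning()
policy = [["left", "right"][int(Q[s].argmax())] for s in range(GOAL)]
print(policy)  # the learned greedy policy should move right toward the goal
```

No model of the environment is given in advance; the agent discovers the goal-seeking policy purely from the reward signal, which is why this family of methods suits settings with minimal human involvement.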
Adaptive AI systems are increasingly expected to demonstrate resilience to adversarial conditions, robustness to distribution shifts, and lifelong learning. These demands challenge conventional static training paradigms and spur research into hybrid human–AI control systems, online learning, and meta-learning (Parisi et al., 2019). The combination of data-driven learning with symbolic reasoning is also attracting interest as a means of achieving more dependable and interpretable autonomy.
Despite their potential, autonomous systems raise issues of safety, accountability, and human oversight. Future advances will require formal verification techniques, human-in-the-loop design, and regulations that specify acceptable degrees of autonomy while preserving human agency and control.
7.3 Research Gaps and Emerging Innovation Frontiers
Despite the rapid advancement of AI capabilities, a number of fundamental research gaps remain. Most existing AI systems perform well only in environments similar to their training data, making generalization beyond specific tasks a significant challenge. Reaching robust general intelligence will require improvements in causal reasoning, abstraction, and transfer learning (Lake et al., 2017).
Sustainable and energy-efficient AI is another developing field. The growing computational demands of large-scale models have raised concerns about their environmental impact, prompting research into edge AI, neuromorphic computing, and efficient architectures (Schwartz et al., 2020). Furthermore, the absence of established metrics for robustness, fairness, and societal impact limits rigorous evaluation of AI systems.
Future developments are anticipated at the intersection of AI and fields such as neuroscience, materials science, and the social sciences. Interdisciplinary research, open science practices, and responsible innovation frameworks will be essential to minimizing unintended consequences and converting AI advances into long-term social value.
8. Challenges and Open Issues
8.1 Data Quality, Bias, and Privacy Concerns
The efficacy of artificial intelligence systems depends critically on the quality, diversity, and integrity of the data used for training and deployment. In practice, real-world datasets are frequently imbalanced, noisy, or incomplete, leading to biased model behavior and reduced generalizability. Bias in training data can produce discriminatory outcomes across demographic groups, particularly in sensitive applications such as recruiting, credit scoring, healthcare, and law enforcement (Mehrabi et al., 2021).
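One simple way such bias is surfaced in practice is by measuring demographic parity: the gap in positive-prediction rates between groups. The sketch below is a minimal illustration with invented data; real bias audits use many complementary metrics.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups;
    0 means the model approves both groups at the same rate."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical screening model: group 0 approved 3/4 of the time, group 1 only 1/4
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # → 0.5
```

A gap of 0.5 between groups would be a strong signal to inspect the training data and features before deployment in a domain like credit scoring or recruiting.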
Privacy is another significant issue, since AI systems typically depend on vast amounts of behavioral and personal data. The collection, storage, and processing of such data carry risks of unauthorized access, data leakage, and misuse. Although strategies such as secure multiparty computation, federated learning, and differential privacy show promise, they frequently entail trade-offs between model performance and privacy protection (Kairouz et al., 2021).
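The privacy-utility trade-off is visible in the classic Laplace mechanism of differential privacy: noise scaled to the query's sensitivity divided by the privacy budget epsilon is added to every released statistic, so stronger privacy (smaller epsilon) means noisier answers. The values below are hypothetical, for illustration only.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a noisy statistic satisfying epsilon-differential privacy.
    The noise scale sensitivity/epsilon grows as the privacy budget shrinks,
    directly trading accuracy for privacy."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

rng = np.random.default_rng(42)
true_count = 120  # e.g. number of patients matching a query
# A counting query changes by at most 1 when one person is added or removed,
# so its sensitivity is 1.
noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(round(noisy, 2))  # a noisy answer near 120; the exact value depends on the seed
```

Averaged over many releases the noise cancels out, but any single answer is deliberately perturbed, which is exactly the performance cost the text refers to.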
Addressing data-related issues requires standardized data governance procedures, bias-auditing tools, and transparent documentation of datasets and models. Without systematic approaches to data stewardship, the legal viability and social acceptance of AI systems remain at risk.
8.2 Scalability, Energy Efficiency, and Security
As AI models continue to grow in size and complexity, scalability and energy efficiency have emerged as key technological and environmental concerns. Training and deploying large models demands substantial computing power, increasing energy consumption and carbon emissions. This trend calls the long-term sustainability of AI development into question, especially as AI adoption grows internationally (Strubell et al., 2019).
Scalability issues also affect system deployment, where latency, reliability, and infrastructure constraints can hamper real-world performance. Edge AI and model compression techniques offer viable alternatives by enabling efficient inference on resource-constrained devices. Nonetheless, maintaining consistent performance across distributed environments remains a research challenge.
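A representative compression technique is post-training quantization, which stores weights as 8-bit integers plus a single floating-point scale, cutting memory roughly 4x relative to 32-bit floats at the cost of a small rounding error. The sketch below is a simplified symmetric scheme on a random stand-in tensor; production toolchains use per-channel scales and calibration data.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric 8-bit quantization: int8 weights plus one float scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)  # stand-in weight tensor
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(q.nbytes, w.nbytes)  # → 1000 4000  (4x smaller in memory)
print(err <= scale / 2 + 1e-4)  # rounding error is bounded by half a quantization step
```

The bounded reconstruction error is why int8 inference usually costs only a small accuracy drop while making deployment on resource-constrained edge devices feasible.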
Security vulnerabilities further complicate AI scalability. Adversarial attacks, data poisoning, and model inversion threats leave AI systems susceptible to manipulation and abuse, particularly in safety-critical domains (Papernot et al., 2018). Developing reliable, secure, and verifiable AI systems is therefore essential to preserving trust and operational integrity at scale.
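The fast gradient sign method (FGSM) illustrates how little perturbation an adversarial attack can need: each input feature is nudged by a small epsilon in the direction that increases the model's loss. The toy logistic-regression weights and input below are invented for the demonstration; attacks on deep networks follow the same principle via backpropagated gradients.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM on a logistic-regression model: move each feature by eps in the
    direction of the loss gradient with respect to the input."""
    grad_x = (sigmoid(w @ x + b) - y) * w  # d(log-loss)/dx for label y
    return x + eps * np.sign(grad_x)

# Hypothetical trained classifier and a correctly classified positive input
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.6, 0.1]), 1.0  # w @ x + b = 1.1, predicted positive
x_adv = fgsm(x, y, w, b, eps=0.6)
print(sigmoid(w @ x + b) > 0.5)    # → True  (clean input classified correctly)
print(sigmoid(w @ x_adv + b) > 0.5)  # → False (perturbed input flips the decision)
```

The flipped prediction from a bounded per-feature change is what makes such attacks dangerous in safety-critical deployments, and it motivates adversarial training and certified-robustness research.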
8.3 Policy, Ethics, and Global Standardization
The creation of coherent policy and regulatory frameworks has lagged behind the rapid pace of AI progress. Policymakers face the difficult task of promoting innovation while defending democratic principles, human rights, and public interests. Because many AI systems are complex and opaque, implementing ethical principles such as accountability, transparency, and algorithmic fairness remains challenging (Mittelstadt et al., 2016).
Regional differences in regulatory strategy further complicate AI governance. Disparities in legal standards, cultural norms, and economic priorities hamper the development of widely recognized AI principles and interoperability standards. Without international coordination, the risk of regulatory fragmentation and unequal protection for users and affected communities increases.
Multidisciplinary cooperation between governments, business, academia, and civil society is essential to future advancement. To guarantee that AI technologies are used responsibly, openly, and for the good of society as a whole, it will be essential to create shared standards, impact assessment frameworks, and cross-border governance systems.

Figure 3: Opportunities and Challenges of AI
9. Conclusion
Artificial intelligence has developed into a transformative, general-purpose technology across science, the economy, and society. This review has covered the foundations of AI, recent technological advances, evaluation techniques, practical applications, societal effects, future prospects, and enduring challenges. Taken together, these perspectives demonstrate AI's dual role as a potent catalyst for innovation and a source of complex technological, ethical, and governance issues.
Recent developments in machine learning, generative models, multimodal intelligence, and autonomous systems have greatly expanded the breadth and efficacy of AI applications in healthcare, industry, finance, education, and smart infrastructure. However, problems with data quality, algorithmic bias, energy consumption, security flaws, and reliability highlight the limitations of existing AI systems and the risks of unregulated deployment.
Artificial intelligence's long-term success will depend not only on algorithmic performance but also on responsible design, inclusive governance, and meaningful human oversight. Addressing workforce transformation, ethical accountability, and global standardization is essential to ensure that AI-driven progress serves society as a whole rather than exacerbating existing disparities. To close the gap between technological capability and societal readiness, future research must emphasize robustness, explainability, sustainability, and interdisciplinary collaboration.
In summary, artificial intelligence represents both an unparalleled opportunity and a shared responsibility. If technical progress is matched with ethical standards, legal frameworks, and human-centered values, AI can be a driving force behind sustainable development, economic resilience, and enhanced quality of life in the decades to come.
10. Acknowledgements
The authors sincerely acknowledge the support of colleagues and peers who provided valuable insights during the preparation of this review.
11. Conflict of Interest
The authors declare that there are no conflicts of interest relevant to the content of this review.
12. References