My experience from my daily work... helping others and easing each other's load

Saturday, April 19, 2025

Google Gen AI 5-Day Intensive Course - Capstone Project - AI Food Agent by Haszeli

🍽️ NakMakanApa – A Generative AI Companion for Personalized Meal Discovery

🧠 Introduction

Every Malaysian has faced the timeless question: “Nak makan apa?” ("What do I want to eat?"). Whether you're a busy parent, a health-conscious professional, or someone simply staring at a fridge of random ingredients, deciding what to eat can be surprisingly stressful.

NakMakanApa is a Generative AI-powered food recommendation system that turns that daily dilemma into an intelligent, personalized experience. Built as part of the GenAI Intensive Course Capstone Project, this solution blends natural language understanding, image recognition, and AI-driven reasoning to guide users toward delicious, healthy, and culturally relevant meals.

🌟 What Makes NakMakanApa Special?

This isn’t just a recipe finder. NakMakanApa is a full-featured AI agent capable of:

  • 🗣️ Understanding user prompts like “saya nak makanan pedas dan sihat untuk jantung” (I want something spicy and heart-healthy)

  • 🖼️ Interpreting images of fridge contents to identify usable ingredients

  • 🧠 Personalizing meal suggestions based on preferences, health goals, and local cuisine

  • 📦 Fetching recipes via vector search and fallback to Gemini LLM if needed

  • 🧾 Generating structured summaries and advice about the meal’s health benefits

  • 📄 Exporting everything as a clean PDF or TXT document


🔧 How It Works – The Process

The project unfolds in 8 major steps:

  1. Prepare a structured recipe dataset as a DataFrame (df_recipes)

  2. Install and configure required libraries (LangChain, FAISS, Gemini SDK, YOLOv8, etc.)

  3. Allow image input – users can upload a fridge/ingredient photo

  4. Detect ingredients via YOLOv8 and match them with user preferences (multilingual support!)

  5. Search local FAISS vector index for best recipe matches

  6. Fallback to Gemini if no good match is found, auto-updating the dataset & vector DB

  7. Summarize recipe & highlight health benefits, giving users a clear reason for the recommendation

  8. Generate PDF or TXT output, with beautiful formatting, for sharing or saving offline
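Steps 5 and 6 (local vector search with an LLM fallback) can be sketched as follows. Everything here is an illustrative stand-in, not the project's actual code: a toy bag-of-words embedding takes the place of the real FAISS index and semantic embeddings, and the recipe entries are made up.

```python
import numpy as np

# Toy recipe "dataset" (stand-in for the df_recipes DataFrame in step 1).
RECIPES = [
    {"title": "Ayam Masak Merah", "tags": "spicy chicken tomato"},
    {"title": "Ikan Bakar", "tags": "grilled fish heart-healthy"},
]

VOCAB = ["spicy", "chicken", "fish", "heart-healthy", "grilled", "tomato"]

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words embedding; the real system uses semantic embeddings."""
    v = np.array([float(w in text.lower()) for w in VOCAB])
    n = np.linalg.norm(v)
    return v / n if n else v

# Precompute the "index" (stand-in for the FAISS vector store in step 5).
INDEX = np.stack([embed(r["tags"]) for r in RECIPES])

def recommend(query: str, threshold: float = 0.3):
    """Return the best local match, or None to signal a Gemini fallback (step 6)."""
    sims = INDEX @ embed(query)
    best = int(np.argmax(sims))
    return RECIPES[best] if sims[best] >= threshold else None

print(recommend("something spicy with chicken"))  # local hit: Ayam Masak Merah
print(recommend("vegan dessert"))                 # None -> fall back to the LLM
```

When the fallback fires, the real pipeline asks Gemini for a fresh recipe and appends it to both the dataset and the vector index, so the next similar query is served locally.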

Example of the screen on Kaggle:



💡 GenAI Capabilities Applied

  • 📊 Structured Output / JSON Mode: Recipes returned in structured JSON (title, ingredients, steps, tags)

  • 🧠 Image Understanding: YOLOv8 used to detect food items in photos

  • 📚 Retrieval-Augmented Generation (RAG): FAISS vector search blended with a Gemini fallback

  • 🔍 Vector Store: Efficient similarity search over recipes using semantic embeddings

  • 🤖 Agents (Bonus): The system behaves like an AI agent, orchestrating input parsing, generation, and summarization seamlessly

🧪 Results & Achievements

✅ Users can receive personalized meal suggestions from both text and image input
✅ Recipes include clear, AI-generated summaries explaining their health value
✅ The system handles English and Malay prompts with ease
✅ Users can export results to PDF, making meal planning effortless
✅ Every process includes robust error handling for reliability


🚀 What’s Next?

This project is just the beginning. Future enhancements include:

  • 📱 A mobile app version for everyday usage

  • 🧮 Nutritional analysis (calories, macros, allergens)

  • 🛒 Smart shopping list generation

  • 🌏 Expansion to Middle Eastern and global cuisines

  • 👩‍🍳 Community recipes and feedback-driven learning


👨‍🎓 Final Thoughts

NakMakanApa is more than a technical showcase—it’s a vision for how Generative AI can enrich daily life in culturally meaningful and health-conscious ways. By combining NLP, computer vision, RAG, and AI reasoning into a seamless user flow, we created an experience that feels human, helpful, and very Malaysian.

This project proves that AI can do more than automate tasks—it can guide, support, and inspire better living.


Wednesday, April 16, 2025

Food Discovery AI Agent

# 🍽️ Food Discovery AI Agent (Malaysian/Asian Cuisine)

This project is a **proof-of-concept AI agent** that recommends **what to eat** and **where to go** based on:

- ✅ User preferences and dietary restrictions  

- 🕒 Time of day (e.g., lunch, dinner)  

- 🌦️ Real-time weather at your current location  

- 📍 Your destination or current location  

- 🧠 LLM-powered reasoning (OpenAI GPT-3.5)  


### 💡 How It Works

1. **User Input**: Collects dietary preferences, time, and destination

2. **Weather Fetch**: Pulls real-time weather via OpenWeather API

3. **Place Discovery**: Uses Google Places API to find nearby restaurants or hawkers

4. **LLM Reasoning**: GPT analyzes the context and filters/summarizes results

5. **Visual Map**: Recommended spots are plotted on an interactive map

6. **Export**: CSV file download available

7. **Summary Report**: Optional AI-written recap of the session
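Steps 1–4 above can be sketched as pure logic, without any network calls. All names, dietary tags, and the weather dict below are illustrative placeholders, not this project's actual code; in the real flow the weather dict would come from the OpenWeather API and the places list from Google Places.

```python
def build_context(prefs, time_of_day, weather):
    """Assemble the context the LLM reasons over (step 4)."""
    return (
        f"User wants {time_of_day}; dietary needs: {', '.join(prefs)}. "
        f"Current weather: {weather['desc']}, {weather['temp_c']}°C."
    )

def filter_places(places, prefs, weather):
    """Simple pre-filter before LLM summarization: respect dietary tags,
    and prefer indoor venues when it is raining."""
    keep = []
    for p in places:
        if not set(prefs) <= set(p["tags"]):
            continue  # missing a required dietary tag
        if "rain" in weather["desc"] and not p["indoor"]:
            continue  # skip open-air hawker stalls in the rain
        keep.append(p)
    return keep

weather = {"desc": "light rain", "temp_c": 27}
places = [
    {"name": "Hawker Stall A", "tags": ["halal"], "indoor": False},
    {"name": "Kopitiam B", "tags": ["halal", "vegetarian"], "indoor": True},
]
picked = filter_places(places, ["halal"], weather)
print([p["name"] for p in picked])  # ['Kopitiam B']
```

The filtered list plus the context string is what gets handed to GPT-3.5 for final ranking and summarization.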


### 🔐 API Keys Required

- OpenWeatherMap

- Google Places

- OpenAI (for summary)


Built with ❤️ for Malaysian/Asian food lovers 🍜🇲🇾  


Thursday, March 27, 2025

Secure By Design: Security in Mind

Introduction

Imagine this: You’ve just finished building your dream house. It’s beautiful, modern, everything you’ve ever wanted. But then, as you’re about to move in, you realize — oh no, there are no locks on the doors. Now, instead of enjoying your new home, you’re stuck trying to retrofit security into something that wasn’t designed with it in mind.

Sounds crazy, right? Well, guess what — that’s exactly how a lot of software gets built today. We focus so much on making things work and look good that we forget to lock the doors. And when bad guys come knocking (and trust me, they will), we’re left scrambling to fix the mess.

This is something I’ve thought about a lot. With my background in IT and software security — yep, I even have a Master’s degree in it — I’ve spent years studying how vulnerabilities happen and how we can stop them before they cause trouble. What I’ve learned is simple: Security isn’t something you tack on at the end. It’s something you build in from the start.


The Evolution of Secure Software Development

Let’s rewind a bit. Back in the early 2000s, Microsoft was getting hammered for all the security flaws in its products. People were frustrated, and Microsoft knew they had to do something. So, Bill Gates sent out a memo to his teams saying, “Hey, from now on, trustworthy computing is our top priority.” That memo led to the creation of the Security Development Lifecycle (SDL) — a process that made security a core part of every step of software development.

And guess what? It worked. Over time, Microsoft not only reduced the number of vulnerabilities in its products but also set a new standard for secure software development. Even Linux, long regarded as highly secure, had to work to keep pace. The lesson here? If you bake security into your process from the beginning, you save yourself a ton of headaches later.


Modern Approaches to Security in Development

So, how do we make sure security is part of the process? Let me break it down for you.

1. The Three Pillars of Software Security

There are three main ways we test software for vulnerabilities:

Static Analysis: This is like proofreading your code before it goes live. You check for mistakes while the code is still sitting there, untouched.
Dynamic Analysis: This happens when the code is running. It’s like watching someone use your app in real-time and seeing if anything breaks or looks suspicious.
Hybrid Analysis: This combines the best of both worlds — static and dynamic testing — to give you a complete picture of your software’s security.

But tools alone won’t cut it. What really matters is the mindset. When you’re writing code, you need to think, “How could someone misuse this?” That’s what we call Secure by Design—building security into the DNA of your software.
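As a toy illustration of static analysis, the checker below inspects Python source without ever executing it, flagging calls to eval and exec, two classic code-injection risks. It's a deliberately minimal sketch, not a production tool:

```python
import ast

DANGEROUS = {"eval", "exec"}

def find_dangerous_calls(source: str):
    """Static analysis: walk the syntax tree, never run the code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS:
                findings.append((node.func.id, node.lineno))
    return findings

sample = "x = input()\nresult = eval(x)\n"
print(find_dangerous_calls(sample))  # [('eval', 2)]
```

A dynamic analyzer, by contrast, would actually run the program with crafted inputs and watch what happens; hybrid analysis does both.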

2. SecDevOps: Making Security Everyone’s Job

Now, let’s talk about DevOps. If you’re not familiar with it, don’t worry — it’s just a fancy way of saying, “We’re going to build, test, and release software faster.” But here’s the problem: In traditional DevOps, security often gets left behind. Developers are racing to push features out the door, and security becomes an afterthought.

That’s why we have DevSecOps — where security is integrated into the DevOps process. Some people even prefer the term SecDevOps, which flips the order to show that security comes first. I like this idea because it reminds us that security isn’t just one team’s job — it’s everyone’s responsibility.

To make this work, we focus on two key practices, on top of CI/CD:

Continuous Testing: Running security checks at every stage of development, not just at the end.
Continuous Security: Keeping an eye on security throughout the entire lifecycle of the software.

By shifting security “left” (earlier in the process), we catch problems before they become big, expensive disasters.


Balancing Security and Business Demands

Here’s the tricky part: Developers are under pressure to deliver features fast. Businesses want results yesterday. But if we rush too much, we risk leaving the doors wide open for attackers. So, how do we find the balance?

It comes down to risk management. Instead of trying to fix every single issue, we focus on the biggest risks first. For example, if a vulnerability could expose customer data, that’s a top priority. If it’s something minor, maybe we can address it later.

The goal is to move fast without breaking things. Security shouldn’t slow you down — it should help you go faster by preventing costly mistakes.
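One common way to make "biggest risks first" concrete is a likelihood-times-impact score. The findings, scales, and scores below are purely illustrative:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Classic risk matrix: both on a 1-5 scale, higher is worse."""
    return likelihood * impact

findings = [
    {"issue": "SQL injection exposing customer data", "likelihood": 4, "impact": 5},
    {"issue": "Verbose error page", "likelihood": 3, "impact": 2},
    {"issue": "Outdated TLS config", "likelihood": 2, "impact": 4},
]

# Fix the highest-scoring risks first; defer the minor ones.
ranked = sorted(
    findings,
    key=lambda f: risk_score(f["likelihood"], f["impact"]),
    reverse=True,
)
for f in ranked:
    print(f["issue"], "->", risk_score(f["likelihood"], f["impact"]))
```

The customer-data exposure (score 20) gets fixed now; the verbose error page (score 6) can wait for a later sprint.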


Conclusion: Secure First, Deploy Smart

Here’s the bottom line: Security isn’t something you can slap on at the end — it has to be built in from the start. Whether your team is using Agile methodologies, adopting SecDevOps, leveraging program analysis tools, or following frameworks like Microsoft’s Security Development Lifecycle (SDL), the key is simple: Ensure it’s secure by design.

Think of it this way — no matter what tools or processes you use, they’re only as effective as the mindset behind them. If security is treated as an afterthought, even the best tools won’t save you. But if you embed security into every step of your process — whether you’re writing code, running tests, or deploying features—you’re setting yourself up for success.

So, here’s my challenge to you: How is your organization ensuring Secure by Design? Are you integrating it into your Agile sprints? Are you shifting security left in your SecDevOps pipeline? Or are you relying on static and dynamic analysis to catch vulnerabilities early? Whatever your approach, the goal is the same: Build software that’s secure from the ground up.

Because when it comes to security, we’re all in this together.

#CyberSecurity #SecureByDesign #DevSecOps #SecDevOps #SoftwareDevelopment #RiskManagement


Monday, March 10, 2025

3 Pillars of Leading in the Age of AI - My Personal View

Introduction

In an era of rapid technological advancement, technical expertise alone is insufficient for effective leadership. The most successful tech leaders of the future will be those who master three essential pillars: AI-driven decision-making, systems thinking, and human-centric leadership. From my personal experience, reading, and observations, I believe these three principles are essential for effective leadership in today’s advanced technology environment. Here’s why these three pillars matter — and how you can apply them to future-proof your leadership.

Pillar 1: AI-Driven Decision Making

From Data to Insight

AI is more than just a tool — it’s a game-changer for decision-making. By leveraging AI to analyze project performance, customer sentiment, and team dynamics, leaders can make data-driven decisions with greater accuracy and speed. For example, during a major corporate merger, sentiment analysis tools could track employee morale across teams; that data can surface early warning signs of disengagement, allowing leaders to intervene before it escalates into a productivity crisis. Source

Ethical AI: Augmenting, Not Replacing, Judgment

AI should enhance human decision-making, not replace it. As tech leaders, we must ask: Does this tool amplify human intelligence or override it? Ethical AI adoption means ensuring transparency, fairness, and accountability in how we deploy these technologies. Source

Pillar 2: Systems Thinking

Zoom Out, Then Zoom In

Tech leaders must balance big-picture vision with attention to detail. A systems-thinking approach ensures that solutions align with business objectives while remaining adaptable. For example, a team developing a healthcare app could start by mapping the end-to-end user journey before reverse-engineering the tech stack, ensuring a seamless user experience while optimizing backend efficiency. Source

Resilience by Design

Modern architecture must be adaptive and resilient. A single point of failure can jeopardize an entire operation, so designing for scalability and flexibility is crucial. Case Study: An e-commerce platform experienced sudden traffic spikes during peak sales events, such as Black Friday. By implementing AI-driven auto-scaling and leveraging microservices architecture, we achieved 99.99% uptime, even during unexpected surges. This approach not only ensured seamless performance but also optimized resource utilization and reduced operational costs. Source

Pillar 3: Human-Centric Leadership

Bridging the Soft Skills Gap

Technical failures are rarely the primary reason projects go off course. According to a report, 70% of project failures stem from poor communication, misalignment, and team disconnects — not technical shortcomings. Source

The Approach: “No-Agenda” Check-ins

Leadership is about more than managing tasks — it’s about understanding people. One of the most effective strategies has been hosting weekly “no-agenda” check-ins. These informal meetings allow team members to bring up concerns before they become blockers, fostering a culture of trust and open communication. Source

Conclusion: The Future Belongs to Adaptive Leaders

To stay ahead in the age of AI, leaders must strike the right balance between technical acumen and human intuition. The most impactful leaders will be those who can seamlessly integrate AI-driven insights, systems-level thinking, and people-first leadership. 

What’s your non-negotiable leadership principle? Let’s discuss!

#TechLeadership #AI #SystemsThinking #ProjectManagement #FutureOfWork


Quantum Intelligence: The Next Frontier for Systems Architects

 

Introduction

Quantum computing has transitioned from a theoretical concept to a rapidly evolving reality. Companies like IBM and Google have achieved significant breakthroughs in quantum supremacy, shifting the technology from research labs to real-world applications. For systems architects, this presents both an opportunity and a challenge: adapt now or risk obsolescence.

Unlike traditional computing, which relies on binary logic (0s and 1s), quantum computing leverages qubits, which can exist in multiple states simultaneously. This fundamental shift in computation means that the architectures we rely on today may not be sufficient for the problems of tomorrow.

So, how can systems architects prepare for this new frontier? Here’s how.

Why Quantum Changes Everything

Beyond Binary: A Paradigm Shift in Computing

Classical computers manipulate definite bits one state at a time, while quantum computers operate in superposition: a register of qubits can represent many states at once, letting certain algorithms explore an exponentially large solution space. This opens the door to solving previously intractable problems, such as:

  • Drug discovery: simulating molecular interactions at an atomic level, revolutionizing pharmaceutical development. Source
  • Supply chain optimization: Running complex logistical simulations that classical computers would take years to process. Source
  • AI acceleration: Enhancing machine learning models with faster, more efficient computation. Source
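The superposition idea can be illustrated with plain linear algebra: applying a Hadamard gate to the |0⟩ state yields an equal superposition of |0⟩ and |1⟩. This is a toy simulation on a classical machine, not a real quantum device:

```python
import numpy as np

# Computational basis state |0>.
ket0 = np.array([1.0, 0.0])

# Hadamard gate: maps |0> to the equal superposition (|0> + |1>) / sqrt(2).
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

state = H @ ket0
probs = np.abs(state) ** 2  # Born rule: measurement probabilities

print(probs)  # -> [0.5 0.5]: equal chance of measuring 0 or 1
```

One extra qubit doubles the size of the state vector, which is exactly why classical simulation runs out of steam and real quantum hardware becomes interesting.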

For enterprise architects, the implications are clear: designing infrastructures that can integrate and leverage quantum capabilities will be a competitive advantage.

The Quantum Threat to Security

Quantum computing isn’t just an opportunity—it's also a security risk. Current encryption methods rely on the difficulty of hard mathematical problems: RSA on factoring large composite numbers, and ECC on computing discrete logarithms over elliptic curves. A sufficiently powerful quantum computer running Shor’s algorithm could break both practically overnight.

  • Are your systems quantum-safe?
  • Have you considered post-quantum cryptography (PQC) strategies?

The National Institute of Standards and Technology (NIST) is already working on post-quantum encryption standards. Systems architects must stay ahead by ensuring their infrastructures can transition to quantum-resistant algorithms. Source

Designing Quantum-Ready Systems

Hybrid Architectures: The Best of Both Worlds

Quantum computing is not yet ready to replace classical computing, but hybrid systems can help organizations start leveraging its power gradually.

Example: A financial institution might use classical systems for daily transactions but integrate quantum computing for portfolio optimization and fraud detection.

Agile, Modular Frameworks

To prepare for quantum integration, modularity is key. Building flexible, scalable architectures ensures that systems can evolve alongside quantum advancements.

Real-world case study: A banking client I worked with implemented a “quantum-ready” API layer, designed to seamlessly integrate with quantum computing resources when the technology matures. This strategic move future-proofed their infrastructure without requiring an immediate overhaul.

Your Quantum Journey: Preparing for the Future

Recognizing the urgency of quantum computing, here are steps worth taking to future-proof your expertise and systems:

  • Partner with IBM Quantum: Conduct quantum simulations using their cloud-based quantum computing platform. Source
  • Upskill your team in Qiskit: Train engineers to use IBM’s open-source quantum SDK, ensuring they understand the fundamentals of quantum programming. Source
  • Begin redesigning legacy systems: Integrate quantum-friendly algorithms into existing infrastructure to prepare for gradual adoption.

Conclusion: The Time to Act is Now

The quantum revolution isn’t decades away—it's unfolding now. Systems architects who proactively explore and integrate quantum-ready solutions will be at the forefront of technological innovation.

So, what’s your first step toward quantum readiness? Are you exploring quantum-safe encryption, experimenting with hybrid architectures, or upskilling your team?

Let’s discuss! Share your thoughts and strategies in the comments.

#QuantumComputing #SystemsArchitecture #EmergingTech #Innovation #FutureOfComputing


Sunday, March 9, 2025

Will AI replace the Project Manager?

Introduction

Artificial intelligence is transforming industries at an unprecedented pace, and project management is no exception. Yet, amid all the talk of automation and digital transformation, a common fear emerges: will AI replace yet another role, this time the project manager?

The reality is quite the opposite. AI is not here to take over but to enhance human capabilities, making project managers more effective, strategic, and valuable than ever before. By automating repetitive tasks, improving decision-making, and mitigating risks, AI allows project managers to focus on what truly matters—leadership, innovation, and value creation.

In this article, I’ll share how AI has revolutionized my project management approach and why the future belongs to PMs who embrace it.

AI-PM Partnership: A Game-Changer for Efficiency

Automation ≠ Replacement

AI excels at handling repetitive, time-consuming tasks such as scheduling, data entry, and progress tracking. But rather than replacing human intuition and leadership, AI acts as a force multiplier, allowing project managers to focus on strategy, stakeholder alignment, and team motivation.

Example: AI-powered tools like ClickUp and Monday.com now analyze historical data and team performance to predict project delays. This foresight helps project managers proactively address potential bottlenecks rather than react to crises.

Risk Mitigation: Seeing Problems Before They Arise

One of AI’s most powerful contributions to project management is its ability to identify and mitigate risks before they escalate. Machine learning algorithms can analyze vast amounts of data to detect patterns that humans might overlook, helping teams make informed decisions.

Case in Point: During a recent cloud migration project, we integrated an AI-driven risk assessment tool. The system identified a 92% chance of cost overruns due to scope creep—weeks before the issue would have surfaced. This early warning allowed us to recalibrate scope and budget, ultimately preventing financial losses and ensuring a smooth transition.

Real-World Case Study: AI in Action

The Challenge

A client approached us with an ambitious goal: launching a new product in just four months instead of the planned six. Given the compressed timeline, efficient resource management and rapid decision-making were critical.

How AI Transformed the Project

  1. Resource Allocation Optimization
  2. Automated Reporting & Insights

The Outcome

✅ Project delivered on time.
✅ 25% budget surplus due to optimized resource allocation.
✅ Higher team morale thanks to a reduced administrative burden.

How to Start Leveraging AI in Your Projects

The good news? You don’t need a PhD in AI to start incorporating these tools into your workflow. Here are three simple steps to begin:
  1. Audit Your Workflow
  2. Experiment with AI-powered tools
  3. Upskill Your Team

Conclusion: AI Is Your Co-Pilot, Not Your Replacement

AI isn’t here to take your job—it’s here to make you an unstoppable project manager. By embracing AI-driven insights, automation, and predictive capabilities, you can make smarter decisions, deliver projects more efficiently, and drive greater impact for your teams and stakeholders.

Your Turn: How are you integrating AI into your project management workflow? What’s your biggest challenge in adopting AI? Let’s discuss this in the comments!

#AI #ProjectManagement #FutureOfWork #TechLeadership #Innovation



Monday, February 10, 2025

JHipster vs Vaadin vs Spring Boot - Choosing your framework

Java frameworks provide pre-written code and tools that simplify the development of Java applications. They handle common tasks like database interaction, web request handling, and user interface creation, allowing developers to focus on the unique logic of their applications. Frameworks promote code reusability, consistency, and best practices, ultimately speeding up development and improving application quality. They range from lightweight libraries to full-fledged platforms that dictate the structure of your application.


Which ones are suitable for you?

Okay, let’s start with a brief overview of the frameworks.

1. Spring Boot

Spring Boot is not strictly a full-stack framework in the same way as the others. It’s more accurately described as an opinionated toolkit built on top of the larger Spring Framework. Its primary goal is to drastically simplify the setup and configuration of Spring applications. Think of Spring Boot as the express lane for Spring development.

Key Features:

  • Auto-configuration: Spring Boot automatically configures many beans (objects managed by Spring) based on dependencies in your project. This reduces the amount of manual configuration you have to do.
  • Embedded Servers: Easily embed Tomcat, Jetty, or Undertow directly into your application, making deployment simpler.
  • Spring Boot CLI: A command-line interface that further simplifies development tasks.
  • Spring Initializr: A web-based tool for quickly bootstrapping new Spring Boot projects.
  • Use Cases: Spring Boot is ideal for building REST APIs, microservices, and any backend component where you need the power and flexibility of the Spring ecosystem.

2. JHipster

JHipster takes Spring Boot and combines it with powerful code generation capabilities. It’s a full-stack application generator that helps you create modern web applications with Spring Boot on the backend and popular JavaScript frameworks (Angular, React, or Vue.js) on the front end.

Key Features:

  • Full-stack code generation: Generates both backend and frontend code, including authentication, database integration, and basic CRUD (Create, Read, Update, Delete) operations.
  • Microservices support: Can generate applications designed for a microservices architecture.
  • Blueprint architecture: Allows for customization and extension of the generated code.
  • Use Cases: JHipster is perfect for rapidly prototyping full-stack applications, especially when you want to use Spring Boot and a modern JavaScript framework. It’s less ideal for very small, simple projects where the overhead of JHipster might be too much.

3. Vaadin

Vaadin is a full-stack Java web framework focused on building rich and interactive web UIs. It offers two main approaches:

  • Vaadin Flow: Allows you to build UIs entirely in Java, without writing HTML or JavaScript directly. Vaadin handles the rendering on the client-side.
  • Hilla: A newer approach that combines a Spring Boot backend with a reactive TypeScript frontend.

Key Features:

  • Component-based architecture: UI elements are represented as reusable Java components.
  • Server-side rendering (Vaadin Flow): UI logic is executed on the server, which can simplify development for Java developers. Hilla uses client-side rendering.
  • Rich set of UI components: Vaadin provides a wide range of pre-built UI components, from simple buttons to complex grids and charts.
  • Use Cases: Vaadin is well-suited for building complex, data-driven web applications where a rich user interface is essential. It’s a good choice for Java developers who prefer a Java-centric approach to UI development.

Core Focus, Strength and Weaknesses

1. Spring Boot

  • Core Focus: This is the foundation. Spring Boot simplifies building standalone, production-ready Spring applications. It handles a lot of the boilerplate configuration, making it easier to get a Spring project up and running quickly. Think of it as the engine of your application.

Strengths:

  • Speed: Rapid development with auto-configuration and embedded servers.
  • Flexibility: Works well with various databases, cloud platforms, and other technologies.
  • Mature and Widely Used: Huge community support, extensive documentation, and a vast ecosystem of libraries.

Weaknesses:

  • Not a Full-Stack Solution: You’ll need to choose and integrate your own frontend technologies (like React, Angular, or Vue.js).
  • Learning Curve: While Spring Boot simplifies things, understanding the underlying Spring framework can still take time.

2. JHipster

  • Core Focus: A code generator that helps you quickly create full-stack web applications with Spring Boot on the backend and popular JavaScript frameworks (Angular, React, Vue.js) on the frontend.

Strengths:

  • Rapid Prototyping: Generates a complete application with authentication, database integration, and basic CRUD operations in minutes.
  • Best Practices: Uses well-established technologies and patterns.
  • Microservices Support: Can generate applications designed for a microservices architecture.

Weaknesses:

  • Complexity: Can generate a lot of code, which might be overwhelming for smaller projects or developers new to the technologies.
  • Less Control: You have less control over the initial setup compared to building everything from scratch.
  • Maintenance: Upgrading generated applications can be challenging.

3. Vaadin

  • Core Focus: A full-stack Java web framework that lets you build rich, interactive web UIs with Java. It offers two main approaches:
  • Vaadin Flow: Build UIs entirely in Java, without writing HTML or JavaScript.
  • Hilla: Combines a Spring Boot backend with a reactive TypeScript frontend.

Strengths:

  • Productivity: Vaadin Flow’s Java-centric approach can be very efficient for Java developers.
  • Type Safety: Strong type safety throughout the development process.
  • Excellent UI Components: Provides a wide range of pre-built UI components.

Weaknesses:

  • Steeper Learning Curve: Vaadin has its own way of doing things, so there’s a learning curve.
  • Less Flexibility: Compared to Spring Boot, Vaadin is more opinionated about how you structure your application.
  • Performance: Vaadin Flow’s server-side rendering can sometimes lead to performance issues in complex applications.


Wednesday, February 5, 2025

ChatGPT vs Gemini vs Copilot vs DeepSeek vs Perplexity vs Qwen - A challenge on content writeup

 

I requested ChatGPT, Gemini, Copilot, DeepSeek, Perplexity, and Qwen to revamp my paragraph below and provide relevant articles as references. 

Feedback Methods focus on incorporating user feedback and real-world experience into the validation and verification process. This involves continuous monitoring of deployed AI systems, gathering user feedback, and using this information to improve the system's performance and address any emerging ethical concerns. A/B testing is a common example, where different versions of an AI system are deployed to different user groups, and their performance and user satisfaction are compared. This iterative process allows for continuous improvement and adaptation of the AI system to real-world conditions. Another example is the use of post-market surveillance for medical AI devices, where data on the device’s performance in clinical practice is continuously collected and analysed to identify potential safety or efficacy issues.

The text above was the input all six GenAI tools were asked to respond to. Let us see the outcome.

1. ChatGPT

Feedback methods play a critical role in the validation and verification of AI systems by integrating real-world user interactions and experiences. These methods ensure AI models continuously improve, remain aligned with ethical principles, and mitigate risks related to bias, safety, and transparency.

One key approach is continuous monitoring of deployed AI systems, where real-time user feedback is collected to assess system reliability, fairness, and unintended consequences. This process enables adaptive AI governance, allowing iterative refinements that improve model robustness and address emerging ethical concerns, such as discrimination in decision-making systems.

A/B testing is a widely used validation method where different AI system versions are deployed to diverse user groups, and performance metrics, user satisfaction, and fairness outcomes are compared. For instance, AI-driven hiring platforms can use A/B testing to ensure that model updates do not introduce bias against specific demographic groups.

Another essential application is post-market surveillance for medical AI systems, where real-world clinical data is analyzed to detect safety or efficacy issues. Regulatory bodies like the FDA emphasize continuous performance evaluation for AI-powered diagnostic tools to prevent harmful medical misdiagnoses.

Additionally, human-in-the-loop (HITL) systems enhance AI validation by allowing expert oversight, enabling human evaluators to intervene when the system's decisions are ambiguous or ethically sensitive. This is particularly crucial in AI systems deployed for judicial sentencing, autonomous vehicles, and healthcare diagnostics, where incorrect predictions can have severe consequences.

The adoption of explainable AI (XAI) techniques in feedback-based verification also ensures transparency, helping regulators and users understand AI decision-making processes. These methods align with ethical AI principles by fostering accountability, fairness, and trust in AI applications.

Recent Research and Publications (2015–2024)

Below are relevant and recent academic publications that discuss feedback methods in AI validation and verification:

  1. (2024) - "Ethical AI Governance through Continuous User Feedback: A Framework for Adaptive AI Systems" – [Journal of AI Research]
  2. (2023) - "Post-Market Surveillance in AI-Powered Healthcare: Challenges and Solutions" – [IEEE Transactions on AI in Medicine]
  3. (2023) - "A/B Testing for Fairness in AI Systems: Addressing Bias in Algorithmic Decision-Making" – [ACM Conference on Fairness, Accountability, and Transparency (FAccT)]
  4. (2022) - "Human-in-the-Loop AI: Enhancing Verification and Trustworthiness in Critical Applications" – [Nature Machine Intelligence]
  5. (2021) - "The Role of Explainable AI in Ethical AI Validation and Continuous Improvement" – [AI & Society Journal]
  6. (2020) - "AI Validation Frameworks for Real-World Deployment: Integrating Feedback for Fairness and Robustness" – [MIT AI Ethics Lab Report]
  7. (2019) - "Real-Time Monitoring of AI Systems: A Review of Feedback-Based Verification Techniques" – [Journal of Automated Reasoning]
  8. (2018) - "Post-Deployment AI Monitoring: Regulatory and Ethical Considerations" – [Harvard AI Review]
  9. (2017) - "Human Oversight in AI Verification: Case Studies in Healthcare and Autonomous Systems" – [Springer AI Governance Handbook]
  10. (2015) - "Ethical Challenges in AI Validation: The Need for Continuous Feedback Loops" – [Oxford Journal of AI & Law]


Formal Methods Techniques in AI Verification

Formal methods are mathematical techniques used to rigorously verify the correctness, safety, and robustness of AI systems, particularly in high-stakes applications such as autonomous vehicles, medical diagnostics, and aerospace. 

During my master's degree ten years ago, I discussed, evaluated, and qualitatively reviewed several of these formal methods techniques. You can search for my thesis, titled "A source code perspective C overflow vulnerabilities exploit taxonomy based on well-defined criteria".

Below is a brief explanation of the key techniques within formal methods, along with relevant examples and mathematical formulations, simplified for easier understanding.


1. Abstract Interpretation

Definition:
Abstract interpretation is a static program analysis technique that approximates program behavior by mapping infinite concrete domains (e.g., real numbers) to a finite abstract domain (e.g., intervals). This technique is used to detect errors such as buffer overflows, division by zero, and numeric instability.

Example:
Consider an AI algorithm using floating-point arithmetic. Instead of testing all possible floating-point values, abstract interpretation groups them into intervals. If a neural network's activation function outputs values in $[-1,1]$, abstract interpretation would ensure no computations exceed this range.

Mathematical Representation:
For a program function $f(x)$, abstract interpretation defines an abstraction function $\alpha$ and a concretization function $\gamma$:

$$\forall x \in \text{ConcreteDomain}, \quad \alpha(f(x)) \approx f(\alpha(x))$$

where $\alpha(x)$ is the abstract representation, and $\gamma(\alpha(x))$ maps it back to the concrete domain.
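The interval idea above can be sketched in a few lines. This is a toy abstract domain, not a real analyzer: the activation, scaling factor, and input bounds are illustrative assumptions. It propagates $[lo, hi]$ bounds through a computation instead of evaluating every concrete input, exploiting the fact that tanh is monotone.

```python
# Hedged sketch: interval arithmetic as a tiny abstract domain. We propagate
# [lo, hi] bounds through a computation and check the abstract result stays
# inside the safe range, covering infinitely many concrete inputs at once.
import math

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_scale(a, k):          # assumes k >= 0
    return (a[0] * k, a[1] * k)

def tanh_interval(a):
    # tanh is monotone increasing, so bounds map directly to bounds
    return (math.tanh(a[0]), math.tanh(a[1]))

# Activation = tanh(0.5 * (x + y)) with x, y each in [-2, 3]
x, y = (-2.0, 3.0), (-2.0, 3.0)
out = tanh_interval(interval_scale(interval_add(x, y), 0.5))
assert -1.0 <= out[0] <= out[1] <= 1.0   # output provably stays in [-1, 1]
```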


2. Semantic Static Analysis

Definition:
Semantic static analysis inspects a program's source code without executing it to determine properties such as termination, correctness, and possible runtime errors.

Example:
A neural network classifier trained for medical diagnosis should not output probabilities exceeding 1. Static analysis verifies whether the probability function adheres to:

$$\sum_y P(y \mid x) = 1, \quad \forall x \in \text{InputDomain}$$

where $P(y \mid x)$ represents the probability of class $y$ given input $x$.
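A static analyzer would prove this invariant symbolically over all inputs; as a concrete illustration only, the snippet below states the same property as a checkable assertion on a softmax output. The implementation details are assumptions, not from the post.

```python
# Hedged illustration of the invariant sum P(y|x) = 1: softmax guarantees
# the class probabilities sum to 1 for any real-valued logits.
import math

def softmax(logits):
    m = max(logits)                       # subtract max for numeric stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, -1.0, 0.5])
assert abs(sum(probs) - 1.0) < 1e-9       # the probabilities sum to 1
assert all(0.0 <= p <= 1.0 for p in probs)  # no probability exceeds 1
```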


3. Model Checking

Definition:
Model checking systematically explores a system's state space to ensure it satisfies a given set of formal specifications, typically expressed in temporal logic.

Example:
In an autonomous driving system, a model checker can verify whether a car always stops at a red light by checking the Linear Temporal Logic (LTL) formula:

$$\Box (\text{RedLight} \rightarrow \Diamond \text{Stop})$$

which states that if a red light appears, the car must eventually stop.
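The property can be checked exhaustively on a small model. Real model checkers (e.g. NuSMV or SPIN) do this symbolically over much larger state spaces; the toy transition system below, with its state names and transitions, is purely an illustrative assumption.

```python
# Hedged sketch: explicit-state checking of "always (RedLight -> eventually
# Stop)" on a toy transition system for the driving example.

transitions = {
    "driving":   ["driving", "red_light"],
    "red_light": ["braking"],
    "braking":   ["stopped"],
    "stopped":   ["driving"],
}

def eventually_stops(state, visited=None):
    """True if every path from `state` reaches 'stopped'."""
    visited = visited or set()
    if state == "stopped":
        return True
    if state in visited:                 # a cycle that never reaches 'stopped'
        return False
    return all(eventually_stops(n, visited | {state})
               for n in transitions[state])

# Check the LTL-style property from the state where the light turns red.
assert eventually_stops("red_light")
```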


4. Proof Assistants

Definition:
Proof assistants are software tools that help construct formal proofs of system correctness by allowing users to define mathematical models and verify logical statements interactively.

Example:
A self-driving car's braking system should ensure that the stopping distance does not exceed a threshold $d_{\text{safe}}$:

$$d_{\text{stop}} = \frac{v^2}{2a} \leq d_{\text{safe}}$$

where $v$ is the vehicle speed and $a$ is the braking deceleration. A proof assistant like Coq or Isabelle can verify this inequality.
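A proof assistant would discharge this inequality symbolically for all speeds; the snippet below is only a numeric sanity check of the same formula over a grid of speeds, and the threshold, deceleration, and speed limit are illustrative assumptions.

```python
# Hedged sketch: numeric sanity check of d_stop = v^2 / (2a) <= d_safe.
# This is NOT a formal proof; a proof assistant would cover all real v.

def stopping_distance(v, a):
    return v ** 2 / (2 * a)

D_SAFE = 60.0     # metres, assumed safety threshold
A_BRAKE = 7.0     # m/s^2, assumed braking deceleration
V_MAX = 27.0      # m/s (~97 km/h), assumed maximum speed

# Check speeds from 0 to V_MAX in 0.1 m/s steps.
assert all(stopping_distance(v / 10, A_BRAKE) <= D_SAFE
           for v in range(int(V_MAX * 10) + 1))
```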


5. Deductive Verification

Definition:
Deductive verification formally proves that a system satisfies its specification using logical reasoning. This involves deriving proof obligations that demonstrate correctness.

Example:
In an AI-based medical diagnosis system, a deductive verification approach ensures that if input $x$ is classified as disease-positive, then a treatment $T(x)$ is always prescribed:

$$\forall x, \quad \text{Diagnosis}(x) = \text{Positive} \Rightarrow T(x) \neq \emptyset$$

6. Model-Based Testing

Definition:
Model-based testing (MBT) derives test cases from formal models of a system’s expected behavior, ensuring comprehensive test coverage.

Example:
For an AI-powered ATM system, a state machine model might specify:

  1. Insert Card → PIN Entry → Transaction → Dispense Cash
  2. Insert Card → PIN Entry → Incorrect PIN → Card Ejection

Each path is converted into test cases, ensuring all scenarios are tested.
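The path-to-test-case derivation can be sketched by enumerating every route through a state-machine model. The model below mirrors the ATM flow above, but the state and event names are illustrative assumptions.

```python
# Hedged sketch: deriving test cases from a state-machine model of the ATM
# flow. Each complete path through the model becomes one test case.

model = {
    "start":         {"insert_card": "pin_entry"},
    "pin_entry":     {"correct_pin": "transaction", "wrong_pin": "eject_card"},
    "transaction":   {"withdraw": "dispense_cash"},
    "dispense_cash": {},
    "eject_card":    {},
}

def all_paths(state, path=()):
    """Enumerate every event sequence from `state` to a terminal state."""
    if not model[state]:
        yield list(path)
        return
    for event, nxt in model[state].items():
        yield from all_paths(nxt, path + (event,))

test_cases = list(all_paths("start"))
assert ["insert_card", "correct_pin", "withdraw"] in test_cases
assert ["insert_card", "wrong_pin"] in test_cases
```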


7. Design by Refinement

Definition:
Design by refinement incrementally develops a system by starting with an abstract specification and progressively introducing more details while maintaining correctness.

Example:
For a neural network-based control system, an initial specification may state:

$$\text{Output} \in [0,1]$$

As the design is refined, more constraints are added to ensure robustness against adversarial attacks.
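One way to picture refinement is as a growing stack of constraints, where each refined specification must still satisfy the earlier abstract one. The bounds and perturbation tolerance below are illustrative assumptions, not from the post.

```python
# Hedged sketch: each refinement step adds constraints while preserving
# the earlier specification.

def check_abstract_spec(output):
    """Initial specification: output stays in [0, 1]."""
    return 0.0 <= output <= 1.0

def check_refined_spec(output, output_perturbed, eps=0.05):
    """Refinement: keep the range constraint AND bound sensitivity to
    small input perturbations (robustness against adversarial noise)."""
    return (check_abstract_spec(output)
            and check_abstract_spec(output_perturbed)
            and abs(output - output_perturbed) <= eps)

assert check_refined_spec(0.80, 0.83)       # small perturbation: accepted
assert not check_refined_spec(0.80, 0.60)   # large shift: refinement fails
```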


Conclusion

These formal methods provide robust frameworks for ensuring AI systems behave as expected in critical applications. Abstract interpretation and static analysis focus on pre-runtime validation, while model checking and proof assistants verify that a system's design satisfies its formal specifications. Deductive verification establishes correctness through logical reasoning, while model-based testing and refinement guide structured system development.



Saturday, January 18, 2025

Ensuring Robustness in AI Systems: A Multi-Phase Validation Approach

Abstract

Artificial Intelligence (AI) systems are increasingly integral across various sectors, necessitating rigorous validation to ensure they function as intended with minimal errors. This article delineates a comprehensive, multi-phase validation framework designed to enhance the reliability and accuracy of AI systems. Organizations can mitigate risks associated with AI deployment by implementing structured validation processes, thereby fostering trust and efficacy in AI applications.
Introduction


The proliferation of AI technologies has transformed industries by automating complex tasks and providing data-driven insights. However, the deployment of AI systems without thorough validation can lead to significant errors, undermining their intended purposes and potentially causing adverse outcomes. Therefore, establishing a robust validation framework is imperative to ensure AI systems operate with high accuracy and reliability.


Phases of AI System Validation

  1. Data Validation
  2. Model Training and Validation
  3. Pre-Deployment Validation
  4. Post-Deployment Monitoring and Validation
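The first two phases above can be sketched as basic data validation followed by a train/validation/test split. The column names, validity range, and split ratios below are illustrative assumptions.

```python
# Hedged sketch of phases 1-2: validate raw records, then split the clean
# data into train/validation/test sets.
import random

def validate_rows(rows):
    """Phase 1: drop rows with missing or out-of-range values."""
    return [r for r in rows
            if r.get("age") is not None and 0 < r["age"] < 120]

def split(rows, train=0.7, val=0.15, seed=42):
    """Phase 2: shuffle deterministically, then split 70/15/15."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    n = len(rows)
    a, b = int(n * train), int(n * (train + val))
    return rows[:a], rows[a:b], rows[b:]

data = validate_rows([{"age": 34}, {"age": None}, {"age": 200}] +
                     [{"age": 20 + i} for i in range(17)])
train_set, val_set, test_set = split(data)
assert len(train_set) + len(val_set) + len(test_set) == len(data) == 18
```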

Reducing Error Rates in AI Systems

Achieving low error rates in AI systems is paramount, especially in critical applications. Studies indicate that acceptable error rates for AI should be significantly lower than those for human performance to foster trust and reliability. For instance, in medical diagnostics, a survey revealed that the acceptable error rate for AI was 6.8%, compared to 11.3% for human practitioners.

To minimize errors, organizations should implement high-quality data collection, robust validation processes, and advanced algorithms tailored to specific use cases.

Conclusion

Implementing a multi-phase validation framework is essential to ensure AI systems serve their intended purposes with minimal errors. By meticulously validating data, rigorously training and testing models, and continuously monitoring performance post-deployment, organizations can enhance the reliability and effectiveness of AI applications. Such structured validation not only mitigates risks but also builds stakeholder confidence in AI technologies.

References

  1. The 5 Stages of Machine Learning Validation (https://towardsdatascience.com/the-5-stages-of-machine-learning-validation-162193f8e5db)
  2. Goodbye Noise, Hello Signal: Data Validation Methods That Work (https://www.pecan.ai/blog/data-validation-methods-that-work/?utm_source=chatgpt.com)
  3. Training, validation, and test phases in AI — explained in a way you'll never forget (https://towardsdatascience.com/training-validation-and-test-phases-in-ai-explained-in-a-way-youll-never-forget-744be50154e8)
  4. Verification and Validation of Systems in Which AI is a Key Element (https://sebokwiki.org/wiki/Verification_and_Validation_of_Systems_in_Which_AI_is_a_Key_Element?utm_source=chatgpt.com)
  5. Should artificial intelligence have lower acceptable error rates than humans? (https://pmc.ncbi.nlm.nih.gov/articles/PMC10301708/?utm_source=chatgpt.com)
  6. Understanding Error Rate: A Crucial Guide for Professionals (https://helio.app/ux-research/design-metrics/understanding-error-rate-a-crucial-guide-for-professionals/?utm_source=chatgpt.com)


Friday, January 17, 2025

AI Audit and the Importance of Having Competent Auditor

"However, efforts to meet AI audit service demands, and by extension, any use of audits by public regulators, face three important challenges. First, it remains unclear what the audit object(s) will be – the exact thing that gets audited. Second, despite efforts to build training and credentialing for AI auditors, a sufficient supply of capable AI auditors is lagging. And third, unless markets have clear regulations around auditing, AI audits could suffer from a race to the bottom in audit quality." -
https://www.techpolicy.press/ai-audit-objects-credentialing-and-the-racetothebottom-three-ai-auditing-challenges-and-a-path-forward/?utm_source=chatgpt.com




As AI systems become increasingly prevalent, the need for rigorous auditing to ensure their safety and efficacy has never been greater. An article on TechPolicy Press highlights the critical role of AI auditing in ensuring the safety and effectiveness of AI systems. While I agree that increasing the number of AI auditors is essential, I want to emphasize the equally critical need to ensure their competence and experience.

We can't afford a lax approach where anyone who passes a certification exam is deemed qualified to audit AI systems. In-depth knowledge and practical experience are paramount. This concern is particularly relevant given the practices of some companies that, to manage operational costs, prioritize hiring "bright young talent" with strong communication skills but little real-world understanding of AI systems. Such auditors often provide vague or irrelevant recommendations, or misunderstand the situation entirely, wasting time and potentially jeopardizing safety.

Just like any complex system, AI requires careful auditing by qualified professionals. Competent and experienced auditors can identify and mitigate risks, ultimately safeguarding AI systems and the people they interact with.


The Importance of Auditor Expertise

AI systems are complex and can have unintended consequences. Auditors need a deep understanding of how these systems work, including their algorithms, data sources, and potential biases. They also need to be able to assess the risks associated with these systems and recommend appropriate mitigation strategies.

Unfortunately, the current landscape of AI auditing is not without its challenges. There is a lack of standardized training and certification programs, which can lead to inconsistencies in the quality of audits. Additionally, there is a risk of a "race to the bottom" in audit quality, as companies may prioritize cost over quality when selecting auditors.

A Path Forward

To address these challenges, we need to take several steps. First, we need to develop robust training and certification programs for AI auditors. These programs should be rigorous and cover a wide range of topics, including AI fundamentals, risk assessment, and audit methodologies.

Second, we need to establish clear standards for AI audits. These standards should be developed by experts in the field and should be regularly updated to reflect the latest developments in AI.

Third, we need to create a culture of quality in AI auditing. This means holding companies accountable for the quality of their audits and rewarding auditors for their expertise and experience.

Conclusion

AI auditing is critical to ensuring the safe and responsible development of AI systems. By investing in the training and development of competent AI auditors, we can help ensure that these systems are used for good and that their potential benefits are realized.

Let's foster a culture of rigorous AI auditing with a strong emphasis on auditor expertise. Share your thoughts in the comments!

#AI #auditing #artificialintelligence #safety #technology #riskmanagement


Tuesday, January 7, 2025

AI Writing Tools for Beginners: A Review of Sudowrite, Rytr, and NovelAI


AI is revolutionizing how we write, and AI writing tools are becoming increasingly popular among writers of all levels. If you're a beginner writer looking to improve your writing skills or simply looking for a way to overcome writer's block, AI writing tools can be a valuable asset. In this article, we'll review three of the most popular AI writing tools on the market: Sudowrite, Rytr, and NovelAI. We'll also discuss which tool is the best for beginners.

Comparison

Sudowrite

Pros

      • Excellent for character and plot development 
      • Focus on long-form writing 
      • User-friendly interface

Cons

Rytr

Pros

      • Versatile tool
      • Affordable options
      • Easy to use

Cons

NovelAI 

Pros

      • Creative and imaginative output
      • Strong community 
      • Image generation

Cons

Recommendation for Beginners

Rytr is a good starting point for beginners who want to explore AI writing tools without a significant upfront investment. It is versatile and affordable, and its simple interface makes it easy to use. However, use it wisely, as the free edition limits the number of words it can generate.

Key Considerations

  1. Budget: Determine how much you're willing to spend on a subscription.
  2. Writing style: Consider the genre and style of your novel. Some tools may be better suited for certain genres than others.
  3. Learning curve: Choose a tool that you find intuitive and easy to use.
  4. Trial periods: Take advantage of free trials or limited-time offers to test different tools before committing to a subscription.

Conclusion

AI writing tools can be a valuable asset for beginner writers. However, it is important to remember that these tools do not replace your creativity and writing skills. Use them to enhance your writing process, overcome writer's block, and explore new ideas.

Additional Tips

  1. Use a combination of different AI writing tools to get the best results.
  2. Experiment with different prompts to see what works best for you.
  3. Don't be afraid to edit and revise the output from AI writing tools.
  4. Use AI writing tools to help you overcome writer's block, but don't rely on them to do all the work for you.
P/S: This content was created in collaboration with Gemini.


About Me

Somewhere, Selangor, Malaysia
An IT professional by trade, a beginner in photography
