
Fundamentals of Responsible Artificial Intelligence/ML

Published 1/2023
MP4 | Video: h264, 1280×720 | Audio: AAC, 44.1 KHz
Language: English | Size: 4.69 GB | Duration: 8h 35m

Designing and maintaining AI/ML models that help data subjects, are explainable, are not biased, and are compliant.
What you’ll learn
The most common problems with AI/ML models or their data, as well as how to address them
How to identify and mitigate ethical risks from AI/ML models, as well as comply with regulation
What XAI (explainable AI) is, as well as the most common explanation elements and popular frameworks
Relevant regulation that impacts AI models, and how
Requirements
Have a basic knowledge of AI and ML
Description
ARTIFICIAL INTELLIGENCE? NATURAL KNOWLEDGE

There's no doubt that AI is everywhere: in our cell phones, our computers, our cars, our apps, and many other aspects of life.

Knowing how to design and train effective AI and ML models is not an easy task. But even when you master it, those models may not be responsible. Due to bias, error, malicious management, or other factors, AI/ML models may hurt data subjects.

There are, naturally, several courses on fragmented topics of the AI industry: how to train models, how to debias datasets, and other specific areas. Frequently, you can find information on aspects of AI, or on aspects of responsible AI, but not both. And on top of that, many courses use different definitions, so you may become confused. In short, most courses on AI don't present a single, unified source of training on responsible AI.

And this has consequences not just for your career, but for you personally as well. What happens when you don't have enough information (or don't have it in an adequate format)?

You'll be confused about what makes algorithms responsible or irresponsible. Do we need debiasing? Do we need explainability? Something else?
You won't be able to properly diagnose and address model problems that may be hurting data subjects;
You'll become frustrated and irritated because you don't know what is wrong with a specific model;
You won't be able to make informed choices about models (specific versus sensitive classifiers, accurate versus explainable models, or many other specific model inference decisions), each of which carries ethical consequences;
You won't be able to tell an employer, or the end user, with confidence that you can design "responsible" AI.

So, if you want to know everything about what makes AI/ML models responsible (or not), as well as how to address issues, where should you head? This new course, of course!

A RESPONSIBLE COURSE FOR RESPONSIBLE AI

Unlike other responsible AI courses you'll find out there, this course is comprehensive and up to date. In other words, not only did I make sure that you'll find more topics (and more depth) than in any other course you may find, but I also made sure to keep the information relevant to the types of models and use cases you will find today.

Designing responsible AI models may seem complex (and it is, to a point), but it relies on a few key, simple principles. In this course, you'll learn the essentials of how models are designed without bias, how they can become explainable, and how to mitigate the ethical risks they pose. Not only that, we'll dive deep into the activities, stakeholders, projects and resources involved in responsible AI model design.

In this 8.5-hour+ masterclass, you'll find the following modules:

You'll learn about the Fundamentals. We start by clarifying what model bias is, what XAI (explainable AI) is, the usual ethical risks posed by AI, and an introduction to the different disciplines;
You'll learn about Responsible Data and Models. All types of problems that may occur with a model or its data, from data drift to overloaded/correlated features, wrong inferences, overfitting, and many other issues with either the model or the data, and how to address them;
You'll learn about Transparency and Explainability. We will cover the discipline of XAI, or explainable AI, the basics of justifications, their recipients, what makes a good justification, as well as some popular frameworks for AI explainability, such as LIME, SHAP and TCAV;
You'll learn about Ethics and Ethical Risks in AI. The specific ethical risks associated with a given AI/ML model, with the product that contains it, and with the management of the company itself, as well as what regulation may affect your model decisions, and how your AI model may impact society as a whole, with time and scale.

By the end of this course, you will know exactly what makes AI/ML models irresponsible, and how to design responsible models that help people rather than hurt them, while still being useful and accurate.

The best part of this course? Inside you'll find all four of these modules.

THE PERFECT COURSE… FOR WHOM?

This course is targeted at different types of people. Naturally, if you're a current or future AI/ML practitioner, you will find this course useful, as will any other professional or executive involved in the design of AI/ML models for any purpose. But even if you're any other type of professional who wants to know more about how AI/ML works, and how it may become responsible, you'll find the course useful.

More specifically, you're the ideal student for this course if:

You're someone who wants to know more about AI/ML models themselves (how they transform inputs into outputs, the different types of models, their characteristics, and what may go wrong with each);
You're someone who is interested in scrutinising AI/ML models (what makes models sensitive, yield wrong outputs, and/or develop biases against specific parts of the population);
You're someone who is interested in ethics in tech (how AI/ML models may worsen societal problems, discriminate against minorities, or otherwise make automated decisions, at scale, that may damage data subjects).

LET ME TELL YOU… EVERYTHING

Some people, including me, love to know what they're getting in a package. And by this, I mean EVERYTHING that is in the package. So, here is a list of everything that this masterclass covers:

Fundamentals:
You'll learn about the basics of irresponsible models: models that have data or model issues, that are not explainable, or whose ethical risks are not hedged against (or not acknowledged) by the company;
You'll learn about the ethical consequences of classifiers and regressors placing inputs in the wrong categories, or estimating wrong values for them, as well as what this may cause, plus the differences between discriminative and generative models in terms of what may happen;
You'll learn about the specific ethical consequences of sensitive versus specific classifiers, and what they cause with different thresholds (see the threshold sketch after this list), as well as the analogous dilemma in regressors of being flexible versus efficient;
You'll learn about the dilemma of accuracy versus explainability, where different models sit on the accuracy/explainability spectrum, and how your use case usually guides model architecture;
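
The sensitivity-versus-specificity dilemma above comes down to where you place a classifier's decision threshold. The sketch below is a minimal, hedged illustration (not taken from the course materials) using scikit-learn on synthetic data: lowering or raising the threshold trades true positives against true negatives, and each kind of error can carry a different ethical cost for the people being classified.

```python
# Minimal sketch (illustrative only): how a classifier's decision threshold
# trades sensitivity against specificity. Data and model are synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # probability of the positive class

for threshold in (0.2, 0.5, 0.8):
    preds = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    specificity = tn / (tn + fp)   # true negative rate
    print(f"threshold={threshold:.1f}  "
          f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```

Which threshold is "right" is not a purely statistical choice; it depends on who bears the cost of a false positive versus a false negative in your use case.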

Responsible Data and Models:
You'll learn about model feature issues, which are issues caused by the selection of wrong (or biased) features in the model, including reduced features, having a historical focus, using proxies, defaulting, having overloaded/correlated features, "overefficiency" and feedback loops;
You'll learn about the use of reduced features, when a model tries to distill reality into one (or a few) features, and its consequences;
You'll learn about having a historical focus, which traps people (or other inputs) in their "past versions", often perpetuating feedback loops and disadvantages;
You'll learn about proxies, which are features that approximate other, unavailable features but bring biases of their own, and of what types;
You'll learn about defaulting, which forces people (or inputs) into specific categories, and what happens when inputs default to the wrong categories (or to no category);
You'll learn about overloaded or correlated features, which seem independent at first sight but actually carry additional meaning, often encoding financial, racial and other information;
You'll learn about "overefficiency", the name I give to the goal of maximising the value of one single feature at all costs, regardless of the consequences for people and other elements;
You'll learn about feedback loops: what happens when model outputs are later used as inputs, and how that can perpetuate biases and discrimination;
You'll learn about model issues, which are problems with the model itself, or its training;
You'll learn about adversarial sensitivity: what happens when a model is not trained for noise and therefore produces wild fluctuations in outputs from small changes in inputs;
You'll learn about overfitting: what happens when a model is trained for just one use case (or type of data), and its specific ethical consequences;
You'll learn about data issues, which are problems with either the training data or the production data themselves;
You'll learn about biased data, how it occurs, the biases that you may have yourself as a practitioner, and how to address them;
You'll learn about data drift, when production data starts to have different characteristics from the training data, as well as the specific consequences of this process (see the monitoring sketch after this list);
You'll learn about data-centric approaches: techniques to improve data quality;
You'll learn about some data quantity dilemmas: what to do when we have too much data for a feature, as well as when we have no data at all, and the consequences of different choices;
You'll learn about dataset hygiene, including responsibilities for sourcing and maintaining data, handling metadata, and other basic elements that assure high-quality training data;
You'll learn about diversity and debiasing: how datasets may become biased (in terms of location, ethnicity, financial status, or any other feature), and how to address your own biases as a practitioner, such as anchoring bias, survivorship bias, confirmation bias, and many others;
You'll learn about data profiling: how to increase the quality of data by detecting formats, patterns, business rules, statistical measures and other insights about the data;
You'll learn about ethical data dimensions: besides the "usual" data dimensions in profiling, such as accuracy or completeness, using dimensions that specifically measure how ethical data are, such as fairness, privacy, transparency and others, and that indicate fair AI/ML model decisions (or the lack of them!);
You'll learn about data usage purpose/authority, and how the same data may be processed in a company for one purpose but not for another, safeguarding data subjects when the company has no purpose for their data;
You'll learn about model-centric approaches: decisions about the model itself that make its inferences more ethical;
You'll learn about the dilemmas of human overrides, and the ethical consequences of allowing users to go against AI outputs, or of not allowing them to;
You'll learn about different inference decisions, from the thresholds selected for classifiers, to tuning recommendations based on personal information, and other dilemmas, as well as their consequences;
You'll learn about specialist validation: why it's crucial to validate model design choices and goals with business experts, rather than making the AI/ML practitioner responsible for them, and why;
You'll learn about subpopulation considerations: what happens when your model treats a specific subpopulation differently, and the dilemma of "breaking off" a model copy for a different subpopulation versus optimising the model itself;
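
Several of the data issues above, data drift in particular, can be monitored with simple statistical checks. The sketch below is a minimal, hedged illustration of one common technique (not the course's specific method), using hypothetical "age" and "income" features: a two-sample Kolmogorov-Smirnov test compares each training feature against recent production data and flags distributions that have shifted.

```python
# Minimal sketch (one common drift check, illustrative only): compare each
# feature's training distribution with recent production data using a
# two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Illustrative data: production "income" has drifted upward versus training.
train = {"age": rng.normal(40, 10, 5000), "income": rng.normal(50_000, 12_000, 5000)}
prod = {"age": rng.normal(41, 10, 1000), "income": rng.normal(62_000, 15_000, 1000)}

ALPHA = 0.01  # significance level; tune to your tolerance for false alarms

for feature in train:
    stat, p_value = ks_2samp(train[feature], prod[feature])
    drifted = p_value < ALPHA
    print(f"{feature:>6}: KS={stat:.3f}  p={p_value:.4f}  drift={'YES' if drifted else 'no'}")
```

In practice such a check runs on a schedule against a sliding window of production data, and a flagged feature triggers investigation or retraining rather than an automatic change to the model.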

Transparency and Explainability:
You'll learn about the basics of explainability: what the discipline of XAI, or explainable AI, is, what transparency and interpretability are, other terms, and what justifications or explanations are;
You'll learn about the key elements of explanations, from fidelity, to the level of abstraction, contrasting information, and other elements of high-quality, explainable AI justifications;
You'll learn about the different explanation recipients: internal users, external users, observers, regulators, and more, and what each may demand in terms of explanations;
You'll learn about the downsides and challenges of transparency, including users gaming the system, copying AI models, attacking the model itself, and more;
You'll learn about an overview of XAI methods, including data-centric and data profiling methods, visualising different results, calculating the influence of different features, and counterfactual explanations;
You'll learn about common elements in XAI, including what "concepts", "activation" and "surrogate models" are, as used by many popular frameworks (a minimal surrogate-model sketch follows this list);
You'll learn about LIME, or Local Interpretable Model-Agnostic Explanations, an explainability framework that uses a linear surrogate model as the explainer;
You'll learn about SHAP, or Shapley Additive Explanations, an explainability framework that uses Shapley values to calculate feature contributions in a more subtle and advanced manner, with specific implementations for all major model architectures;
You'll learn about TCAV, or Testing with Concept Activation Vectors, an explainability framework that uses the activation of "concepts" instead of pixel regions to determine what drives outputs, and is more user-friendly than other methods such as LRP (Layer-wise Relevance Propagation);
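
LIME, SHAP and TCAV each ship as their own packages with their own APIs. Rather than assume a particular version of those libraries, the sketch below illustrates the surrogate-model idea behind LIME using only scikit-learn and NumPy: perturb one instance, weight the perturbations by proximity, and fit a small linear model whose coefficients act as a local explanation. This is a simplified illustration of the concept, not the LIME library itself.

```python
# Minimal sketch of the local-surrogate idea behind LIME (illustrative only,
# not the lime package): explain one prediction of a black-box classifier by
# fitting a proximity-weighted linear model around that instance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0]                      # the prediction we want to explain
rng = np.random.default_rng(0)

# 1. Perturb the instance with small Gaussian noise.
perturbations = instance + rng.normal(scale=0.5, size=(500, X.shape[1]))

# 2. Query the black box for its probability on each perturbation.
probs = black_box.predict_proba(perturbations)[:, 1]

# 3. Weight perturbations by proximity to the original instance.
distances = np.linalg.norm(perturbations - instance, axis=1)
weights = np.exp(-(distances ** 2) / 2.0)

# 4. Fit an interpretable surrogate; its coefficients are the local explanation.
surrogate = Ridge(alpha=1.0).fit(perturbations, probs, sample_weight=weights)
for i, coef in enumerate(surrogate.coef_):
    print(f"feature_{i}: local influence = {coef:+.3f}")
```

Roughly speaking, the actual LIME library layers sampling strategies and sparse feature selection on top of this idea, while SHAP attributes the prediction using Shapley values instead of a proximity-weighted fit.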

Ethics and Ethical Risks:
You'll learn about some product considerations: risks and insights related to the products containing AI models;
You'll learn about the key components of good consent, such as choice, freedom, understanding, and others, and how to obtain "true consent" using them;
You'll learn about some dilemmas regarding recordkeeping: what types of logs to keep, and for how long, and the consequences this has for data subjects;
You'll learn about assessing the ethical risks posed by an AI model at different stages: the risks posed by its creation, by its usage, by its possible deterioration over time, and more;
You'll learn about some management considerations: ethical risks posed by the behavior of people in the company;
You'll learn about ethical alignment: defining the ethical values the company lives by, how these are impacted by your AI/ML models, and how to implement them in practice;
You'll learn about ethical governance: defining the structures and responsible people that govern whether ethical values are obeyed or not, and the consequences;
You'll learn about the "compliance approach": treating data ethics as a type of compliance, which makes it quantifiable, measurable, and easier to adhere to;
You'll learn about enforcement and accountability, including how employee incentives may be perverse and contribute to malevolent model usage, as well as how to hold accountable the employees who make unethical decisions with AI/ML models;
You'll learn about the three-levels-of-oversight framework, in which AI/ML models are scrutinised at three distinct levels of depth: isolated actions, contributions to a system, and contributions to society in general;
You'll learn about relevant regulatory frameworks, and how they impact AI decisions;
You'll learn about the GDPR and its guidelines for data, and specifically how it affects AI decisions, for example by forbidding automated decisions with significant/legal impact on individuals and by forcing basic model architecture disclosure;
You'll learn about the CCPA, and its similarities to and differences from the GDPR, specifically those that affect AI decisions;
You'll learn about fairness in finance regulation in the US, and how the CFPB can scrutinise any AI/ML model, not just in terms of outputs but also processes and documentation, for any activity that may cause harm to consumers in any financial services market;
You'll learn about societal considerations: what your AI model can cause, with time and at scale, to society, including the possible acceleration of bias, provider dependency, contributing to a focus on surveillance, the reduction (or elimination) of objective experiences, and more.

MY INVITATION TO YOU

Remember that you always have a 30-day money-back guarantee, so there is no risk for you. Also, I suggest you make use of the free preview videos to make sure the course really is a fit. I don't want you to waste your money.

If you think this course is a fit and can take your responsible AI/ML model knowledge to the next level… it would be a pleasure to have you as a student.

See you on the other side!

Overview
Section 1: Introduction
Lecture 1 Course Intro
Section 2: Fundamentals
Lecture 2 Module Intro
Lecture 3 Responsible AI/ML Considerations
Lecture 4 Usual Model Characteristics – Intro
Lecture 5 Usual Model Characteristics – Classification and Regression
Lecture 6 Usual Model Characteristics – Specificity vs. Sensitivity
Lecture 7 Usual Model Characteristics – Accuracy vs. Explainability
Lecture 8 Module Outro
Section 3: Responsible Data and Models
Lecture 9 Module Intro
Lecture 10 Model Feature Issues – Intro
Lecture 11 Model Feature Issues – Reduced Features
Lecture 12 Model Feature Issues – Historical Focus
Lecture 13 Model Feature Issues – Proxies
Lecture 14 Model Feature Issues – Defaulting
Lecture 15 Model Feature Issues – Overloaded/Correlated Features
Lecture 16 Model Feature Issues – "Overefficiency"
Lecture 17 Model Feature Issues – Feedback Loops
Lecture 18 Model Issues – Intro
Lecture 19 Model Issues – Adversarial Sensitivity
Lecture 20 Model Issues – Overfitting
Lecture 21 Data Issues – Intro
Lecture 22 Data Issues – Biased Data
Lecture 23 Data Issues – Data Drift
Lecture 24 Data-Centric Approaches – Intro
Lecture 25 Data-Centric Approaches – Data Quantity Dilemmas
Lecture 26 Data-Centric Approaches – Dataset Hygiene
Lecture 27 Data-Centric Approaches – Diversity and Debiasing
Lecture 28 Data-Centric Approaches – Data Profiling
Lecture 29 Data-Centric Approaches – Ethical Data Dimensions
Lecture 30 Data-Centric Approaches – Data Usage Purpose/Authority
Lecture 31 Model-Centric Approaches – Intro
Lecture 32 Model-Centric Approaches – Human Override
Lecture 33 Model-Centric Approaches – Inference Decisions
Lecture 34 Model-Centric Approaches – Specialist Validation
Lecture 35 Model-Centric Approaches – Subpopulation Considerations
Lecture 36 Module Outro
Section 4: Transparency and Explainability
Lecture 37 Module Intro
Lecture 38 Explainability Basics
Lecture 39 Explanation Elements
Lecture 40 Explanation Recipients
Lecture 41 Transparency Challenges
Lecture 42 XAI Method Overview
Lecture 43 XAI Common Elements
Lecture 44 Popular Frameworks – Intro
Lecture 45 Popular Frameworks – LIME
Lecture 46 Popular Frameworks – SHAP
Lecture 47 Popular Frameworks – TCAV
Lecture 48 Module Outro
Section 5: Ethics and Ethical Risks
Lecture 49 Module Intro
Lecture 50 Product Considerations – Intro
Lecture 51 Product Considerations – Consent
Lecture 52 Product Considerations – Levels of Recordkeeping
Lecture 53 Product Considerations – Ethical Risk Assessment
Lecture 54 Management Considerations – Intro
Lecture 55 Management Considerations – Ethical Alignment
Lecture 56 Management Considerations – Ethical Governance
Lecture 57 Management Considerations – The Compliance Approach
Lecture 58 Management Considerations – Enforcement and Accountability
Lecture 59 Management Considerations – Three Levels of Oversight
Lecture 60 Relevant Regulation – Intro
Lecture 61 Relevant Regulation – GDPR
Lecture 62 Relevant Regulation – CCPA
Lecture 63 Relevant Regulation – Fairness in Finance
Lecture 64 Societal Considerations
Lecture 65 Module Outro
Section 6: Outro
Lecture 66 Course Outro
AI and ML practitioners who care about how their models impact data subjects, ethics practitioners who will deal with AI/ML models or automated decisions, and any professional who will deal with AI/ML models

Homepage

www.udemy.com/course/responsible-ai-ml/

Buy Premium From My Links To Get Resumable Support, Max Speed & Support Me

Fikper
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part1.rar.html
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part2.rar.html
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part3.rar.html
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part4.rar.html
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part5.rar.html
Rapidgator
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part1.rar.html
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part2.rar.html
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part3.rar.html
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part4.rar.html
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part5.rar.html
Uploadgig
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part1.rar
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part2.rar
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part3.rar
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part4.rar
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part5.rar
NitroFlare
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part1.rar
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part2.rar
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part3.rar
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part4.rar
yfihv.Fundamentals.Of.Responsible.Artificial.IntelligenceMl.part5.rar

Links are Interchangeable – No Password – Single Extraction
