diff --git a/content/speaking/koa_2024.md b/content/speaking/koa_2024.md
new file mode 100644
index 0000000..915d6cf
--- /dev/null
+++ b/content/speaking/koa_2024.md
@@ -0,0 +1,17 @@
+---
+date: 2024-06-20
+description: "Koa Club Panel 2024"
+featured_image: ""
+tags: ["webinar"]
+title: "UPCOMING: Bold Leaders Using AI Roundtable Panel"
+summary: "See me in August virtually with Koa Club talking about applying AI effectively in business"
+---
+
+* Topic: Bold Leaders Using AI Roundtable
+* Location: Virtual
+* Date: August 21, 2024
+* Links: [https://thekoaclub.com/event/bold-leaders-using-ai-roundtable/](https://thekoaclub.com/event/bold-leaders-using-ai-roundtable/)
+
+Alongside Holly Knoll, Kathleen Glass, Dr. Sarah Glova, and Laura Janusek. Free to register and join!
diff --git a/content/speaking/odsc_west_2024.md b/content/speaking/odsc_west_2024.md
new file mode 100644
index 0000000..7c07663
--- /dev/null
+++ b/content/speaking/odsc_west_2024.md
@@ -0,0 +1,13 @@
+---
+date: 2024-06-20
+description: "ODSC West 2024"
+featured_image: ""
+tags: ["webinar", "careers"]
+title: "UPCOMING: ODSC West 2024"
+summary: "See me in October virtually at ODSC West talking about how to communicate effectively with business leadership about AI/ML"
+---
+
+Topic: "Just Do Something with AI": Bridging the Business Communication Gap for ML Practitioners
+Location: San Francisco/virtual
+Date: October 29-31, 2024
+Links: [https://odsc.com/california/](https://odsc.com/california/)
diff --git a/content/writing/dataprivacyinaidevelopmentdatalocalization.md b/content/writing/dataprivacyinaidevelopmentdatalocalization.md
new file mode 100644
index 0000000..6f485a9
--- /dev/null
+++ b/content/writing/dataprivacyinaidevelopmentdatalocalization.md
@@ -0,0 +1,321 @@
+---
+date: 2024-06-18
+featured_image: "https://cdn-images-1.medium.com/max/1024/0*YNnvOyfQcQIsUxRg"
+tags: ["data-localization","data-governance","data-engineering","privacy","getting-started"]
+title: "Data Privacy in AI Development: Data Localization"
+disable_share: false
+---
+
+#### Why should you care where your data lives?
+
+In the process of writing my talk for the AI Quality Conference coming up on June 25 in San Francisco ([tickets still available](https://www.aiqualityconference.com/)!), I have come across many topics that deserve more time than the brief mentions I will be able to give in my talk. To give everyone more detail and explain the topics better, I’m starting a small series of columns about developing machine learning and AI while still being careful about data privacy and security. Today I’m going to start with **data localization**.
+
+Before I begin, we should clarify what is covered by data privacy and security regulation. In short, this regulation applies to “personal data”. But what counts as **personal data**? It depends on the jurisdiction, but it usually includes PII (name, phone number, etc.) PLUS data that could be combined to make someone identifiable (zip code, birthday, gender, race, political affiliation, religion, and so on). This includes photos, video or audio recordings of someone, details about their computer or browser, search history, biometrics, and much more. [GDPR’s rules about this are explained here](https://gdpr.eu/eu-gdpr-personal-data/).
+
+
+With that covered, let’s dig into data localization and what it has to do with us as machine learning developers.
+
+
+### What’s Data Localization?
+
+Glad you asked! Data localization is essentially the question of what geographic place your data is stored in — if you localize your data, you are keeping it where it was created. (This is also sometimes known as “data residency”, and the opposite is “data portability”.) If your dataset is on AWS S3 in us-east-1, your data is actually living, physically (inasmuch as data lives anywhere), in the United States, somewhere in Northern Virginia. To get more precise, AWS has several specific data centers in Northern Virginia, and you can get their exact addresses online. But for most of us, knowing the general area at this grain is sufficient.
+
+
+
+### Why should I care where the datacenter is? Isn’t the cloud just ‘everywhere’?
+
+There are good reasons to know where your data lives. For one thing, there can be real physical speed implications for loading/writing data to the cloud depending on how far you and your computer are from the region where the datacenter is located. But this is likely not a huge deal unless you’re doing crazy high-speed computations.
+
+A more important reason to care (and the reason this is a part of data privacy) is that data privacy laws around the world (as well as your contracts with clients and consent forms filled out by your customers) have rules about data localization. Data localization regulation requires personal data about citizens or residents of a place to be stored on servers in that same place.
+
+
+General caveats:
+
+* It doesn’t always apply to all kinds of data (financial data is more often covered)
+* It doesn’t always apply to all kinds of businesses (tech companies are more often covered)
+* It may be triggered by a government request, or it might be automatic (see Vietnam)
+* Sometimes there are ways you can get consent to move the data, sometimes not
+* Sometimes you just need to initially store the data in country, and then you can move it around later (see Russia)
+* Sometimes you can store it outside the country of origin but there are limits on where else it can go (see EU)
+
+In addition, private companies sometimes impose data localization requirements in contracts, potentially to comply with these laws, or to reduce the risk of data breach or of surveillance of the data by other governments.
+
+This means, literally, that you may be legally limited in where the data centers storing certain data can be located, primarily based on who the data is about, or who the original owner of the data was.
+
+
+
+### Example
+
+It may be easier to understand this with a concrete (simplified) example.
+
+
+
+* You run a website where people can make purchases. You collect data during these purchases, such as credit card details, address, name, IP address, and some other things. Your consent banner/fine print doesn’t say anything about data localization.
+* You get customers from Russia, India, and the United Arab Emirates.
+* Unless you got explicit consent, all of the personal data from these visitors is subject to different data localization rules.
+
+
+
+ What does this mean for you? All this data needs to be processed differently.
+
+
+
+* The data from Russian customers needs to be initially stored in a Russian-based server, and then [may be transferred depending on the applicable rules](https://assets.kpmg.com/content/dam/kpmg/be/pdf/2018/09/ADV-factsheet-localisation-of-russian-personnal-data-uk-LR.pdf).
+* The data from EU customers can be stored in countries that have sufficient data security laws (notably, not Russia).
+* The UAE customer data needs to be stored in the UAE because you didn’t get consent from these customers to store it elsewhere.
+
+
+
+ This creates obvious problems for data engineering, since you need separate pipelines for all of the data. It’s also a challenge for modeling and training — how do you construct a dataset to actually use?
+
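+To make the “separate pipelines” point concrete, here’s a rough sketch of write-time routing, where each record is directed to a storage location allowed for its country of origin. The bucket names and the country-to-region mapping are made up for illustration; the real mapping would come from your legal and compliance requirements.
+
+```python
+# Sketch: route each record to a storage location allowed for its country of
+# origin at write time. Bucket names and the country-to-region map are
+# hypothetical; your legal/compliance requirements define the real mapping.
+ALLOWED_STORAGE = {
+    "RU": "s3://orders-ru-central",  # must land on a Russian server first
+    "AE": "s3://orders-uae-dubai",   # stays in the UAE absent explicit consent
+    "IN": "s3://orders-ap-south-1",  # financial fields may have stricter rules
+}
+DEFAULT_STORAGE = "s3://orders-us-east-1"
+
+
+def storage_target(record: dict) -> str:
+    """Pick the destination bucket based on the customer's country."""
+    return ALLOWED_STORAGE.get(record["country"], DEFAULT_STORAGE)
+
+
+print(storage_target({"customer_id": 123, "country": "RU"}))  # s3://orders-ru-central
+```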
+
+
+### Get consent
+
+If you had gotten consent from the UAE customers to move data, you’d probably be ok. Data engineering would still have to pipe the data from Russian customers through a special path, but you could combine the data for training. However, because you didn’t, you’re stuck! Make sure that you know what permissions and authorizations your consent tool includes, so you don’t get in this mess.
+
+
+
+### On-the-fly combination
+
+Assuming it’s too late to do that, another solution is to have a compute platform that loads from different databases at training time, combines the dataset in memory, and trains the model without ever writing any of the data to disk in a single place. The general consensus (NOT LEGAL ADVICE) is that models are not themselves personal data, and thus not subject to these rules. But this takes work and infrastructure, so get your dev-ops hat on.
+
+
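+Here is a minimal sketch of what that might look like, assuming hypothetical regional databases, table names, and columns; the point is that the combined dataset exists only in memory during training, and only the fitted model gets persisted.
+
+```python
+# Minimal sketch: combine regionally-stored data in memory at training time,
+# without persisting the combined dataset anywhere. Connection strings, table
+# names, and columns are all hypothetical placeholders.
+import pandas as pd
+import sqlalchemy
+from sklearn.ensemble import GradientBoostingClassifier
+
+REGIONAL_SOURCES = {
+    "eu": "postgresql://user:pass@eu-db.example.com/orders",
+    "ru": "postgresql://user:pass@ru-db.example.com/orders",
+    "ae": "postgresql://user:pass@uae-db.example.com/orders",
+}
+
+
+def load_region(conn_str: str) -> pd.DataFrame:
+    """Pull only the training columns for one region, into memory only."""
+    engine = sqlalchemy.create_engine(conn_str)
+    return pd.read_sql("SELECT basket_value, num_items, repeat_customer FROM purchases", engine)
+
+
+# Combine in memory; never write the merged frame to disk or object storage.
+train_df = pd.concat([load_region(c) for c in REGIONAL_SOURCES.values()], ignore_index=True)
+
+model = GradientBoostingClassifier()
+model.fit(train_df[["basket_value", "num_items"]], train_df["repeat_customer"])
+
+# The fitted model (generally considered not to be personal data; not legal
+# advice) can then be stored wherever your infrastructure lives.
+```
+
+In practice you would also want to confirm that caches, temp files, and logs on the training machines don’t quietly become a copy of the data in the wrong region.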
+
+
+If you have extremely large data volumes, this can become computationally expensive very fast. If you generate features based on this data but the features still contain interpretable personal data, then you can’t save everything together in one place: you will either need to save de-identified/aggregated features separately, write them back to the original region, or just recalculate them every time on the fly. All of these are tough challenges.
+
+
+
+### De-identify and/or aggregate
+
+Fortunately, there’s another option. Once you have aggregated, summarized, or thoroughly (irreversibly) de-identified the data, it loses the personal data protections and you can work with it more easily. This is a strong incentive not to store identifiable personal data in the first place! (Plus, it reduces your risk of data breaches and being hacked.) Once the data is no longer legally protected because it’s no longer high risk, you can do what you want and carry on with your work, saving the data where you like. Extract non-identifiable features and dispense with identifiable data if you possibly can.
+
+
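+As a sketch of the idea, with a made-up dataframe and an arbitrary group-size threshold, aggregation might look something like this:
+
+```python
+# Sketch: turn individual-level records into aggregates that are far less
+# likely to identify anyone. The dataframe and the threshold are made up.
+import pandas as pd
+
+orders = pd.DataFrame({
+    "country": ["AE", "AE", "RU", "RU", "IN", "IN", "IN"],
+    "age_band": ["25-34", "25-34", "35-44", "35-44", "25-34", "25-34", "45-54"],
+    "order_total": [40.0, 55.0, 30.0, 80.0, 20.0, 25.0, 90.0],
+})
+
+# Aggregate to the group level and drop any group too small to report safely
+# (a crude k-anonymity-style threshold; your legal team sets the real bar).
+MIN_GROUP_SIZE = 2
+aggregated = (
+    orders.groupby(["country", "age_band"])["order_total"]
+    .agg(["count", "mean"])
+    .reset_index()
+)
+aggregated = aggregated[aggregated["count"] >= MIN_GROUP_SIZE]
+print(aggregated)
+```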
+
+
+However, deciding when the data is sufficiently aggregated or de-identified so that localization laws no longer apply can be a tough call, because as I described above, many kinds of demographic data are personal data when, in combination with other datapoints, they could make someone identifiable. We are often accustomed to thinking that once PII (full names, SSNs, etc.) is removed, the data is fine to use as we like. This is not how the law sees it in many jurisdictions! Consult your legal department and be conscientious about what constitutes risk. Ideally, the safest thing is when the data is no longer *personal data*, e.g. not including names, demographics, addresses, phone numbers, and so on at the individual level or in unhashed, human-readable plaintext. THIS IS NOT LEGAL ADVICE. TALK TO YOUR LEGAL DEPARTMENT.
+
+
+
+
+We are very used to being able to take our data wherever we want, manipulate it, run calculations, and then store it on laptops, S3, GCS, or wherever we like. But as we collect more personal data about people, and as more data privacy laws take effect all around the world, we need to be more careful about what we do.
+
+
+
+### FAQ
+
+#### What if you don’t know where the data originated?
+
+
+
+This is a tough situation. If you have some personal data about people and no idea where it came from or where those people were located (and probably also no idea what consent forms they filled out), I think the safe solution is to treat this data as sensitive: de-identify the heck out of it, aggregate it if that works for your use case, and make sure it wouldn’t be considered personal or sensitive data under data privacy laws. But if that’s not an option because of how you need to use the data, then it’s time to talk to lawyers.
+
+
+
+#### What if your company can’t afford datacenters all over the world?
+
+
+
+More or less, this is the same answer. Ideally, you’d get your consent solution in order, but barring that, I’d recommend finding ways to de-identify data immediately upon receiving it from a customer or user. When data comes in from a user, hash that stuff so that it is not reversible, and use that. Be extra cautious about demographics or other sensitive personal data, but definitely de-identify the PII right off. If you never store data that is sensitive or that could potentially be reverse-engineered to identify someone, then you don’t need to worry about localization. THIS IS NOT LEGAL ADVICE. TALK TO YOUR LEGAL DEPARTMENT.
+
+
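+As a rough sketch of that ingestion-time de-identification, here is one way to pseudonymize direct identifiers with a keyed hash from the Python standard library. The field names and key handling are illustrative only, and a keyed hash is just one piece of real de-identification:
+
+```python
+# Sketch: pseudonymize direct identifiers at ingestion with a keyed hash so raw
+# PII is never stored. Field names and key handling are illustrative only.
+import hashlib
+import hmac
+import os
+
+# In practice the key would come from a secrets manager, not an env default.
+PSEUDONYMIZATION_KEY = os.environ.get("PSEUDONYMIZATION_KEY", "change-me").encode()
+
+
+def pseudonymize(value: str) -> str:
+    """Keyed one-way hash; without the key, brute-forcing low-entropy values
+    like emails or phone numbers is much harder than with a bare hash."""
+    return hmac.new(PSEUDONYMIZATION_KEY, value.lower().strip().encode(), hashlib.sha256).hexdigest()
+
+
+def ingest(raw_event: dict) -> dict:
+    """Drop or pseudonymize identifying fields before anything is persisted."""
+    return {
+        "customer_key": pseudonymize(raw_event["email"]),  # stable join key, not raw PII
+        "order_total": raw_event["order_total"],
+        "country": raw_event["country"],  # keep coarse location only, not a full address
+    }
+
+
+print(ingest({"email": "jane@example.com", "order_total": 42.5, "country": "AE"}))
+```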
+
+#### Why do countries make these laws?
+
+
+
+There are a few reasons, some better than others. First, if the data is actually stored in country, then you have something of a business presence there (or your data storage provider does), so it’s a lot easier for them to have jurisdiction to penalize you if you misuse their citizens’ data. Second, this supports economic development of the tech sector in whatever country, because someone needs to provide the power, cooling, staffing, construction, and so on for the data centers. Third, [unfortunately some countries have surveillance regimes on their own citizens](https://www.techpolicy.press/the-human-rights-costs-of-data-localization-around-the-world/), and having data centers in country makes it easier for totalitarian governments to access this data.
+
+
+
+#### What can I do to make this hurt less as a data scientist?
+
+
+
+Plan ahead! Work with your company’s relevant parties to make sure the initial data processing is compliant while still getting you the data you need. And make sure you’re in the loop about the consent that customers are giving, and what permissions it enables. If you still find yourself in possession of data with localization rules, then you need to either find a way to manage this data so that it is never saved to a disk in the wrong location, or de-identify and/or aggregate the data so that it is no longer sensitive and the data privacy regulations no longer apply.
+
+
+
+#### What are some of the major data localization laws I need to know about?
+
+
+
+ Here are some highlights, but this is not comprehensive because there are many such laws and new ones coming along all the time. (Again, none of this is legal advice):
+
+
+
+* **India**: DPDP (Digital Personal Data Protection) is the national data privacy regulation. [This law is not as restrictive as some](https://iapp.org/resources/article/operational-impacts-of-indias-dpdpa-part5/), but individual agencies within the Indian government are permitted to make more restrictive policies about specific kinds of data. The Reserve Bank of India is one example, and they [impose data localization rules](https://m.rbi.org.in/Scripts/FAQView.aspx?Id=130) more restrictive than the national law. Financial companies like American Express have been fined for storing data on Indian financial transactions on servers outside of India.
+* **China**: PIPL is their national data privacy regulation, and the data localization rules are a bit complex. It applies to entities that [“provide products or services to individuals in China” and/or “analyze and assess the conduct of natural persons in China”](https://fpf.org/wp-content/uploads/2022/02/Demystifying-Data-Localization-Report.pdf), so that’s pretty broad. If the data is what the law considers “important” or “information that identifies or can identify natural persons”, then there is a good chance it is subject to data localization. As always, this is not legal advice, and you should ask your legal department.
+* **Russia**: Russia has had data localization laws for some time, and many companies, including Facebook and Twitter, have been fined for violations. [“Article 18(5) of the Data Localization Law requires that Russian and foreign data operators that collect personal data of Russian citizens, including over the internet, initially record, store, arrange, update, and extract that data using Russian databases.”](https://www.morganlewis.com/-/media/files/publication/outside-publication/article/2021/data-localization-laws-russian-federation.pdf) There are more laws that also apply (see link for details). [After the initial collection and storage of the data on a Russian server, the data *can* be transferred elsewhere.](https://assets.kpmg.com/content/dam/kpmg/be/pdf/2018/09/ADV-factsheet-localisation-of-russian-personnal-data-uk-LR.pdf)
+* **Vietnam**: Their [2018 law](https://www.tilleke.com/insights/decree-53-provides-long-awaited-guidance-on-implementation-of-vietnams-cybersecurity-law/) requires that certain data be stored in country for 24 months, [upon request by the government](https://www.trade.gov/market-intelligence/vietnam-cybersecurity-data-localization-requirements). This applies to domestic companies and certain foreign companies in sectors around e-commerce, social networking, and other digital services. In addition, any transfer of data to a third party requires customer consent.
+* **EU** (GDPR): The EU sets some specific rules about certain countries where their citizens’ data cannot be stored (Russia, for example) due to concerns about state surveillance and data privacy.
+* [**UAE**](https://www.pwc.com/m1/en/services/consulting/documents/uae-data-privacy-handbook.pdf): For most data, you must get consent from the subject to transfer their data outside the UAE. In some select cases, this is not sufficient — for example, payment processing data must be kept inside the UAE.
+* **Japan**: [Data subjects must consent](https://withpersona.com/blog/data-residency-laws-international-guide) to their data being transferred out of country, unless the other country is part of a specific data sharing agreement with Japan.
+
+
+
+ There are other potential considerations, such as the size of your company (some places have less restrictive rules for small companies, some don’t), so none of this should be taken as the conclusive answer for your business.
+
+
+
+### Conclusion
+
+If you made it this far, thanks! I know this can get dry, but I’ll reward you with a story. I once worked at a company where we had data localization provisions in contracts (not the law, but another business setting these rules), so any data generated in the EU needed to be in the EU, but we had already set up data storage for North America in the US.
+
+
+
+
+For a variety of reasons, this meant that a new replica database containing just the EU data was created, based in the EU, and we kept these two versions of the entire Snowflake database in parallel. As you may expect, this was a nightmare, because if you created a new table, or changed fields, or basically did anything in the database, you had to remember to duplicate the work on the other one. Naturally, most folks did not remember to do this, so the two databases diverged drastically, to the point where the schemas were significantly different. We all ended up with endless conditional code for queries and extraction jobs so we’d have the right column names, types, table names, etc. depending on which database we were pulling from, and could do “on the fly” combination without saving data to the wrong place. (Don’t even get me started on the duplicate dashboards for BI purposes.) I don’t recommend it!
+
+
+
+
+ These regulations pose a real challenge for data scientists in many sectors, but it’s important to keep up on your legal obligations and protect your work and your company from liabilities. Have you encountered localization challenges? Comment on this article if you’ve found solutions that I didn’t mention.
+
+
+
+### Further Reading
+
+* [What is considered personal data under the EU GDPR? - GDPR.eu](https://gdpr.eu/eu-gdpr-personal-data/)
+* [Decree 53 Provides Long-Awaited Guidance on Implementation of Vietnam's Cybersecurity Law](https://www.tilleke.com/insights/decree-53-provides-long-awaited-guidance-on-implementation-of-vietnams-cybersecurity-law/)
+
+---
+
+
+
+[Data Privacy in AI Development: Data Localization](https://towardsdatascience.com/data-privacy-in-ai-development-data-localization-50df725bfa1c) was originally published in [Towards Data Science](https://towardsdatascience.com) on Medium, where people are continuing the conversation by highlighting and responding to this story.
+
+
+
diff --git a/content/writing/themeaningofexplainabilityforai.md b/content/writing/themeaningofexplainabilityforai.md
new file mode 100644
index 0000000..debb776
--- /dev/null
+++ b/content/writing/themeaningofexplainabilityforai.md
@@ -0,0 +1,215 @@
+---
+date: 2024-06-04
+featured_image: "https://cdn-images-1.medium.com/max/1024/0*J4AEtoSIun9dPlWB"
+tags: ["artificial-intelligence","ethics","machine-learning","xai","editors-pick"]
+title: "The Meaning of Explainability for AI"
+disable_share: false
+---
+
+#### Do we still care about how our machine learning does what it does?
+
+
+
+ Today I want to get a bit philosophical and talk about how explainability and risk intersect in machine learning.
+
+
+
+### What do we mean by Explainability?
+
+
+
+In short, [explainability](https://www.researchgate.net/profile/Kai-Heinrich-3/publication/344357897_White_Grey_Black_Effects_of_XAI_Augmentation_on_the_Confidence_in_AI-based_Decision_Support_Systems/links/5f6ba89392851c14bc922907/White-Grey-Black-Effects-of-XAI-Augmentation-on-the-Confidence-in-AI-based-Decision-Support-Systems.pdf) in machine learning is the idea that you could explain to a human user (not necessarily a technically savvy one) how a model is making its decisions. A decision tree is an example of an easily explainable (sometimes called “white box”) model, where you can point to “the model divides the data between houses whose acreage is more than one or less than or equal to one” and so on. Other kinds of more complex models can be “gray box” or “black box” — increasingly difficult, and eventually impossible, for a human user to understand out of the gate.
+
+
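+To make the “white box” idea concrete, here’s a tiny sketch using scikit-learn on a synthetic housing-style dataset (both the library choice and the data are my own assumptions, not from a specific project) that prints exactly that kind of human-readable split:
+
+```python
+# Tiny sketch: a "white box" decision tree whose learned rules can be printed
+# in plain language. The housing-style dataset is synthetic, for illustration.
+import numpy as np
+from sklearn.tree import DecisionTreeRegressor, export_text
+
+rng = np.random.default_rng(0)
+acreage = rng.uniform(0.1, 3.0, size=200)
+bedrooms = rng.integers(1, 6, size=200)
+price = 100_000 + 80_000 * acreage + 25_000 * bedrooms + rng.normal(0, 10_000, 200)
+
+X = np.column_stack([acreage, bedrooms])
+model = DecisionTreeRegressor(max_depth=2).fit(X, price)
+
+# export_text turns the fitted tree into readable if/else splits,
+# e.g. "acreage <= 1.02", exactly the kind of rule a person can follow.
+print(export_text(model, feature_names=["acreage", "bedrooms"]))
+```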
+
+### The Old School
+
+
+
+A foundational lesson in my machine learning education was always that our relationship to models (which were usually boosted tree style models) should be, at most, “Trust, but verify”. When you train a model, don’t take the initial predictions at face value, but spend some serious time kicking the tires. Test the model’s behavior on very weird outliers, even when they’re unlikely to happen in the wild. Plot the tree itself, if it’s shallow enough. Use techniques like feature importance, Shapley values, and [LIME](https://arxiv.org/abs/1602.04938) to test that the model is making its inferences using features that correspond to your knowledge of the subject matter and logic. Were feature splits in a given tree aligned with what you know about the subject matter? When modeling physical phenomena, you can also compare your model’s behavior with what we know scientifically about how things work. Don’t just trust your model to be approaching the issues the right way, but check.
+
+
+
+
+>
+> Don’t just trust your model to be approaching the issues the right way, but check.
+>
+
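+For anyone who hasn’t done this kind of tire-kicking, here is a rough sketch of what it can look like with the shap and lime packages on a boosted tree model. The dataset and feature names are placeholders, and both packages need to be installed separately:
+
+```python
+# Rough sketch: inspecting a boosted tree model with built-in feature
+# importances, Shapley values, and LIME. Data and feature names are placeholders.
+import shap
+from lime.lime_tabular import LimeTabularExplainer
+from sklearn.datasets import make_classification
+from sklearn.ensemble import GradientBoostingClassifier
+
+X, y = make_classification(n_samples=500, n_features=5, random_state=0)
+feature_names = ["acreage", "bedrooms", "age", "distance_to_city", "lot_width"]
+model = GradientBoostingClassifier().fit(X, y)
+
+# 1. Built-in feature importances: a quick, coarse sanity check.
+print(dict(zip(feature_names, model.feature_importances_)))
+
+# 2. Shapley values: per-prediction attribution of each feature's contribution.
+explainer = shap.TreeExplainer(model)
+shap_values = explainer.shap_values(X[:10])
+print(shap_values[0])  # contributions for the first example
+
+# 3. LIME: fit a simple local surrogate model around a single prediction.
+lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, class_names=["no", "yes"])
+explanation = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=3)
+print(explanation.as_list())
+```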
+
+### Enter Neural Networks
+
+
+
+ As the relevance of neural networks has exploded, the biggest tradeoff that we have had to consider is that this kind of explainability becomes incredibly difficult, and changes significantly, because of the way the architecture works.
+
+
+
+
+ Neural network models apply functions to the input data at each intermediate layer, mutating the data in myriad ways before finally passing data back out to the target values in the final layer. The effect of this is that, unlike splits of a tree based model, the intermediate layers between input and output are frequently not reasonably human interpretable. You may be able to find a specific node in some intermediate layer and look at how its value influences the output, but linking this back to real, concrete inputs that a human can understand will usually fail because of how abstracted the layers of even a simple NN are.
+
+
+
+
+ This is easily illustrated by the “husky vs wolf” problem. A convolutional neural network was trained to distinguish between photos of huskies and wolves, but upon investigation, it was discovered that the model was making choices based on the color of the background. Training photos of huskies were less likely to be in snowy settings than wolves, so any time the model received an image with a snowy background, it predicted a wolf would be present. The model was using information that the humans involved had not thought about, and developed its internal logic based on the wrong characteristics.
+
+
+
+
+ This means that the traditional tests of “is this model ‘thinking’ about the problem in a way that aligns with physical or intuited reality?” become obsolete. We can’t tell how the model is making its choices in that same way, but instead we end up relying more on trial-and-error approaches. There are systematic experimental strategies for this, essentially testing a model against many counterfactuals to determine what kinds and degrees of variation in an input will produce changes in an output, but this is necessarily arduous and compute intensive.
+
+
+
+
+>
+> We can’t tell how the model is making its choices in that same way, but instead we end up relying more on trial-and-error approaches.
+>
+
+
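+A very simplified sketch of that trial-and-error probing: perturb one feature at a time and watch how a trained network’s output moves. The model and data here are stand-ins, not any particular production system.
+
+```python
+# Simplified sketch of counterfactual-style probing: vary one feature at a time
+# and observe how the model's output responds. Model and data are stand-ins.
+import numpy as np
+from sklearn.datasets import make_classification
+from sklearn.neural_network import MLPClassifier
+
+X, y = make_classification(n_samples=1000, n_features=4, random_state=1)
+model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=1).fit(X, y)
+
+baseline = X[0].copy()
+base_prob = model.predict_proba([baseline])[0, 1]
+
+# Sweep each feature across a range of offsets, holding the others fixed,
+# to see which perturbations actually move the prediction.
+for feature_idx in range(X.shape[1]):
+    for delta in (-2.0, -1.0, 1.0, 2.0):
+        counterfactual = baseline.copy()
+        counterfactual[feature_idx] += delta
+        prob = model.predict_proba([counterfactual])[0, 1]
+        print(f"feature {feature_idx}, delta {delta:+.1f}: "
+              f"prediction moved by {prob - base_prob:+.3f}")
+```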
+
+I don’t mean to argue that efforts to understand, at least in part, how neural networks do what they do are hopeless. Many scholars are very interested in [explainable AI, known as XAI in the literature](https://arxiv.org/pdf/2404.09554). The variety of models available today means that there are many approaches that we can and should pursue. Attention mechanisms are one technological advancement that helps us understand what parts of an input the model is paying closest attention to, or being driven by, which can be helpful. [Anthropic just released a very interesting report digging into interpretability for Claude, attempting to understand what words, phrases, or images spark the strongest activation for LLMs depending on the prompts, using sparse autoencoders.](https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html) Tools I described above, [including Shapley](https://skirene.medium.com/demystifying-neural-nets-with-shapley-values-cca29c836089) [and LIME](https://github.com/marcotcr/lime/blob/master/doc/notebooks/Tutorial%20-%20images%20-%20Pytorch.ipynb), can be applied to some varieties of neural networks too, such as CNNs, although the results can be challenging to interpret. But the more we add complexity, by definition, the harder it will be for a human viewer or user to understand and interpret how the model is working.
+
+
+
+### Considering Randomness
+
+
+
+An additional element that is important here is to recognize that many neural networks incorporate randomness, so you can’t always rely on the model to return the same output when it sees the same input. In particular, generative AI models may intentionally generate different outputs from the same input, so that they seem more “human” or creative — we can increase or decrease the extremity of this variation by [tuning the “temperature”](https://medium.com/@harshit158/softmax-temperature-5492e4007f71#:~:text=Temperature%20is%20a%20hyperparameter%20of%20LSTMs%20%28and%20neural%20networks%20generally,the%20logits%20before%20applying%20softmax.). This means that sometimes our model will choose to return not the most probabilistically desirable output, but something “surprising”, which enhances the creativity of the results.
+
+
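+To see what temperature does mechanically, here is a tiny sketch of temperature-scaled softmax over some made-up logits: higher temperature flattens the distribution so “surprising” tokens get sampled more often, while lower temperature sharpens it.
+
+```python
+# Tiny sketch: temperature-scaled softmax over made-up logits. Higher
+# temperature flattens the distribution (more surprising choices); lower
+# temperature sharpens it (more deterministic choices).
+import numpy as np
+
+
+def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
+    scaled = logits / temperature
+    exp = np.exp(scaled - scaled.max())  # subtract the max for numerical stability
+    return exp / exp.sum()
+
+
+logits = np.array([2.0, 1.0, 0.5, 0.1])  # hypothetical scores for four candidate tokens
+
+for temperature in (0.5, 1.0, 2.0):
+    print(temperature, softmax_with_temperature(logits, temperature).round(3))
+```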
+
+
+ In these circumstances, we can still do some amount of the trial-and-error approach to try and develop our understanding of what the model is doing and why, but it becomes exponentially more complex. Instead of the only change to the equation being a different input, now we have changes in the input plus an unknown variability due to randomness. Did your change of input change the response, or was that the result of randomness? It’s often impossible to truly know.
+
+
+
+
+>
+> Did your change of input change the response, or was that the result of randomness?
+>
+
+
+### Real World Implications
+
+
+
+ So, where does this leave us? Why do we want to know how the model did its inference in the first place? Why does that matter to us as machine learning developers and users of models?
+
+
+
+
+ If we build machine learning that will help us make choices and shape people’s behaviors, then the accountability for results needs to fall on us. Sometimes model predictions go through a human mediator before they are applied to our world, but increasingly we’re seeing models being set loose and inferences in production being used with no further review. The general public has more unmediated access to machine learning models of huge complexity than ever before.
+
+
+
+
+ To me, therefore, understanding how and why the model does what it does is due diligence just like testing to make sure a manufactured toy doesn’t have lead paint on it, or a piece of machinery won’t snap under normal use and break someone’s hand. It’s a lot harder to test that, but ensuring I’m not releasing a product into the world that makes life worse is a moral stance I’m committed to. If you are building a machine learning model, you are responsible for what that model does and what effect that model has on people and the world. As a result, to feel really confident that your model is safe to use, you need some level of understanding about how and why it returns the outputs it does.
+
+
+
+
+>
+> If you are building a machine learning model, you are responsible for what that model does and what effect that model has on people and the world.
+>
+
+
+
+As an aside, readers might remember from [my article about the EU AI Act](https://medium.com/towards-data-science/uncovering-the-eu-ai-act-22b10f946174) that there are requirements that model predictions be subject to human oversight and that they not make decisions with discriminatory effect based on protected characteristics. So even if you don’t feel compelled by the moral argument, for many of us there is a legal motivation as well.
+
+
+
+
+ Even when we use neural networks, we can still use tools to better understand how our model is making choices — we just need to take the time and do the work to get there.
+
+
+
+### But, Progress?
+
+
+
+Philosophically, we could (and people do) argue that advancements in machine learning past a basic level of sophistication require giving up our desire to understand it all. This may be true! But we shouldn’t ignore the tradeoffs this creates and the risks we accept. Best case, your generative AI model will mainly do what you expect (perhaps if you keep the temperature in check, and your model is very uncreative) and not do a whole lot of unexpected stuff; worst case, you unleash a disaster because the model reacts in ways you had no idea could happen. This could mean you look silly, or it could mean the end of your business, or it could mean real physical harm to people. When you accept that model explainability is unachievable, these are the kinds of risks you are taking on your own shoulders. You can’t say “oh, models gonna model” when you built this thing and made the conscious decision to release it or use its predictions.
+
+
+
+
+ Various tech companies both large and small have accepted that generative AI will sometimes produce incorrect, dangerous, discriminatory, and otherwise harmful results, and decided that this is worth it for the perceived benefits — we know this because generative AI models that routinely behave in undesirable ways have been released to the general public. Personally, it bothers me that the tech industry has chosen, without any clear consideration or conversation, to subject the public to that kind of risk, but the genie is out of the bottle.
+
+
+
+### Now what?
+
+
+
+ To me, it seems like pursuing XAI and trying to get it up to speed with the advancement of generative AI is a noble goal, but I don’t think we’re going to see a point where most people can easily understand how these models do what they do, just because the architectures are so complicated and challenging. As a result, I think we also need to implement risk mitigation, ensuring that those responsible for the increasingly sophisticated models that are affecting our lives on a daily basis are accountable for these products and their safety. Because the outcomes are so often unpredictable, we need frameworks to protect our communities from the worst case scenarios.
+
+
+
+
+ We shouldn’t regard all risk as untenable, but we need to be clear-eyed about the fact that risk exists, and that the challenges of explainability for the cutting edge of AI mean that risk of machine learning is harder to measure and anticipate than ever before. The only responsible choice is to balance this risk against the real benefits these models generate (not taking as a given the projected or promised benefits of some future version), and make thoughtful decisions accordingly.
+
+
+
+
+Read more of my work at [www.stephaniekirmer.com](http://www.stephaniekirmer.com).
+
+
+
+### Further Reading
+
+
+* [Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet](https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html) (May 21, 2024; Anthropic team)
+* [Explainable Generative AI: A Survey, Conceptualization, and Research Agenda](https://arxiv.org/pdf/2404.09554) (April 15, 2024; Johannes Schneider) — this one is a really accessible read, I recommend it.
+* [An analysis of explainability methods for convolutional neural networks](https://www.sciencedirect.com/science/article/pii/S0952197622005966) (January 2023; Von der Haar et al)
+* [Explainable Convolutional Neural Networks: A Taxonomy, Review, and Future Directions](https://dl.acm.org/doi/full/10.1145/3563691) (Feb 2, 2023; Ibrahim et al)
+* [Google’s AI tells users to add glue to their pizza, eat rocks and make chlorine gas](https://www.livescience.com/technology/artificial-intelligence/googles-ai-tells-users-to-add-glue-to-their-pizza-eat-rocks-and-make-chlorine-gas) (May 23, 2024)
+
+
+
+
+
+---
+
+
+
+[The Meaning of Explainability for AI](https://towardsdatascience.com/the-meaning-of-explainability-for-ai-d8ae809c97fa) was originally published in [Towards Data Science](https://towardsdatascience.com) on Medium, where people are continuing the conversation by highlighting and responding to this story.
+
+
+