In order to evaluate how well we expect FLoC to be able to prevent leaks of “sensitive” categories, we first need to agree on our definition of sensitive.
Are we defining sensitive as any category of information that advertisers are legally forbidden from using, either by their own policy commitments or by government regulations? As an example, see the categories considered sensitive by Google AdSense, or government regulation on medical advertisements and advertising to children. Under this constrained definition, this seems like a difficult problem to solve. Even AdSense is not yet able to guarantee the classification of ads that fall into these categories; the AdSense page on the topic includes the disclaimer: “Our system classifies ads automatically [...]. Our technology will make its best attempt to filter ads from the categories above; however, we don't guarantee that it will block every related ad.” If we’re not able to guarantee that we can filter out creatives that fall into these categories, why do we expect to be able to successfully filter out any portion of a user’s browsing history that directly reveals, or is a proxy for, a sensitive category?
Moreover, I believe most users would define “sensitive” to cover any information they would feel uncomfortable sharing, or any information whose exposure endangers them. If we agree that this is the definition we should be using, then ensuring that FLoC protects “sensitive categories” is impossible.
Information that users are uncomfortable sharing will vary between individuals. For example, I might be fine with sharing my income level, but another user may not. The context of the sharing also matters: I may be willing to share information about whether I have an interest in racing cars with local car dealerships, but not with potential car insurance advertisers.
Determining which information may endanger an individual, and filtering it out, also seems impossible. For example, let’s say we allow age range to be captured in FLoC (and filter out children). Are we sure advertisers aren’t targeting scam ads to senior citizens? As another example, let’s say a user in Hong Kong ends up in a flock shared by many users who have participated in protests; it’s not inconceivable that those participating in the protests share some common interests that are distinct from those not participating. Are we confident that simply being a member of that flock won’t be abused by the Chinese media, given that they’re allegedly already abusing other forms of advertising?
englehardt changed the title from "This specification should define what is meant by a 'sensitive category'" to "This proposal should define what is meant by a 'sensitive category'" on Sep 4, 2019.
Hello @englehardt, thanks for posing this question; it's a good one, and I think the summary of the problem in your first paragraph makes a really strong point (if AdSense can't do it, why should we think we can?). My intuition says we should be prepared to give the same disclaimer. But thinking about this a little deeper, a few alternatives come to mind. Perhaps the browser, by which I mean Chrome, could maintain a category of FLoCs marked "sensitive" that are not targetable; for example, the "Children" or "Diabetes" FLoCs could not be targeted. Alternatively, the browser and underlying ML could decide not to build sensitive FLoCs at all. A rough sketch of the first idea follows.
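To make that first alternative concrete, here is a minimal TypeScript sketch of a browser-side deny list for sensitive cohorts. Everything in it is hypothetical: the cohort IDs, `SENSITIVE_COHORTS`, `getCohortForUser`, and `interestCohort` are illustrative stand-ins, not the actual FLoC API or Chrome's implementation.

```typescript
// Hypothetical sketch: the browser refuses to expose cohorts it has tagged as
// sensitive. All names here are illustrative, not part of any shipped FLoC API.

// Cohort IDs the browser (or its ML pipeline) has flagged as sensitive,
// e.g. cohorts dominated by "Children" or "Diabetes" browsing activity.
const SENSITIVE_COHORTS: Set<string> = new Set(["cohort-1137", "cohort-2042"]);

interface CohortResult {
  id: string;
  version: string;
}

// Stand-in for the browser's local clustering over browsing history.
function getCohortForUser(): CohortResult {
  return { id: "cohort-1137", version: "chrome.1.0" };
}

// The surface a site would call. Sensitive cohorts are never exposed; the call
// fails the same way it would for a user who has opted out, so a site cannot
// distinguish "sensitive" from "unavailable".
async function interestCohort(): Promise<CohortResult> {
  const cohort = getCohortForUser();
  if (SENSITIVE_COHORTS.has(cohort.id)) {
    throw new DOMException("Cohort unavailable", "InvalidStateError");
  }
  return cohort;
}
```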
The harder problem to solve is how to handle users' preferences about what each of us considers sensitive, income for example. I think your definition is a good one, but I do believe FLoC can respect it and still accommodate sensitive categories. Take income: I am going to take the position that income is not a sensitive FLoC. I am also not comfortable sharing my income, so I don't. But if the browser wants to infer my income and put me in a FLoC, I see no violation of my privacy.
I think we should allow deterministic FLoCs, where people declare their interests (or income, in our example); if I don't feel comfortable sharing my income, I should never wind up in that FLoC. And, alternatively, probabilistic FLoCs, where the browser can infer my interests or income and put me in whatever FLoC it thinks I belong in. A sketch of that split is below.
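As a rough illustration of the deterministic/probabilistic split, the sketch below contrasts declared, opt-in assignment with inferred assignment. All names and the clustering stand-in are hypothetical, not drawn from the FLoC proposal.

```typescript
// Hypothetical sketch: deterministic (declared) vs. probabilistic (inferred)
// cohort assignment. Nothing here mirrors an actual FLoC interface.

type CohortSource = "declared" | "inferred";

interface CohortAssignment {
  cohortId: string;
  source: CohortSource;
}

// Deterministic: the user explicitly opts in to a cohort (e.g. an income band).
// A user who never declares the attribute can never land in that cohort.
function assignDeclaredCohorts(declaredInterests: string[]): CohortAssignment[] {
  return declaredInterests.map((interest) => ({
    cohortId: `declared:${interest}`,
    source: "declared" as const,
  }));
}

// Probabilistic: the browser infers a cohort from local browsing-history
// features and only ever reports the inferred cohort, never the history itself.
function assignInferredCohort(historyFeatures: number[]): CohortAssignment {
  // Stand-in for the browser's clustering / ML step.
  const bucket = historyFeatures.reduce((sum, f) => sum + f, 0) % 1000;
  return { cohortId: `inferred:${bucket}`, source: "inferred" };
}
```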
Finally, I think FLoCs become much less frightening once we define them: transient actions like participation in a protest probably shouldn't be a FLoC, whereas "Running Enthusiast" should. Thanks again for posting the question.