Facebook Whistle-Blower

What I get from the Facebook whistle-blower’s interview and testimony is that social media has the same problem I see with commercialized news. To make money and secure “customers”, news agencies have basically taken a side. And they tailor their news toward engaging their target demo — get people outraged, reinforce how right they are in their opinions. It’s the adrenaline-junkie approach to news.

And social media does the exact same thing. The platform isn’t there to make you a more informed citizen, give you access to awesome new sewing patterns or soap recipes, or make sure you know about the PTO’s next meeting. The platform exists to suck up info about you and deliver ads to you based on that data. The thing about AI is that it performs better as it gets more data. Someone who logs in once a week and checks out a few things is a “bad customer” in much the same way that someone who pays off their credit card bill each month is a bad customer: not doing anything wrong, but losing money for the company that provides the service. They want users who are checking the platform every hour. They want people they know will come back tomorrow, people who will share things with other people so the algorithms can build out connections. The way to achieve those goals is to get people fired up about what they’re reading. Commercialized news being spread via social media algorithms is a compounded problem.

Senate Facebook Hearings

The hearing today reminds me of digital discovery pre-Zubulake – a bunch of folks who I suspect might be investigating edgy technologies to ditch cuneiform script making rulings on how search-and-seizure case law applies to electronic data. Not terribly encouraging that they intend to draft legislation controlling … what? Digital privacy in general? Social media platforms? Here’s hoping a good number of Congresspersons take Scheindlin’s initiative to educate themselves about that on which they seek to rule.

Something that stands out to me is how much of the platform’s operations, litigation, and regulation Zuckerberg claims not to know anything about. I get not wanting to provide an answer that looks bad for your company, not wanting to provide inaccurate information in a Congressional hearing … but I expected they would have come up with a more reasonable boilerplate fob-off answer than, essentially, “I don’t know about that stuff.”

The antitrust thread is an interesting path to go down, although I doubt Graham will follow it. Shame, too. I had great hopes for Google+ — backed by a company with enough money to compete, enhancing Google’s current ad platform, and with the idea of circles to provide granular control over who can see what. An idea that would have vastly limited the impact here: in Google+, I could avoid sharing a lot of personal information with vague acquaintances and distant family members. Heck, close family too, if they’re the types who are always downloading rubbish and infecting their computers.

Consumerism and advertising are a priori accepted as good things. Not shocking, considering the way of American society, but it really stood out to me throughout the testimony that no one questions the benefit of having stuff more effectively marketed, of having ads that are more apt to result in a sale. They’ve spent enormous sums of money and dedicated incredible human capital to delivering an ad that is more likely to show a shirt I like. Why is that a good thing? I have clothes. If I needed more, I would either go to a store or search online. I understand why a business wants to sell me a shirt … but how is more effectively separating me from my earnings a personal boon?

And the American public is having a good self-education week. There’s interest in taint teams from Cohen yesterday, and today we’re understanding the actual business model of large tech companies — the nuance between “selling my data” and “using my data to build advertising profiles and selling the service of presenting ads based on those profiles”. Back when the ISPs wanted to be able to commoditize web history, I encountered a lot of uproar about literally selling someone’s browsing history. Which – and no offense meant – your browsing history? Not a thrilling read. The actual business is taking your browsing history, turning it into profiles, and then using those profiles to sell ad-presentation services to customers. Objecting to “selling my data” provides a straw man for the companies to tear down (as Zuckerberg did several times with “we don’t do that”).

Hopefully people are gaining a more complete understanding of what information is available through the “Facebook Platform” … and that you are trusting not just Facebook but the other company to act in good faith regarding your privacy. When the ToS says they may sell data or analytics to a third party … well, they may well do that. What does that third party do with the data? How much control can you, Facebook, or the app developer exert over the data sold to the third party? Not a whole lot.

Finally – the bigger question that doesn’t get asked … how can Americans insulate themselves from having personal information used to foment discontent? How can we get better at analyzing “news” and identifying real fake news? Not Trump-style FAKE NEWS, which basically means “something I don’t like hearing”, but actual disinformation.

On Cambridge Analytica

A friend of a friend said she doesn’t mind her personality profile being tracked so FB can suggest things she likes. Why does everyone think it is so bad, she asks, when she’s stumbled upon many gems – web series, shopping sites, particular products – that she highly enjoys? Well, I have two reasons.

Firstly, some people are making a tactical decision to trade personal information for access to technology platforms they enjoy. There are a handful of people I knew in Uni who I thought were wonderful people but just lost track of over the years, and it’s nice to reconnect with them. There are special-interest groups for vegans, 3D printing enthusiasts, sewists, soap makers, and chicken owners that provide a lot of useful information to me. As an informed decision to share some basic demographic information & whatever FB can glean from my random musings in exchange for communicating with old friends and interest-based communities … I *don’t* think it is a bad deal (or I would not have an account). A heap of people making something other than an informed, tactical decision, though, isn’t exactly in my “good” column. And a third party having information about me because a friend downloaded an app – even though I have the platform ‘stuff’ disabled on my own FB account – contravenes my specifically selected privacy settings. It feels like a violation of my trust.

More generally, I don’t care for psyops tactics trying to separate me from my money (or, in this case, my vote serving my real interests). That’s what all these data analytics seem like to me. I opt out of interest-based ads on my computer and cell phone. New companies come online, and things I’ve thought about buying and decided against once again start stalking me across the Internet. And, yeah, I’ve discovered products that actually INTERESTED me (not always; advertising steaks to a vegetarian is a major profiling fail). But I don’t need, nor do I particularly WANT, to spend more money on ‘stuff’. If I have an obvious need for something in my life, I either make something myself or research product options.

I’m not a huge fan of Pinterest for a similar reason – I have a large backlog of projects I want to make. I *really* don’t need an algorithm to look at my projects and suggest additional ones I may like. Yeah, I *do* like them. Until my time machine comes online, I’ve only got so many hours to spread out between family, work, friends, caped crusading, hobbies, research. And I’m quite adept at finding *new* projects when I’ve got some spare time or have a particular need.

I see interest-based advertising – online, mailing, any source – the same way I think about toys in the cereal aisle at the supermarket. I don’t object to toys on principle. I object to placing them in a location my kid is going to see, because young kids (the target demo, based on the toys available) are prone to public screaming fits when they don’t get their way. And $2 to avoid an unpleasant and stressful situation doesn’t seem too awful when you’re already tired and just want to GET HOME. When the yarn I already decided wasn’t worth it (or the whole project I decided against) keeps reappearing, being asked to continually reassess that decision is an attempt to reach me at a time when I’m less prone to make rational decisions.

So while “bad” isn’t the word I’d elect to use … it’s the same kind of underhanded as piping O2 into an intentionally windowless casino to keep gamblers playing longer. Or maybe it is bad, because the other example I think of is chemically engineering cigarettes and processed food to be more addictive.

Facebook’s Offensive Advertising Profiles

As a programmer, I assumed Facebook used some sort of statistical analysis to generate advertising categories based on user input rather than employing a marketing group. A statistical analysis of the phrases being typed generally produces an accurate reflection of what people are talking about, although I’ve encountered situations where their code does not appropriately weight adjectives (FB thought I was a Trump supporter because “incompetent”, “misogynist”, “unqualified”, etc. didn’t clue them in to my real beliefs). But I don’t think the listings causing an uproar this week were factually wrong.
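
To make that adjective-weighting failure concrete, here is a minimal sketch of the kind of profiler I’m assuming: tally topic mentions, ignore everything around them. Every name and category below is hypothetical; this is obviously not Facebook’s actual code.

```python
# Hypothetical sketch: a profiler that counts topic mentions without
# weighting the adjectives around them. Not Facebook's actual code.
from collections import Counter

TRACKED_TOPICS = {"trump", "sewing", "soap"}  # made-up category seeds

def interest_profile(posts: list[str]) -> Counter:
    """Tally raw topic mentions; sentiment words are simply discarded."""
    counts = Counter()
    for post in posts:
        for word in post.lower().split():
            token = word.strip(".,!?\"'")
            if token in TRACKED_TOPICS:
                counts[token] += 1
    return counts

posts = ["Trump is an incompetent, unqualified misogynist.",
         "Cannot believe what Trump said today."]
print(interest_profile(posts))  # Counter({'trump': 2}) ... "Trump supporter"?
```

Two mentions, zero comprehension: the count is accurate, the inference is not.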
 
Sure, the market segment name is offensive; but computers don’t natively identify human offense. I used to manage the spam filtering platform for a large company (back before hourly anti-spam definition updates were a thing), and we would get e-mails hawking \/|@GR@! It is impossible to write out every iteration of every potentially offensive string. As such, there isn’t a simple list of word combinations that shouldn’t appear in your marketing profiles. It would be quite limiting to avoid ‘kill’ or ‘hate’ in profiles, too: a group of people who hate vegetables is a viable target market, as are those who make killer mods to their cars.
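
As a toy illustration of that arms race, assuming a made-up substitution map and block list: normalizing common character swaps catches one variant and immediately misses the next.

```python
# Toy illustration of obfuscation normalization in a spam filter.
# The substitution map and block list are assumptions, not a real filter.
LEET_MAP = str.maketrans({"@": "a", "!": "i", "1": "i", "0": "o",
                          "3": "e", "$": "s"})
BLOCKED = {"viagra"}

def looks_blocked(subject: str) -> bool:
    normalized = subject.lower().replace("\\/", "v").replace("|", "i")
    normalized = normalized.translate(LEET_MAP)
    return any(word in normalized for word in BLOCKED)

print(looks_blocked("\\/|@GR@!"))    # True: catches this variant...
print(looks_blocked("v-i-a-g-r-a"))  # False: ...but not the next one
```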
 
FB’s failing, from a development standpoint, is not having a sufficiently robust set of heuristic principles against which target demos are analyzed for non-publication. They may have assumed the list would be self-pruning: no company is going to buy ads to target “kill all women”, so any advertising string that receives under some threshold of buys in a delta-time gets dropped. Lazy, but I’m a lazy programmer and could *totally* see myself going down that path, and spinning it as the most efficient mechanism at that. To me, this is the difference between a computer science major and an information sciences major. Computer science is about perfecting the algorithm to build categories from user input and optimizing the results by mining purchase data to determine which categories are worth retaining. Information science teaches you to consider the business impact of customers seeing the categories which emerge from user input.
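
For what it’s worth, that lazy self-pruning would be just a few lines; the thresholds and data structures below are my guesses, not anything FB has described.

```python
# Guessed-at "self-pruning" heuristic: drop any auto-generated ad
# category with fewer than MIN_BUYS purchases in the trailing window.
import time

MIN_BUYS = 5                     # hypothetical threshold
WINDOW_SECONDS = 30 * 24 * 3600  # hypothetical 30-day delta-time

def prune_categories(categories: dict[str, list[float]]) -> dict[str, list[float]]:
    """Keep a category only if it has enough recent buy timestamps."""
    cutoff = time.time() - WINDOW_SECONDS
    return {name: buys for name, buys in categories.items()
            if sum(1 for t in buys if t >= cutoff) >= MIN_BUYS}

now = time.time()
categories = {"people who hate vegetables": [now - 86400] * 12,  # viable market
              "kill all women": []}                              # zero buys
print(list(prune_categories(categories)))  # offensive bin eventually drops out
```

The catch, of course, is “eventually”: the offensive category sits in the ad-buying interface for the whole delta-time before the pruning pass removes it.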
 
There are ad demos for all sorts of other offensive groups, so it isn’t like the algorithm unfairly targeted a specific group. Facebook makes money from selling advertisements to companies based on what FB users talk about. It isn’t a specific attempt to profit by advertising to hate groups; it’s an attempt to profit by dynamically creating marketing demographic categories and sorting people into their bins.

This isn’t limited to Facebook, either – in any scenario where it is possible to make money but costs nothing to create entries for sale, someone will write an algorithm to create passive income. Why WOULDN’T they? You can sell shirts on Amazon. Amazon’s Marketplace Web Service allows resellers to automate product listings. Write some custom code to insert random (adjectives | nouns | verbs) into a template string, then throw together a PNG of the logo superimposed on a product. Hook into a production facility with an ordering API so the item is only made once it has been purchased, and you’ve got passive income. And people did. I’m sure some were wary programmers – a sufficiently paranoid person might even have a human approve the new list of phrases. Someone less paranoid might make a banned word list (or even source the words from a dictionary and check each word’s definition for banned terms too). But a poorly conceived implementation will just glom words together and assume something stupid/offensive just won’t sell. Works that way sometimes. Bad publicity sinks the company other times.
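
Here is roughly what I imagine the “less paranoid” version of that pipeline looks like, banned-word screen bolted on. Every list, template, and function is invented for illustration; this is not Amazon’s actual MWS interface.

```python
# Invented auto-listing pipeline: glom words into a template, screen the
# result against a banned-word list before publishing. Illustration only.
import itertools

ADJECTIVES = ["killer", "cozy", "vintage"]
NOUNS = ["grandma", "programmer", "chicken"]
TEMPLATE = "World's Best {adj} {noun}"
BANNED = {"kill", "hate"}  # naive substring match

def generate_listings() -> list[str]:
    listings = []
    for adj, noun in itertools.product(ADJECTIVES, NOUNS):
        phrase = TEMPLATE.format(adj=adj, noun=noun)
        if any(bad in phrase.lower() for bad in BANNED):
            continue  # screened out, but only for words we predicted
        listings.append(phrase)
    return listings

print(generate_listings())  # every "killer" shirt never makes it to market
```

Notice the screen over-blocks in exactly the way described earlier: the substring match that would stop “kill all women” also drops the perfectly saleable “killer grandma” shirt, which is why a banned-word list alone is a blunt instrument.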
 
The only thing that really offends me about this story is that unpleasant people are partaking in unpleasant conversations. Which isn’t news, nor is it really FB’s fault beyond creating a platform to facilitate the discussion. Possibly some unpleasant companies are targeting their ads to these individuals … although that’s not entirely FB’s fault either. Buy an ad in Breitbart and you can target a bunch of white supremacists too. Not creating a marketing demographic for them doesn’t make the belief disappear.