52 things learned in 2017

One of my favorite end-of-the-year lists last year was Tom Whitwell’s 52 things I learned in 2016. An item from that list:

Instead of batteries, the ARES project in Nevada uses a network of train tracks, a hillside and electric trains loaded with rocks to store wind and solar power. When there is a surplus of energy, the trains drive up the tracks. When output falls, the cars roll back down the hill, their electric motors acting as generators.

Whitwell’s list for 2017 is similarly interesting:

In Silicon Valley, startups that result in a successful exit have an average founding age of 47 years. [Joshua Gans]

“Artificial intelligence systems pretending to be female are often subjected to the same sorts of online harassment as women.” [Jacqueline Feldman]

Dana Lewis from Alabama built herself an artificial pancreas from off-the-shelf parts. Her design is open source, so people with diabetes can hack together solutions more quickly than drug companies. [Lee Roop]

Amazon Echo can be useful for people suffering from Alzheimer's: “I can ask Alexa anything and I get the answer instantly. And I can ask it what day it is twenty times a day and I will still get the same correct answer.” [Rick Phelps]

China opens around 50 high bridges each year. The entire rest of the world opens ten. [Chris Buckley]

Men travelling first class tend to weigh more than those in economy, while for women the reverse is true. [Lucy Hooker]

Facebook employs a dozen people to delete abuse and spam from Mark Zuckerberg’s Facebook page. [Sarah Frier]

Tags: lists, Tom Whitwell

Reaction GIFs and digital blackface

In the latest installment of the newish video series Internetting with Amanda Hess, Hess discusses The White Internet’s Love Affair with Digital Blackface. From Teen Vogue, an explanation of digital blackface by Lauren Michelle Jackson:

Adore or despise them, GIFs are integral to the social experience of the Internet. Thanks to a range of buttons, apps, and keyboards, saying “it me” without words is easier than ever. But even a casual observer of GIFing would notice that, as with much of online culture, black people appear at the center of it all. Or images of black people, at least. The Real Housewives of Atlanta, Oprah, Whitney Houston, Mariah Carey, NBA players, Tiffany Pollard, Kid Fury, and many, many other known and anonymous black likenesses dominate day-to-day feeds, even outside online black communities. Similar to the idea that “Black Vine is simply Vine,” as Jeff Ihaza determined in The Awl, black reaction GIFs have become so widespread that they’ve practically become synonymous with just reaction GIFs.

If you’ve never heard of the term before, “digital blackface” is used to describe various types of minstrel performance that become available in cyberspace. Blackface minstrelsy is a theatrical tradition dating back to the early 19th century, in which performers “blacken” themselves up with costume and behaviors to act as black caricatures. The performances put society’s most racist sensibilities on display and in turn fed them back to audiences to intensify these feelings and disperse them across culture. Many of our most beloved entertainment genres owe at least part of themselves to the minstrel stage, including vaudeville, film, and cartoons. While often associated with Jim Crow-era racism, the tenets of minstrel performance remain alive today in television, movies, music and, in its most advanced iteration, on the Internet.

Tags: Amanda Hess, language, Lauren Michelle Jackson, racism, video

NoScript 10.1.1 Quantum Powerball Finish... and Rebooting

[Image: noscript-quantum.jpg]

v 10.1.1
=============================================================
+ First pure WebExtension release
+ CSP-based first-party script blocking
+ Active content blocking with DEFAULT, TRUSTED, UNTRUSTED
  and CUSTOM (per site) presets
+ Extremely responsive XSS filter leveraging the asynchronous
  webRequest API
+ On-the-fly cross-site requests whitelisting

Thanks to the Mozilla WebExtensions team, and especially to Andy, Kris and Luca, for providing the best Browser Extensions API available on any current browser, and most importantly for the awesome tools around it (like the Add-on debugger).

Thanks to the OTF and to all the users who supported and are supporting this effort financially, morally and otherwise.

Coming soon, in the next few weeks: ClearClick, ABE and a public code repository on Github.

Did I say that we've got a chance to reshape the user experience for the better after more than a dozen years of "Classic" NoScript?
Make your craziest ideas rain, please.

Long Live Firefox Quantum, long live NoScript Quantum.

Update

Just gave a cursory look at the comments before getting some hours of sleep:

  • Temporary allow is still there, one click away: just toggle the clock inside the chosen preset button.
  • For HTTPS sites the base domain is selected by default with cascading, while for non-secure sites the default match is the full address.
  • For domain matching you can decide if only secure sites are matched by clicking on the lock icon.
  • You can tweak your "on the fly" choices in the Options tab by searching and entering base domains, full domains or full addresses in the text box, then customizing the permissions of each.

Next to come (already implemented in the backend, working on the UI): contextual permissions (e.g. "Trust facebook.net on facebook.com only").
And yes, as soon as I get a proper sleep refill, I need to refresh those 12-year-old instructions and screenshots. I know I've said it a lot already, but please keep being patient. Thank you so much!

Update 2

Thanks for reporting the Private Browsing Window bug; I'm gonna fix it ASAP.

Update 3

Continues here...


How good should we expect decisions to be?

A statement I commonly hear in tech-utopian circles is that some seeming inefficiency can’t actually be inefficient because the market is efficient and inefficiencies will quickly be eliminated. A contentious example of this is the claim that companies can’t be discriminating because the market is too competitive to tolerate discrimination. A less contentious example is that when you see a big company doing something that seems bizarrely inefficient, maybe it’s not inefficient and you just lack the information necessary to understand why the decision was efficient.

New – Interactive AWS Cost Explorer API

We launched the AWS Cost Explorer a couple of years ago in order to allow you to track, allocate, and manage your AWS costs. The response to that launch, and to additions that we have made since then, has been very positive. However, our customers are, as Jeff Bezos has said, “beautifully, wonderfully, dissatisfied.”

I see this first-hand every day. We launch something and that launch inspires our customers to ask for even more. For example, with many customers going all-in and moving large parts of their IT infrastructure to the AWS Cloud, we’ve had many requests for the raw data that feeds into the Cost Explorer. These customers want to programmatically explore their AWS costs, update ledgers and accounting systems with per-application and per-department costs, and to build high-level dashboards that summarize spending. Some of these customers have been going to the trouble of extracting the data from the charts and reports provided by Cost Explorer!

New Cost Explorer API
Today we are making the underlying data that feeds into Cost Explorer available programmatically. The new Cost Explorer API gives you a set of functions that allow you to do everything that I described above. You can retrieve cost and usage data that is filtered and grouped across multiple dimensions (Service, Linked Account, tag, Availability Zone, and so forth), aggregated by day or by month. This gives you the power to start simple (total monthly costs) and to refine your requests to any desired level of detail (writes to DynamoDB tables that have been tagged as production) while getting responses in seconds.

Here are the operations:

GetCostAndUsage – Retrieve cost and usage metrics for a single account or all accounts (master accounts in an organization have access to all member accounts) with filtering and grouping.

GetDimensionValues – Retrieve available filter values for a specified filter over a specified period of time.

GetTags – Retrieve available tag keys and tag values over a specified period of time.

GetReservationUtilization – Retrieve EC2 Reserved Instance utilization over a specified period of time, with daily or monthly granularity plus filtering and grouping.
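
As a rough illustration, here is a minimal boto3 (Python) sketch of the kind of call GetCostAndUsage supports. This is not code from the post; the date range, metric, and grouping are placeholder choices.

import boto3

# The service endpoint lives in US East (N. Virginia); see "Regions" below.
ce = boto3.client("ce", region_name="us-east-1")

# Unblended cost for one month, grouped by service (placeholder dates).
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2017-11-01", "End": "2017-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service}: ${amount:,.2f}")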

I believe that these functions, and the data that they return, will let you do some really interesting things that will give you better insights into your business. For example, you could tag the resources used to support individual marketing campaigns or development projects and then deep-dive into the costs to measure business value. You now have the potential to know, down to the penny, how much you spend on infrastructure for important events like Cyber Monday or Black Friday.

Things to Know
Here are a couple of things to keep in mind as you start to think about ways to make use of the API:

Grouping – The Cost Explorer web application provides you with one level of grouping; the APIs give you two. For example, you could group costs or RI utilization by Service and then by Region.

Pagination – The functions can return very large amounts of data and follow the AWS-wide model for pagination by including a nextPageToken if additional data is available. You simply call the same function again, supplying the token, to move forward.

Regions – The service endpoint is in the US East (Northern Virginia) Region and returns usage data for all public AWS Regions.

Pricing – Each API call costs $0.01. To put this into perspective, let’s say you use this API to build a dashboard and it gets 1000 hits per month from your users. Your operating cost for the dashboard should be $10 or so; this is far less expensive than setting up your own systems to extract & ingest the data and respond to interactive queries.
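
To make the grouping and pagination notes above concrete, here is a hedged boto3 sketch (placeholder dates again) that follows the token, which boto3 surfaces as NextPageToken, while grouping by Service and then by Region:

import boto3

ce = boto3.client("ce", region_name="us-east-1")

def cost_results(**kwargs):
    # Yield every ResultsByTime entry, following NextPageToken until exhausted.
    token = None
    while True:
        if token:
            kwargs["NextPageToken"] = token
        response = ce.get_cost_and_usage(**kwargs)
        yield from response["ResultsByTime"]
        token = response.get("NextPageToken")
        if not token:
            break

for result in cost_results(
    TimePeriod={"Start": "2017-11-01", "End": "2017-12-01"},  # placeholder dates
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[
        {"Type": "DIMENSION", "Key": "SERVICE"},
        {"Type": "DIMENSION", "Key": "REGION"},
    ],
):
    print(result["TimePeriod"]["Start"], len(result["Groups"]), "groups")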

The Cost Explorer API is available now and you can start using it today. To learn more, read about the Cost Explorer API.

Jeff;


The Future of the NOC

One of the best things about working at PagerDuty is that our customers, our users, our champions, and our buyers are all the same people. With this year’s push into major incident response, we’ve spent a lot of time talking to Network Operation Centers (NOCs) about what the future holds for them.

Every job changes with new technology — some, like long-distance trucking, will be completely disrupted by self-driving trucks — but after all the discussions we’ve had with the best NOCs around, it looks like their evolution will be significant but manageable.

I’ve always thought about PagerDuty as helping your Mean Time To Promotion. In keeping with that, here are some of the possible futures we see for NOCs.

Site Reliability Engineer

One of the most straightforward paths is towards becoming a Site Reliability Engineer (SRE).

If you want a job doing this, you need all the troubleshooting skills of a systems admin, layered with a deep understanding of monitoring. The goal of an SRE is to detect glitches before they develop into problems that users can notice. And if that doesn’t work, SREs move heaven and earth to get everything back online. You’ll frequently see SRE positions at big cloud or online companies, like Amazon, Google, Heroku, and even Etsy. People get really cranky if they can’t buy things immediately, and SREs are there to make sure they can.

SREs keep the world online (ok, that’s kind of a big ask). As an SRE, you would work with a team to predict needs and build scale in a way that is fluid and invisible from the front end. Site Reliability Engineering is the art of never letting the user see the company sweat. You’re working to make sure there is always enough capacity, enough uptime, enough pipe, and enough monitoring to make sure something isn’t falling apart invisibly.

Instead of firefighting, you want to be a building inspector, designing wider hallways, doors that always swing out, and multiple staircases (metaphorically). It may look heroic to jump in with a fire ax and a hose and tear down doors and fight flashovers, but it’s better to never need the heroics if you have smart policies around building materials and building sprinklers.

Ops becomes QA

Historically, quality assurance (QA) at software companies has had an unfair reputation. In fact, there are lots of great companies like Microsoft where there’s a parallel track for Software Development Engineers in Test (SDET). Click testing long ago gave way to automated unit tests, which have in turn become automated click and API tests against the staging server.

Operations and QA are the formalizations of, “Eek! Things are broken.” If you have a solid QA team checking things in test before you deploy, there are far fewer surprise outages. If you have an Operations team, they design and build things mindfully, considering risk and performance, rather than simply installing and hoping things work right.

At their core, DevOps and Operations are about getting servers or containers to meet the “three R requirements”:

  • Reliable: stays up or fails over to something else gracefully
  • Replaceable: you can start a new instance of the server with no special steps
  • Routine: server provisioning and decommissioning should be so easy that you can create a web form to do it

To me, that also sounds a lot like QA.

DevOps means if something broke and woke you up, you are empowered to write the test that ensures it never makes it to production again — you’re already the best part of QA.

As you get better at preventing downtime or outages and streamlining requests, you can scale volume more easily because you’re not responding to one-off requests. Think about the difference between manually resetting user logins and offering an automated system to do it. You may spend the same amount of time fixing user login problems, but for ten to twenty times as many users.

NOC as point to all of tech

One of my favorite NOCs I’ve visited is a telecommunications company in Los Angeles — it’s a classical NOC with an unconventional feel. Starting from the massive wall of dashboards, the room is arranged in rows, with each row representing a promotion in their operations org. Promotions average 6-12 months apart, come with clear milestones, and can end with a seat in the back row (as a de facto SRE) or a move into other parts of the org. With so many companies lamenting how hard it is to find talent these days, I expect this will become more common.

At PagerDuty we treat our support team in much the same way: employees in our support org have gone on not only to become managers or take on more technical roles inside that org, but also to join the engineering, marketing, and sales teams, and I don’t see any sign of that stopping (unsurprisingly, this makes it easier for us to hire great people).

Change isn’t always bad, but it always comes

Predictions are hard, especially about the future; but it’s clear that the future of the NOC will not be humans watching screens waiting to press buttons. For many classes of always-on applications, it will still make sense to keep people ready to jump into action — the question is what to do with the other 99% of their time.

The NOC has undergone quite a bit of change in recent years and will continue to do so. Those that adapt to the changing digital landscape will position themselves for success, and we look forward to navigating that transition with you.

The post The Future of the NOC appeared first on PagerDuty.
