Quantum computers will likely manifest themselves as co-processors, and you'll have a nice, well-abstracted API for accessing them from traditional languages, e.g.
    #!/usr/bin/env python4
    from quantum import qc

    qc.init(device="/dev/quantum0")
    factors = qc.factor(15)
I often wonder what will happen at the 0-day quantum machine, where it's not just a few qubits but the real deal. I think anyone in possession of such technology will be able to crack any SSL certificate, and thus gain access to almost anything online. I wonder if criminal organisations aren't secretly investing in such things? And, not to be paranoid, but since we're almost certain it will be possible to build them, wouldn't it be prudent to start investing in the defense against such things? What kind of security could we have to counter a quantum computer? Would it only be possible to use quantum computing to defend against quantum computing?
How could the FRCP work otherwise? They're in effect saying: if the evidence pertinent to a crime is online, and is either (a) on Tor or some other service where we don't know precisely where it is, or (b) on a botnet or some other environment where it's spread across 100 different jurisdictions, a judge can issue a warrant to obtain that evidence.
Judges can already issue warrants to obtain electronic evidence in, I think, exactly the fashion EFF describes here. The limitation they have today is procedural: they can only issue those warrants in their own court district.
But if you don't know the right court district, or a search would effectively require you to get warrants in every district, procedural rules make it hard to get a warrant today. That seems... stupid. The fact that evidence pertinent to a criminal case is on a Tor hidden service shouldn't make it inaccessible to the courts.
The conference is composed of: "the Chief Justice of the United States, the chief judge of each federal regional circuit court of appeals, a district court judge from various federal judicial districts, and the chief judge of the United States Court of International Trade."
You can disagree with their decisions, but don't try and imply that they are duplicitous. I expect better of the EFF.
The ONLY thing changed by this proposed rule is the venue in which the government can apply for warrants, expanding it to include any jurisdiction involved in the crime under those two specific circumstances that the EFF blog post mentions.
It does NOT change any of the rules of probable cause involved in getting a warrant. It does NOT grant any kind of "new hacking powers". It does NOT criminalize Tor or allow law enforcement to get a warrant simply because someone used Tor.
There are reasons to not like this rule change based on what it actually means. Misrepresenting things that you don't agree with ultimately hurts your own side because it makes it trivial for people on the other side to dismiss your complaints as ignorant and wrong.
Recent discussion of the rule change:
A smaller one:
Reality: "Make legal what has been going on illegally for years"
Ok, land of the free.
The only way it changes is if the US does away with career politicians, or fear of the government becomes > fear of terrorists.
Although DOJ has been using malware for nearly fifteen years, it never sought a formal expansion of legal authority from Congress. There has never been a Congressional hearing, nor do DOJ/FBI officials ever talk explicitly about this capability.
The Rule 41 proposal before this advisory committee was the first ever opportunity for civil society groups, including my employer, the ACLU, to weigh in. We, along with several other groups, submitted comments and testified in person.
Our comments can be seen here [3,4]. Incidentally, it was while doing the research for our second comment that I discovered that the FBI had impersonated the Associated Press as part of a malware operation in 2007.
Ultimately, the committee voted to approve the change to the rules requested by DOJ. In doing so, the committee dismissed the criticism from the civil society groups, by saying that we misunderstood the role of the committee, that the committee was not being asked to weigh in on the legality of the use of hacking by law enforcement, and that "[m]uch of the opposition [to the proposed rule change] reflected a misunderstanding of the scope of the proposal... The proposal addresses venue; it does not itself create authority for electronic searches or alter applicable statutory or constitutional requirements."
The malware one seems entirely reasonable to me. If you have malware, chances are you're aiding criminals by providing them with hardware to commit their crimes with. Why shouldn't a judge issue a search warrant or have your computer seized? The computer is literally part of the crime scene. If you don't like it, don't install malware.
The first one, I'm not really sure where it would be used. Is it just, say, "police are allowed to use Tor vulnerabilities to gain access to the servers serving .onion links in the course of their investigation"?
I guess their point is that the changes should've been initiated by Congress, since it's more than procedural. I can buy that, even if the changes themselves seem innocent enough.
Most of those papers are really not "essential" for getting things done in Haskell.
If you want to dig deeper into Haskell's type system or Category Theory in general, then yes, there are a lot of good papers in that list.
If you just want to write safe, concise and understandable code, then you are much better off reading the excellent "Haskell Programming from First Principles" or the slightly outdated "Real World Haskell".
http://haskellbook.com/
http://book.realworldhaskell.org/
sizeof( &array )
sizeof( array )
&*( array + 0 )
array + 0
This is just a really convoluted way to write 2:
    &array[2] - &array[0]
    &*(array+2) - &*(array+0)
    (array+2) - (array+0)
    2 - 0
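To make the equivalences concrete, here's a tiny self-contained C program (the array name and size are mine, purely for illustration):

    #include <stdio.h>

    int main(void) {
        int array[8];

        /* sizeof(array) is the size of the whole array;
           sizeof(&array) is just the size of a pointer to it. */
        printf("%zu vs %zu\n", sizeof(array), sizeof(&array));

        /* &*(array + 0) and array + 0 are the same pointer. */
        printf("same pointer: %d\n", (array + 0) == &*(array + 0));

        /* And the "convoluted way to write 2", step by step: */
        printf("%td\n", &array[2] - &array[0]);
        printf("%td\n", &*(array + 2) - &*(array + 0));
        printf("%td\n", (array + 2) - (array + 0));
        return 0;
    }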
Any reviews of the other ones from the HN crowd?
Without more details on data layout, you could be seeing the side-effect of aggressive pointer prefetching, which may only be possible with one layout but not another. Or there could be a fixed stride for some of the data allowing prefetchers to kick in. It's hard to tell and deserves some more experiments to isolate what is going on.
When data isn't in a register and isn't in the L1 cache, it takes a long time to fetch it from the outer caches, or around 200 clock cycles if it comes all the way from main memory. We measure that event as a cache miss. But modern x86 processors will go to great lengths to execute the rest of the program while waiting for the data to arrive. A cache miss only really slows the program down if there aren't enough nearby instructions that can be executed while waiting for the missing value to arrive.
You could likely write a program that triggers a cache miss every 30 clock cycles but runs at the same speed as a program without cache misses. In a different program, a cache miss every 30 clock cycles can mean a slowdown by two orders of magnitude. Cache misses are only a useful metric to give us an idea where to look, not to show actual problems.
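For illustration only (a sketch, not a rigorous benchmark), the two extremes look roughly like this in C:

    #include <stddef.h>

    /* Independent loads: every address is known up front, so the CPU
       and its prefetchers can overlap many outstanding cache misses. */
    long sum_array(const long *a, size_t n) {
        long s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* Dependent loads: the next address comes out of the previous
       load, so each miss stalls the whole chain. With nodes scattered
       across memory, this is where the orders-of-magnitude slowdown
       comes from. */
    struct node { struct node *next; long value; };

    long sum_list(const struct node *p) {
        long s = 0;
        for (; p != NULL; p = p->next)
            s += p->value;
        return s;
    }

Both functions touch the same amount of data, but only the second one serializes its misses.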
One nit: please, please stop perpetuating the use of the non-word "performant." I cringe every time I hear it (now even in person at work!). Using it just makes you sound dumb, and clearly the author is not dumb.
A flag that guarantees side-effect freedom for a set of operations and sub-operations would be great for the processor.
> Samsung, for one, plans to ship its 10nm finFET technology by year's end.
> TSMC will move into 10nm production in early 2017
> Intel will move into 10nm production by mid-2017
Another model previously posted on HN, with (IMO) worse results than these two models: http://tinyclouds.org/colorize/
CNN = Convolutional Neural Networks in this context.
What I find somewhat annoying is that whilst they show some examples from their validation set, and a couple of examples of the model's failures, they don't appear to show a random selection of cases from their validation set.
Mitsubishi Heavy Industries doesn't have any experience building submarines overseas, and there were significant risks in leaving someone to "learn on the job".
The USA has been (correctly) pressuring Australia to select a vendor on merit and not to take geopolitics into account. Geopolitically, Japan is a lot more important to Australia than France. This is the reason the Japanese believed for so long that they had the deal in the bag.
With Abbott losing his job, it became easier for France or Germany to hop in, especially when the new prime minister had an agenda of damaging the previous PM. I think Abbott must feel terrible about Japan not getting the contract, but it only shows that deals like these should be done at arm's length. Abbott even wrote a personal letter to the Japanese PM to apologize.
Having worked in the aerospace industry, I've seen France lose some "in the bag" public tenders by letting other countries lobby and spin a bid's technical specifications to their advantage.
It seemed they learned the lesson, at least for this bid.
Can there possibly be an upside to [Prime Minister] Malcolm Turnbull's decision to squander billions of taxpayers' dollars building 12 French submarines in [the state of] South Australia?
It's hard to think of one.
Of course, there are potentially critical South Australian seats at stake in the coming election and Turnbull no doubt believes it's worth every penny to ensure that the Australian people are not deprived of his greatness.
But surely there were cheaper ways to buy off the South Australians.
With a 30 to 40 per cent local cost premium as a starting point and the history of the Collins class submarine to go by, the federal government could have hired all the Australian Submarine Corporation (ASC) workers to do nothing and the taxpayer would have been billions of dollars better off because at least they wouldn't have been making grossly overpriced submarines.
Let's grow up, kids. Just get to Mars already.
I am sure there is more to it than just one businessman being smarter than the other one. Or AU leadership being strategically smart.
I believe it is a "divide and conquer" type of thing, one of "the great games" at play.
I am an engineer who spent 12 years in the ad tech industry. This problem is especially acute on the online video ad side, where dollars are exchanged based on views and CPMs, not on someone buying something online (the latter, being more accountable, is less open to abuse). In my old job, we tested what % of traffic was fraudulent. This traffic is daisy-chained from one ad buyer/seller to the next; inevitably it will hit someone unscrupulous. In aggregate we found anywhere from 20% to 95% of the traffic we saw for video ads to be fraudulent.
There are some very sophisticated bot farms out there that get around detection, mostly operated out of Eastern Europe and Asia. If you look at the Comscore top 100 video sites, you can always tell who's gaming the system when, from one month to the next, an unknown brand suddenly jumps high in the top 100.
This is the reason Facebook shut down LiveRail, which they spent $450M on: a super high percentage of ad fraud.
All of the exchanges (on the buy and sell side) know it's going on, and they generally don't try to stop it provided that it doesn't become news. A large number of well-known, venture-backed ad tech companies make money from this and they're not incentivized to stop it.
It's rare that the companies actively encourage arbitrage, but some do.
The world is a curious place, full of contradiction and wonder. We only like the free market for some activities.
Traders buying and reselling at a higher rate could be distorting the markets and removing the efficiency that we're supposed to see through real-time bidding, he said.
By definition successful arbitrage makes the market more efficient - it brings prices closer together (making the cheaper exchange more expensive and the more expensive one cheaper).
Furthermore, arbitragers in ad exchanges are causing more ads to be sold, providing a valuable service to buyers and sellers.
Suppose there is a sell order on exchange X, a buy order on exchange Y, and these orders are compatible. Unlike public equities markets (which have RegNMS) this order may NOT be routed from X to Y. The result is inventory is wasted or put to a lower value use.
If an arbitrageur notices this, he can cause a transaction to occur which would not otherwise occur.
People like the author, Andrew Waterman, and others like Bunnie Huang who work towards making more of computing open are inspiring. I feel like the last piece of the puzzle is open FPGAs. I'm quite sure FPGAs are critical to the open hardware movement.
I should quit Google and solve this...
IMHO, a memory-to-memory architecture would make for a much simpler ISA and allow much easier code generation (no register allocation needed).
This is not a popular thing to say, but democracy would work a lot better if misinformed people were simply not allowed to vote: https://outlookzen.wordpress.com/2014/01/21/democracy-by-jur...
That's the problem? The fact that you take a site at face value regarding its fact checking score, which could be totally BS, partisan, inaccurate, etc., in real life, isn't a problem?
> But look at the above graphs. If PolitiFact was clearly biased, they wouldn't be as wide ranging as this.
That still leaves two ways it can be partisan (or just off) wide open:
1) It can be partial to the shared assumptions/ideology of both parties (who are more alike than different compared to how parties are in other western countries).
2) It can be partisan to one party and still mark that party's people as "bad liars" -- only for that party it does it for second-rate players, whereas for the other party it does it for the leadership and first-rate players.
This still gives a total like "here, we have reported on 100 lies from Democrats and 100 from Republicans, thus we're impartial", while still hurting one or the other party far more.
Just look at how many debunked stories on Snopes come from sites whose sole purpose is to trick people on Facebook.
In addition to this, a lot of journalists need to ask themselves whether people's distrust of them comes from being treated as 'idiots' by the press that's supposed to represent them.
There's a point here where the article says:
"Only about 20% feel positive towards newspapers today, again following the decline in trust in our politicians."
But how about another reason? It's not just that politicians are seen as almost completely untrustworthy, but that the press is seen as completely out of touch and more interested in supporting the 'status quo' than the population. It's the fact that having an opinion to the left or right of the media gets you labelled 'crazy' or 'bigoted' or 'horrible'. That supporting Sanders gets you called a 'Bernie Bro'. Etc. The level of contempt a lot of journalists show towards their audience leads people to hate them, which leads them to find people who exploit that hatred for less noble ends (usually extremist groups and publications).
That's something else the press needs to fix, and fast.
Setting the clock back to the 1890s.
I like the way my wife smells, and we certainly share no relatives for many, many generations.
The notion that you are lacking an entire layer of communication when you do not have a sense of smell is also subjective, especially when it comes to those born without it; you can't miss what you don't know. Also, other senses tend to compensate, e.g. some blind people report a heightened sense of hearing.
 Lack of discussion of sample size makes me think it wasn't representative in size or demographics.
Interesting article. I'm curious as to how much reliable research has been done into contagious emotions. It sounds a bit far-fetched the way they describe it in the quote above. The idea is talked about in a Live Science article too, but again not much detail is given. I haven't read much about it, but it sounds like one of those single-paper, non-reproduced ideas that's interesting enough on its own for people to keep talking about. Does anyone who knows more about this than me have other opinions?
(I have no idea what bits of the BBC are geo-blocked. Sorry if this isn't available.)
Smell is an important part of the workplace.
So is touch and 3d vision.
That said, I also think the focus on paper maps is misplaced. Old-style road maps had to answer the question, "how do I get there from here?" New-style digital maps don't need to answer that question any more! Questions new-style maps need to answer include:
* I know the name of a place or street; where should I zoom in to see more things around that place or street?
* I need to go to a (gas station / rest stop / hospital); where's the closest one?
* How would I get home from where I am?
* I'm in an unfamiliar place and would like to go "downtown" (where there are restaurants and things to do); where is "downtown"?
* Where is my car right now?
Roads help you to orient yourself with the map, but they aren't as fundamentally important to digital maps as they were to old-style road maps. The visual space of the map might be better spent helping answer questions like these.
Maybe people forgot, but google maps was /blazing fast/ in the beginning.
Nowadays, it brings my browser down to a crawl even before the images are shown. Maybe people are just stuck with this "google is the best" mentality, but this stopped being universally true years ago.
Use OpenStreetMap. Its data is way superior. It's _your_ data. Cannot stress this enough.
Want a fancy browser? Nokia maps have always been incredibly sleek to use:
Google still has the lead with Street View, but for the actual maps I really encourage you to look for alternatives. They've destroyed their interface as far as I'm concerned.
Google, hire cartographers. Amazon, hire librarians and typesetters. Spotify, hire musicologists.
(Edit: as the child comments correctly point out, this is only the default OSM rendering style.)
On google maps I can never find what I want. I thought it was because I've been using OpenStreetMap, and had gotten used to a different display style. Seeing a place once in OSM anywhere in the world, and zooming out from it to continent view, I can almost always find it again later. On Google Maps I always got lost. Now I finally get what the problem is.
Google provide an alternative mapping product, Google Earth, for satisfying curiosity about the planet. Google maps is primarily a navigation tool. They have very distinct use cases.
Another thing I'd like to see is making the "avoid tolls" setting easier to get to. Northern Illinois is toll road central, and I-355 in particular is a huge ripoff when you pay cash. Since I don't need any of the tollways for commuting, I can't justify getting an I-Pass.
The new data that Google has comes from Android handsets and from users using Google maps and Waze on Android and non-Android handsets.
This data is all about users in motion. At the scale shown in this article, it's almost exclusively people driving. As a result, it makes a lot more sense to focus on the connections over the places they connect. This becomes clearer when you view the roads as more active entities by including congestion and other real-time data.
This may not be the best presentation for everyone, but it seems to be the presentation that fits best with Google's current mission and capabilities.
The paper map was used to navigate from place to place. That never happens with a google map. Sure, you navigate with them, but by telling Google where you want to go and letting them draw a line on your map. You don't need all that extra information if your phone is navigating you from place to place. You just need something clean that you can glance at to get a sense of where you are.
So that's what they've designed their maps to give you.
At the zoom level the screenshots are taken at, maps are essentially useless. The most important information they can convey is "there's lots of roads here" or "this region is densely populated". The maps aren't optimized for accuracy; they're displaying a summary. The Long Island example really struck me - the old map displayed the primary route only, while the new map conveys the fact that there are multiple options. If you're stuck in traffic and you pull up the map, you can see there's another decent route and ask the app to provide you with directions along it. If you're using the old maps, you'd just see the single primary route highlighted and assume you should stick with the route you're on.
ps: I recently discovered the 'my timeline' feature. Surprising, to say the least.
Find alternatives, there are several mentioned in these comments for starters.
When/if Google start seeing a reduction in their map use, only then will they start paying attention.
So I'm not too bothered by this change. What I don't understand is why nobody has taken the Google Maps routing and used it in OSM apps. It might not be legal, but for similar non-commercial projects it should be fine.
Not sure I agree, in my opinion most of us use search & destinations nowadays, even in offline mode. The only reason I look at a map is to gauge distance between me and my destination.
You don't need a hulking great map with loads of detail at a high level to get from point A to point B, you now just use your smartphone for that.
I assume Google spotted a trend of people searching place names as opposed to picking points between two separate locations.
So how does that change the function of the map?
Well, we no longer need to have the zoomed out overload of detail, if we need more information about a place we are visiting, we type in the city name, or address, then zoom in close to see the detail we need.
The article kind of skimps over the point that we can interact with those maps now.
Off-topic, but as a native NYCer, we would never call it that. It's the LIE. I once had a woman ask me in the parking lot of a Walgreens how to get on the 278. I was puzzled for a second, then I realized she was talking about the BQE. Living out in California now, I miss the days of calling highways by name.
He poses the question of which map you'd want when lost. A mobile phone with Google Maps is clearly the right answer.
It's like claiming that the New York Times should display the entire front page of the newspaper on a mobile device, so you can read several articles without scrolling or loading more content, because that's what you used to be able to do with the real paper.
That or people just find whitespace aesthetically pleasing and Google designers went kind of crazy with it.
My biggest gripe is contrast, rather the lack thereof. Zooming in and out doesn't help. There's a lack of contrast at all levels!
And the algorithm for displaying place names sucks. You'll see certain names at one level, zoom in and they disappear, zoom in some more and they finally reappear.
Paper maps are unquestionably more ergonomic (but much less convenient) than Google maps. But it's not just Google. I find other online maps equally bad. It's quite sad that a paper Rand McNally map is so much better at actually presenting the geography of an area.
Perhaps other posters here are right, it seems like Google maps is designed for point-to-point navigation, nothing more.
Old style printed maps had cities on them, because the map didn't know where you were going! You had to find your city or your location on the map. Now the map knows where you're going, so it can show that place extra-clearly while hiding a lot of detail that's not relevant. Roads are relatively more relevant than cities, since you travel along them to get from one place to another: displaying a road shows the user that they have a primary thoroughfare between locations. You might not care about the name of a city if you're just passing through; and the city that is your destination will be specifically shown.
My guess is that they display only as many cities as needed to help people orient themselves while looking at the map, to understand what they're seeing. More than that is irrelevant to the primary use-case of navigation.
> Google Maps of 2016 has a surplus of roads but not enough cities. It's also out of balance. So what is the ideal? Balance.
The ideal is utility, and the key use-case for Google Maps at that zoom level is driving navigation. The user's going to input their own destination into maps anyway, most of the time, and they'll expect it to appear, so it's no surprise when it does.
Google would have data on this: how many users use Google Maps while driving regularly, multiple times on a trip (at that zoom level), while not having a destination entered (and with no destination, obviously no turn-by-turn directions)? Probably not many. Now imagine overlaying your route with current position and destination on the maps - it's going to be easier to scan the new ones. Edit: Navigation is the primary use-case for a map, and I'd guess usage motivated by that purpose dwarfs the rest by an order of magnitude, and so it's a good default.
Anecdatum: last month I was driving out for a weekend in some rural bungalows a few miles outside a small city (Elvas, Portugal). The address was a bit vague, the place name too common, so much so that I had GPS coords stowed away in a message pic (don't ask ;-).
So, when I pop out Google Maps on the old faithful iPad 2 (which happens to be the 3G version, and therefore GPS-chipped, good for navigation) and zoom in on the area... amidst thin local roads and lots of blank space, there's the place name and the days we're staying there.
Turns out the Gmail app in that iPad was also used to send or review the emails with the reservations.
Even in my 'desktop' I've been noticing Google Maps marking out city places which seem small compared to other landmarks, but where I often go or mention in emails.
(Thanks, I do know where I left my keys today, I'm good.)
The paper map has to have lots of cities on it, because there's no other way to find where exactly the specific suburb is if it's not on the map. In Google Maps, you can zoom in, you can search, you can have a link for direction sent to you, ...
The author may claim "less is just less", but apart from "printing a map before knowing where I'm going", I can't think of a situation where the "improved" map would be at all useful.
Now, when we service these modems, the OEM vendor comes with DOS running in a VM on a normal PC. When you know what we rent this PC for (a few $K per month), I just can't help but laugh. This PC was also not possible to purchase from the OEM.
I absolutely love this car. Definitely built for a purpose, not much hand-holding, just the sight of this manual switchboard, the position of the seat, the minimalistic dashboard, ... <3
The McLaren F1 is a car for the track, the few examples that exist do go out and race. Over a race weekend I imagine the car is taken apart and put back together again in a multitude of ways, e.g. wheels taken off and different ones put on. Note how those wheels are held on with just the one big bolt that has to be tightened massively. That is not 'hi-tech', that is using the appropriate race-grade technology for the job.
I have only stared into the bowels of a McLaren F1 once, but I bet that beyond the gold there are lots of things held together with nuts, bolts and clips that look crude compared to bicycle technology with bearings that really are cruder than on a bicycle. Yet these parts can be swapped in and out and adjusted easily.
My point being that high-end race cars are not entirely high tech, under the hood there is stuff that is 'bits of bent tin'.
It's configurable, but results can be displayed in eww, Emacs' built-in text-based web browser.
Acme's plumber lets the user customize behavior on a piece of text based on its context or a program that takes it as an argument.
A previous discussion about Acme on HN: https://news.ycombinator.com/item?id=4533156
If you use OS X, something is broken in browse-url (I think) which causes anchors not to get opened correctly (annoying), so I added this snippet to allow it. Not robust at all, but you get the idea. The "open -a" gives window focus to Chrome, mimicking the built-in behavior.
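(The snippet itself didn't survive formatting; the following is a hypothetical reconstruction of the kind of override described, assuming Chrome on OS X, and not the poster's exact code:)

    ;; Hypothetical reconstruction: shell out to `open -a` so the
    ;; anchor (#fragment) survives and Chrome gets window focus.
    (setq browse-url-browser-function
          (lambda (url &rest _args)
            (start-process "open-url" nil
                           "open" "-a" "Google Chrome" url)))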
The storage comparison really should be updated.
1. make it highly available
2. play nice with firewalls
If I deploy Prometheus outside a NAT and want to monitor 100 physical machines on the inside with node_exporter, as well as a dozen different services, how do I make these metrics available?
What if I have four identical NATed sites and want them all monitored by the same outside Prometheus instance(s)?
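One option (a sketch, assuming you can expose a single port per site, e.g. via a port forward; the site names here are hypothetical) is to run a small Prometheus instance inside each NAT and have the outside instance federate from it:

    # prometheus.yml on the outside instance:
    # one /federate scrape job per NATed site
    scrape_configs:
      - job_name: 'federate-site-a'
        metrics_path: '/federate'
        honor_labels: true
        params:
          'match[]':
            - '{job="node"}'
        static_configs:
          - targets: ['site-a.example.com:9090']

That reduces the firewall problem to one inbound port per site, and for availability you can run two identical outside instances scraping the same endpoints.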
As another interesting use case, here's a solution to the "eight queens" problem in Postgres:
And here's the accompanying article:
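Since the links were stripped, here's a minimal recursive-CTE sketch of how such a query can look (my own reconstruction, not necessarily the linked solution):

    -- Place one queen per column; board[] holds the row chosen in
    -- each column so far. A candidate row r survives only if it
    -- shares no row and no diagonal with the queens already placed.
    WITH RECURSIVE solutions(board, n) AS (
      SELECT ARRAY[]::int[], 0
      UNION ALL
      SELECT board || r, n + 1
      FROM solutions, generate_series(1, 8) AS r
      WHERE n < 8
        AND r <> ALL (board)
        AND NOT EXISTS (
          SELECT 1
          FROM generate_subscripts(board, 1) AS i
          WHERE abs(board[i] - r) = n + 1 - i
        )
    )
    SELECT board FROM solutions WHERE n = 8;  -- yields 92 solutions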
I wonder how names for this conjecture have diverged like that...
vs today:
"Work less, enjoy more!"
Of course it is much more efficient to keep the subjects well fed and well hypnotized, rather than waste a lot of energy on keeping them subdued through violence. It is harder, but the returns are well worth it.
And it's not only the "elites" who realize this (they only gain a slightly higher standard of living from it), but presumably also the mythical "AI singularity" that should one day spontaneously arrive.
Our fears of "the machine" that kills all humans because we are pests are unfounded - a hyper-intelligent entity would quickly understand that violence is a very weak tool of control. Pleasure is much more powerful and achieves much more, plus a thankful smile.
It is "efficiency", "pleasure", "less work", "more fun" that we should be wary about ... but not me , I like to enjoy myself. And I welcome whatever overlord (AI, aliens) that promises more pleasure and less pain any day.
It's a pity the comic had to be removed from its original site, citing copyright reasons. IMHO, it could have been considered fair use. Sad.
Unlikely, but is anyone able to recommend a way of connecting to the pins without having to drill or solder? My visual acuity isn't up to making a solid job of this, and my soldering is frankly dire.
The problem at hand isn't 'how do you catch criminals on tor', it's 'how do you engineer the social contract to prevent rebellion'. Secret legal proceedings don't serve the latter goal.
On a simple review, this seems like garbageware and a nice exploit. But the name PUP gives it away: potentially unwanted programs. We can't say for sure that the user didn't want them.
Now, if the program resists removal at the behest of the user, then yes, it's malware. But I've done computer work back in the day with Bonzi Buddy, and there were real users who wanted that pile-o-crap on their computer. It was very much wanted, and they went out of their way to get it.
Oh dear. Four exclamation marks? Someone's about to start wearing their underpants on their head, methinks. 
"With a network display 11.7 million PCs installed worldwide, Tuto4pc.COM GROUP achieved a turnover of 12 million during the year 2014."
his dream house included a maze of trap-doors and what Sergeant Scheimreif called escape holes. It was everything he seemed to want a building to be, with near-infinite ways of getting from one room to another and no upper limit on the places he could hide.
> For Roofman, it was as if each McDonald's with its streamlined timetable and centrally controlled managerial regime was an identical crystal world: a corporate mandala of polished countertops, cash registers, supply closets, money boxes, and safes into which he could drop from above as if teleported there. Everything would be in similar locations, down to the actions taking place within each restaurant. At more or less the same time of day, whether it was a branch in California or in rural North Carolina, employees would be following a mandated sequence of events, a prescribed routine, and it must have felt as if he had found some sort of crack in space-time, a quantum filmloop stuttering without cease, an endless present moment always waiting to be robbed. It was the perfect crime, and he could do it over and over again.
> For Roofman, it must have looked as if the rest of the world were locked in a trance, doing the exact same things at the exact same times of day, in the same kinds of buildings, no less, and not just in one state, but everywhere. It's no real surprise, then, that he would become greedy, ambitious, overconfident, stepping up to larger and larger businesses, but still targeting franchises and big-box stores. They would all have their own spatial formulas and repeating events, he knew; they would all be run according to predictable loops inside identical layouts all over the country.
Personally, I got a bit bored with the author's style of continuously seeking bold metaphors for the same thing, but I'm curious: do people consider this interesting writing, or does this style detract from the content?
I don't think this is common enough to justify having pentesters think about your corporate procedures' exposure to such behavior but it's definitely an interesting thing to think about when designing sensitive human or mixed human/computer systems.
Edit: word building fix
Imagine if an organized mob decided to mutualize the analysis and then strike 100 shops at the same time?
Imagine if banks did the same as McDonald's does?
Sorry, but if someone tried to lock me in a freezer, my last thought would be how polite they were.
After two years, my boss asked me to mix up my rounds. When I mixed up my rounds, I saw a lot of people doing things in that building they shouldn't have. Did I care? No! It was nothing life-or-death, and I wasn't going to die over $7.49/hr. Nor would I ruin some guy's life. No--I wasn't a good security guard, but the patrons always got their lost purse/wallet back if I found their items.
People don't like to hear this, but so much theft is internal. Entities like to blame professional criminals, drug addicts, etc., but so much theft is internal.
The people higher up in the organization stole the most. Then it was middle management. And then cops stole--wow, it was staggering, but they were pretty slick. And the thefts were always blamed on gangs, the homeless, or that new janitor.
I stayed quiet and watched their behavior. I can usually walk through any store and spot which employee is stealing. I have found they are usually overly enthusiastic, care too much about following exact procedure, never complain, and are usually the last person in the organization you would expect to have a dark side. In other words, the person who gets the management promotion.
(I don't want to argue. I won't be back. If you do have a problem with stealing, really try to stop. If you can't stop, be smart. Don't steal enough to rack up a felony. I believe it's over $500? Don't ever walk into an establishment with no money. I forget what it's called, but it racks up big charges. If you are stealing because of the thrill it brings, take up intense exercise, or see a therapist. And try to take on Robin Hood morality: never take from the poor. Don't let the innocent guy take the fall. Be a stand-up guy?)
If you pay attention, the best Agile systems focus on improving the skills of developers instead of forcing people to follow the 'steps' or a formula.
What about starting off with microservices (because tools like k8s make the otherwise insane management overhead more tractable)? Could that be a possible solution?
The problem of scaling state is one that databases need to handle anyway. Ideally the same database just keeps on working as you go up. Or at least the protocol is the same, and you switch out the implementation from a single instance to a clustered version.
So, I don't think we can take it as far as the author is suggesting, where it's like classical vs quantum physics. Maybe at the ASIC vs software level; even those were partly joined with synthesis and coprocessing tools. I just don't see it, as every high-level description I read is based on similar principles at each layer of the stack. Given similar constraints or goals, you would use a similar strategy. There's certainly divergence, and there are outliers, but more repetition of patterns than anything.
The only myths are that technology/fads X, Y, and Z should have been widely adopted over the ones (or enhancements of them) that consistently worked. The author is seeing the results of people building on stuff that came with assumptions that don't match the new problem, or of people straight-up ignoring the root cause in their solutions or methods. Common problems.
So every process that promises to deliver that just by applying the process creates false hope. Or it is not meant for this purpose, but the "buyers" aren't aware of that and have other expectations.
I'd like to give you a simplistic answer, like "All you need, kid, is a small team! For anything!"
Slogans like that are true, and yet they are terribly misleading, because 1) many organizations are already terribly overstaffed, and 2) it doesn't really answer a team's question of "What do I do right now?"
So here's as simple as I can make it:
Good organizations will do whatever is necessary to make things that people want, even if that means instead of programming, the programmers sit on the phone and do some manual chore for people as they call. Before you code anything, you have to be hip-to-hip with some real person that you're providing value to.
But as soon as you have those five folks sitting on the phones doing something useful? You gotta immediately automate. Everything. This means you're going to have all freaking kinds of problems as you move from helping ten people a day to helping a million. You have to automate solutions, access, coordination, resource allocation, failovers, and so on -- the list is almost endless (but not quite).
As they grow, poor organizations take a scaling problem and assign it to a person. Somebody does it once, then they're stuck with it for life. Good organizations continue to do it manually, then immediately automate. Somebody does it once, then the team immediately starts "plugging all the holes" and fixing the edge cases so that nobody ever has to manually be responsible for that again.
Growing "sloppy" means you end up helping those million people a day -- but you have hundreds of people on staff. Meetings take time. Coordination takes time. There are a ton of ways to screw up. People tend to blame one another. Growing good means you can be like WhatsApp, servicing a billion users with a tiny team of folks.
If you're already an existing BigCorp and have been around for a while -- odds are that you are living with this sloppy growth pattern. That means you need to start, right now, with identifying all the feedback loops, like release management, and automating them, such as putting in a CI/CD pipeline. Not only that, but there are scores of things just like that. You have a lot of work to do. It might be easier just to start over. In fact, in most cases the wisest thing to do is start over.
Now picture this: you're an Agile team at BigCorp and you've got the fever. Woohoo! Let's release often, make things people want, and help make the world a better place. But looking around, all you see is ten thousand other developers in a huge, creaky machine that's leaking like a sieve. You go to a conference with a thousand other BigCorps, just like yours. Are you going to want to hear about how it's better just to trash things and start over, about the 40 things you need to have automated right now but don't, or how to make your section of 150 programmers work together; how to "scale agile"?
Scaling Agile is an issue because the market says it is an issue.
The problem I see is that people aren't really willing to be honest about the cultural transition that happens, so we also can't be honest about the process transition.
I think Agile-ish approaches work very well in startups, because the structure is pretty flat and the goals are shared. But as companies grow, they tend to become what I think of as business feudalism: hierarchical, control-oriented, territorial. For that, it makes sense that you need different processes. And I think large-company Agile is in effect Waterfall with a faster cadence, so you get that different process. But nobody will admit it. "We're doing Agile," they say, with too-bright eyes and gritted teeth.
What I wonder is: what if instead of killing the peer culture and the human-centered process as we scaled, we kept them?
"If my experience serves any purpose, it is to illustrate what most already know: our courts must not be allowed to consider matters of great importance in secret, lest we find ourselves summarily deprived of meaningful due process. If we allow our government to continue operating in secret, it is only a matter of time before you or a loved one find yourself in a position like I was standing in a secret courtroom, alone, and without any of the unalienable rights that are supposed to protect us from an abuse of the states authority."
This seems similar to the Apple case. Was Tim Cook just too big to bully?
Snowden had his own encryption or used GPG, so a Lavabit backdoor encryption key would not lower the entropy of Snowden's encrypted emails.
Was Levison's gag order lifted? Why did Lavabit have to close but Apple didn't?
[EDIT] Levison was not jailed.