"Under the new operating structure, its main Google business will include search, ads, maps, apps, YouTube and Android and the related technical infrastructure (the Google business)"
"In connection with the new operating structure and upon completion of the Alphabet Merger (as defined below), Larry Page will become the Chief Executive Officer (CEO) of Alphabet, Sergey Brin will become the President of Alphabet, Eric E. Schmidt will become the Executive Chairman of Alphabet, Ruth Porat will become the Senior Vice President and Chief Financial Officer (CFO) of Alphabet and David C. Drummond will become the Senior Vice President, Corporate Development, Chief Legal Officer and Secretary of Alphabet. Larry, Sergey, Eric and David will transition to these roles from their respective roles at Google, whereas Ruth will also retain her role as the CFO of Google."
Interesting strategy; hard to second-guess from the outside, of course. Sun's motivation was to figure out whether the other parts of the company could stand on their own, and it also makes it less fiscally complicated to discharge an entire group into the void. Think HP selling off the Agilent half of itself.
Generally though, this sort of move is a way of containing and then "fixing" cost problems. Divestiture is so much easier once you've created the framework of a whole organization around each chunk. It can also be weirdly inefficient: at Sun, each of the "planets" paid a sum of money to IT (Bill Raduchel's organization) for "Corporate IT support," except that Corporate IT didn't work for them; they were just the only vendor you could use to get your IT services. So what you ended up with was really crappy IT work that you couldn't shop around for. It was maddening. But the 'collection of companies' design pattern requires either that you have "service providers" that everyone uses (HR, IT, Legal), which gives little incentive for quality service, or that everyone gets their own version, which means a lot of excess overhead and duplicated work.
I could think of at least two other ways Google could have re-organized without bringing that pain upon them, and as Eric lived through it at Sun as well I'm sure he has an opinion.
Oh, and having one of the sub-companies get the world's #3 brand? I wonder how that works out.
Answer "No" for SunSoft, "Yes" for Sun Hardware, "No" for Sun Labs.
Sergey and I are seriously in the business of starting new things. Alphabet will also include our X lab, which incubates new efforts like Wing, our drone delivery effort<a href="http://www.hooli.xyz/" target="_blank" class="hidden-link">.</a>
I don't know much about trading, but look at that "after hours" spike! http://postimg.org/image/ho5ecyr99/
EDIT: All Google subsidiaries are now subsidiaries of a conglomerate called Alphabet. Google is a subsidiary too. Google stock will now be Alphabet stock.
- Google will now be operated as a subsidiary of a new company called Alphabet.
- Alphabet will be publicly traded under the same symbols as Google is now traded.
- Stock will just transfer as-is.
- Sundar Pichai is now the CEO of Google.
- Larry and Sergey will run Alphabet as CEO and President, respectively.
As Google cofounder Larry Page, now CEO of the holding company Alphabet (whose main subsidiary will be Google, the search company), said earlier today:
>As Sergey and I wrote in the original founders letter 11 years ago, Google is not a conventional company. We do not intend to become one. As part of that, we also said that you could expect us to make smaller bets in areas that might seem very speculative or even strange when compared to our current businesses. From the start, we've always strived to do more, and to do important and meaningful things with the resources we have.
Well, if Google wants to keep spending investor money into "speculative" areas, what could be dumber than reporting its financials as "Google: hugely profitable" and "other random stuff: huge cash drain"? It will just make investors all the more sensitive to the fact that Google's search business is basically what makes money, and everything else is - for now, at least - a huge cash drain.
Drawing attention to Google's - oops, Alphabet's - business units' individual financials will attract the likes of Carl Icahn, who raided eBay in the past and who will openly challenge Page and Brin's capital allocation decisions. The advantages of having Sundar Pichai take on greater responsibilities as Google chief, etc., will definitely not compensate for that.
Not at all a wise move.
I can't recall this sort of thing happening in my lifetime, so it will be really interesting to see how this plays out. I also wonder how this would be treated if Google didn't have the crazy corporate structure they have now (where public shares are essentially non-equity and non-voting).
Edit: I am reasonably certain this is a tax and liability optimization strategy. It allows their more risky units to operate with separate liability from their cash cow.
Edit 2: I'm actually surprised the stock value hasn't tanked because most of the future potential of Google just got moved outside of the company. How much of Google's future value was based on X? I would say a non-trivial amount of the stock price is the anticipation of future profits, which are now no longer a part of the company the stock is intended to index.
Edit 3: Disregard Edit 2, I misread the release the second time through and assumed X was not part of the company :).
1) Google - a company comprised of reliably profitable products that run at massive scale (search, video, mobile, mail etc), and they know that Sundar Pichai can manage this
2) Everything else - these are high-risk ventures with possibly enormous pay-offs. This is a breeding ground for positive black swans, which Google are keen to expose themselves to.
To borrow Nassim Taleb's nomenclature, Google is splitting into Mediocristan (1) (bounded variance - existing products [like YouTube] are predictably profitable) and Extremistan (2) (Calico - if a major breakthrough in combating aging-related diseases is made, it will be both unpredictable and hugely materially beneficial).
>> We will rigorously handle capital allocation and work to make sure each business is executing well.
This sounds like the business restructuring will allow Sergey and Larry to apply just as much capital as they see fit to the extremistani business divisions. In other words they would like to control their exposure to possible consequential rare events in a simple fashion: by controlling a very simple set of parameters - i.e. how much cash each business division gets.
Google, Calico, Nest, Fiber, Ventures, Capital, X.
Looks like Search / ads, YouTube, Maps, Apps, and Android will stay under Google Inc.
So many people keep saying their biggest fear of Google is that it will turn devices like Google Glass or the Google car into products to collect information on people, when those products themselves are viable business models.
Alphabet will initially be a direct, wholly owned subsidiary of Google. Pursuant to the Alphabet Merger, a newly formed entity (Merger Sub), a direct, wholly owned subsidiary of Alphabet and an indirect, wholly owned subsidiary of Google, will merge with and into Google, with Google surviving as a direct, wholly owned subsidiary of Alphabet.
Appointing Sundar as CEO also allows them to focus more on the cool stuff in Alphabet and let Sundar run the meat and potatoes Google operations. Interesting moves.
> We did a lot of things that seemed crazy at the time. Many of those crazy things now have over a billion users, like Google Maps, YouTube, Chrome, and Android.
Seems disingenuous, since YouTube and Android (at least) were acquisitions.
I have very fond memories of early Google.com, and there always used to be a vivid spirit in their products that everything was so experimental and technically on the edge. That feeling has been gone for a very long time, but since Larry has come back it's been slowly returning. Call me what you want, but I feel like this is such a smart move for the founders' freedom to explore.
And the way they announced it is totally in line with that spirit. I'm sure there was a lot of technical work, and there will be more, but the way it's all hidden in the back lets them focus on the most important parts. I'm a fan.
Life sciences, life extension, military, information, telecommunications all under one umbrella company.
This will keep Google's products unified and working together, and will give them the opportunity to throw mud at the wall with Alphabet.
Also worth mentioning: this kind of corporate restructuring is fairly common. Usually it is done by company X expanding into X Industries, with X becoming a subsidiary of X Industries. It is just more visible because they went with a different name, the reasons for which are in the post.
They're making a Bet on the Alpha versions of these products.
Also, feel bad for the owners of that domain as it is effectively being DDoS'ed.
It used to be you were investing in a search/ad company that owned a lot of other stuff. Now you are investing in a company that owns the leading search/ad company.
The difference is obviously academic but I think it will make a difference in how the shares are traded. Perception drives the market after all.
I think it will streamline the management of all of these different businesses, and at least make it clear where Larry and Sergey are focusing their efforts.
While BH uses money from a cash-cow business (insurance) to build a portfolio of companies that look like the established economy, managing those companies exceptionally well (improving individual returns while reducing overall unsystematic risk, effectively using good management to move beyond the Markowitz efficiency frontier), Google will use money from a cash-cow business (ads) to build a portfolio of companies that look like the new economy, applying effective management in the same way.
The King and his vassals, ladies and gentlemen.
A is for Asynchronous, the way our code should be
B is for Beta, the first stage of our code the user will see
C is for Capacity, for this planning helps our hardware not fail,
planning capacity helps our product meet our users at scale
D is for Datagram, which you may not get from me
E is for E-tag, for caching is key
F is for Freedom, the state information wants to achieve,
follow information through history if you want to believe
G is for Google, the advertising and indexing whale
H is for Hystrix, because Netflix Tools are for scale
I is for the Internet, for without it, many start-ups would fail
I is also for iPhone who's apparently in jail
K is for Kill, because scripts can misbehave and memory can leak
L is for LifeSize, for meetings about meetings must be
M is for Metadata, because tracking in bulk is (mumble, mumble, something, something), look! "privacy!"
N is for NoSQL, for relational data is dead
O is for Octocat, who houses our code so it is not in our head
P is for a Penguin named tux
Q is for Quiet, lost to the tide of the open office flux
R is for Rabbit, because some problems require a Queue
S is for Secure, for we have our users' data to lose
T is for the terminal, for how else can we see ASCII Star Wars
U is for UTF-8, whose lack of handling makes bugs in our source
V is for Vitesse, never have MySQL DBs been so easy to scale
W is for the 5 Whys, that guide us in post-mortems when we fail
X is for Executable, which chmod can help our script to be
Y is for YCombinator, for many a startup, incubators are key
And Z is for Zsh, not your ordinary shell
These are the letters, remember them well
These are the letters, from A to Z
These are the letters, next time will you please say them with me?
[edit: format, typo]
Legal issues, I presume? Or are Brin and Page just having identity crises?
I don't think anyone will care if they see "Foobar, an Alphabet company" in the same way they would if it was Google, in any case.
I'm not a proud man.
I think it's likely that tax benefits from the reorg are the biggest reason for the stock price increases. But it also appears likely that there will be an offering of Alphabet stock in some form so it's curious to see how the value will break out.
If I were a conspiracy nut, I would say this makes it harder for, e.g., the EU or any other political entity to claim Google holds a dominant position as such.
The company that owns it, ascio.com, isn't even using it. Or perhaps they were a bit too greedy?
It is a win-win for everyone.
It's noteworthy that Berkshire Hathaway refuses to deal with technology companies, while Google is exclusively tech.
e.g. Japan Display Inc. is a conglomerate that encompasses the LCD businesses of Sony, Toshiba, and Hitachi
(Just joking. Except for my email, that part is 100% true.)
But I like Google as a company; it's really well administered, as is easy to see in the annual reports. Despite the not-so-good name, they are doing a great job, and this is the right next step.
Thought I'd link this to:
This will become the Umbrella Corporation of Resident Evil fame :P
Who gets Google's airport in Mountain View?
Is this supposed to say Sundar? Kind of an awkward mistake to make.
No Alphabet there.
Calico (focused on longevity)
Google (now led by Sundar Pichai and includes search, ads, maps, apps, YouTube, and Android)
Life Sciences ("that works on the glucose-sensing contact lens")
X lab ("which incubates new efforts like Wing, our drone delivery effort")
Oracle is suing over Android's use of Java APIs; if they lose the lawsuit, Alphabet can move Android into its own subdivision and close it off or sell it off, then develop a new mobile OS to replace it.
When we made Crash Bandicoot (with a team of 7), it was already virtually impossible to make a AAA game with 6-10 people, and that was 20 years ago.
I tell inexperienced entrepreneurs to take their honest best estimate and multiply by 10. Or, as Mark Cerny (our producer on Crash) used to tell us, "add one and increase the unit: 1 week = 2 months; 2 months = 3 years; 3 years = you're doomed".
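Cerny's tongue-in-cheek rule is simple enough to sketch as a toy function (purely illustrative, obviously not a real estimator; the function name is mine):

```python
def cerny_estimate(n, unit):
    """Mark Cerny's joke schedule rule: add one and bump the unit.

    1 week -> 2 months, 2 months -> 3 years, 3 years -> doomed.
    """
    bump = {"weeks": "months", "months": "years"}
    if unit == "years":
        return "you're doomed"
    return (n + 1, bump[unit])
```

So `cerny_estimate(1, "weeks")` gives `(2, "months")`, and applying it again lands you at `(3, "years")`, one step from doom.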
For a less anecdotal version, read The Mythical Man Month. (The factor he arrives at is 9.)
It reassures me that Kickstarter is actually being used to fund new things, rather than just as a marketplace for existing products where you don't even have to raise any capital up front.
Not to pile on, but this was very surprising to me... how could you not know that this would have a major impact?
"At first we could not believe that our baby was not more successful; in our emotions we started looking for explanations not related to the game. Maybe gamers are just spoilt brats, bashing on everything, maybe there is an oversaturation of the indie market, maybe all the free-to-play games by big studios are giving players a false sense of value. How could less than $10 be too expensive for a beautiful game like Woolfe? How could this be our fault?"
"Of course none of the emotional excuses above are the reason of our mixed steam rating. We can only blame ourselves "
It's good to see someone accepting the fact that their game just wasn't good enough. Also nice to hear someone admit what a cop-out blaming customers or the market is.
If anyone is motivated enough, it would take just 500-1000 people chipping in some spare change to make this open source (I'm not interested in any of this, before anyone suggests it - just hinting to those mentioning 'open source').
How did this happen? Shouldn't that money have been allocated earlier? Did they spend every dollar thinking that this was the dollar that would turn everything around?
Maybe I just don't get the mentality of those looking to fund projects on Kickstarter.
"What about our Kickstarter backers?
The people that believed in us from the beginning? People we made promises to. People we have let down. Even worse, people we will not be able to give the full rewards they invested in.
The crazy thing is, that we have most of the rewards ready for postage. All the backer stickers and letters of enlistment just need a stamp. All the poster sets printed, signed and ready. The artbook is ready to be printed, the soundtrack is ready for distribution, the DVD case is ready for production. But we have literally no money whatsoever to pay for stamps, let alone print the artbooks and dvd-cases."
This could be a good closure. From comments:
"Fredrik Waage:Please honor your backers who believed in your game by releasing the DRM-free version like you promised so linux backers (hopefully via WINE) and ppl who don't like steam can enjoy your game."
Or maybe even go open source?
I hope they try again and apply whatever lessons they learned
Reminds me of Alice, which had some beautiful levels.
While I can personally understand this reaction to having trouble, it's not healthy nor is it fair to cease communication in this way. A lot of promising projects announce difficulties by falling silent, including Limit Theory, which I was looking forward to seeing realized some day. Maybe crowdfunded projects need to shift to a mindset where it's expected to be upfront about problems.
> So with a heavy heart I have to communicate that as of now the IP of Woolfe, all of the assets and source code is now for sale
I said it before, but I'll gladly say it again: backing a Kickstarter project is a bet, not a pre-order. When I back a project, I calculate the odds and place my money accordingly, full well knowing it could be gone. As long as makers honestly tried to achieve something, I'm fine with failure.
Not everybody sees it that way, though. It would be nice if new games on Kickstarter would include a pledge for their IP to enter public domain in case the project as a whole fails. Because the alternative being played out here is most likely not very helpful to anyone, including the defunct studio.
I wonder if more experienced game developers could have recognized the failure much further down the track and pulled the plug (or taken on a lot more funding?) before it took the company under. I also wonder, if this undertaking had been made in the US, would the outcome have been any different due to different... I dunno, funding laws or more available funding from traditional investors?
Was the completeness of GRIN's failure down to the fact that they spent way more money than they had trying to prove a point (that you don't need to do pixel art or highly stylized art)?
The game industry is interesting to me because of the nature of the problems they have to solve, but it is so damn brutal I don't want any part of it.
I think the lesson here (and this one really is probably important for entrepreneur types), is getting funded really should not be viewed as an achievement. It just means the stakes got higher.
Game development is so incredibly hard. It feels like a winner-takes-all game where the very top 1% have everything, and the rest have almost nothing.
It's a shame that this studio has to fold. The game looks pretty good. You don't get to that point without having incredibly talented artists, programmers and managers, without having a team that's working pretty well together. And yeah, their reviews on steam aren't perfect -- but to me that doesn't seem that terrible. Everyone stumbles before they really catch their stride.
It sounds like they just ran out of money, really. They ran out of time. They weren't given a real shot. What a shame. I see startups pop up all over the valley here that don't even have a tiny fraction of the ingenuity and talent of this company.
It's a shame that our economy doesn't value art like this more. A real shame.
Tale of Tales is another example of a studio that produced some very interesting games in terms of art. The Path was a very interesting game and has influenced quite a few other games.
"In the video game industry, AAA (pronounced "triple A") is a classification term used for games with the highest development budgets and levels of promotion." https://en.wikipedia.org/wiki/AAA_(video_game_industry)
It should be obvious from this definition of an AAA game that 6-10 people in a small indie game studio can't make a game with 'the highest development budgets and levels of promotion'. It requires lots of $$$ and other resources, which an indie game company almost never has.
"Wisdom begins with the definition of terms." - attributed to Socrates
1. I'm shooting from the hip, because I have zero game dev experience, and I don't even know if this is possible, let alone whether it even makes sense (are 8-bit, etc. games simpler to build than those with modern graphics?).
I think game development must rank way up there with restaurants in terms of business failure rates. It might even be worse than restaurants, but the data could be impossible to collect.
Because restaurant failures are a matter of public record while game developers more often fail privately. The data simply evaporates. It's a really tough business, even with money.
For the most part, lack of business experience and idealism or hubris play a big role in this. The good old "the market is <insert big number> billions, if we only grab 0.1%" fallacy.
To be sure, hubris and doing something because you love it have their place, and fortunes have been made because of this. That said, the cold hard reality is that the gaming industry is paved with the corpses of probably millions of entrepreneurial efforts that have tried and failed.
Generally speaking, for most developers, I think there's far more money in developing games for those who have cash to burn (whether successfully or not) than to try to create the next blockbuster.
As a small data point, years ago we were approached by a company to develop an iOS children's game for them. Lots of animation, sound, graphics creation, etc. They had no experience in software development at all. They wanted to convert this low budget cartoon character into a game because they convinced themselves they'd make millions with an app.
We told them it would cost $50K to $250K (or more) and months of development depending on specs. Of course, they had no specifications. It would be impossible to understand costs without a solid spec.
We also recommended they DO NOT develop this game and stick to their core business. In fact we pushed back hard on this point. I sat down with the CEO for a couple of hours to explain failure rates, challenges, issues, etc. They needed to fundamentally transform their company and were not equipped to do so at the time.
I got an angry email from the CEO telling me we were crooks and how they found a company in India that could build them the entire game for just $15K in three months. What the hell did I know? Right?
A year later, almost to the day, I got an email from the same CEO asking if we could meet. We did. He revealed they burned the $15K and got nothing more than a slideshow made with templates. They then found a larger company (also in India) and burned an additional $50K and got something that was buggy and wasn't even playable. By the time he asked me for a meeting they had burned through over $150K trying to have their game made and had nothing. They couldn't even submit it to the app store. They were nearly out of money.
You could probably guess what happened next. He asked if we could fix it for $20K. I explained I'd be surprised if anyone would have any interest in touching that code-base for any amount of money. And, no, $20K couldn't even touch building the app they envisioned a year earlier. I repeated my recommendation to stick to their core business. Which they did. After learning an expensive lesson.
Anyhow, long story to relate one type of scenario behind game development where ignorance and hubris meet a pile-o-cash and a bonfire follows.
Sorry to see the Woolfe team fail. I don't think I am being a pessimist when I say this is far more likely to be the outcome with games. Kudos for trying. Move on. Quickly.
Could we Kickstart a project to drop it all in the Public Domain?
Really? Did anyone not think that would be huge?
"The crazy thing is, that we have most of the rewards ready for postage. All the backer stickers and letters of enlistment just need a stamp. All the poster sets printed, signed and ready. The artbook is ready to be printed, the soundtrack is ready for distribution, the DVD case is ready for production. But we have literally no money whatsoever to pay for stamps, let alone print the artbooks and dvd-cases."
I understand you failed. But you can't find money for postage? These were Kickstarter backers. You have money for a bankruptcy attorney? Just shopping around for a bankruptcy attorney can save thousands. (Maybe you are doing the legal work in-house?) My point is that the Kickstarter backers would appreciate a small gift of gratitude, and just might fund you in the future.
I think most small companies know they are going under weeks to months in advance. I knew months in advance for myself. There's nothing illegal about keeping a small fund for the last days of a business.
Good luck. And with capitalism and all its risk, thank goodness for bankruptcy. When I was younger, I didn't quite appreciate bankruptcy laws; I now keep a close ear out for any changes to them. (There are entities that want to change the federal statutes. I knew the Obama administration wouldn't let lobbyists touch them. I worry about the next administration.)
If I were to do it over again, I would have incorporated every business I ever started. I might have even incorporated my legal name right out of college, if that's legal.
PG has actually written on this subject: http://www.paulgraham.com/die.html
Basically, if a company stops communicating, it's almost a sure sign that it's in its death throes, and if a company is doing well, it's going to try to reach out as much as possible so they can show off how well they're doing.
This single blog post is strong evidence for why you should never, ever buy an Oracle product, and if you are running anything written by them, why you should plan to migrate away.
Now, the culture of consultants in the Oracle sphere of influence is pretty toxic and money-grubbing. I can imagine companies being badgered into paying security weasels big bucks to analyze software with tools that cough up a zillion false positives, whereupon the weasel looks like a hero and is paid a bunch of cash, the customer panics and demands that Oracle fix a pile of non-existent vulns, and some department buried inside Oracle doesn't know how to deal. Whereupon the weasel skates off to another company to run the same scam: rinse, repeat, and this blog post.
In which case Oracle should simply call it out: "Please don't send us crappy automated scanning tool reports from the shitty security weasel consultant you hired because those reports are useless, and the same weasels have been sending identical ones in, monthly, for years, and you are being ripped off." But Oracle never passes up the opportunity to express contempt for its customers, nor can it admit to being wrong.
Better to avoid that whole ecosystem.
But: this is authentic. This is what we (i.e. hackers) are always claiming we want. Someone speaking her mind, shooting from the hip, etc. Not an anodyne blob of corporate-speak: this is an opinion, stated pretty clearly, and backed up with fighting words.
You'd expect: "Our legal team has advised us to remind consultants that they are bound by any and all terms and conditions to which their clients have ... etc. etc. etc."
You get: "Otherwise everyone would hire a consultant to say (legal terms follow) Nanny, nanny boo boo, big bad consultant can do X even if the customer can't!"
Here we have someone who clearly loves the company and the product with a passion, defending both against what she sees (very wrongly, in my opinion) as criminal misuse and waste of resources.
I'll take one of these posts and argue its merits any day, over a block of mealy-mouthed corporate crap.
> A. I actually heard this from a customer. It was ironic because in order for them to buy more products from us (or use a cloud service offering), they'd have to sign a license agreement! With the same terms that the customer had already admitted violating. Honey, if you won't let me cheat on you again, our marriage is through. Ah, er, you already violated the forsaking all others part of the marriage vow so I think the marriage is already over.
What a thoroughly nasty comment. She is comparing her customer with someone who is cheating on their spouse. Disgusting.
This post is an absolute nightmare/facepalm. Basically my takeaway is "I guess I don't want to buy Oracle software". It's really mind blowing that this is the position of a major software company in this day and age. I mean I guess I shouldn't be shocked since it is in the EULA but man I'm kind of speechless (this clause has to be illegal in some countries, too).
Edit: as an aside, as a bad guy this would make me very interested in reverse engineering Oracle products. If they disallow it for their customers, reaction times to any security issues will be slower, and it will be pretty valuable to find bugs in their products.
Edit2: Seems like the blog was cracked. At least the "About" on the side seems to indicate that.
I've seen this institutional hubris first-hand. The unshakable belief (typically by nontechnical management) that all of the smartest people in the world are employed here, working for me.
It always ends badly.
Please don't do this. The HN guidelines ask you to use the original title. If that's really not suitable, a subtitle or some representative language from the article is ok. But putting your own spin on it is not ok. HN's goal is to let readers make up their own minds, and for that we need accurate, neutral titles.
We've changed the title to a representative phrase from the article, and can change it again if someone suggests something better.
However chattily she puts it, it makes me never want my systems to rely on a company that threatens and belittles customers for protecting themselves.
Say I bought a fridge for my house and found a listening device and a pinhole camera inside it. Just because the company has a clause saying I am not allowed to open up the fridge doesn't mean I shouldn't.
Sure, the company might have found the devices itself, and maybe there's nothing customers can do until the company fixes the flaw. But telling customers they are not allowed to look for flaws is just ridiculous. Yes, it's your product, but this is my home!
A lot of people think open source software is a much better methodology than proprietary, highly-protected source code. That's fine, there are a lot of good arguments there. However, it doesn't make sense to throw a bunch of other, barely related insults at a company when really, all you're upset about is that their code is not open source. Criticize that...that is what you're upset about (at least so far as this specific blog post is concerned)
"(Small digression: I was busting my buttons today when I found out that a well-known security researcher in a particular area of technology reported a bunch of alleged security issues to us except we had already found all of them and we were already working on or had fixes. Woo hoo!)"
But what I really don't get is this bug bounty hate-athon. If it's only 3% of bugs (currently WITHOUT incentives like a bug bounty), then that's really not that much money... and in return you get more cred, something you might use for recruitment, and the chance that you might increase that 3% versus bugs going to the black market. Even more so, how much could this really cost!? And Oracle has how much money?! If you can't spend that on a bug bounty when your security is just so awesome, as the post contends, then something is really in trouble.
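The economic argument here is easy to sketch as a back-of-the-envelope expected-value model. Every number below is a hypothetical placeholder (none come from Oracle or the post, except the 3% researcher share), just to show the shape of the trade-off:

```python
def bounty_net_value(bugs_per_year, researcher_share, payout_per_bug,
                     exploit_prob, avg_breach_cost):
    """Toy model: expected breach cost avoided minus bounty payouts.

    bugs_per_year   - total security bugs surfaced per year (assumed)
    researcher_share - fraction found by outside researchers (the post's 3%)
    payout_per_bug  - bounty paid per valid report (assumed)
    exploit_prob    - chance an unreported bug gets exploited instead (assumed)
    avg_breach_cost - cost of one such exploitation (assumed)
    """
    found = bugs_per_year * researcher_share
    payout_cost = found * payout_per_bug
    avoided_cost = found * exploit_prob * avg_breach_cost
    return avoided_cost - payout_cost
```

With, say, 100 bugs a year, $5K per report, a 10% exploitation chance, and a $1M breach cost, the bounty comes out well ahead even at a 3% researcher share; and a bounty would presumably raise that share, not hold it constant.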
Did someone at Oracle actually think that this was the best way to make this point?
They admit more security vulnerabilities are found by customers than by security researchers, and still they release this smug, "fuck off"-toned blog post.
Your JDBC driver IP isn't that valuable, just give me the damned source code so I can figure out why my Postgres copy out stream is blocking when I insert it into your copy in stream.
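I don't know the commenter's exact setup, but bridging a copy-out stream into a copy-in stream in Java is typically done with `java.io` piped streams, and those block as soon as the pipe's small internal buffer fills if the producer and consumer share a thread, which looks exactly like "my copy out stream is blocking". A minimal sketch of the pattern (no database involved; the class name `PipedCopy` and the in-memory stand-ins for the driver streams are mine):

```java
import java.io.*;

public class PipedCopy {
    // Sketch: PipedOutputStream has a small default buffer, so the
    // producer (the copy-out side) must run on its own thread or
    // write() blocks once the buffer fills and the pipe deadlocks.
    public static byte[] copyThroughPipe(byte[] source) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);

        // Producer thread: stands in for the copy-out side.
        Thread producer = new Thread(() -> {
            try (out) {
                out.write(source);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        producer.start();

        // Consumer on this thread: stands in for the copy-in side.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        in.transferTo(sink);
        producer.join();
        return sink.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] data = new byte[64 * 1024]; // larger than the pipe buffer
        new java.util.Random(42).nextBytes(data);
        byte[] result = copyThroughPipe(data);
        if (!java.util.Arrays.equals(data, result))
            throw new AssertionError("pipe copy mismatch");
        System.out.println("copied " + result.length + " bytes");
    }
}
```

If both streams were driven from a single thread, the first large `write()` would hang forever, which is why closed-source drivers make this kind of problem so painful to diagnose.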
RMS would have a field day.
a) is bad, and the users should just be turned away. b) is good and far better than selling them on the black market. c) is... who cares it's a license agreement.
Well, Apple does (for jailbreak exploits).
>I am not dissing bug bounties, just noting that on a strictly economic basis, why would I throw a lot of money at 3% of the problem
Uh ... You don't think that percentage will increase if you offer bounties?
And in any product that uses LGPL code, for example, it's actually a license violation to forbid customer modification and reverse engineering for the purpose of debugging those modifications.
(Though, admittedly, everyone always violates this term)
Aren't the issues not found by Oracle the problem? I'm amazed that researchers still report 23% of the externally found security issues, given that the incentive to responsibly disclose security issues to Oracle isn't really big. It sounds like a cumbersome process with potential legal consequences.
There are also researchers (maybe after a first bad experience with an EULA) who sell security issues on the grey/black market. Is there any data on how many Java zero-days are exploited in the wild before being fixed?
Changing your stance, being grateful for responsible disclosures, and only using your EULA to threaten and sue the bad actors could potentially save everyone with Java installed from a few zero-days, at zero cost.
If I'd read this last night... I still would've argued the same thing, but I would've been really unhappy about it.
If so, then somebody at Oracle realized that post reflected poorly on their organization. Perhaps there is some hope for Oracle yet.
While I don't endorse breaking the agreement (which was properly signed and "celebrated", as lawyers say), I find it funny in the first place that they're selling a glass container and say "you can't look into it, just use it".
I prefer the honesty of free software/open source projects that sell customer support to this business model (which is also adopted by others, not just Oracle). However, if I were already bound to it, and couldn't pay the cost of migration, I understand I'd have to stick with it.
It's also amusing that people/organizations seriously believe they can reverse engineer something as complex as a database engine and "fix it" without access to the diagrams, docs, tests, source code, build environment, etc.
Yes, we did not reverse engineer that code, even though I feel it would have done us a lot of good. Not to mention the tool set provided by Oracle is utter crap, as in it barely works on its own.
So I am not at all surprised that Oracle has that kind of mentality here. In all our communications with Oracle, I felt they never really cared about what we, the customers, actually want. All they care about is protecting their investments.
Though telling your client that they cannot reverse engineer to look for security problems is simply not done! What's next? "Exploits will not be fixed, because the users have signed an agreement that they will not hack?"
JRE CVEs: http://www.cvedetails.com/vulnerability-list/vendor_id-93/pr...
It's been 5 years since Oracle took over Java, so they can't claim these are leftover problems.
Oracle's security record is terrible by all accounts, so how can their CSO justify anything in this blog post?
ORACLE product list CVEs: http://www.cvedetails.com/product-list/product_type-/firstch...
This makes me want to climb the Empire State Building, beat my chest like a gorilla, and yell "Let me do what I know best!"
Any subsequent valid points she makes - and there aren't many - are undermined by this bitterness.
Heightened emotion can sometimes enable effective communication, but it does this post no favors.
(We're in the midst of an Oracle->Postgres conversion right now. It's going wonderfully. I strongly advise you to look into it, bet you'll find it way easier than you think.)
(One of the nicest things about it: we give every app its own cluster of two PG boxes, because you can just do that instead of running a centralised monster box with an expensive license. It turns out that everything not having to play nice with everything else makes stuff stupendously easier to manage.)
I wonder if Oracle would send one of those reminders to a customer who analyzed an attack by an attacker who "broke the license agreement" by reversing the customer's copy of some Oracle software.
Really? What if no money changes hands?
BTW, the post is gone.
Q. But one of the issues I found was an actual security vulnerability so that justifies reverse engineering, right?
A. Sigh. At the risk of being repetitive, no, it doesn't, just like you can't break into a house because someone left a window or door unlocked. I'd like to tell you that we run every tool ever developed against every line of code we ever wrote, but that's not true. We do require development teams (on premises, cloud and internal development organizations) to use security vulnerability-finding tools, we've had a significant uptick in tools usage over the last few years (our metrics show this) and we do track tools usage as part of Oracle Software Security Assurance program. We beat up (I mean, require) development teams to use tools because it is very much in our interests (and customers' interests) to find and fix problems earlier rather than later.
That said, no tool finds everything. No two tools find everything. We don't claim to find everything. That fact still doesn't justify a customer reverse engineering our code to attempt to find vulnerabilities, especially when the key to whether a suspected vulnerability is an actual vulnerability is the capability to analyze the actual source code, which frankly hardly any third party will be able to do, another reason not to accept random scan reports that resulted from reverse engineering at face value, as if we needed one.
Q. Hey, I've got an idea, why not do a bug bounty? Pay third parties to find this stuff!
A. <Bigger sigh.> Bug bounties are the new boy band (nicely alliterative, no?). Many companies are screaming, fainting, and throwing underwear at security researchers to find problems in their code and insisting that This Is The Way, Walk In It: if you are not doing bug bounties, your code isn't secure. Ah, well, we find 87% of security vulnerabilities ourselves, security researchers find about 3% and the rest are found by customers. (Small digression: I was busting my buttons today when I found out that a well-known security researcher in a particular area of technology reported a bunch of alleged security issues to us, except we had already found all of them and we were already working on or had fixes. Woo hoo!)
I am not dissing bug bounties, just noting that on a strictly economic basis, why would I throw a lot of money at 3% of the problem (and without learning lessons from what you find, it really is whack-a-code-mole) when I could spend that money on better prevention, like, oh, hiring another employee to do ethical hacking, who could develop a really good tool we use to automate finding certain types of issues, and so on. This is one of those "full immersion baptism" or "sprinkle water over the forehead" issues: we will allow for different religious traditions and do it OUR way, and others can do it THEIR way. Pax vobiscum.
I have started my Express Entry application, and very soon I will say goodbye to the US. I don't mind the cold in Canada. I will have the freedom to change jobs and won't be an indentured servant. I will also get permanent residency fast; the US green card queue for Indian citizens is backlogged by around 10 years.
I suggest it's best for you to apply for the Canadian Express Entry for skilled workers.
Do not work for free.
Once more, please, do not work for free.
1. You don't have to work for free, far from it. You have in-demand skills and experience in a global job market. You can make really good money in many, many desirable locations around the world.
2. I would be extremely wary of anyone who would take you up on the basis you're proposing. Anyone who would give you such 'charity' may have very questionable morals - 'oh sure, I'll take this desperate man's skills, make potentially a LOT of money off of him without giving him his due reward, and that's completely fine with me, because that's what he said he wanted'. Imagine the sort of person who would utter such a sentence - do you want to tie your livelihood for the next however many years to such a person? I'm sure you know, there is a whole class of criminal activity in developed countries which exploits illegal immigrants based around this very premise. DO NOT put yourself on that path.
3. Never put yourself at the mercy of any one person or organisation for your survival. Your current situation is awful, but what kind of life would that be to move to? How will you feel waking up in the morning in a bed someone is letting you sleep in, eating food they gave you for breakfast, then working all day only to guarantee an evening meal and a bed when you return home? Repeating that every day, for a long time. That is not a life.
 That's what they might justify it as, at least. The reality is the opposite.
 I really don't mean to offend here, I know that's not what you are, at your core. But that's how they will see you, and that's the position you will put yourself in and indeed what you will become by following such a path.
Also, money or not, if you're working in the US in a capacity that someone would usually be compensated for, even for a company outside the US, you need to have a work permit for the country.
If I were you, I would look for jobs in the Middle East, like Qatar or the UAE. Tech jobs are there, US universities are there, and the immigration requirements are basically "if an employer wants you, you get in." Rack up a few years of experience, then getting an H1B would be viable.
All of the above is absolutely, without a shadow of a doubt, THE most fortunate thing that has happened to me and I owe all of my current success to this.
What I'm trying to say is - PLEASE get back your dignity. You're not a monkey, don't make any person, government or society make you think you are one.
While I am sure there are conditions that will allow you to come and stay in the country, I would be careful what your arrangement is with any potential startup and how it is worded.
Perhaps another individual on HN has more insight into U.S. visa rules and can provide better guidance?
Good luck nonetheless!
My suggestion is to not do this. Enter as a tourist and enjoy your time in the US. If you want to work in the US, do it legally. Do work on an open source project and try to network and get a job that way. Maybe try to join a huge company like Google or Facebook from abroad and transfer. That's your best way, especially if you get an L1 visa.
- Once you get citizenship in one country, you can freely work in any of the other countries, or move there and live there, creating a much bigger pool of job opportunities. You could have citizenship in France and work at a cool startup in Amsterdam.
- Though it currently causes a lot of political instability (immigrants constantly drowning in the ocean trying to get across), getting a visa here isn't that hard, especially when you're from a conflict zone and can show you have a good chance of getting a job.
- Europe is pretty awesome.
Coding for a startup and not receiving pay is likely still not legal. In my experience with US immigration (I'm Australian, living in Canada, traveling to the US from time to time), they don't really care about the money; they just care about whether you're taking away work that could otherwise have been done by a US citizen. Which leads me to the point of:
The fact that you're doing the work for free is very likely irrelevant; it's the fact that you're doing work at all that is the issue, irrespective of the reimbursement you're receiving.
Wouldn't that be compensation, technically? Also, I'd expect a company to be required to pay someone at least a minimum wage, but I could be mistaken.
> I am doing this because i live in a war torn country, some issues happened and i've lost all my savings
Dude, forget the US for now. Your first priority is to get a safe place to live in and a stable job so you can rebuild your financial life. Try other countries, such as the Netherlands, Canada, Australia, New Zealand, Ireland. These have way better immigration policies, especially for people in tech.
Then, when you are ready, try California again. Having no money will be an obstacle otherwise. How are you going to get translated, notarized documentation otherwise? Not to mention any kind of fees, plus transportation.
> i can't get an H1B visa because i don't have a university degree
Then don't; try another route, such as via a big US multinational company. Or get the degree, if you follow the suggestion to go to an "easier" country first. You are young; you have time.
A very narrow exception exists for unpaid interns, but that requires one to also be authorized to work in one form or another, e.g. as a student who needs work experience in their field of study.
While a work visa is not likely to be easy, the current tech scene has huge demand for programmers of all kinds. Especially if you're expert in Unity/Full-stack.
If it'll help, let me know here, and I'll connect you to someone in this very area (game programming, Unity SDK programming).
Other options would be Canada, Mexico, Vietnam, or anywhere else you can work remotely.
For visa details see http://www.immihelp.com/nri/indiavisa/employment-visa-india....
If you are interested in moving here, shoot me a mail. Alternatively, check out my blogpost on medium: "Eight reasons why I moved to Switzerland" (https://medium.com/@iwaninzurich/eight-reasons-why-i-moved-t...)
I'm going through the immigration process right now and everyday Canada looks like a good option. I know it's not the US but it's still an awesome western country and has a reasonable immigration system.
There are plenty of jobs you can get without knowing German, and many employers provide free classes where you can learn some basic German. IMO, knowing a new language is also a very marketable skill, depending on where you are from. Depending on the company, you may get 25-30 paid days off in a year.
You can get paid well if you are qualified/experienced. Living costs are low as well, I live in Berlin in a spacious 3 room apartment in a great area (http://i.imgur.com/qLqzqN7.jpg). The infrastructure is amazing. My daily commute is 20 mins door to door (subway or cycle) and I don't need a car at all. My daughter goes to daycare for free, and the healthcare system though it has its quirks, works quite well.
Getting a blue card is easy and with your qualifications you should be able to get it quickly, with the blue card you can travel outside the EU and come back within 12 months, no questions asked. I just took a 3-week vacation back home and plan to take another one this year.
If you wanna explore some options I would be more than happy to help, drop me an email at email@example.com
You can find a job on oDesk (Upwork now). I did it before; I earned $3K/month and worked only 5 hours a day. That's good money in these countries (well, and in the US too).
Just work remotely, live there, save money. One day you'll find a job and will legally move to U.S. (seems like you'll be qualified after 9 years of professional experience).
Firstly, there's the Refugee Cash Assistance (RCA) program: https://www.sccgov.org/sites/ssa/debs/calworks/Pages/refugee...
Here are some other California refugee programs: http://www.dss.cahwnet.gov/refugeeprogram/
List of other refugee programs: http://www.visaus.com/benefits.html
Next, food aid (food stamps) is called CalFresh (req 5 yrs of residency for noncitizens)
After that, there's MediCal (state-run health insurance available at the county social services agency) (unsure of requirements)
Lastly, General Assistance (emergency cash, a pittance) (only 15 days of residency is required). You can sign up for it at a local social services agency office.
Here's the main website for Santa Clara county: https://www.sccgov.org/sites/ssa
(Beware of name clash: federal Social Security is also called SSA. I hear any sort of Social Security benefits usually takes a very long time and lots of paperwork to get.)
GA policies: https://www.sccgov.org/ssa/general/gachap06.pdf
Other California counties' websites are listed here: http://www.counties.org/
Yadav.rakesh (at) gmail
No need to work for free - definitely not when you know how to program and build systems. We don't seem to have enough of those.
if you are interested, let me know.
Specifically Australia, Canada, Germany all have working holiday visas which are flexible and would let you do this sort of thing. Generally anything to do with the US and visas is a bad day.
The border guards said that unless he was a citizen or had a work visa then he was not allowed to work on fixing up his own house, and would have to hire a local to do it.
tl;dr working for "just food and a place to live" is still technically working, and unless you have permission to do so it would be risky for all parties involved.
I don't know if you'd be allowed to work, but instead of taking grants from the US as a refugee, you could maybe convince them that you are a skilled-refugee who is leaving your war-torn country and you would like to work instead of being given a handout.
Something tells me that the red-tape in the US won't allow this, but it is worth a shot, especially if you speak to an immigration lawyer about it.
And on the global search for alternatives, should the US not work out, here's some overview data of the 110 most startup-friendly cities in the world: http://my.teleport.org/ -- and a mobile app for searching among them: http://teleport.org/mobile (visa data layers coming soon, too, but dozens of other cost & quality of life criteria already there)
Also, please keep us up to date regarding your situation.
1. Take a deep breath and be calm. It will be OK. You have a visa, which is the option to move; you are in a good place already.
2. Think of the most stable (infrastructure- and cost-wise) country you can access visa-free, go there, and try getting a remote position in the US. With that, you can fund a fairly stable life in the meantime.
3. DO NOT risk your B1/B2 by trying to trick the system. Aim for a maximum of 4 months/year in the US on it.
4. With your B1/B2 you can travel to Mexico and Turkey for a while too.
Finally, DO NOT risk your B1/B2, and always have a decent reason when entering. The stamp in your passport merely gives you the CHANCE to gain entry at the immigration border; it is not a guarantee of admission.
It will be ok bud!
What about working on some open source projects? I don't think that would fall into the danger zone of immigration law (since you wouldn't be working "for" anyone).
Alternatively, maybe a company here can offer you an internship? The visa requirements could be less.
Does anyone here know an immigration lawyer that could help this person get out of a bad situation?
I have exactly the same problem. I'm from Russia and I don't have university degree so I can't get H1B visa right now (but I will when I have 12 years of exp).
The US is a really hard country to get into.
I relocated to Stockholm, Sweden, since Sweden doesn't require a university degree for a work permit. Software developers are on the shortage occupation list.
Sweden is the easiest wealthy Western country to get into.
If you get bored in Sweden, you can later apply to the UK (as far as I understand, Tier 2 General doesn't require a degree either).
You can get a job in Hong Kong or Singapore without a university degree, but it will be a bit harder.
So I recommend Sweden. It's better to be a normal employee in Stockholm than to be working for food in California.
Also, don't stay for a few months in the US on a tourist visa. Next time they may refuse to issue you a new visa!
I think your best option is to ask for asylum.
I'm from a very remote part of Brazil, and I used www.seek.com.au to get a programming job in Australia in 2008. The company ran some remote tests with me and paid for all the relocation costs. You should try this.
Also you can try to get a permanent visa even before you try to move there. You can use the Immigration Points Calculator (https://www.wannamigrate.com/tools/) to know if you have the basic requirements for these same countries.
Also, there _may_ be nothing stopping you from living in the US but working remotely for a company in another country. That may be a good path to getting an Australian/European/other company to sponsor you for skilled migration.
Best of luck!
I like you and your tenacity.
Why don't you ask it differently so all those annoying comments trying to "help" you would stop.
What I'm thinking is this:
"Does anyone have a fun side project I could hack on? Would you also be so generous as to have a couch for me at your place and host me for a couple of weeks, or whatever time?"
I can't imagine why such a proposal would have any illegal implications as long as your presence in this country is legal. You can also qualify the "side project" as non-commercial and a "hobby".
Does that make any sense whatsoever in your situation?
Anyway, best of luck. I really hope things get better.
Anybody can start a class, and you qualify to sign up for a professional degree after getting an A in 3 or more classes. Good luck!
However, as many others have suggested, I too would recommend you to try another country, where visa rules are relaxed.
You can contact the Helsinki Citizens Assembly or the International Organization for Migration for advice.
If you have all this time, why not develop an app and sell it on the internet? You can always say you're working for your own company back home.
Good luck; we're rooting for you.
For a skilled Ruby dev with a diploma (for a third-world country, this is a requirement), you can get around here pretty easily.
I don't know how easy it is to get a Visa in your particular situation!
It should really put things into perspective.
It's highly likely that he will enter a few times with short stays outside the USA and then get denied entry, sent to secondary processing at an airport, questioned and be offered: A) the right to contest his case in court which will mean jail time until his case comes up or B) The option to withdraw his petition to enter the USA and catch the next plane back to his home country. Most people choose B for obvious reasons which leads to you being marched through the airport by security and put on a plane back home.
What I'd very strongly recommend is to not go around offering to work for free. If you do in fact live in a war torn country and have 'lost all your savings', do what many offshore folks do and get a US company to hire you for pay and just work wherever you are and get paid in your home country. Why the "work for free" offer and why the long story? It makes companies nervous. We like to pay people for their good work whether in the USA or outside the country, but legally and above board. You should get paid too.
Just posting a few data points regarding H1B stuff and immigration in general:
Time varies for visa processing and 10 years is not average for most immigrants (as has been mentioned). It took me 6 months from zero to green-card and 3 years from conditional residency (green card) to full citizenship. Not H1B. So it varies according to type of Visa, where you file and your country. Wait times can be found here:
Microsoft brings in H1B's at a rate of 2000 to 4000 people per year into the Redmond area.
Google about the same numbers, mostly into Mountain View:
I'm not sure I agree about H1B being indentured servitude. I'd also add that, if your intention is to become a citizen via H1B, make sure you understand how the process works before you even apply for H1B:
> In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, "I don't see the use of this; let us clear it away." To which the more intelligent type of reformer will do well to answer: "If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."
That might be a false assumption (look, some people just don't care) but you gain very little by complaining and getting mad at things that already happened.
We love to complain about things our predecessors did wrong, but often, we don't do those things either :)
Which suggests a test for your understanding of a market: can you map out the incentives and explain why what looks like apparently-irrational behavior is happening?
For example, in healthcare, we waste 30%+ of the $3T we spend each year. Much of that waste is due to hospital readmissions for an ongoing condition like heart failure. Startups sometimes try to fix this by developing a special machine learning algorithm to predict readmissions and apply an intervention. But even when the technology succeeds, the business fails: hospitals charge for readmissions, so there's an active disincentive for the hospital to buy the product. (That is now changing with ACOs, and a change in incentives is an opportunity for new companies.)
Likewise when customers came to visit us at trade shows my boss would sit politely through their compliments, then immediately jump to the question "So what don't you like about our product?"
Fast forward to today. I'm friends with top people at a Really Big Guitar Company and a Huge Amplifier company. Even in private, these C-level execs show nothing but respect for products of their competitors. They are not ashamed to own and even personally use said products (especially vintage ones).
It seems to me that dissing your competitors even privately can make you dangerously blind to the challenges they pose to you, set a bad example for your employees, and also restrict your job prospects should you decide to work for a competitor one day.
Developers: inherited code is considered guilty until proven innocent. Or maybe more accurately, guilty until you've rewritten it. Surely the old developer had no idea what they were doing.
The "other faction": Democrats/Republicans, different religions, rich vs. poor people... most generalizations about the faction you don't belong to start off with thinking "they're so stupid". "Look at those Republicans/Democrats. Can't they see that Trump/Obama is just lying through his teeth?"
Bad actors: The presumption of stupidity carries over into the way people think about computer hackers and terrorists and the like. You'll see stories about how "those terrorists are learning how to use cell phones to detonate bombs!" or how "criminals are migrating online to prey on people with phishing attacks!" The underlying assumption is that they're stupid, but getting (dangerously) smarter.
I think we'd make a lot more headway in most areas by assuming our competitors, detractors, and wrong-doers are probably already pretty smart.
We've been brought up in school to think we have to get nearly every answer right on the test in order to get a good grade (and get more than half of the answers right just to not flunk out). In the real world, getting one right answer, and not screwing the rest up too badly is often enough (and sometimes only barely achievable!).
So maybe your competitor did something "stupid" because they're stupid, or maybe it's because that thing doesn't actually matter that much, and they're focused on doing something else incredibly well instead.
"Of course, just because you presume intelligence doesn't mean that every decision made was smart."
I'd rephrase as follows: it's unwise to assume stupidity on the part of your competition, but it's very wise to allow the possibility of stupidity.
With the corollary that if there's an inexpensive way to capitalise on that stupidity, it's probably worth trying, just in case the thing that's walking like a duck and quacking like a duck is in fact a duck.
As a tangent to that - the chances that assumptions of stupidity are correct go up in direct proportion to your level of domain knowledge.
I see a lot of non-film people say "the movie industry does $FOO and that's really stupid", for example, and 95% of the time, they're wrong and there are good reasons for doing $FOO.
However, I also see people who know the film world (including me) say "a lot of / most filmmakers do $BAR and it's dumb" - and $BAR has a considerably higher chance of actually being a dumb, common mistake.
For example, a company I worked for had the best technology, but bad UI and the competitors had good UI, but their tech was old and inaccurate.
For years we thought they were imbeciles, because they didn't update their tech and we would smash them in the future, because they cannot catch up with us.
But in the end the customers bought the software with the better UI and didn't look behind the scenes.
So their decision was logical. Why pour money and time into parts of the software that no one wants to pay for?
A common example in startupland is a company whose senior management has short term incentives that reward a fast exit over long term growth. That company may very well behave in ways that appear dumb to competitors with a long term focus. But if the "seasoned" CEO and his cronies get their compensation even in a mediocre deal, why bother trying to build a company for the ages when they can cash out, rest a bit and land in a similar situation at the next gig?
"What important truth do very few people agree with you on?"
I interpret "truth" to really be a highly-opinionated belief rather than something like "2+2=4". In other words, what factors do you believe in that would make the business model successful that outsiders would dismiss as insane or stupid?
(On a trivia-related note: I notice the blog has the title "stupitidy" instead of "stupidity", so I'm not sure if there's an inside joke I missed.)
For instance there was a period of many years where both Google and Bing image search were embarrassingly bad and I was able to build something far better for a certain range of queries.
It took me a year to build out my system but in that year, Bing and Google both improved dramatically, so my demo comparing results with them was no longer impressive at all.
But more often than not it is the presumption of intelligence that pays off.
I do think that you should try to think about how you might solve something before looking at what your competitors do, because it's easy to trap our minds into thinking that there are no other solutions unless they fit into a similar box to what's already working. Naiveté combined with thinking for yourself can often be a powerful reason why many startups succeed.
If your solution ends up looking similar, at least it was likely derived from first principles rather than the path of least resistance: blind copying.
Not that this article is bad. It's just datapoint 107 that a founder has to reconcile with all the other competing advice.
(Btw, the Thiel view of not picking fights you can't dominate and Buffett's sticking to defensible business models is a good mindset to calibrate a venture's success per risk gut perception. And with timing, team and execution you might just make something that hits.)
Everybody watches their competitors, it's entirely natural. It's solid advice to study them, and try to stay/get ahead of them where possible. This doesn't just apply to app features, but every facet of the business across many disciplines (sales/marketing/development/back office etc).
On the other hand, building a business based solely on a competitor's business decisions and not doing your own homework is the path to madness. We might take inspiration from our competitors, but we always check in with our customers next to make sure they actually want the feature. It's also our job to get feedback on not just what we're doing but also how we're planning to do it, as our users might have unique business requirements that our competitors' users do not.
Then later, while developing my own work, I find that I end up with the same complications, that I'm forced to make the same background assumptions, and I have the same difficulty in motivating my choices.
In either case, successful solutions have to work around the gap in the system rather than simply charging into it.
However when you are considering the general public it is best to presume stupidity and design with that in mind.
Sometimes what our competitors were doing was stupid, and we ate their lunch.
Sometimes what our competitors were doing was the only way to really run things, and we had to adapt to follow them.
The direct quote is:
>Thiel described the argument Zuckerberg finally came down on like this: "[Yahoo] had no definitive idea about the future. They did not properly value things that did not yet exist so they were therefore undervaluing the business."
Yahoo's market capitalization in July 2006 was $42.51 billion. A 22-year-old presumed they were stupid, and he was right.
Today FB has a market cap of $264.91B and Yahoo? Down to $35 billion after 9 years of growth.
By the way, to get the market valuation at the time, I did this search: http://www.wolframalpha.com/input/?i=what+was+yahoo%27s+mark... I can't believe it worked! I used Wolfram Alpha because this is the kind of search they promise they can answer, and they were right: they actually delivered. Nobody else on the face of the planet does this, and it shouldn't even be possible. But it is. If you think something is possible, JUST DO IT. If you think your competitors are stupid (compared to what you think you can do), you're probably right. (Or you wouldn't have that thought.)
I can think of a few more cases that I've seen cause havoc:
- U+FEFF in the middle of a string (people are used to seeing it at the beginning of a string, because Microsoft, but elsewhere it may be more surprising)
- U+0 (it's encoded as the null byte!)
- U+1B (the codepoint for "escape")
- U+85 (Python's "codecs" module thinks this is a newline, while the "io" module and the Python 3 standard library don't)
- U+2028 and U+2029 (even weirder linebreaks that cause disagreement when used in JSON literals)
- A glyph with a million combining marks on it, but not in NFC order (do your Unicode algorithms use insertion sort?)
- The sequence U+100000 U+010000 (triggers a weird bug in Python 3.2 only)
- "Forbidden" strings that are still encodable, such as U+FFFF, U+1FFFF, and for some reason U+FDD0
People should also test what happens with isolated surrogate codepoints, such as U+D800. But these can't properly be encoded in UTF-8, so I guess don't put them in the BLNS. (If you put the fake UTF-8 for them in a file, the best thing for a program to do would be to give up on reading the file.)
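A few of the cases above can be checked directly in Python; this is just a quick sanity harness, with strings hand-picked to match the descriptions rather than taken verbatim from the list:

```python
import unicodedata

# A hand-picked sample of "naughty" strings like those described above
NAUGHTY = [
    "abc\ufeffdef",      # U+FEFF in the middle of a string
    "null\x00byte",      # U+0000 encodes as an actual null byte
    "\x1b[31mred?",      # U+001B, the escape codepoint (here starting an ANSI sequence)
    "foo\u0085bar",      # U+0085 (NEL), the disputed newline
    "a\u2028b\u2029c",   # U+2028/U+2029, the JSON-vs-JavaScript line separators
    "e\u0301" * 3,       # combining marks, not in NFC form
]

# All of them should survive a UTF-8 round trip unchanged
for s in NAUGHTY:
    assert s.encode("utf-8").decode("utf-8") == s

# The U+0085 disagreement: str.splitlines treats NEL as a line break,
# but splitting on "\n" does not
assert "foo\u0085bar".splitlines() == ["foo", "bar"]
assert "foo\u0085bar".split("\n") == ["foo\u0085bar"]

# NFC normalization collapses the combining sequence into U+00E9
assert unicodedata.normalize("NFC", "e\u0301") == "\u00e9"
```

If a parser, template engine, or database driver mangles any of these round trips, that's exactly the kind of bug the list exists to surface.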
> /dev/null; rm -rf /*; echo

That's a little aggressive for testing, no?
Is it naughty to include it here?
Fuzz lists are to web pentesters what drain snakes are to plumbers.
Using a newline as a delimiter in that file excludes newlines from being part of the strings you are testing - but newlines are an important "naughty" character to consider. Unfortunately the same is true of basically any other common delimiter character.
Maybe base64-encoding the strings would be one way to solve for this? You could use base64-encoded values in JSON, for example.
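A sketch of that idea in Python (the JSON shape here is just one possible convention, not what the project actually uses):

```python
import base64
import json

# Strings that would break a newline-delimited file
strings = ["line one\nline two", "null\x00byte", "plain ascii"]

# Base64-encode each string so the JSON file stays delimiter-safe
encoded = [base64.b64encode(s.encode("utf-8")).decode("ascii") for s in strings]
payload = json.dumps(encoded)

# The consumer decodes back to the original strings
decoded = [base64.b64decode(e).decode("utf-8") for e in json.loads(payload)]
assert decoded == strings
```

The tradeoff is losing human readability, so shipping both a plain and an encoded variant is a reasonable compromise.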
 - https://chrome.google.com/webstore/detail/bug-magnet/efhedld...
 - https://github.com/gojko/bugmagnet
, , ,
(Well, the text file has empty lines separating the comments and example strings so it technically includes the empty string, but it's not in the JSON file.)
What about XML billion laughs strings, or parser-busting very long runs of parentheses?
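For anyone who wants to add those: here's a rough generator for a deliberately tiny billion-laughs-style document and a deep-parens string. The depth/fanout values are illustrative (the classic attack uses ten levels of ten), and the full-size version should only ever be fed to a parser you're trying to harden:

```python
# Build a small "billion laughs" style XML document.
# depth=4, fanout=10 is harmless; the classic attack uses depth 10.
depth, fanout = 4, 10
entities = ['<!ENTITY lol0 "lol">']
for i in range(1, depth):
    refs = "&lol%d;" % (i - 1) * fanout  # ten references to the previous entity
    entities.append('<!ENTITY lol%d "%s">' % (i, refs))
doc = '<?xml version="1.0"?>\n<!DOCTYPE lolz [\n%s\n]>\n<lolz>&lol%d;</lolz>' % (
    "\n".join(entities), depth - 1)

# A parser-busting run of parentheses is even simpler:
parens = "(" * 50000 + ")" * 50000
```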
Edit: Found this two minutes later: https://github.com/googlei18n/libphonenumber, seems to be an official Google product and Apache licensed.
* How could this be used to test 'corrupt' characters? Doesn't the process of saving the file itself as UTF-8 un-corrupt... the file?
* Is there some recommended way to group these into "strings that should pass validation" versus "strings that should fail"... or is that too application-specific?
I'd also add more invalid UTF encodings and embedded null bytes, etc. The JSON format would be preferable to plain text for that though.
Edit: Another one that tends to be fun is  in the param, like http://example.com/?get=.
And you can put things inside, like http://example.com/?get['"%05<!]=[%FE%FF]
This one seems to be skyrocketing.
Oh here we go, and lookie who is at the top: https://github.com/trending
- Why and How to Avoid Hamburger Menus
- Hamburgers & Basements
- An Update on the Hamburger Menu
- The Hamburger is Bad for You
WWDC 2014 Session 211, "Designing Intuitive User Experiences," @ 32:00, available here: https://developer.apple.com/videos/wwdc/2014/
Addendum: It's a responsive design so you can see this even on a desktop browser just by shrinking the width of the window. The top menubar collapses into a hamburger.
Addendum 2: Illustrated transcript here: http://blog.manbolo.com/2014/06/30/apple-on-hamburger-menus
Almost everything has or needs something like a hamburger menu somewhere. Can it be abused? Yes. Does that make it inherently bad? I don't think so.
It's interesting to see Hamburger menus bleeding back into the design language with Windows 10. It seems a strange, sad concession to meeting Android/iOS designs and even Desktop designs (with their million year old menu bars) "half-way". That said, one of the interesting twists that Windows 10 designs thus far tend to put on the Hamburger menu is that secretly in many cases the Hamburger icon is just a replacement for the Windows Phone 8's App Bar ellipsis:
The items on the bar show just icons at tablet size or smaller, and the Hamburger simply reveals app labels and maybe (rarely) lesser-used text-only options. (At larger-than-tablet sizes, sometimes the bar defaults to expanded rather than condensed.)
This roughly corresponds with the Facebook suggestions in the article here.
The interesting differences to a WP8 app bar are that the W10 hamburger "app bars" have mostly gone vertical and the hamburger is a toggle rather than the WP8 app bar ellipsis was a "slide".
It will be interesting to see how this design language continues to accrete/evolve as Windows 10 Mobile gets closer to launch.
A tab bar is great in an iOS app with a limited scope of functionality. That just doesn't work for a sprawling news site covering dozens of topics. A small, product-focused website may even be able to get away with showing all of its navigation options at once. For many sites, however, it's unfortunate, but sometimes you just need a well-organized junk drawer inside a hamburger menu.
Hey, designer. I know screen real estate on mobile is extremely limited. I know it would be really nice to fill the whole screen with content and just have a little, square, "more" icon tucked in the corner. I know you've tried to establish the hamburger icon as the universal "more" icon.
Too bad. Users aren't catching on as quickly as you'd like. They don't notice, understand or utilize the icon. Even if they do notice and understand, an ambiguous "more" is dramatically less engaging than explicitly showing what they can get. A "more" icon is asking them to expend effort up front exploring your interface with no clear reward in sight. So, they don't bother. Like, a measurable 50+% drop in engagement don't bother.
So, stick to tab bars as much as you can. It seems like a waste of screen space. But, the results still seem worth the cost.
And this is the crucial misinterpretation. Progressive disclosure as defined and used by Xerox is about objects and related actions. And it's all about visible objects! 
(Mind the classic example of a square in a drawing application: Clicking the shape discloses editing functions and displays handles to size the object.)
And here is the real problem: The hamburger icon as used today has no other object but the global context. By exposing context to the global context, it's a mere apropos without an object the user might relate to.
When Norm Cox designed the original icon for the Xerox Star user interface, it was a visual anchor for a menu revealing contextual functions to the visible content of the document. (Like selecting rows, etc. ) This is notably something else than the global, quite abstract context of a site navigation, disclosing navigational functions to address off-screen content.
Today's hamburger icon is just a paradigmatic misunderstanding.
 "A subtle thing happens when everything is visible: the display becomes reality. The user model becomes identical with what is on the screen. Objects can be understood purely in terms of their visible characteristics. Actions can be understood in terms of their effects on the screen. (...) In Star, we have tried to make the objects and actions in the system visible."(Designing the Star User Interface; David Canfield Smith, Byte, Issue 4/1982)
Compare: http://g.recordit.co/8Q5oAYCaVx.gif (outtake from an ACM CHI 1990 conference video, https://vimeo.com/61556918. Mind that the window-less bar at the top represents the global system as opposed to the document window below and its menu button(s).)
I disagree with this article that hamburger menus should be burned to the ground. I think it's useful for tucking away secondary or tertiary functionality.
* Facebook still uses it for accessing your friends list. With smartphones growing in physical size, there is more vertical real-estate to bring the tabnav back.
* Despite it not working for NBC, it seems to be working well for the New York Times (and it's not yellow). And I actually really like NYT's new page layout.
* Google Maps uses it too (also not yellow).
The main menu would be absolutely fine on its own; I think the hamburger menu is present because it's present on Windows, which - of course - doesn't have a universal menu. Still, I'm not letting Google off the hook here. These flagrant abuses of usability are things that the average undergrad should be able to identify, yet one of the biggest companies in the world can't? Disappointing.
But it requires some deliberate thought, effort, and app-specific solutions to replace it with something better, and that planning makes you answer all sorts of hard questions you might've not ever had to answer about your website/product, like "how are my users actually using this?"
I'd wager that everyone agrees that their own site's hamburger menu is a sore spot, suboptimal.
But the next rung up is a taller order than these types of articles admit.
I think a good follow-up blog post would be "Design patterns for escaping the hamburger menu" that showcases a variety of real-world approaches.
My first impression of these was therefore to try to grab them and pull, as if to slide the bars that they appear on. Unfortunately, even now, most implementations of "hamburger menus" do the worst possible thing when you try to slide them: nothing at all.
And then there's the weirdness of seeing them on the desktop where there is plenty of space. It's the same frustration I feel whenever I see a desktop app force content into a tiny, non-resizable box with scroll bars on a 1920x1200 screen! If I have the space, I really, really want to use it. Any design that refuses to expand to available space is simply wrong.
Nobody ever asked me - for obvious reasons, because I might be blind - but I'm partial to an icon where you have a + sign ("additional" items) on top of a V ("directional clue"; it could be pointed in other directions, for a pull-up menu for example) to form some sort of arrow.
We decided to keep the hamburger menu on both platforms for launch. Our reasoning was that it's a common UI convention and our primary navigation options -- Home, Recommended, Hot News, Local News, and topics -- are visible in the extended app bar. An option to follow additional topics appears inline in the Home tab.
So the three functions that are only accessible through the hamburger menu are bookmarks, history, and settings, which seems like a reasonable compromise. You could use our app fully for a year, albeit with the default settings and no bookmarks/history, without ever seeing the hamburger menu.
Analytics shows that the hamburger menu is used frequently by our beta users, so I'm fairly confident that we made the right choice. On the other hand, the new YouTube Android app -- which had more in its hamburger menu than we do -- has moved in the opposite direction and eliminated it.
These things don't appear in a vacuum - the Hamburger Menu originated from the Celtic Knot Menu, which was originally at the end of the Ribbon. The Ribbon itself confused the use cases of the Menu and the Toolbar, and was rightly criticized for that.
I am just learning Emacs, and it's a little paradoxical that this aspie guy Richard Stallman is the one who got so many things about the UI right. We are unfortunately confusing "easy to learn" with "dumbed down so much there is nothing to learn".
Their design had what looks like a menubar which is the precise anti-pattern to a menu-button. Those items, and what is showing already as top page content, is guaranteed to catch everyone's attention first. Those menu-button items are not only hidden and require an extra click, by design they have been made less important. And since so many sites have de-cluttered themselves by simplification, users' first impression is that they got rid of everything for the better... except it wasn't what they did, so basically everything under that menu was unreachable.
Two things would have been better. First, they could have kept the menu icon but had it expanded on the top page, so that people would see those items as top-page content and also make the intuitive connection that there was a button associated with them. When the reader goes deeper, the menu items could then safely be hidden, with the user intuitively fetching them via the button as needed. Second, they could have given the button a name instead of using the icon. For example, Amazon's "shop by department" button is the equivalent of NBC's hamburger menu. But since Amazon has a menubar, instead of putting a menu-button on the menubar, they made it a menu-item by giving it a name and making it an equal member of the top selection. This upholds the primary design pattern in use.
NBC's designers went for the hamburger without knowing how to use it or understanding what made it popular. You cannot mix competing philosophies and color is no substitute for broken intuitions. Even now that they settled for the menu-bar, they don't have an at-state, under "more" we see the same items in the menu in different order, and they use the pinned menu that doesn't go away even when you scroll -- a design already falsified by the frame paradigm of 1999.
I think what we are supposed to understand is that this Firefox Hamburger Menu (FHM) is really a TOA: Toolbar Overflow Area. It's a repository of icons for doing arbitrary things.
Its Customize button at the bottom invokes exactly the same UI as View/Toolbars/Customize: a big view where you can move icons between an editable version of the FHM, the browser toolbar, and a repository of available tools (shown in the main pane as a large area).
So any item that can be on your toolbar can go into the FHM, including bookmarks. Hence TOA: toolbar overflow area for items you don't use much.
It would be better if they initialized it empty, and if it somehow clearly communicated "Hey, I am a toolbar overflow area: put stuff here that would go on the toolbar that you don't need so much, when you don't have space on the toolbar."
That hard button was even better from a real estate point of view, and since it was consistent across all apps it seems like users ought to have grasped it.
The surprising and important thing here is that, even if the user knows the menu is there and that, if asked, it could help them, it doesn't appear salient and doesn't get clicked.
There are two reasons why the signs on the highway are so prominent:
1. When you are driving a car, you are basically meat bags inside 1.5+ ton collapsible metal cages moving around at 30+ or even 100+ km/h. One wrong move and meat bags risk being injured or killed. That's why the signs need to be simple and prominent.
2. A highway network has one and only one purpose: to transport people and things around, so the number of things that you can do on a highway network is inherently rather limited, which is why you can make decisions fast: go faster, go slower, stop, yield, merge, change lanes, exit a ramp, enter a ramp, turn left, turn right. That's why the signs can be simple and prominent.
Neither condition applies to websites in general:
1. If you lose your way on a website, you generally won't injure or kill anybody.
2. Websites generally don't have one and only one purpose, the number of things that you can do on a website cannot be expected to be limited. You could argue that the website menu should have one and only one purpose - to bring visitors to various pages - but that's not always true either.
A list of reasons people are doing a dumb thing, mostly blaming the people.
Can we be honest here for a second? The reason people are still using hamburger menus is because people have to make things work for phones. Phones with screens that are vastly smaller than the screens on even the smallest laptop, even for people who are hauling around the biggest phablets they can find. And people with phones want to visit the same websites they visit on their computers and there just... isn't... room. The hamburger menu gives you close to double the space to work with, from a UI point of view.
The alternatives presented are partial solutions. It may well be true that more people are reaching for the hamburger menu than truly need it. But the tab bar example from the article only scales up so far before it stops being a valid solution. And I don't know if there is a really good answer that doesn't involve rewriting the web from the ground up.
So, should we web developers start ripping out hamburger icons on our sites? NO. Avoid groupthink. Implement and test layouts that produce measurable results. Removing hamburger icons is no panacea. What are the users doing? What does the data say? If cargo cult thinking produced an over-reliance on a single navigation icon, we aren't going to solve anything by snapping back in the other direction.
Also, there's a difference between a hamburger icon and a drawer menu. On mobile devices a drawer menu is still a fantastic way to reveal additional navigation options without a page reload over a (potentially) slow network connection. Stuffing a navigation list into drawer menu is an easy solution. But it may produce poor results.
Also, this may not apply to apps, but on the web the hamburger is an indirect result of responsive design techniques, where a navigation menu has to compress due to limited screen real estate on mobile.
But the funny thing is that as a designer I hate the hamburger because it does feel like a hack. Yet I can see the popularity is due to trying to have something work on both mobile and desktop.
In fact, if you look at mobile-only apps, they tend to avoid the hamburger trap (example: Instagram), but if you look at any app with a desktop legacy (example: Facebook), you almost can't avoid it (unless you are willing to cut features or make a suite of apps).
I might also refer to it as the 'vent', since it seems to heat up after a few months of not restarting my browser.
Amazing! I could almost write a script for the meeting in which that solution was decided upon.
Idea is proposed by one individual at level N in the hierarchy. Some cursory justification is provided, based on theory from a design article they read, they think, or maybe it was a youtube video - doesn't matter: Yellow attracts attention! Green makes people want to proceed! Red makes people want to stop! It's so obvious.
Numerous objections are raised by individuals at level < N in the hierarchy, who have a fairly deep understanding of design and have thought a lot about the problem. The objections are considered briefly, and then summarily ignored.
The point I took away was that menus should have logical, semantic purposes, and common functions shouldn't be buried inside them.
I agree that if the hidden menu has very few options then it is a good idea to have everything visible but that is not feasible for more than a few navigation options.
Also, most Android apps support swiping from the border, which gives the user quick access to actions without using any space. Why doesn't the author mention it?
I remember back in the early years of the web (mid to late '90s) and one of the most important factors in designing websites was realizing that users don't scroll. They just didn't, and if your site design relied on that fact then you'd be screwed. But users learned to scroll, and now scrolling is perhaps the most important and most universal method of interacting with the web. In another 10 years will the hamburger menu become so well known and universally relied upon that not doing it will hurt your usability? Or are there fundamental reasons why it will never be good?
What's wrong with moderation? Day-to-day navigation elements shouldn't be in a hamburger menu (also, an extra 'click' for common tasks is bad), but there are plenty of non-everyday things that can go in there.
http://exisweb.net/menu-eats-hamburger and followup http://exisweb.net/mobile-menu-abtest
Anecdotally, I don't use a ton of mobile social apps, and the first time I encountered this icon I thought it was some weird play on an equals sign. Never occurred to me it was a menu. Now my own dev team is using it and for some bizarre reason I cannot convince them to stop.
I can't tell by eye-balling it what the symmetry is for the first one, but its periodicity says it must be one of those. Quasicrystals with 5-fold symmetry are not exactly periodic.
There are only 17 wallpaper groups. Since this is a wallpaper, what is its group?
Wolfram Alpha also has some things about tiling:
http://www.wolframalpha.com/input/?i=pentagon+tiling
http://www.wolframalpha.com/input/?i=pentagon+type+5+tiling
It does not appear in https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_m...
Look at the yellow and blue in the OP. They are actually mirror images of each other. Maybe a mathematician would say they are the same, but certainly not someone cutting tile for a bathroom floor. And if these were proteins trying to form a cell wall, that mirroring would be a serious hurdle.
Even the example in the article can be viewed as a regularly tessellating nonagon. I don't see what's "irregular" about it? The article doesn't mention that word, but the HN title does.
Can someone point me to a proof of this?
Edit: The article is talking about building structures but isn't a triangle the most rigid form? And triangles are already used in building.
Does anyone make bricks in these shapes? Those would make an awesome paver pattern.
This is so awesome. Much love to everyone at Facebook who has made this possible. With React, React Native, Rebound, GraphQL, Relay etc... You're saving us all from drowning in complexity when building web/mobile apps and I love it. Keep fighting the good fight.
I'm hoping Relay solves the data fetch problem in a way that makes isomorphic applications much cleaner.
All I can say now is: Got RELAY
It is ... massive!
When React came out, the core ideas were crystalline, and I was able to see the advantages in 5 minutes and to actually start doing something in 15. I would be happy to share the excitement for Relay... anyone care to explain? :-)
I feel something that can be composed programmatically without having to deal with string concatenation like Falcor's queries or the Datomic Pull syntax proposed in Om Next  could be more flexible and robust. I may be missing something.
Sadly working as a consultant, using Relay as prescribed offers little use for me as I port from client to client with widely different data models. I am interested in maybe using Relay in parent React components to keep logical separation between my models and views.
Do they have a specific PHP-to-Node bridge on the server side? If they write isomorphic code, either they are writing apps completely separate from PHP or they have some kind of integration (Node-in-PHP?) running?
I would be grateful for hints, I'm looking into working more with FB tech but I can't do Node on the server right now. Knowing how their architecture looks like with PHP/Hack on the backend would really help.
Note: I'm mainly covering GraphQL.
What I'm missing is implementations. For GraphQL you want a Java/Python implementation ready that can be hooked into your storage engine.
For iOS/Android you need some code-generation tools that can generate your client-side business objects from the GraphQL schemas.
When I think about it, GraphQL combines the best of the SOAP/XML era (schemas, type safety, client generation) with the new REST/JSON world (low footprint, simple structures).
However, it is still very difficult to adopt. And most of the time, in a startup environment, you are faster implementing a REST API and building your app on top of that. A schema (something like Swagger or JSON Schema) might help with client-side code generation.
_ Give RELAY
(For anyone who's interested, here was our design:
http://platform.qbix.com/guide/tools
http://platform.qbix.com/guide/messages)
BreezeJS is a stand-alone data library for SPAs which takes care of managing the lifecycle of data objects; querying, fetching, caching is all taken care of. Queries use OData by default
Now you should say "universal"; "isomorphic" was a poor choice of words in the first place and led to a lot of misunderstanding (and bad blood between JS developers and mathematicians).
> He launched Mayday PAC to much fanfare in the spring of 2014, billing it as the "super PAC to end super PACs." But it failed to play a decisive role in any race that year.
As Lessig found out, money by itself cannot buy power. Money is a means for magnifying the impact of forces that are already in play.
Consider, for example, climate change. During the last debate of the last Presidential election, Barack Obama was falling over himself to be more pro-coal than Mitt Romney. Was it because he hoped to court the coal-industry lobbyists and turn their firehose of political spending in his direction? There wasn't a chance in hell of that happening, and he knew it. He did it to court the voters in central and southern Illinois whose livelihoods are dependent on the coal industry there. We're a sprawling suburban nation addicted to cheap gasoline. Energy companies would have tremendous power even if they didn't spend a penny lobbying.
The same is true for banking and finance. People complain about fancy financial instruments, but at the end of the day main street businesses are utterly dependent on payroll loans, consumers are dependent on credit cards, and everyone wants to get a fat adjustable-rate mortgage so they can buy a big suburban house. Do you think banks need to spend any money lobbying to sway politicians in their favor?
And I'll also go out on a limb and suggest that money being a factor in politics isn't as bad as it seems. At least when money can influence politics, the noveau-riche can upset the old guard. Consider the auto industry. Traditional carmakers don't need to spend money to buy political power--the fact that they employ hundreds of thousands of middle-class workers guarantees that. But as traditional cars decline, and the Teslas and Googles of the world remake the industry, it's probably a good thing that those companies can use money to overcome the inertia and political mindshare of existing car companies.
I will be surprised if he doesn't reach his $1M goal, and much more surprised if anything substantive comes of the effort.
The "launch and resign" plan smells bad -- it seems like a hack to avoid having a complete platform, implying that the government will lack a leader during that interval, and using that as motivation to pass the act seems like a bad idea. It also raises the question of who the real VP would be.
Well, that will take longer than two terms. Congress doesn't even play along with the people who are in cahoots in rigging the system. It's beyond ridiculous to believe they will play along with their own destruction.
Lessig still isn't a household name, so I think it's far too late for him to participate in this election cycle as a real candidate. That being said, he's also imperfect as a candidate for a few reasons. Lessig is really good at presentations and speaking eloquently, but he still doesn't quite rile people up in the way that is needed for his kind of insurgent campaign (against who, exactly?). Lessig also doesn't have the cash to get noticed nationwide. He's setting goals to raise a million, whereas Hillary is planning a billion dollar campaign, and the Republicans are likely planning a several billion dollar campaign for whoever they pick.
Also, an elephant in the room: the issues Lessig is running on (campaign finance reform, voting reform, ending gerrymandering) are not actually non-partisan in the way that he is trying to market them. Everyone (everyone!) knows that campaign finance reform, gerrymandering, and voter reform are the left's issues.
Why? Because the right in the USA needs voter exclusion and balkanization (via the false issue of voter fraud, aimed at poor populations) in order to win elections. Campaign finance reform is similar; big money influences both sides heavily, but it favors the right for its business-friendly disposition. Big money favoring the right wing means that prospective candidates from the left are also vetted against how business-friendly they are, pulling the mainstream left toward the right, assuming that candidates act rationally and take the money up for grabs.
This series of behaviors ultimately results in the far-right wing business cartel promoters that currently comprise Congress. Claiming that Lessig isn't some kind of far-left (for the US) candidate is a tad disingenuous, even if he actually believes it. A popular and well-moneyed Lessig would be a huge threat to big money's influence on politics, to be sure-- in the way that Sanders is currently.
This is not Win / Lose or Patriots vs Seahawks.
This is forcing the most important issue to be confronted on the big stage.
> Lessig said he would serve as president only as long as it takes to pass a package of government reforms and then resign the office and turn the reins over to his vice president. He said he would pick a vice president "who is really, clearly, strongly identified with the ideals of the Democratic Party right now,"
So, wait. You don't want the "System", yet your Vice President is basically a member of the Democratic Party, which is precisely part of the two-party, rigged System right now?
Makes a lot of sense if you want to perpetuate said rigged System.
Look at Israel as a cautionary tale of a country that did everything right according to the liberal prescriptions. Regardless of implementing everything that Lessig calls for, monied interests still control the political system.
How does it work?
Well, take a look at Sheldon Adelson's actions. In the US, he buys his influence by being one of the biggest GOP donors. In Israel, he buys his influence by operating the largest daily newspaper (Israel Hayom), which he runs at a loss of $20+ million a year. Israel Hayom is the mouthpiece of the Netanyahu government. The paper never strays from the party line, in the same way that Granma never strays from the party line in Cuba. This gives Adelson a tremendous amount of influence over the government, even more so than he's able to buy in the US. Billionaires will always find creative ways to skirt the rules and buy their influence.
What did we get? Citizens United, lobbyists writing 10,000-page laws riddled with loopholes, and Bills and Administrations that do the exact opposite of what they say.
Makes it difficult when one doesn't like the VP.
The main reason I can see is that Lessig himself views his promise of reform as more reliable than any other candidate's. True or not, I think it would be difficult to convince the general electorate that he should be trusted more than any other candidate.
I think it would be far more interesting to completely "vacate" the office and do nothing, without formally resigning. The point being that elected officials have far less power than people think. I think the executive would function largely the same without a president or vice.
> "Even if she did say exactly the right things, I don't think it's credible that she could achieve it because she, and the same thing with Bernie, would be coming to office with a mandate that's divided among five or six different issues," Lessig said. "The plausibility of creating the kind of mandate necessary to take on the most powerful forces inside of Washington is zero. This is what led me to recognize that we have to find a different way of doing this."
I don't agree with this logic, that "political capital" is split among multiple mandates, and that having more mandates makes you less likely to achieve any of them. Having a position on many issues just means that more voters have a reason to vote for (or against) you. Many of those positions are expected of someone running for office under a certain party, and not stating a clear policy preference doesn't usually win you votes from the other party; it loses you votes from your own party.
I think Lessig's efforts are better spent continuing to advocate for an article V convention and influencing congressional elections via the Mayday PAC.
As a potential spoiler candidate, it might work by forcing more attention to campaign financing reform, but it's hard to take him seriously beyond that.
It would have been more intellectually honest to do what Jeremy Corbyn has done in the UK: running wholeheartedly, albeit assuming he won't be elected, just to inject a range of ideas into the debate.
I'd actually like to see Trump or Lessig run but people are so worried about a like-minded candidate leading to their party's loss.
So in other words four years, eight if he gets re-elected.
Awfully roundabout way of saying that....
http://lesterland.lessig.org/ (there is a great video talk by Lessig on the page)
BTW: Lessig is great!
The American electorate has been conditioned to vote for Team Red or Team Blue, and within those increasingly-similar teams their preferred standard-bearers will be chosen by a consensus of large donors in a series of luncheons and closed-door meetings, primaries be damned. It's not so much a sinister New World Order conspiracy as it is a general desire by the elite to influence future governance to secure their wealth.
If this weren't the case, then Sanders' standing wouldn't be so noteworthy, and O'Malley wouldn't be concerned about his party's nebulous debate schedule. Likewise, we wouldn't be hearing as much about Jeb Bush.
I'm not saying that third-party disruption can't take place, but the time to be forming exploratory committees was months and months ago, if not years. The 2016 Presidential race is well underway, and Lessig hasn't even stepped up to the starting line.
Source: MITM your iOS traffic.
Sidenote -- a possibly unforeseen side effect of end to end encryption everywhere is that it makes it far more difficult to man in the middle your traffic and hold companies accountable for their privacy policies.
Anyway, I've confirmed this. I've disabled web search and all of the other privacy options I've seen with Windows 10 during and after install. As soon as the first character is typed into the Windows 10 search box, the request goes out to www.bing.com. It doesn't say what you searched for (as the request happens before you complete the search), but it does send a lot of info to Microsoft about your platform, including a unique identifier.
I don't know if there's any solution or if privacy is just a remnant of the past. Is Linux any better? And is there any way to own a smartphone which is built not to leak my information, either through the operating system or through 3rd party apps that request access to everything on the phone?
Wondering if I should go through all the Windows stuff there and turn them off. Edit: just did (except for Edge and obvious internet related stuff).
Is there a way to change Firewall rules with a registry tweak? That would be the ideal way to distribute this.
* before you complain that I use the start menu to launch the terminal: I never remember Ubuntu shortcuts; it's meta+t on my system
1. Run gpedit.msc
2. Navigate to Computer Configuration\Administrative Templates\Windows Components\Search
3. Set the State to Enabled for "Do not allow web search", "Don't search the web or display web results in Search", and "Don't search the web or display web results in Search over metered connections"
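For machines without gpedit.msc (e.g. Windows 10 Home), the same policies can reportedly be applied directly via the registry. The fragment below is, to my knowledge, the registry-backed equivalent of those three Group Policy settings; treat the exact key and value names as an assumption and verify them on your own build before deploying:

```
Windows Registry Editor Version 5.00

; Assumed registry-backed equivalents of the "Windows Search" Group Policy
; entries above; verify on your own build before rolling this out.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Windows Search]
"DisableWebSearch"=dword:00000001
"ConnectedSearchUseWeb"=dword:00000000
"ConnectedSearchUseWebOverMeteredConnections"=dword:00000000
```

A .reg file like this could be distributed and imported with `regedit /s`, which would also answer the question elsewhere in the thread about scripting these tweaks.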
"It's not going to happen without having that data culture where every engineer, every day, is looking at the usage data, learning from that usage data, questioning what new things to test out with our products and being on that improvement cycle which is the lifeblood of Microsoft."
http://www.reuters.com/article/2014/04/15/us-microsoft-ceo-d...
Not surprised that MS does this; the sad part, however, is that for a simple search there are literally thousands of bytes exchanged.
Apple gives OS X away but nobody has yet got the memo that you are becoming the product. (Yosemite does exactly that by default - you can disable it though.)
I have the default search settings.
Google perhaps sets the benchmark, every single action you take in Google apps, whether native or web, is tracked extensively.
As far as I know Chrome OS isn't an exception.
Perhaps we need firewalls to protect us from our own software.
As far as what the contents of the package being sent is, I'll assume it is more information than necessary, and probably over-reaching until they get a slap on the wrist, but to call this phoning home is probably a stretch in itself.
-- Edit -- Apparently the search still phones home even if search is disabled, which makes my point mostly... pointless.
I still suspect that this was an example of Microsoft (intentionally) over-reaching and that they'll backpedal on this now that it has been brought to light.
Shame is, it feels like they are breaking any goodwill that the community may have still had left for them.
I use Comodo firewall and have basically set up a load of rules to prevent phoning home of any kind except to check updates.
Luckily, you have 30 days to change your mind and return to Windows 7. I did it within hours. I never liked Windows 8, and I think I dislike Windows 10 even more. No wonder they're giving it away: had they tried to sell it, it would probably have met the same fate as Windows 8.
I assume it's like all those sort of services, like the Google Chrome address bar, etc.
Microsoft is doing the customer is the product thing that others have done for like the past decade. It is how they can give away Windows 10 upgrades for free, even to pirated copies, and still earn money off of it.
If you don't want to be tracked or spied upon:https://prism-break.org/
You shouldn't be using Windows but one of the free or open source alternatives instead.
HIPAA-compliant offices cannot use Windows 10 because of the tracking it does and patient privacy laws.
Even worse is the Wi-Fi sharing with social networks: if even one of your employees has it turned on, their friends can get access to your corporate Wi-Fi, and that is a security breach. You'll have crackers trying to friend your company's employees on social networks just to get the Windows 10 Wi-Fi sharing password and get into your corporate network.
Even with all of the privacy settings turned off, there is most likely more stuff that phones home.
You know that given enough time video gamers will be forced into DirectX12 and have to use Windows 10. That business apps will be written for Windows 10 and force companies to upgrade. Sooner or later most people will have to upgrade to Windows 10 in order to run the software they need.
Woe be to the person who chooses express settings during startup. They will wonder why their Internet is so slow and woe be to them if they have a tablet with a data plan and wonder why they go over it.
"And the new uncertainty is predicted from the old uncertainty, with some additional uncertainty from the environment."
Crystal clear - great article, thanks!
I also recommend Ramsey Faragher's lecture notes on teaching the Kalman Filter: http://www.cl.cam.ac.uk/~rmf25/papers/Understanding%20the%20...
Is Kalman filtering computationally more efficient (obviously particle filtering is stochastic and so trades off accuracy for compute) or does it have some other advantage?
For nonlinear systems, we use the extended Kalman filter, which works by simply linearizing the predictions and measurements about their mean.
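To make the linearization point concrete, here is a minimal linear Kalman filter sketch in Python (a toy constant-velocity tracking example of my own, not from the article). The EKF variant would replace the constant F and H below with Jacobians of the nonlinear transition and measurement functions, evaluated at the current estimate:

```python
import numpy as np

# Toy constant-velocity model: state x = [position, velocity], dt = 1.
F = np.array([[1.0, 1.0],   # position += velocity * dt
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])  # we only measure position
Q = 0.01 * np.eye(2)        # process noise covariance
R = np.array([[1.0]])       # measurement noise covariance

def kalman_step(x, P, z):
    # Predict: propagate the state and its uncertainty through the model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend prediction and measurement via the Kalman gain.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Track a target moving at unit velocity using noisy position readings.
rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
for t in range(1, 50):
    z = np.array([t + rng.normal(0.0, 1.0)])
    x, P = kalman_step(x, P, z)
# x[1] (estimated velocity) should end up close to the true value of 1,
# and P should have shrunk well below the initial identity covariance.
```

In the EKF, `F` and `H` would be recomputed each step as the Jacobians of the nonlinear functions at `x_pred`, which is exactly the "linearizing about the mean" the comment above describes.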
I would recommend looking at an Unscented Kalman filter:
which sucks a lot less.
When I was playing with different compass implementations in F-Droid I noticed many of them use a Kalman filter to reduce the noise from the raw sensor data. Some of them (maybe only one) had problems near angle 0, where the sensor data jumps frequently between almost 2pi and slightly above 0. The problem is that the assumption that the measurement uncertainty is Gaussian breaks down badly there, since the distribution becomes half a Gaussian near 0 and the other half near 2pi. I don't know what the general approach to solving this is. I would solve it with either:
- Convert the angle into a unit vector and use that as the measurement input. Then predict the actual vector and use its orientation for the compass.
- Move the periodic window boundaries with a slow relaxation. So if I hold my compass at angle 0, all angle data is transformed into the [-pi,pi) range. If I hold it toward pi, the raw data is transformed into the [0,2pi) range.
TL;DR: Be careful when applying a Kalman filter to angles (or more generally R/Z).
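The unit-vector workaround from the first bullet can be sketched in a few lines of Python (a toy example I made up, not taken from any of the compass apps mentioned): mapping each angle onto the unit circle before averaging removes the discontinuity at 0/2pi.

```python
import numpy as np

def average_angles_naive(angles):
    # Plain arithmetic mean: breaks badly across the 0 / 2*pi seam.
    return float(np.mean(angles))

def average_angles_vector(angles):
    # Map each angle to a point on the unit circle, average the vectors,
    # then take the orientation of the mean vector.
    x = np.mean(np.cos(angles))
    y = np.mean(np.sin(angles))
    return float(np.arctan2(y, x) % (2 * np.pi))

# Readings jittering around 0: half slightly above 0, half just below 2*pi.
readings = np.array([0.05, 6.25, 0.02, 6.27, 0.04, 6.26])

naive = average_angles_naive(readings)    # lands near pi: wildly wrong
robust = average_angles_vector(readings)  # lands near 0 (mod 2*pi): correct
```

The same trick carries over to a Kalman filter: feed (cos θ, sin θ) in as the measurement instead of θ itself, and read the filtered heading back out with `arctan2`.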
Nav applications are the ones you see most often; it would be interesting to see an example from a completely different domain.
Optimal State Estimation by Dan Simon helped, too: http://www.amazon.com/Optimal-State-Estimation-Nonlinear-App...
Little wonder that beginner physicists spend so much time mastering matrices and linear algebra.
Does this mean a Win10 machine setup to use something like Tor will leak the user's actual IP back to Microsoft? If you're VPN'd, is some traffic still leaking outside of the VPN?
From an engineering perspective, how is this happening? Does Microsoft have a second network interface hidden away using hardcoded settings for DNS, etc?
On a somewhat related note, if a Win10 app is cert pinning, is there a way to force it to use your cert so you can MITM it?
Since upgrading to Windows 10 she's been hit with $200 in overages.
Details of economic spying -- may not be the best article but the easiest to find:
1. Do the different versions of Windows (Home/Pro/Enterprise/Education) behave differently? If so, how?
2. Do the pro/enterprise versions behave differently when they're connected to a domain?
I'd imagine that the answer to at least one of these questions would be "yes." This kind of behaviour would be a deal-breaker in many enterprises.
That large companies accept this state of affairs is extremely surprising.
That we accept that our electricity and communication bills are being diverted to serve the interest of an operating system's creator.. that sounds crazy. It's like letting the creator of your fridge eat your food and drive your car.
I think that Microsoft looked at the Google Now user experience on Android phones and decided to emulate that type of AI assistant in Windows. Google collects all sorts of user context information, and Microsoft decided to do the same.
This is a guess but the difference may be that (some) people are willing to have less privacy on their smartphones but care more about privacy on their computers.
prod e5ff4669-311a-0933-dee2-9444eee86460 instrumentation.cpp Instrumentation::StartQosExperience (Utilities::HashMapContains(_qosUXScenarioDataById, scenerioId) == false) Assertfailed: (Utilities::HashMapContains(_qosUXScenarioDataById, scenerioId) == false): Instrumentation is active when we try
Well there you go. If you ever wondered whether this is happening only on the Microsoft Account(tm).
As if they're thinking we all don't give a shit. But if we all didn't, why the downturn in trust in USA tech corporations post-Snowden?
I can't help but think that this is either massively naive from their part (people/companies won't care, they will buy our stuff and services regardless) or very short-sighted (as it will hurt their cloud services offerings in the long run, the more they hammer down the trust from their own users in MS' wares.)
An operating system that is sending random internal data to random places on the internet seems to violate both a wide selection of national laws related to data privacy, and many corporate policies relating to trade secrets, privacy, internal operations and so on.
Microsoft must have thought of this. What's their plan for continuing to sell to these customers?
The "send search data to an internet endpoint even if it's patently obvious that the search is for local resources" reeks strongly of Ubuntu's Amazon Shopping Lens. Did Mark Shuttleworth switch gears from Canonical to Microsoft when I wasn't looking?
I say this not because I think this is OK, but to point out that even changing the settings does not save you from the harm that was already done under the privacy terms!
Why downvoted? If you disagree, then give arguments, not gutless clicks!
This burden is becoming far too great, when this is the cost necessary to achieve innovation.
Either way, having another effort competing to make a great format is not a problem. Here's hoping it goes well!
It looks like the last patent on MP3 audio decoding expires next month.
http://www.osnews.com/story/24954/US_Patent_Expiration_for_M...
http://scratchpad.wikia.com/wiki/MPEG_patent_lists#MPEG-1_Au...
I think this is a great effort. If you'll recall, Google attempted to do the same thing with VP8, but found that people could file patents faster than they could release code. I would certainly support a 'restraint of trade' argument, and a novelty argument which implies (although I know it's impossible to currently litigate this way) that if someone else skilled in the art could come up with the same answer (invention) given the requirements, then the idea isn't really novel; it is simply "how someone skilled in the art would do it." I've watched the courts stay away from that theory, probably because it could easily be abused.
 Conspiracy theory or not, the MPEG-LA guys kept popping up additional patent threats once the VP8 code was released.
That's an odd choice of phrase; it's unfortunate that a press release chooses to disparage alternatives without explanation.
I don't really care about the compression ratios achieved, or speed of compression/decompression.
Something like motion JPEG would be good, if it was actually a proper standard (AFAICT it isn't).
I want open-source to subsidize a small team of engineers to create a completely open standard where no single entity owns it and everyone is free to branch / fork it.
Has anyone tested this or has more information on the performance/quality vs other codecs?
This is a great thing to offer students and I wish my University had made this a part of the curriculum. Somehow I managed to have practically zero exposure to computer science or programming until after graduation--only to discover that I find it immensely challenging, interesting, and rewarding. I probably would have switched majors if I'd taken this class Freshman year.
While I'm an advocate for practical education, I'm equally an advocate for understanding the principles of your field. This book will be much less vocational than your typical code school/academy/etc. and instead focuses on building a foundation you can build upon.
I highly recommend this as a great primer to computer science.
This course reminds me a lot of the Berkeley course posted a month ago, if anyone wants to see the discussion there: https://news.ycombinator.com/item?id=9838196. From what I can tell, the coverage is almost the same, except that the Berkeley course pretty much trades computer architecture for declarative programming and some other briefly-covered topics like machine learning, map reduce, concurrency, etc.
There's a huge population of people who simply cannot effectively use a computer. Can we fix that first? Otherwise we're leaving them behind and that's not right.
You should still be able to test out, though. That way, if you're totally uninterested, you can test out of the MS Word class and move on.
I think there are enough students sufficiently interested in computers that they'd check out the harder course if its name didn't sound too obscure.
That said, this course suffers from the same problems as so many CS courses.
1. It's too wordy!
2. As usual, I don't like the layout.
3. Funnel your subjects. (I'll give that a B.)
4. Funnel your paragraphs, or eliminate most of them.
5. Most people (students) find this material extremely dry. Introductory books should be "tight"! They should go through numerous edits. Take out every non-essential word.
6. I haven't yet read an introductory CS text that gets it right.
7. As to exercises: try to use exercises that the student might have some immediate interest in, or can use in their daily life. For example, instead of some cute game example, show the student how a simple reminder application is programmed, how Google works (just the basics), or how their spellcheck program works.
8. If I were going to write an introductory computer course, after explaining the hardware (that's usually sitting in front of them), I would explain how an operating system stores its information: "the use and location of folders."
I would want my students competent in the Command Line before we did any Programming. I would want them to know they can have two folders named the same, but located in different sections of the hard-drive. I would want them competent in finding them, and manipulating them.
But looking inward, I wonder if our industry doesn't do the same thing in many cases - what about:
- Eye strain caused by lack of contrast due to our favorite color palette.
- Stress induced by unintelligible workflows.
- Failure to protect a user's privacy.
- Programs that induce RSI.
I realize this is a far cry from polluting the environment with toxins, but shouldn't we at least think about these factors more often?
Is that really going to deter them from doing it again? Where is the real penalty?
What applies to the car industry (via dialogue from Fight Club) applies to the chemical industry in spades. DuPont knew (or strongly suspected) that C8 and Teflon were causing cancers, birth defects, etc. But the cost of moving away was going to be too high, so they all "kicked the can down the road."
Time to sock them with a multi-billion dollar verdict after some of these people are locked up for long periods.
So, I got cast irons. It's trivial to keep them seasoned and thus non-stick, and they can take the beating of very high heat -- searing meats -- and any metal utensils or rough substances.
People cover up, even the biggest most "professional" here (DuPont), and the public gets decades of abuse.
Why is it that medication requires FDA approval with lots of animal/human tests before you can sell it, but chemicals do not? Here the internal tests showed proof of problems year after year, which should have been a big red flag.
I was pleased to see that they aren't hyping the risk of cooking on Teflon.
Yet another lovely thing I'm sure I ingested when I grew up drinking tap water in New Orleans what with the Ohio being the biggest tributary to the Mississippi.
So many years ago I started using a glass frying pan.
They are non-stick. But are they covered with Teflon? I honestly don't know. They are oven safe up to 240C, I can put them in the dishwasher...they are seriously the most durable pans I've ever used, unlike normal Tefal Teflon covered pans which scratch easily. Does anyone know?
How about instead of having "government" do the testing, we instead require that anyone creating a new chemical needs to test it for safety and publish the results (and raw data) before allowing it to be sold? Seems reasonable to me that you verify that whatever you're selling isn't dangerous (or if it turns out to be dangerous, inform the buyers so that they can act accordingly). Should be the same with any product too, not just chemicals.
> Oracle has told people to stop using @Veracode to test their AppSec. They already got AppSec covered [picture of JS injection attack in the blog post]
Presumably the only reason a closed source vendor would be against someone reversing their source is because they're afraid someone will steal their ideas and/or redistribute their code for free.
That not being my goal I really couldn't care less. I'll just go ahead and reverse whatever I want whenever I want. I value my security, and that of clients, over some legal piece of toilet-paper. Everyone who doesn't agree, should reconsider. Do you truly believe that people should not be allowed to look at code that is running on their systems for their security's sake? I will not redistribute what I learnt, but I will analyse it to see if it is safe.
If you didn't want me looking, you should not have put it out in the open.
It may have been a bit abrasive, but the points were well made, at least from the perspective of a closed-source, enterprise software vendor.
Firstly, it's perfectly aligned with the world of proprietary software. Oracle is probably more protective than the other vendors, because the restricted access to the source code is at the heart of their business model. But none of the vendors I'm aware of is very keen on reverse engineering.
Secondly, reverse engineering has been prohibited for ages; it's not like it was added to the license agreement yesterday. And there are other restrictions (e.g. on publishing benchmark results), so rather than "Oracle is bad" I'd say "people who accept license agreements without reading them are morons."
And thirdly, the article is spot-on about the usefulness of reports generated from a reverse-engineered binary. I've seen shitloads of such reports, usually generated by some clueless consultant whose sole competence is running an automated tool and printing the result. So it's probably (at least partially) a protection against flooding support with bullshit reports.
And it's also true that many of these companies don't have proper security practices (like encryption, identity or password management, network security), yet pay some consultant to reverse engineer one of the components. Because it's easier to spend a large amount of money than to evaluate and rebuild their infrastructure.
So while I dislike Oracle, you can't blame them for everything - the customers are the ones choosing the vendor. If you happily accept their license agreement, you can't later complain "but we want to do reverse-engineering" no matter how many MBA titles you have. If you want such freedoms, ditch Oracle and proprietary vendors in general. That's what open-source is for.
edit: corrected my error
Are they being serious? "Uhm, yeah, sure, Mr. CSO, I deleted the file. Here, I'll show you a screenshot of a terminal where I ran the 'rm' command to delete the results. As you can clearly see, the 'ls' command does not see the files anymore."
Why did this article just disappear off of the front page after receiving 318 up-votes in 2 hours?
How does a post drop from position #1 to somewhere below #150 in less than 1 minute, unless it was deleted by HN moderators? And if that's the case, why did it happen?
Apart from the legal stuff and a lot of egocentric 'we can do it better', she has one point. Many companies spend a lot of money on security, manually scrubbing all the exploits that come out and creating their own patches, while some lack even basic security guidelines. I think this money could be better spent upstream: creating tools to test patches for exploits and building a faster security-update release pipeline, so that everyone downstream, customers included, can rely on security releases arriving quickly. (Controversial: maybe even add automatic security updates to the package itself, like WordPress did, so that customers cannot be stuck on a release with exploits.)
"This cookbook is to be read by your personal chef only; if you read it and understand it yourself, you're breaking the book's license agreement."
If you pay for some string of bits, you have a right to look at them. Period.
Anyway, I've never read a better article supporting the use of free software.
That's like what 5-year-old kids say when their mom asks them something: "Mooom, I was already thinking about it! Hush!"
This is so pretentious I am completely baffled. Are people at Oracle so full of themselves?
For all Mary's entertaining points, I think likening the license agreement to marriage is a civil offense.
Every framework should take note: that's how you avoid creating another framework and fragmenting community!
It's great to see this in action :). Amazing work.
I guess the best thing would be for me to quit complaining and just fix list-view, I just haven't had time available. I suppose the same is true for the maintainers. Is it planned for ember list-view to still be treated as part of the main ecosystem and updated to work with 2.0 soon?
1.13.8: 488K, 126K gzipped
2.0.0: 424K, 110K gzipped
Doesn't add new features. Removes all deprecated features.
sudo npm install -g ember-cli
And it gave me:
$ ember -v
version: 1.13.8
Could not find watchman, falling back to NodeWatcher for file system events.
Visit http://www.ember-cli.com/user-guide/#watchman for more info.
node: 0.12.7
npm: 2.13.4
os: linux x64
Is cli still 1.x?
The designer (http://zelfkoelman.com/) is Dutch, and the name is actually a pun in Dutch. The word 'Ferrolic' is pronounced almost the same as the Dutch word 'vrolijk', which means 'happy' or 'joyful'.
As the site says, the device can only withstand a few months of sustained use - which is a pity.
I think it's a pretty useless, expensive gimmick created out of toxic materials to excite the numb neurons of the bored inhabitants of the digital realm for 2 minutes or so. Then we'll all forget about it and move on to the next thing. I'm already looking for something else :).
Check out "Machine with Oil": https://www.youtube.com/watch?v=__GhJl_UQg0
It would also keep the cats amused; like watching a fishtank for them.
* the game of life
* someone blowing smoke rings
* Robert Patrick from Terminator 2
* maybe a waterfall
Maybe this will be my next weekend project.
I created and maintain an extension that is used by visually-impaired people around the world (it has been translated by volunteers into Dutch and Chinese, for example).
Occasionally a Firefox update breaks this extension. OK, fine, that's the cost of doing business. Of course, the automated compatibility report that Firefox creates is utterly useless; it almost never catches the breakage. But that's a side rant....
There can be a decent turnaround lag (sometimes on the order of a few days) to get a new version of an extension reviewed by addons.mozilla.org. In the meantime, I have made a habit of building a new version of the extension and giving it to anyone who asks. Some people rely on it to use the web and can't wait for Mozilla to do their thing (another side rant: I once stupidly forgot to check in a key resource. I've since changed my development process to keep this from happening again. But the non-functional extension that I pushed passed Mozilla's review just fine. Makes me wonder how much value the review process is really adding.)
If I want to be able to continue this process, I will need to sign the extension myself (and who knows what histrionics Firefox will throw if a user tries to replace an extension with one that has the same UUID but a different signature!)
"Users should have the choice of what software and plugins run on their machine."
"Firefox is dedicated to putting users in control of their online experience"
"Firefox Puts You in Control of Your Online Life".
The slogan, as found on https://www.mozilla.org/en-US/firefox/new/ , is now "Firefox is created by a global non-profit dedicated to putting individuals in control online." I believe it used to be "users" - see above - but was silently changed. I suppose these "individuals" are the people at Mozilla...?
Two details: the extensions need to be signed by Mozilla, and only US English speakers will be allowed to disable this requirement.
The point of free software is that users, individually and collectively, are free to modify it as they wish, without requiring approval from third parties. (And of course to use, copy, and redistribute.) This is a sharp turn away from the free-software ethos that made Firefox possible in the first place.
I understand the issue of users being tricked into downloading and installing malicious extensions. If you let someone program, they will be able to paste malicious code. I just don't think that taking away users' ability to modify their own browsers is an acceptable solution to that.
If this disturbing move sticks, Mozilla will become an increasingly tempting target for whatever group wants to control what software you can install on your own computer, whether that's Sony Pictures, the NSA, or Amazon.
The old free software movement has died. We need a new free software movement.
Anyway, they are both measures taken to stop malware, by taking an option away from the user, that most users won't even notice, but many "power users" will be inconvenienced to varying degrees. I'm guessing Firefox's won't be as bad, since the "developer version" that will let you keep doing the old way probably won't differ from the normal version as much as Chrome's does.
There are FOUR VERSIONS OF FIREFOX WITH A SWITCH TO DISABLE THIS if you're so inclined. You can use: Nightly, Dev Edition, Unbranded Stable and Unbranded Beta. All of which have a switch that you can set to disable addons signing requirement.
In contrast there are only two versions where this is a requirement, Stable and Beta. If you doubt the usefulness of this you haven't seen a browser being hijacked by malware overriding search results, inserting all types of toolbars and more. This will prevent malware from sideloading extensions. And this is good.
The signing process is not the same as the AMO review process. The process takes only seconds and the signed addon is returned to the developer. They can distribute as they see fit.
Now, let's face the facts: a simple signing process that takes only seconds will help prevent lots of malware, not the nastiest kinds, but a huge lot of sideloaded crap. And there are four versions of the browser for those power users who want to disable this.
Now, can someone explain to me without hate why this is a bad thing?
The assumption being that developers need to test as they develop. And are a more informed user.
Because of that, I was definitely considering starting to release it on my own, instead of through Mozilla's add-on website. It looks like I will be able to do that, but I'll have to use the signed-extension process.
I'll believe this system works when I see it. After my experience with add-on reviewing, I am very skeptical.
Tweeted to Chris Beard: "Dear @cbeard, please give your users the choice and control they deserve in @firefox. Allow extension signing to be disabled in FF42."
You want to protect the user? Then start making extensions more secure and require permissions to do things. E.g., if an extension can access the contents of webpages, pop up a dialog and ask the first time. There are other ways to protect users without going authoritarian on us.
Now, let's just hope that the other side of the coin is a concern for API backward compatibility, so that people don't need nightly versions of addons and a developer edition to keep their addons in a usable state...
EDIT: It passed the automated review, but my point stands. If I wrote the code, then you can be damn sure I trust it.
Fdroid is working on third party repositories, maybe that will catch on to decentralize the mobile world a bit. Something like that for browser extensions would be sweet. Take a look at Fennec Fdroid for a cleaner Firefox mobile experience at least.
When Chrome came along they decided to go in a different direction entirely slowly making it more and more painful to accomplish what used to be easy in the name of security. The review process went from automatic if you were trusted to weeks and then months and then more than a quarter year. They started demanding source code. It became scary to release to addons.mozilla.org because you never knew how long it would be before your next release would be approved.
Mozilla needs to realize they're hastening their own demise - Chrome now offers better features than when Mozilla was the leader including releasing to a percentage of users and faster nearly invisible to the user updates. They should go back to their roots and embrace developers again.
I was recently searching for user agent switcher add-ons as part of a blog post, and almost all have "-signed" in the name. To some people it could look like the unsigned ones are more stable and better.
I've written Firefox extensions for personal and business use, and Mozilla are preventing that from ever happening again. Why? Cui bono?
I'll mention, again, that they completely broke the security of Firefox Sync: it's no longer a trustworthy place to store passwords. Why? Cui bono?
"This is not the same process that currently applies to AMO add-ons, which has been typically slower."
Also the fact that you can't seem to be able to disable it even with some "debug/developer" mode in FF seems to be a bit over the top.
What happens if you are tied to an older Firefox extension that isn't signed? What happens when you want to develop an extension? Yes, beta extensions will be signed too, but what about before the beta? What about when I just want to make a hello-world extension and learn what I can do?
Or using any other channel to get your extension.
Thanks, Mozilla. Really.
Just like HSTS, I can't turn this off, and it leaves a bad taste in my mouth. Where I originally considered Firefox to be a browser for power users, now I'm not so sure any more.
Mozilla currently don't provide a dev build for Android, just regular and beta versions https://play.google.com/store/apps/developer?id=Mozilla
The security problem that this "fixes" is not really an issue on Android due to Android's own app sandboxing, so maybe the Android build will allow unsigned extensions? It's not mentioned in the FAQ.
So the worst kind of threat is still there. Great job, Mozilla!
What about private add-ons used in enterprise environments? We haven't announced our plan for this case yet. Stay tuned. In the interim, ESR will not support signing at least until version 45, which won't come out until 2016.
We need new algorithms that:
- require communication volume and latency significantly sublinear in the local input size (ideally polylogarithmic)
- don't depend on randomly distributed input data (most older work does)
It's really too bad that many in the theoretical computer science community think that distributed algorithms were solved in the 90s. They weren't.
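As a toy illustration of the communication bound being asked for, a tree-style aggregation combines each node's local value in O(log n) rounds, with each node sending only a constant amount of data per round. This is a simulation sketch of my own, not something from the thread:

```python
def tree_sum(values):
    """Simulate tree aggregation across n nodes: O(log n) rounds,
    O(1) data sent per node per round -- communication sublinear
    in the total input size."""
    vals = list(values)
    step, rounds = 1, 0
    while step < len(vals):
        # in each round, node i receives one message from node i + step
        for i in range(0, len(vals), 2 * step):
            if i + step < len(vals):
                vals[i] += vals[i + step]
        step *= 2
        rounds += 1
    return vals[0], rounds

total, rounds = tree_sum(range(16))  # 16 nodes finish in 4 rounds
```

The same shape underlies MPI-style all-reduce; the point is that latency grows with the logarithm of the node count, not with the input size.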
Felt exaggerated at the time, but it often seems like the truth.
Looking at Table 2 http://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p183....
I wonder if one day we will find that sending all data to a data center for processing doesn't scale. I think that's already a given for some realtime-ish types of applications, and it could become more important.
Obviously, the success of decentralised computing depends a lot on the kinds of connected devices and whether or not data makes sense without combining it with data from other devices and users.
With small mobile devices you always have battery issues. With cars, factory equipment or buildings, not so much. But management issues could still make everyone prefer centralisation.
Which means, roughly, that compute and storage continue to track with Moore's Law but bandwidth doesn't. I keep wondering if this isn't some sort of universal limitation on this reality that will force high decentralization.
Microsoft hasn't even caught on yet, and is still designing for bigger and bigger monolithic servers. I can't tell what Amazon is doing, but they seem to have the idea with ELBs at multiple layers.
I can imagine you can solve the throughput problem with relative ease, but the speed of light limits latency at a fundamental level, so proximity will always win there.
I tend to think that storage speed/density tech rather than networking is where the true innovations will eventually need to happen for datacenters. You can treat a datacenter as a computer, but you can't ignore the fact that light takes longer to travel from one end of a DC to another than it would from one end of a microchip.
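Back-of-envelope numbers for that claim, assuming vacuum light speed (signals in fiber or copper are roughly a third slower, which only strengthens the point):

```python
C = 3e8  # speed of light, m/s (vacuum; fiber/copper are slower)

def one_way_ns(distance_m):
    """One-way light propagation time in nanoseconds."""
    return distance_m / C * 1e9

chip_ns = one_way_ns(0.02)  # ~2 cm across a microchip: well under a ns
dc_ns = one_way_ns(200.0)   # ~200 m across a large datacenter: hundreds of ns
# crossing the DC takes ~10,000x longer than crossing the chip
```

So even before switches and software overhead, physics alone puts datacenter-scale round trips four orders of magnitude above on-chip ones.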
Next time, at least share a story of something cool you have done, that would make your post much more appealing.
Now, that's the logic, and it's sound logic. But go explain that to people every time you're drinking water in a pub and they go: "ehm, uh, you don't drink?". Which, by the way, if not explained properly can make you seem like a recovering alcoholic, and if explained properly will make you sound like a food/diet nazi.
He is also extremely passionate about helping reduce obesity in South Carolina especially. One of the nicest and most honest people I've ever met.
*disclaimer, I've worked with him on startups combining exercise science and mobile apps.
The NYC Dept of Health estimates that 30% of adult New Yorkers have one sugar-added beverage per day. Twenty years ago there were 10oz bottles in vending machines, then 12oz cans, and now also 20oz bottles; thus, those consuming a bottle a day today now consume 52 lbs of sugar per year compared with 26 lbs 20 or so years ago.
Many of the poor (and others) have no idea how many sugar calories they are consuming each year when they drink Coke and other sugar-added beverages.
Besides tobacco use, obesity and lack of exercise are among the major contributors to our increased health care costs.
The previous NY Mayor, Bloomberg, tried to have a state tax on sugar-added beverages passed which is what is recommended by public health officials such as the CDC but that was turned down. Then he lobbied the Federal Government to not allow food stamps to be used for sugar-added beverages but that was turned down. Then the health dept. tried to ensure that in venues where they had control that sugar-added beverages would have a 16oz size limit, but they lost in court.
Ironically, the land for Centers for Disease Control and Prevention (CDC) which is located in Atlanta, Georgia was donated by none other than The Coca Cola Company.
It's funny how many scientific atheists sneer at the religious for their beliefs when there's so much corruption in their own ranks... and that's coming from someone who'd rather believe in science than any form of organized religion...
Also, I'd recommend going here and looking at the FDA's proposal on labeling %DV for added sugars: http://www.fda.gov/Food/GuidanceRegulation/GuidanceDocuments...
Combine that with the meta-study that showed exercise was not something to take up as part of a weight-reduction regime http://www.independent.co.uk/life-style/health-and-families/... and you can begin to understand why Coca-Cola needs to push this message.
The reality is, added sugar products need taxation which is then ring-fenced to support healthy eating education and healthy transport schemes (Walking, cycling and public transport). We need to recognise that added sugar, in particular fructose, has to be treated on the same level as smoking is.
And there's also now a great documentary on Netflix called "Fed Up" (https://www.youtube.com/watch?v=aCUbvOwwfWM), for anyone who has a subscription. It particularly tackles this silly idea that it's okay to eat and drink shitty food all day, so long as you exercise for a little bit too.
I think the real problem is capitalism. It makes good economic sense to sell your customers too much food. Consider Starbucks. Your parents' coffee and a donut was a 250-300 calorie breakfast. Today's latte and a scone is double that and yields a much nicer profit margin.
Perhaps there should be a very basic study comparing the general health/weight of people who regularly consume refined sugars with those who abstain (with neither group engaging in structured exercise). Or have these scientists answer a more basic question about obesity: all things being equal, would a person (whether they exercise or not) be more likely to be obese if they consumed a 200-calorie soda every day, or if they replaced the soda with 200 calories of almonds? I think people would be greatly surprised to find out that a calorie is not simply a calorie, as is often suggested, and that sugar has a lot more impact on obesity than fats.
"No connection between fructose and obesity": sponsored by the Canadian Sugar Institute, which is owned by Coca-Cola, PepsiCo, and corn producers.
Just because some people cannot resist it and drink irresponsibly, why should the company take the blame?
Having too much of almost everything is bad for you, am i missing something here?
Same thing with most crap people eat every day.
I'm not against processed foods per se, the increase in food safety and storage time makes sense.
Now, people make meal-sized (calorie-wise) snacks by stuffing themselves with a bag of Doritos between meals, eating a whole pack of Oreos, or just eating unbalanced (usually both micro- and macronutrient-unbalanced) meals, etc.
It's not my field, but I suspect that the only original research left to be done examines other potential causes. There's lots of good work on the Microbiome, for example.
Can someone from the field tell me, if they had a big fund to counter-balance the bias caused by Coca-Cola, what original research it would fund?
When I see that a company can now create SSDs with ~16x more capacity than the best consumer option, I feel like something fishy is going on that is artificially slowing the pace of larger capacity drives making it into the hands of consumers at a reasonable price.
Having seen the move from 5.25" HDDs to 3.5" HDDs, then the move from desktops to laptops, and now seeing SSDs becoming extremely common in laptops, tablets, and phones, I have to believe that the author predicted the future when he wrote the book.
Since PC sales have dropped, people are not buying as many HDDs, and buying more SSDs, usually indirectly. Cloud infrastructure has likely gobbled up the existing HDD supply.
But even there, SSDs are preferred for many applications, such as databases, since they're faster overall, storage limitations be damned.
And now we're seeing the first SSD that has a capacity greater than HDDs, in a similar sized package. And no current HDD company has an SSD offering worth mentioning.
It's disruption happening right before our eyes. History seems to repeat itself all too often!
A lot of 1U rack servers can fit about eight 2.5" drives. 128TB of storage in 1U is pretty crazy storage density.
Every time they reveal a larger-capacity drive, I just wonder what the backup strategy is going to be. Longer tapes?
When I look at all the moving parts in an HDD, I'm shocked they can still be produced for less.
Samsung has designed the PM1725 to cater to the next-generation enterprise storage market. This new half-height, half-length card-type NVMe SSD offers high-performance data transmission in 3.2TB or 6.4TB storage capacities. The new NVMe card is quoted with random read speeds of up to 1,000,000 IOPS and random writes of up to 120,000 IOPS. In addition, sequential reads can reach up to an impressive 5,500MB/s, with sequential writes up to 1,800MB/s. The 6.4TB PM1725 also features five DWPDs for five years, which amounts to writing a total of 32TB per day during that timeframe.
2,000,000 / 48 = 41,666.66 IOPS
45k IOPS for 16TB limits its use cases a bit. I don't know enough about storage to make an educated guess, but does anyone know what the constraint there might be? Aren't there controllers that can do 1MM IOPS on single EFDs? 45k is still a ton of operations, but I expected more somehow.
The hard drives would have ironclad firmware that keeps the RAM refreshed until its battery goes down to 15% (or whatever the conservative 10 minutes of power is), at which point it takes the ten minutes to dump the contents of that RAM to SSD, and reverts to having that drive also be SSD until the power is reconnected long enough to charge the battery back up to 80%. Then it reads it back into RAM and continues as a lightning-fast 64GB + very fast 16TB drive.
You would store your operating system on the lightning-fast drive.
The absolute nightmare failure state isn't even that bad, as even though the RAM drive should be as ironclad as SSD, in case it ever should lose power unexpectedly through someone opening the device and disconnecting the battery or something, it can still periodically be backed up, so that if you pick up the short end of six sigma, you can just revert to reading the drive from SSD rather than RAM and lose, say, at most 1 day of work.
thoughts? I bet a lot of people would be happy to pay an extra $800 to have their boot media operate at DIMM speed, as long as the non-leaky abstraction is that it is a physical hard drive, and the engineering holds up to this standard.
There is a lot of software out there that is very conservative about when it considers data to be fully written - it would be quite a hack for Samsung to hack that abstraction by doing six or seven sigma availability on a ramdrive with battery and onboard ssd to dump to.
As for yuppies vs. hackers, it helps to go back further, to understand how hippies morphed into yuppies. Hippies were mostly self-indulgent types who spouted bogus philosophy to justify their existence. Yuppies are mostly self-indulgent types who spout bogus philosophy to justify their existence. Stewart Brand, of Whole Earth Catalog fame, led the transition from hippie to yuppie, from the commune to the "lifestyle industry", from growing your own food to Whole Foods.
What happened to the hacker ethos was the absorption of computing into the advertising industry. The hacker ethos survived the Microsoft era, but not the Google era. Microsoft was about tools, which was consistent with the hacker ethos. Google is about ad clicks, and its success created a whole industry focused on ads and user exploitation, not tools for user empowerment. That's what destroyed hacker culture.
The elitists came to Northern California - a vanguard of social liberalism, student protest, and most importantly communitarianism - and brought their elitism with them.
Northern California still exists in the nostalgic hippie image of the 60s, but it's compartmentalised, like the Dropbox brogrammers elbowing out kids at a playground. Public spaces increasingly become private in the name of profit.
Over time, the feel of free love will fade away entirely in the Bay Area. Everyone interesting who isn't a millionaire will be pushed to the margins and, eventually, to more welcoming places, like Detroit. I implore the tech elite of Silicon Valley to consider a future where an expensive tech-centered monoculture makes the Bay Area an unattractive location for long-term employees, leaving it to rely instead on mercenary college grads who put up with the cost and the crazy for a few years before moving on to a more fulfilling job and place to call home.
The writing is spot on but will cause some cognitive dissonance for some as the words ring true but conflict with the structures they have set up in their minds and in their lives.
I think that the commercialization can be good though... the culture gets to live on and propagate when there is a way for hackers to make money doing what they love. Any successful counterculture is bound to be co-opted and exploited, but that doesn't mean that true participants in that culture shouldn't be able to subsist off of it.
Author makes the comparison to hip-hop culture which I think is a good one... there is a highly commercialized side of that culture in rap music, but there are still "underground" emcees not to mention deejays, beatboxers, graffiti writers, and others who are able to build up their culture due in no small part to the money coming in. Of course, to maintain a good balance, you need keepers-of-the-faith like the author who are willing to smack down arrogant upstarts who think they can piss all over and redefine the culture they claim to hail from.
Although the author poses some interesting ideas the piece feels long and muddled and I'm not at all sure who the audience is or what the call to action might be. Voice is unclear as some paragraphs are personal statements ("I") and others are observations about culture and economics.
It might be more powerful if it was drastically shorter and simpler ... or maybe if it was three times longer with more references and a stronger set of recommendations. I really can't say.
Who cares if 'yuppies' 'gentrify' hacking. You neither have to stop doing what you like because groups you don't care for have noticed nor do you have to waste energy and time and fight against them for doing so.
Do what you want to do regardless. That is the answer to the author's questions.
If you are a hacker, or artist or music lover or anything else of a certain type merely because someone else isn't of that type you are not really that thing.
You are going to find posers as a sub-culture enters the general awareness but you are also going to find trickster godlings in suits with boring titles on their business cards if you don't let the trappings blind you.
Last time I checked my history books, hackers used to be "good" when they started out. Yes, they were counter-cultural and yes, many had more or less pronounced anarchist tendencies. But they were definitely not the rebellious threat to public safety that the author portrays. In fact, the author gets it back to front: the real corruption of the term "hacker" happened twenty-five years ago, when the media started applying that label to cyber criminals. If anything, Silicon Valley is actively countering that original corruption by their current use of the term. (Though it is quite possible that they are misusing/over-using it in smaller ways.)
In short, this is a prime example of an article about a subculture that is untainted by any understanding of the same.
This theme can be further generalized into "money is ruining <insert_whatever>".
"Money is ruining music. Bing Crosby was a true artist; today's performers like Lady Gaga are commercial panderers."
"Money is ruining movies. The 1970s had auteur directors but now all we get at theaters is superheroes in spandex and Disney princesses because they need ROI from international blockbusters."
Writers, thinking they have something new to say, like to write on those themes. Readers, with a predisposition to seeing what's wrong with the world, like to read them. I suppose it's some sort of 1st-World ritual of commiseration. Personally, I find those essays devoid of any insight. I can acknowledge that there are undeniable trends there but I try to avoid categorizing them into value judgments of "good vs evil". I understand the economics of why Disney's "Frozen" is the type of film that theaters prefer to show rather than Michael Cimino's "Heaven's Gate".
An example of force-fitting his observations into categories of Hackers-vs-Yuppies (aka good-vs-evil) is his claim:
"I'm going to stake a claim on the word though, and state that the true hacker spirit does not reside at Google, guided by profit targets."
That broad-stroked brush is amateur writing. Google is a big place with ~57,000 employees. Sure, there are probably engineers doing soul-crushing work of parsing logs for server reliability or optimizing ad click conversions. But I'm sure there are other pockets of engineering where "hackers" are innovating and trying to change the world: driverless cars, balloon wifi, etc. It's the same contradictory pockets of bored employees coexisting with passionate hackers in different areas of large companies like Lockheed, AT&T Labs, Apple, etc.
As far as "yuppies" ruining the hackers, I'm not sure who's supposed to be an exemplar of the "hacker" that he wants to run SV. Steve Wozniak & Steve Jobs both came from middle class families. They weren't hobos living out of their cars and overturning the world with their hacker ethos. Apple took money from VC investors within 1 year of its founding. Even Richard Stallman's family background can also be considered "yuppie".
Concern has been expressed that the new generation of artists (musicians, actors etc.) in the UK seem to primarily come from upper middle class backgrounds. I have started to wonder if the same could be said of the tech startup scene, e.g. in London. This could be due to the increasing difficulties someone would have living on a period of effectively zero income, unless they had the backing of rich parents.
 Could cite lots of articles, but http://www.standard.co.uk/business/markets/confessions-from-... is just one recent one.
Even though many people try to draw parallels between hackers who creatively modify systems and hackers who break into systems, there is little overlap these days, except, maybe some common roots in history and the fact that the latter usually have ample skills to do "creative" hacks as well.
Hacker culture being subverted? With multitude of security conferences, daily news about research into new vulnerabilities and increasingly frequent criminal hacks, I think hacker culture is actually doing pretty well in many of its diverse forms.
People like nice things. There I said it. Most people like nice cedar lined floors, expensive drinks and well cut clothes. And when you have those things, it's marvelous how quickly your disdain for the 'institution' evaporates.
Most of us aren't really hackers in that nostalgic sense. We're normal people, yuppies, kids, nerds, dorks, that one dude really into Aphex Twin in your office (everyone has one). We just happen to be good with computers.
Google cache: https://webcache.googleusercontent.com/search?q=cache:5gBAj1...
So, that's the whole issue right there-- being a hacker has become a career path, and it's iteratively becoming more mainstream as the expected benefits are formalized and the stigmas exorcized. That doesn't really sound all that bad, but the problem with gentrification is that it pushes the original tenants out, which is kinda scary when we're talking about the gentrification of an idea.
Of course, I'm not really sure how much such real hackers care. It'll be inconvenient when you can no longer identify a member of the tribe by a simple shibboleth, but that is not an insurmountable obstacle.
In my opinion, l33t h4x0r status is something you earn. A yuppie having "hacker" on their business card is likely doing about as much damage to hackerdom as the self-titled programming rock stars, ninjas, wizards, etc. did to those professional groups.
0. Incidentally, does anyone else get reminded of things like The Rebel Sell or The Conquest of Cool by pieces like this? All of this handwringing serves to subtly indicate that the author is the sort of person who happens on these scenes before they were cool.
1. Even if you can't get rid of the more Stallman-esque members of the tribe, they get romanticized, deified, reduced to stories instead of people who could be brilliant, visionary, and kind, but moments later gross or needlessly rude.
2. Generally by spelling with your number keys.
That's very much how it seems in NY right now.
> In this context, the hacker ethic is hollowed out and subsumed into the ideology of solutionism, to use a term coined by the Belarusian-born tech critic Evgeny Morozov. It describes the tech-industry vision of the world as a series of problems waiting for (profitable) solutions.
Trade is the ultimate form of autonomy because when someone willingly buys what you're selling you can be self-sufficient (as opposed to dependent on a beneficent family/non-profit organization/gov't). Obviously tech startups have deviated from the hobbyist "I'm getting my kicks" ethos because they're trying to hack the softer domain that is customer behavior. Solutions to real problems are always win-win, and to believe otherwise is pretty weird.
It always irks me when I hear people refer to themselves as hackers (Zuckerberg for one) and this article articulates why far better than I could.
1. Define what hacker means (prior to the yuppie gentrification), for numerous paragraphs. Bulk of article.
2. Big drop G paragraph: point actually starts here. (Just scroll down until you see a big G).
3. Fizzle on about gentrification of hacking, sort of making a point.
4. Send yuppies home.
It wasn't colonization that yuppified hackerdom. It was evolution. Most of the old school hackers became yuppies when they found out they could make lots of money off this stuff. New school hackers are entering the scene now and this is all they know.
The same thing happened to old school counterculture hippies who found out their ideas and their styles sell. Hippies founded loads of clothing brands, trendy shops, 'new urbanism', and the whole organic food movement, all of which are now massively profitable. Whole Foods Market (Nasdaq: WFM, an S&P500 component) is a direct evolutionary descendant of the dirt-worshipping weirdos that spurned 1950s white bread culture and danced in the streets on acid.
Nothing really goes extinct. The dinosaurs are still here. In America we have a custom of roasting one on Thanksgiving.
I grew up with the old school 90s cyberculture, and I miss it dearly. I remember downloading text files on phone phreaking from H/P/V/A BBSes, hacking PBXes to dial demo scene boards in Europe, and watching Second Reality (https://www.youtube.com/watch?v=rFv7mHTf0nA) for the first time on my 80386 with 4mb RAM.
I keep a few museum pieces of stuff I made back then here: http://adam.ierymenko.name/ye_olde_source_code.html
Today I am doing this: https://www.zerotier.com/
In its original form this old hacker culture is mostly dead. Its successor in an evolutionary sense is the startup scene.
If you doubt this thesis consider that you're hanging out at Hacker News, which is run by a billion dollar VC firm. I rest my case.
Yesterday we had Future Crew and L0pht Heavy Industries. Today we have Y Combinator and Andreessen Horowitz. Today's hacker groups have cap tables.
By saying this I am not claiming that this was an entirely positive change. Evolution is not a progressive march 'upward'. The word evolution just means 'change over time.' Some features are gained, others are lost.
In evolving along these lines the hacker scene gained a lot but it also lost a lot. It lost the creative ethos of play and experimentation, replacing it with an engineering culture ruled by the hidebound plodding competence exported by top-ten universities and their engineering programs -- excellence at doing things we already know how to do. It also lost its countercultural and social ethos, replacing it with a yuppie get-rich mentality. But it gained the ability to act on the world stage. I would argue that hackerdom evolved into a global economic superpower with the capacity to influence not only global geopolitics but the future of human evolution.
You'll say it lost its soul and I won't argue with you. It certainly lost the things that made it great in its time and its place.
But that's the thing. Dinosaurs became birds because the dinosaur thing was played out. 90s hacker culture was great in its time and place. I wonder how relevant it would be today. This is not the 1980s or the 1990s. Everything has changed.
I think the question we need to be asking is what now? Where can we go from here? What might we evolve into that is perhaps more interesting than what we are today and how do we get there? The answer (IMHO) is never going back to the way things were. It's always the forward escape.
Edit: another useful question to ask is: what was it about old school hacker culture that predisposed it to evolve into this? It's particularly interesting to ask this about aspects of today's startup scene and Silicon Valley culture that you don't like. For example: I find the fratty 'brogrammer' thing irritating, but I can see its ancestry in the overwhelmingly male and somewhat sexist hacker culture of yore. It's just that minus the counterculture trappings.
Ouch. That hits hard.
I never understood why the citizens of a city are against gentrification. It improves not only the quality of an area, but can make you money if you own property there. Creating laws against it essentially keeps the poor poor. On top of this, anyone with a little bit of success and/or money leaves.
It's just another example of politicians decreasing social mobility under the guise of helping the poor.
The prototypical self-described hacker is an insecure person who attaches themselves to a romantic, powerful identity in order that they might attain these qualities themselves. But the power of the hacker is that of a magician: conjuring tricks in order to amaze the public and seem mysterious, powerful, skilled.
Here you see a normal web server with a firewall. It's totally secure. Nothing up my sleeve, as you can see. But wait... Alacazam! Now I have a remote shell!
If the author wanted to 'resist' traditional economic institutions they could become a circus performer. But then they couldn't fulfill the true 'fetish', which is that anti-authoritarian action through intellectual skill and craftiness is a pursuit to be proud of; one that the audience should revere.
The fact that this author's lofty rejection of traditional economic forces packaged in a sexy identity also has the ability to provide them a very comfortable living is, it would seem, totally accidental.
Here's a good one (a few MB of text) about hacker encryption: https://www.cypherpunks.to/faq/cyphernomicron/cyphernomicon.... other traditional sources are the anarchist's cookbook and anything with more of a "fight the man" sense from the 70s and less of a "give us billions of dollars" sense from the post-popular-Internet era.
Hacking is about a nerd underclass fighting an oblivious overclass. Up until the late 90s, hackers had never "won." But with Internet mania sweeping the world, the nerds started to win. They became "the new man." Now the new overclass needs to be brought down themselves. You don't win hacking, you just become a more prominent target.
Hacking is also about exactly not that.
Hacking is just ignoring everybody else and doing good work you can be proud of. It's the only reason Apple exists. Hacking is about not trying to win, it's just about being clever.
Companies promote hiring the second kind of hacker because those people pay no attention to the value they create as long as they're having fun. So, you get someone puzzle-obsessed, give them a $50 million problem to solve, they solve it, and you keep paying them their $125k/year. Everybody's happy and the CEO gets to join the three comma club even sooner thanks to the selfless hackers who enjoy subsidizing billionaires while living at the bottom of the org chart.
The gist of the article is that the hacker impulse or hacker ethic is a natural human response to large alienating infrastructures that allow little agency on the part of individuals. Hackers take different forms, but are identified by 1) a tendency towards creative rebellion that seeks to increase the agency of underdogs in the face of systems that are otherwise complex or oppressive or that limit access to experts, and 2) a tendency to act out that rebellion by bending the rules of those who currently dominate such infrastructures (this is in contrast to the open rebellion of liberation leaders who stand in direct defiance of such rules). They are thus figures of deviance, seeking to queer boundaries that are otherwise viewed as concrete and static.
Having set up a definition of what the hacker ethic is, the article goes on to argue that the ethic has been corrupted due to its association with computer culture in the public eye.
On the one hand, in a world where people increasingly rely on computers for subsistence, the bogeyman figure of the criminal computer hacker has emerged, a figure of media sensationalism and moral panic.
On the other hand, the increasingly powerful technology industry has homed in on the desirable, unthreatening elements of the hacker ethic to present a friendly form of hacking as on-the-fly problem-solving for profit. This is described as a process of gentrification: In most gentrification you have twin processes. On the one hand, a source culture is demonised as something scary to be avoided. On the other hand, it is simultaneously pacified, scrubbed of subversive content, and made to fit mainstream tastes. This has happened to rap culture, street culture, and even pagan rituals. And the article argues it is now happening to hacker culture: The countercultural trickster has been pressed into the service of the preppy tech entrepreneur class.
The article concludes with a reflection on whether you abandon the gentrified form, or whether you fight for it. There is reflection on whether the hacker impulse perhaps has always been an element of capitalist commodification processes, but argues that it is an ethos that needs to be protected: In a world with increasingly large and unaccountable economic institutions, we need these everyday forms of resistance. Hacking, in my world, is a route to escaping the shackles of the profit-fetish, not a route to profit.
Doesn't look like the original source of the info is very trustworthy, will need other people to verify this.
"Go to Start, then select Settings > Privacy > General, and then turn Send Microsoft info about how I write to help us improve typing and writing in the future on or off."
Does anyone know if this stops Windows 10 from sending typing data across?
NOTE: Not that I condone what Microsoft is doing; it's just a little hypocritical to think that big bad Microsoft is doing anything new in the industry, especially when the products you guys are talking about jumping ship to have the same problems. This is nothing new.
That's why I use Debian. And hope they do the right thing.
It is just part of the new trend, that everything runs in the cloud somehow.
If you want a Windows 10 without Cortana, simply disable the sound card during installation (BIOS or physically).
This is not a solution, but a workaround for those having no other choice. Tested with Windows 10 Pro N.
For example, in another HN submission where someone posted a tool to delete/disable tracking services and add IP lists to the hosts file, a user reported startup errors. To me this indicates Windows 10 is trying to communicate even during boot, without the user's knowledge! That's a big deal in my book... I don't know about y'all.
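For readers unfamiliar with the hosts-file approach mentioned above, a minimal sketch of what such entries look like (the hostnames below are commonly cited in community blocklists but are illustrative, not a vetted telemetry list; verify them independently):

```
# /etc/hosts (on Windows: C:\Windows\System32\drivers\etc\hosts)
# Redirect suspected telemetry hosts to an unroutable address
# so connection attempts fail locally instead of reaching out.
0.0.0.0  vortex.data.microsoft.com
0.0.0.0  telemetry.microsoft.com
0.0.0.0  settings-win.data.microsoft.com
```

Note the caveat this very comment raises: if the OS talks to these hosts before (or outside of) the normal name-resolution path, hosts-file entries may not block anything at all.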
The one reason I have suffered the slings and arrows of Windows so long is for gaming purposes, more recently because I wanted to release my hobby side-project, a game in Unreal Engine 4, on Windows and so I have kept one of my computers on Windows 8.1.
Last night that machine was compromised, and despite my fairly extensive malware-fighting abilities, I couldn't get rid of it. That means a complete wipe and only moving over data that I must have, and not trusting that data, not to mention never trusting the HDD again (going to have to throw it away). I also question my BIOS, so I'll need to flash the BIOS too.
I run three main computers: Windows on an Asus laptop, OS X on a MacBook Air, and a Linux/DragonFlyBSD dual boot on a MacBook Pro 2014. I think Windows 10 just might be the excuse I need to push myself completely away from the MS ecosystem. I've been talking about it for years, but the power of their tie-in is not to be trifled with.
I also fear for the state of Linux in the same way, though. At >10 million lines of kernel code, I think the many-eyes theory has a weakness, namely that complex and huge codebases are antithetical to the many-eyes theory working. That's why I personally think the future of computing will be in code simplicity and paring down existing codebases. A good example of an attempt at this is Minix 3, at <10k loc (which of course lacks many features).
That's also why, even though I'm a huge GPL/GNU guy, I am increasingly leaning towards the top-down ecosystem of the BSDs.
I think there are a lot of fundamental issues in personal computing that many of us just ignore and don't want to discuss because the implications of the conclusions could be uncomfortable. I think it's time for those of us who are considered power users to start having these difficult discussions more often and in more public ways.
Example: you are an executive at E Corp and the company will announce its acquisition in two months. You had previously set up planned trades to sell x number of shares each month before then. Because the acquisition is at a premium on the current price, you will make much less money if you go forward with your trades before the announcement. So, what do you do? You cancel the trades.
Was this insider trading according to the SEC? Surprisingly, no! Even though you're profiting from insider information, the SEC rules are such that for insider trading to occur, you actually need a trade.
Martha Stewart did exactly this before her company was acquired earlier this year:
Someone starts shorting a ton of Apple stock? That probably means something big is happening at Apple, and it's not good. It's information.
Spoofing as a technique can be used to combat and inhibit other types of trading, and is in some sense an algorithm to 'keep the opponent honest'.
As best as I can tell, the biggest reason that we as a culture are against insider trading is because 'it's not fair'. (happy to read a response that adds more depth to my understanding). It isn't fair, and the people with insider information are going to make a lot of money. But in the process of making that money they bring the information to everyone else. And insider trading incentivizes knowing as much as possible so that you can have an edge on the competition.
Isn't a decentralized news publishing service maybe a better alternative? Couldn't the CEO of a company publish their financial news only on their own website on the given publication date? Why is it necessary for this news to be stored in some central news database days before its publication date? And I mean these as honest questions, because I really have no idea what the advantage would be.
And another related question: wouldn't it make sense with today's Internet infrastructure to reduce the interval between earnings reports? Maybe it could even be something like a continuous, automatic publishing of company finances: whenever the financials change, they could be published immediately. That way all investors would at all times have the same information as the insiders, so everyone would be on the same level. Of course some extraordinary news like mergers or acquisitions might still give insider information to the people who prepare the deal, but at least the quarterly earnings could not be insider information.
I wish it were so simple to hand-wave all security risks. Mr. Levine's ability to find a MySQL tutorial was quite impressive, but his dismissal of very real security concerns is childish. It's like saying cars are known to crash, so quit crashing cars. It's so, like, simple!
It really makes me question the sanity of doing this illegal trading. For as much effort you could do something legal and make money. Maybe not as much but surely without the risk of going to prison.
So it's smart for Github to build tools that lower the bar for a) understanding git, and b) using git's more powerful features.
Useful metrics would be things like the percentage of users who use the pull request flow, use it with at least one comment per pull request, have pull requests that get revised, etc. etc.
I'd guess that part of the growth potential driving Github's valuation is the notion that git's power features add significant productivity/value and that github is uniquely positioned to let a significantly large number of developers and teams make optimal use of those features for development and collaboration.
Github For Windows used to be as important to my workflow as my code editor. I absolutely loved the "click to select which lines to commit" feature, and I'm pretty sure it improved the quality of my commits drastically. And it was constantly improving.
A few weeks ago came a new version that changed everything. Previous updates used to be incremental, but this one seemed to replace everything at once. Overall the UI was still good, even though important parts became hidden behind tabs. But the performance... Oh god the performance. Syncing repos became a multi-minute operation. Listing commits went from instantaneous to taking several seconds. It became so unusable I switched back to Linux for development.
Seriously, I get it, people don't love git's choice of verbs and syntax. Picking different ones arbitrarily helps no one however.
"At this time, we're focused on optimizing the Mac and Windows experience. We're always thinking about potential improvements for the diverse needs of our users, though!"
Something tells me that the Linux version will come up later.
It is just insulting that GitHub continues to treat Linux as a second-class citizen when, without Linux, Git would not be a project...
What is actually new here? I know the native GitHub GUI app has been available for a number of years before now.
- Have other people committed things to origin on any of my repos?
- Do I have uncommitted changes on any of my repos?
- Notifications for when people push
Basically this but focusing more on collaboration.
Would it not have been better if GitHub Desktop was a separate application that gets permissions (for example via OAuth 2.0) as an application in the "Authorized applications" section of the GitHub profile?
Or am I missing something here?
I wish it had the code review, commenting, issue tracking of the web app in a desktop application, all in one place. It's a pain to move constantly between the editor, CLI/SourceTree and Github web app for everyday tasks.
the ability to organize repos by organization or with custom folders/lists
the option to open a repo in a new window by right-clicking on one from the list in the left column
I like GitHub for the basic "see what you're committing and commit it" work flow, using the command line otherwise, but I suppose I'll switch to Sourcetree for now.
Simple operations are quick and intuitive. If you need anything more advanced, a git shell for the current project is just 2 clicks away.
Having all the visual diffs readily available in the client before committing is quite convenient, as is being able to push your branch and open a pull request for it at the press of a button.
I'm hoping issues and PR management is next on the plate. You can open PRs from the client right now, but to review and merge them you'd still need to open a browser.
But there's an immediately obvious bug: the middle pane keeps resizing to become wider every time you switch between repos.
Also, one large repo I had simply shows 0 commits. It worked just fine before. And another even larger repo works fine.
One theory I have is that our default upstream branch isn't master, but I'm not sure where to set that in the app. Anyone else noticed this?
Can't speak to any more advanced differences though. I'm still very, very new to git.
Took me a while to figure out how to do them via the UI; it's not immediately obvious that the side pane (which tries to take you to the web site for submitting pull requests) has nothing to do with local merges, which are now accomplished via an option that only appears when you're comparing branches (on the new timeline/graph thingy).
Still using GitExtensions. This has a nice interface for hobby projects, but when working with "enterprise" huge projects at work - it lags noticeably.
OS X 10.9 and later includes Git, so GitHub Desktop will no longer install Git as part of its command line tools. The version of Git you have installed through GitHub Desktop is no longer supported. It's recommended that you uninstall it as soon as possible.
>It's far better to be thought of, and to think of yourself, as a project than a company for as long as possible.
'You' are neither the company nor the project, even if you are the sole person in the company. Startups are of course typically single-project companies, but the two are still not to be conflated.
If any of these things were synonymous, there would be some serious implications. If your project fails, then you, as the project/company/person, are a failure and may cease to exist (instead of you still being an alive person whose project failed and whose company may or may not be on the rocks). If the company is the project, then after a pivot the company is not the same company (instead of it being still the same company with a different project). But with the real concepts, definitions and identities, these are different abstraction layers essentially, and everything is more transferable, fault tolerant, and so on.
I get that with sufficient dedication and focus these things can feel like the same. I also believe that I have been someone's project.
This being said, I agree with the general analysis of the essay, specifically with regards to throwing bureaucracy to the wind in favor of lean-and-mean experimentation (and who really wants to grow up?).
I got money from all my different projects but never thought about them as companies, I used them to learn and pay for my hobbies. I never thought they could turn into real companies so I worked on them at night after my normal job.
Right now I got back to a project and idea I had back in 2008 when I even registered the domain and created the landing page and sketched a logo. Finally, I want to turn it into a real thing, into a company, but I still think about it as a project.
The same happens when I have to explain to family and friends why I left a well-paid tech job in Ireland and moved back to my parents' home in Spain to work on a website while the country is still broke. Saying "I'm working on a project" seems less serious. They still ask if I can make a living from it, but if I try to explain that I'm working on a company, people just freak out and things get really difficult to justify, taking the attention away from what's important. Hope one day they understand.
I see this sentiment expressed often, like it's a good thing that you want to pursue an idea that seems bad.
If I were to think about founding a start-up, I'd rather find an opportunity that makes sense and sounds good, validate the opportunity, talk to potential customers, and do as much diligence as possible before diving in. So essentially, if something seems like a bad idea, I'd move on to the next idea.
This may make it harder to create a unicorn (since I'd clearly miss some aspect that makes the idea not bad), but I bet it's a better, less risky way of trying to achieve a high-growth start-up with great potential. Good ideas fail as well, but I'd wager it's less often than bad ideas.
Isn't this just cherry-picking of examples that validate a preconception? What about Uber? Dropbox? Amazon? Xiaomi?
> In the fall of 2003, Elizabeth Holmes, a 19-year-old sophomore at Stanford, plopped herself down in the office of her chemical engineering professor, Channing Robertson, and said, "Let's start a company."
In the current climate people want to hear what your monetization strategy is first, or they think you're a loser. And a lot of the time I don't have one, or it's really vague. I just have an idea that seems like it will be really important to at least some people.
Calling it a project also liberates me from feeling guilty about not having a corporation around it, or going to meetups to yammer with my city's alleged startup entrepreneur scene, etc.
So what's the right word?
Project - Sam gives good reason why this word works in the beginning, so long as you're not looking for external validation. At some point though, your project becomes something more.
Company - the people working on the project, and all the processes and resources that go into supporting those people.
Business - the transactional model that enables the company to make money and keep doing what they're doing.
Mission - for some people, the business is the mission. But for most success stories, even in the beginning when it was just a project, there was an underlying mission, a purpose and a plan. Unfortunately, the word feels mushy. I'd hate to hear people going around saying they're "working on a mission."
Ultimately, the word I use will likely depend on the context.
Here's a picture of the diagram:
Calling an effort a project (or even an experiment) helps free us up to be more flexible and receptive to the market, and removes a bit of our personal identity from the mix. Sam's great point is that for a "project" we're more likely to pursue an idea inside the magic overlap zone of Thiel's diagram.
If you find yourself not doing something because you don't think it's worthy of the fine company you've built, you're thinking way too much about the formal company and not enough about the ideas, work, and vision that made the company what it is.
...and I'm like "idea? I have a business. I make my living off of this idea."
Company over project is probably a bigger problem in SF/SV than anywhere else. I see a lot of people in both hardware and software making very elaborate and technically successful projects that never graduate from being projects.
I agree with this, with one caveat. Some of my most productive bursts of output or problem solving have occurred right after a self-imposed, brief period of slacking off. There can be a tremendous benefit from the mental break that goes with 'controlled slacking off,' so long as it's intentional rather than an indication of something more serious (like not wanting to work on the project).
This is hardly a list of the best companies, and a few of them don't support the argument:
1. The search/portal market was already significant when Google launched. Companies like Yahoo, Excite, and Lycos were already publicly traded at the time, and were some of the hottest issues in the tech space. So it's no wonder Andy Bechtolsheim cut Larry and Sergey a check for $100,000 before a corporate entity even existed.
2. When Facebook launched, Friendster already had millions of users and had received a $30 million acquisition offer from Google. MySpace, which was created by people who saw Friendster's popularity, was founded by people trying to capitalize on the social networking hype that was already present in 2003.
3. Even Airbnb is hardly a poster child for Sam's argument. HomeAway, VRBO and Couchsurfing were all well established when Airbnb launched.
For most entrepreneurs, "start out with an idea that doesn't sound very good" isn't likely to turn out very well.
It's insane. If companies are buying ad-space, it's because they expect to get more business in return. This means that someone out there is being influenced by said ads, so that if the content cost X to put up online (hosting, funding its creation), someone is paying X+(ad company overhead) for it.
If these costs are being borne evenly, then it's complete societal waste. We could pay X for the content, and not incur the overhead. If these costs are not borne evenly, and some people are paying for the consumption of more disciplined people, it's probably contributing to terrible cycles of poverty (i.e., some kid spending money on fancy new shoes he doesn't need and can't afford is paying for a well-paid tech user's YouTube habits, because advertising preys on his lack of education). Either way it's terrible.
Advertising isn't free. Insofar as it works, for some people, it's basically coercive via psychology and simulated peer pressure.
Ads served via a centralized vendor can be blocked trivially, and people are choosing to block them. You can make a whole lot of arguments about ethics, or you can just admit that it's a broken business model.
Worse, it is becoming apparent that ads increase the attack surface. Failing to clean that up will cause armies of IT folks to actively work against you.
Maybe the business model is that you're serving ads in a non-centralized way, or maybe you're serving centralized ads to people with locked-down computers, but good luck serving blockable ads and relying on the good graces of the population to unblock your ads out of charity.
Even if I don't control my computer entirely, how about my DNS? I have a lot of the more intrusive domains (tynt, doubleclick, etc.) set up as 127.0.0.1 in my dnsmasq config.
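A minimal sketch of what such a dnsmasq setup might look like (`address=` is the standard dnsmasq directive for overriding resolution of a domain and all of its subdomains; the domain list here is illustrative):

```
# /etc/dnsmasq.conf
# Answer queries for these ad/tracking domains (and all subdomains)
# with the loopback address instead of their real records.
address=/doubleclick.net/127.0.0.1
address=/tynt.com/127.0.0.1
address=/googlesyndication.com/127.0.0.1
```

Some people prefer `0.0.0.0` over `127.0.0.1` so that blocked requests fail immediately instead of probing a local web server, but either works for the blocking itself.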
The "whose computer is it anyway" question seems key here. In order to make advertising possible, we have to take control away from owners. That seems like a generally bad outcome.
This is the crucial point to me. How can I agree to a website's trackers before I know they exist?
As a side bonus I also don't have to deal with auto playing video ads and popover boxes asking me to subscribe to content I haven't yet had a chance to see if I like.
What "we" didn't agree to was being tracked all over the web, malware being shoved down the pipe via ads, ignoring "do not track", and all of the other nefarious things ad networks have been trying to get away with. Ethics have gone out the window, if ethics ever existed on the side of advertisers. So I run an ad blocker, and I make no apologies for doing so.
"What about the little guy who pays for hosting with ads?" You mean the "little guy" who has to scrape couch change to pay for the site that contains their latest post about artisanal mayonnaise and their latest gadget acquisition? Yeah, that $100/year for hosting is really going to break them; they might not be able to get next year's Apple Watch on release day.
The big boys and girls like The Verge and what have you? Well, using The Verge as an example, they could go under tomorrow and IMO the world would be no poorer, given that they've kind of turned to poo in recent days. I blame the web advertising model for part of their deterioration, but that's a long digression. Specific examples aside, what about the sites I like? I pay money to the sites I like, specifically Ars Technica, NYT, and the Economist (and some others I'm sure I've forgotten about). Some, like Daring Fireball, use unobtrusive, single-image ads that I'll occasionally click on because they interest me, as well as a desire to reward a job well done.
But at the end of the day, the whole thing isn't my problem. If a few bad actors (or, in reality, a lot of bad actors) want to crawl into my machine and have their way, I'm blocking all of them. If there's collateral damage because of some bad actors, it's not my job to fix it. I did my part and said, "no, you don't". Don't lay the onus on me to play nice, because you're berating the wrong party.
Kant 1st Imperative -- Violates -- If everyone used Adblock, many websites would shutdown. I.e. "Adblock is okay because sites can still run if just some people do it" -- cannot be universally applied, contradiction
Kant 2nd Imperative -- Violates -- You treat website developers as a means to an end -- to get content -- instead of as rational human beings who, given a sufficient outcry against their ads, could change their ad service or offer a different model.
Utilitarianism -- Violates -- Well-being of site owner: -site costs / visitors + ad revenue. Well-being of you: site benefit - time wasted * time value. (Blocking only "Ad will play for x seconds" ads might not violate in this specific ethical system.)
Rule Utilitarianism -- Violates -- Well-being of site owners: cannot make ad-supported sites; current ad-supported sites lose their site costs. Well-being of society: fewer websites -- more inefficiency and fewer units of entertainment good.
Social Contract -- Violates -- People accept ads knowing that others will do this as well and this supports the site. Another: Site owners create sites relying on users's ability to see them and thus pay for site creation.
Virtue Ethics -- Violates -- You might feel more shame being in a room with someone who made a site supported by ads and showing them that you use Adblock than if you were invisible to the site owner.
The systems above are the ethical systems allowed in the book "Ethics for the Information Age (6th Edition)" by Michael J. Quinn (the list is his, but not the theories themselves, just mentioning my source to show I'm not cherry-picking ethical systems)
Advertisement got much more power on the Internet and got much more predictable for advertisers.
But we also switched from turning pages or changing channels when we don't like the ads, to blocking whole advertising companies with the help of software. We can now even prevent the ad from being glimpsed at all, because it doesn't even get shown to us in the first place; newspaper ads always hit your subconscious.
Both sides stepped up their game. Don't see any problem with this.
I only know the basics of the HTTP protocol, but I'm guessing something could be added in the header. Which is completely within their rights as virtual "land owners".
People are blocking ads because nobody likes a firehose of garbage pointed right at their face.
To crank that tired old record, "this sector is ripe for disruption" aka somebody go already make an ad network stand-in where the user can pay the equivalent of per-impression cost and visit any participating site ad-free.
The people providing ads do a dirt-poor job curating them, so blocking ads isn't about convenience but about security.
Yes, we can say, "I consent to viewing an ad in order to receive X free service" in the same way that we consent to viewing a commercial when we watch TV or listening to an ad on the radio.
However, in those latter two examples, the information is one-way. Those advertisers don't collect any personal information (outside of perhaps our viewing/listening location).
When it comes to website ads, most consumers do not know/realize that a) the advertisers are collecting a WEALTH of your personal information and b) that information comes at a cost of your bandwidth (which, for many mobile users, is limited). There are probably many other things that happen between the end-user and the third-party that I am not aware of.
Sure, they may consent to viewing a free ad, but most of them do NOT consent to collection of information nor increased usage of bandwidth.
There's no ethics involved with me. Poor experience? Get blocked. Decent experience? Welcome to the whitelist
I personally own 12 personal domains, all for various content that I personally put up. Some blogs, some game servers, etc, etc. I don't charge for my content, and I don't advertise. I'm not in it to make money, I'm in it to share things with people, and I do it all out of my own wallet.
Why is there this assumption that all content needs to be subsidized by the readers? I mean, I get it... there's certainly value in compensating content producers for their time, and even allowing them to do it full time... but there is SO much content out there that is basically put up out of the goodness of the creators' hearts. Why can't we keep it that way?
On the issue of ethics, I'd say it's not ethical to spread out a small amount of content across six pages just to get more page views. It's bad for advertisers and for consumers.
Your agent should act in you, the user's, interest. Decidedly partisan and so what? You shouldn't have to explicitly instruct it to defend you from surveillance and pollution - it should do that of its own accord from day zero.
Or is your browser a double-agent?
A kind public service! We should really be paying them, but the advertisers inform us for free!
Asking about the ethics of hiding ads seems a little like asking about the ethics of taking shelter during a carpet bombing attack.
I wish we would steer these discussions away from economics (Do the ads work? Are there better ways to monetize, do they stabilize or destabilize markets, etc) and toward culture. What is the cultural effect of saturating the internet (and the rest of the world for that matter) with ads? I am not the first person to ask...
I think it would be nice if publishers just went back to <img> tags. Script tags, iframes, and Flash give too much power and result in lots of performance issues.
You can still track and consolidate with an img tag but the tracking is limited to what's in the http headers.
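As an illustration of how little an <img>-based ad or tracker can see, a plain tracking pixel might look like this (hypothetical domain and query parameters):

```
<!-- A 1x1 "tracking pixel": the server logs each request, but it
     only receives what the HTTP request itself carries: the URL
     and query string, cookies for that domain, the Referer header,
     the user agent, and the client IP address. No script runs. -->
<img src="https://stats.example.com/pixel.gif?page=article-42"
     width="1" height="1" alt="">
```

That is still tracking, but it's a far smaller attack surface than a third-party script that can execute arbitrary code in the page.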
I understand wanting to block the ones with the trackers for privacy reasons and the malware ones because nobody wants malware, but blanket blocking all ads tars everyone with the same brush.
Edit: Personally, I used to just blanket ban but I've recently moved towards having uBlock only block the malware ones and will manually block any spammy sites.
It's possible to want to make the platform more powerful and not like some of the ways the power is being used.
Many people still don't realize it's trivial to have a DVR automatically skip commercials, but advertising companies and TV networks sued TiVo to make sure they will never implement it.
Modern web ads and trackers are far over the line for many people today.
Not just "over the line," but for over 5 years now, advertising networks have allowed exploits to be delivered over their advertising networks. There's nothing like browsing a website then having a drive-by crypto locker installed on your machine.
As of 2015, blocking advertising isn't a moral question, it's a question of do you value your own security.
But publishers, advertisers, and browser vendors are all partly responsible for the situation we're all in.
People say "trust the wisdom of the free market," but they forget the important part: free markets always become corrupt and always accumulate power towards the top. A market without government oversight and intervention is just a way to exploit and abuse people for profit with no repercussions.
It has never been easier to collect small direct payments online,
That's more tricky, isn't it? We've all viewed some article at a tiny city's online newspaper and then been hit with "SUBSCRIBE TO PODUNK DAILY ONLINE TO KEEP READING, ONLY $24.99/month." It's not sustainable for every small thing to receive direct payments, and we don't have a clean disaggregation of a common "subscribe to internet publications" pool (like iTunes Match, but for writing? Still useless if you get 0.00002 cents per page view, but that's basically online advertising again).
Neither the content creator nor the audience bears any responsibility to the third party to ensure that the opened channel is used effectively.
If shit comes through the channel, I'm going to route it right into the sewer. If gold comes through, I'll route it into my pocket. Either way, I still care more about my relationship with the content creators than about their sponsored side-channels.
The ads do not pay for the content. The content creators pay for their own content. Then they hold their nose and make a deal with shady web-advertisers to capitalize a bit more on what they have already done. Those advertisers aren't buying content. They are buying access to the audience.
I'll be around all day to answer questions about the release (along with a few other engineers on our team).
We're very excited about this release -- it makes the lives of RethinkDB users dramatically better because they won't have to wake up in the middle of the night anymore for most hardware failures :) It also took over a year to build and test, and has been one of the most challenging engineering problems we've ever had to solve.
It looks like they try to follow http://www.defmacro.org/2013/04/03/issue-etiquette.html, it'd be great to see other companies adopt it too.
I'd probably reach for RethinkDB before Postgres or others simply for the better administrative experience. Especially for small teams or start-ups that don't have a dedicated DBA role.
For anyone curious, the databases I would most likely reach for, depending on the situation, would be RethinkDB, ElasticSearch and Cassandra. I really do like MongoDB a lot as well, but RethinkDB offers the same features with far less friction, though the query interface takes a bit of getting used to.
That said, I also like more traditional RDBMS options as well. I REALLY like what PostgreSQL offers, but have no desire to administer such a beast; failover isn't really baked in, and the best options are only commercially available, at a significant cost. There are also hosted options for AWS and Azure for various SQL RDBMS. That said, I find being able to have data structure hierarchies in collections tends to be a better fit for MANY data needs.
Congratulations to Slava and everyone else at RethinkDB.
http://rethinkdb.com/docs/async-connections/
http://www.rethinkdb.com/api/python/set_loop_type/
...Doesn't seem to be available on Homebrew yet though.
This has been a long-awaited feature for me. While I loved nearly every aspect of RethinkDB, its absence was the one thing holding me back from using it. Good to see RethinkDB keep improving!
Also, very much looking forward to trying this out!