hacker news with inline top comments - 11 Jul 2015
Ellen Pao Is Stepping Down as Reddit's Chief nytimes.com
699 points by jonmarkgo  6 hours ago   604 comments top 59
1
nhf 6 hours ago 15 replies      
I think this was the right thing to do from a PR perspective. Having Steve back as the new CEO will definitely be good for the community.

I also applaud Reddit's announcement for calling the community out on their childish BS:

> As a closing note, it was sickening to see some of the things redditors wrote about Ellen. [1] The reduction in compassion that happens when we're all behind computer screens is not good for the world. People are still people even if there is Internet between you. If the reddit community cannot learn to balance authenticity and compassion, it may be a great website but it will never be a truly great community. Steve's great challenge as CEO [2] will be continuing the work Ellen started to drive this forward.

All in all, a good day I think.

2
MBCook 6 hours ago 5 replies      
I don't like this at all.

Even if she wasn't the right person (I don't know), all the worst elements of the site are going to see this as a victory for their awful behavior and it's going to get worse.

The people who attacked her with sexism and comments about her personal relationships. The people who supported FPH even though they were attacking people in real life and off Reddit, not just posting comments in their personal corner of 'discussion'.

She didn't do a good job of it, but at least she tried to stand up against some of the worst of Reddit.

I worry heavily that if the new person doesn't draw a clear line at the start things are going to get a lot worse in terms of hate/abuse/harassment.

EDIT: After posting this I saw Nilay Patel tweeted basically the same thing: https://twitter.com/reckless/status/619620964658245632

3
pkorzeniewski 6 hours ago 11 replies      
"Ellen has done a phenomenal job, especially in the last few months," he said.

What exactly has she done that is "phenomenal"? Reddit works pretty much the same as it did several years ago, but in the meantime she managed to piss off the majority of the community, which is the only reason Reddit exists.

4
bane 6 hours ago 0 replies      
Right or wrong, fair or unfair, or whatever you think about Ellen, I think most people agree that she had become personally and professionally toxic to reddit as a brand and community, and even if she did a great job from here on out, it was going to be an uphill battle to restore community confidence in her as a CEO.

I personally don't believe she had the right qualifications to lead a community-driven site like reddit as it is today, but would have the right qualifications if reddit was going to start making a serious pivot to a more lucrative money making direction via commercial partnerships, advertising, etc.

Reddit may still go that direction, but Huffman won't have the same baggage weighing him down.

(note: this will also likely feed the conspiracy theory that her turn in the head office was a convenience for her lawsuit; now that she has lost, she has no reason to stay in that position)

I agree with other comments chastising the community for the racist/sexist/whatever nature of lots of the negative comments against her. It was childish and dangerous. She had enough issues worthy of reasonable criticism that it wasn't even necessary.

I think this is a good thing for reddit.

5
minimaxir 6 hours ago 3 replies      
Ellen Pao gives the reason for leaving on /r/self: http://www.reddit.com/r/self/comments/3cudi0/resignation_tha...

> So why am I leaving? Ultimately, the board asked me to demonstrate higher user growth in the next six months than I believe I can deliver while maintaining reddit's core principles.

This is believable because there have been odd business decisions under her watch, not just policy decisions. RedditMade, one of the intended revenue-generating models for Reddit, failed while she was interim CEO. Alienating /r/IAMA probably did not help.

6
notsony 6 hours ago 9 replies      
>Sam Altman, a member of Reddit's board, said he personally appreciated Ms. Pao's efforts during her two years working at the start-up. "Ellen has done a phenomenal job, especially in the last few months," he said.

This is clearly nonsense, otherwise there wouldn't have been a grassroots campaign to remove Ellen Pao from her role.

If Sam Altman honestly believes that Ellen did a "phenomenal" job, he should reconsider his own position at YCombinator.

7
devindotcom 6 hours ago 8 replies      
Maybe I missed it, but was there ever any information on why Victoria was fired, or whether Pao actually had anything (or everything) to do with it?

From where I was sitting, it seemed like no one actually learned the full story, which might be confidential or take time to contextualize/safely explain, and everyone immediately threw it in Pao's lap and downvoted any holding maneuvers she and the rest of the staff attempted. It was poorly handled, sure, but it seems like there was a lot of finger pointing before anyone knew what was actually happening. For that matter, do we even know now?

If I'm wrong, though, happy to correct my ideas here. (grammar edit)

8
noir_lord 6 hours ago 8 replies      
This entire debacle and the 'communities' (the small vocal part that acted horribly) response pretty much hammered the last nail into the coffin for me when it comes to reddit.

With the exception of a few niche subreddits and the (few) incredibly well-moderated major subreddits, the whole place has become a negative pit, with horses beaten so badly to death that Findus put them in their lasagna.

Twitter often feels the same way as well (I'm pretty much at the unfollow as soon as someone acts like an idiot stage now).

Ironically the only social network I don't hate is Facebook and that's because I have about 20 people I consider true friends on there, all signal no noise.

9
dvt 6 hours ago 2 replies      
Pretty much had to happen. To say that the Victoria situation was mishandled is a severe understatement. I wonder what will happen with communities like FPH and others (that have since moved to Voat). Will reddit lessen their censorship efforts?

Time will tell. IMO, the problem at hand is that reddit is still trying to make advertisers their bread and butter. And advertisers will never be overly attracted to censorship-free spaces.

Even though I may not agree with her aggressively politically-correct agenda (nor does most of reddit), I think it may have been a smart move from a business dev. perspective.

10
onewaystreet 6 hours ago 1 reply      
> "It became clear that the board and I had a different view on the ability of Reddit to grow this year," Ms. Pao said in an interview. "Because of that, it made sense to bring someone in that shared the same view."

Does this mean that the board thought Pao was being too aggressive in pushing growth or not aggressive enough? If it's the latter then the Reddit community is in for a shock.

11
puranjay 5 hours ago 1 reply      
As someone who frequents only a couple of subs on Reddit (which were completely insulated from this fiasco), I have no idea why people were so pissed off.

So she made a bad decision. Big fcking deal.

"She's killing the community!" Well, if your idea of 'community' is making public rape threats (while you use a throwaway) and threatening to kill a person, then maybe your community deserves to die.

Reddit has a *lot* of good. I've been there long enough to see it. But it has a lot of absolute low-lifes clogging its sewers as well.

12
robot22 5 hours ago 0 replies      
The one takeaway I have from this situation is that we have an honesty problem. People criticize Reddit as a platform of hate and vitriol, but in reality this only partially describes the entirety. They complain that people on the internet are too free to speak their minds, but perhaps this is a reflection on our society, a place where honesty and the free exchange of ideas is discouraged.

Response to material: http://www.buzzfeed.com/charliewarzel/reddit-is-a-shrine-to-...

Food for thought: https://www.facebook.com/psiljamaki/posts/10153334440110516?...

13
return0 6 hours ago 2 replies      
Isn't it already too late? How can a new captain save the sinking ship? The new CEO would be standing on a double-edged sword. If he reverses course immediately, declaring reddit an absolute free-speech environment, the people who wanted a safe space will be disillusioned; if he doesn't, the rest of the users will keep looking for another platform.
14
ksenzee 6 hours ago 1 reply      
Reddit would do well to hire someone with experience in the association management field. Those folks specialize in managing fractious communities such that the volunteers not only stick around, they're happy to pay for the privilege.
15
ljk 6 hours ago 3 replies      
It's interesting how fast people go from hating[1] /u/kn0thing to loving[2] him again

[1]: https://np.reddit.com/r/SubredditDrama/comments/3bwgjf/riama...

[2]: https://pay.reddit.com/r/announcements/comments/3cucye/an_ol...

16
mcintyre1994 3 hours ago 2 replies      
> The attacks were worse on Ellen because she is a woman,

@sama, how do you square this claim with the community's enormous support for Victoria Taylor?

17
iblaine 5 hours ago 0 replies      
Being the CEO of reddit is a political position. And Ellen Pao has too much drama in her life to be a good politician. Losing a sexual harassment case, marrying a crook who stole millions...those are events that don't happen by accident.
18
lisper 6 hours ago 0 replies      
So... who is replacing Steve at Hipmunk?
19
trhway 6 hours ago 1 reply      
Seems like Reddit hired Ellen without checking her references from her previous job, i.e. Kleiner, and now they are harvesting the same results: insufficient performance and high-profile scandals.

(note: there is nothing about her sex here - just read the case materials and you'll see that she behaved just like a jerk at Kleiner - for God's sake, she complained there that some assistant was using the company fax to send brain scans of a mother dying of cancer)

20
muglug 6 hours ago 2 replies      
Will the anti-corporate brigades in these large community-driven sites make turning a profit impossible in the long-run?
21
chaostheory 3 hours ago 0 replies      
I'm not even going to debate whether or not she was an effective CEO. At the end of the day it's about the lawsuit, and I'm not going to argue the merits of that either. The only thing she should have realized from the start was that you can't have your cake and eat it too. You either choose reddit or the lawsuit. You can't divide your focus between both or you lose both.
22
gesman 6 hours ago 0 replies      
Wow, I applaud this development!

Now, if Victoria is coming back too - that would be a 200% right move for reddit!

23
slr555 3 hours ago 0 replies      
I think the take away may be that Ellen Pao is not the executive that someone's hype machine purports her to be.
24
slg 6 hours ago 0 replies      
I will be interested to see if anything changes regarding the management of Reddit, or at least the community's opinion of it. I wonder if the community will chalk this up as a win and suddenly forget all of the reasons they have been complaining, which really have nothing to do with Pao in the first place.
25
tptacek 5 hours ago 0 replies      
Converting large-scale investor dollars into compelling returns using the world's most entitled and monomaniacal message board: not, in fact, an easy job. Pretty sure very few of us could do it either.
26
myrandomcomment 5 hours ago 0 replies      
As a CEO she chose to take actions in a manner and method that allowed things to spiral out of control. It was her job to control the message and the blowback. She failed at her job, therefore she needed to go. It is really that simple.
27
nohat 2 hours ago 0 replies      
That article was more suited for editorial than technology. Pao was hated mainly for things either done by reddit before her, or done by her before reddit. Unfortunately, as leaders often are, she was held responsible for both. That being said she was interim, and was apparently not aligned with reddit culturally. That's an internet thing, not, as this article was so quick to claim, a gender thing.
29
smitherfield 6 hours ago 1 reply      
I'm sorry to see it go down like this. Redditors' treatment of her got really ugly (/r/all after the FPH banning was shocking) over the past few weeks, and it's disheartening to see people's bad behavior rewarded.
30
sergiotapia 3 hours ago 0 replies      
Ellen Pao was a scapegoat. She was the face of a lot of changes that didn't sit well with the community. Now the people clamor, they remove her, and the people are happy again.

Notice how they didn't mention anything about reverting the bad changes to the website. ;)

31
goldfeld 6 hours ago 6 replies      
So can someone summarize the ordeal?
32
luckydude 5 hours ago 0 replies      
I posted this over on reddit but it got lost in the noise:

Cool, I guess. But after having spent some time on voat.co I think reddit will get less and less of my attention (not that anyone gives a shit about me but I suspect I'm not alone).

Reddit's management has destroyed any sense of trust I had in Reddit (I'm looking at you /u/kn0thing, it's not just Ellen, my understanding is you fired Victoria, right? And then grabbed popcorn [I know, cheap shot, but it appears like you really fell out of touch]).

It appears that it is all about making money, which I think is going to be the end of Reddit for some of us. Reddit could have a decent revenue stream on reasonable ads, but that wasn't enough; it had to be more. That is really troubling, because the next thing you might decide to "monetize" is what each of your users reads. That would make the NSA look like amateurs and would be a massive invasion of privacy. It would also be very easy to monetize. Given all that has been going on, it would appear to be just a matter of time before "user optimized marketing" appears.

Welcome back but the existing management has dug you a mighty big hole. I don't trust you any more.

33
kolbe 6 hours ago 1 reply      
After seeing what Ellen went through, I think Sam will need to raise some more funds to offer a significant pay bump to entice even mediocre talent to fill her void.
34
brock_r 5 hours ago 0 replies      
Reddit: The world's largest drunken mob.
35
golergka 5 hours ago 0 replies      
So, there are two stories people use to sum the whole affair up: either "the witch is dead" or "the pitchfork mob got what they wanted".

But neither of these stories really fits the information we have right now. Both of them fit some of it, and look realistic only until you look at the whole picture.

The best conclusion we can have here is that we don't actually know what's _really_ going on, just a bunch of facts and a couple of theories.

36
tacos 6 hours ago 0 replies      
Yup, she did awesome, Sam. Especially recently. (Makes me wonder how badly one of these people would have to screw up in order NOT to get the happy handwave as they're booted.)

I didn't even know who she was until "the last few months." Which have been a parade of increasingly-negative press and idiotic behavior. And that's from reading Reuters and the NY Times -- I don't even use Reddit.

---

Sam Altman, a member of Reddit's board... "Ellen has done a phenomenal job, especially in the last few months," he said.

37
15step 6 hours ago 0 replies      
Great to see Steve back in the fold
38
jstakeman 6 hours ago 1 reply      
It's remarkable how fast and how organized they were.
39
scobar 5 hours ago 0 replies      
In the 24th Upvoted by Reddit podcast, Steve and Alexis talked about all the great content and communities hiding within Reddit that go undiscovered. I'm excited to see how they'll try to solve that problem, and hope they find a great solution. Reddit is really great, and it's very cool to see both Steve and Alexis back to enjoy and advance it.
40
multinglets 2 hours ago 0 replies      
Hey I've got an idea:

Let's all cry together because someone said something mean about a public figure. In fact, I'm incapable of discussing any other aspect of this event until this unspeakable atrocity has been addressed.

41
musesum 4 hours ago 0 replies      
I wonder if a law degree runs counter to running a social network, where authority bumps up against anarchy. Imagine Peter Thiel running reddit. Both Thiel and Pao have law degrees. Both have been lightning rods. I suspect a JD comes in handy for some ventures, such as Thiel running PayPal or Pao sourcing funding for RPX. In both of those cases, it is about removing ambiguity. For social nets, the opposite holds true, because ambiguity is the main product.
42
atarian 4 hours ago 0 replies      
Even on the internet, mob mentality wins.
43
alhenaworks 2 hours ago 0 replies      
Perpetual PR nightmare averted.
44
osetinsky 6 hours ago 0 replies      
rough year for her
45
neur0tek 4 hours ago 0 replies      
quelle surprise
46
bedhead 6 hours ago 2 replies      
During the KP trial I had always kept an open mind towards her arguments...until I later learned she was married to Buddy Fletcher, one of the biggest scoundrels and thieves in the investment world in recent years. The character and judgment of a person who would fall in love and wed someone like that says more than I can articulate. It's oddly reassuring to see my (and many, many others') skepticism about both her judgment and motives validated.
47
generic_user 5 hours ago 11 replies      
The knife cuts both ways. The clickbait media and various groups are trying to paint the now-predictable narrative of '50 white male racist misogynist neck beards' who want to chase women out of tech again. Over 200,000 people with legitimate concerns signed a petition to have Pao step down, yet they still carry on with their charade.

People are sick and tired of the media and a small group of militant activists trying to silence people they disagree with. They engage in all forms of harassment: trying to get people fired, posting addresses and family pictures, etc. The most critical things against Pao and her husband I have seen are posts about their phony extortion, sexism and racism lawsuits. All of which is factual information available to the public, and even the media has to admit these things are facts.

The clickbait media has to be called out more than anyone for trying to turn every issue, no matter how banal, into a black-and-white battle between good and evil and then fanning the flames on both sides. It's extremely cynical, mostly to drive traffic to their sites. There is zero accountability in the media today and zero ethics. Everyone needs to be much more sceptical about what they read in the press and about their motivations.

48
ElComradio 6 hours ago 8 replies      
PR that comes out of corporations cannot be trusted. We cannot trust that Altman is being honest that he appreciated her efforts. We can't trust that she and her husband were in love. All of this that comes out of spokesmen is carefully crafted as a result of a numbers game.

When do we hear "so-and-so CEO did a horrible job and was forced out by the board"? Never. So are we to believe there is no such thing as a terrible CEO? Will we hear Sam saying "We made a terrible decision putting her in charge"? Never. Even if it was the actual truth.

Pao does not get a pass on this dynamic for being a woman.

49
scotty79 5 hours ago 0 replies      
Corporations are like a sea of cockroaches on a dark floor. They look vast. Roaches have their little fights and wars, but when they make some random noise and draw outside attention, it's funny to watch how individual cockroaches run away from the spotlight.
50
justonepost 4 hours ago 0 replies      
Don't piss off Kleiner perkins, that's all I can say...
51
istvan__ 6 hours ago 1 reply      
I am opening a bottle of champagne and at the same time answering my own question: it took 3 weeks for the community to get rid of a tyrant. Well done Reddit!

Friendly reminder that if you are using downvotes for disagreement then you are doing it wrong.

52
yegor256a 4 hours ago 1 reply      
Who is Ellen Pao?
53
arprocter 6 hours ago 0 replies      
I hope reddit has good legal representation...
54
pfisch 6 hours ago 0 replies      
The real question is will Ellen Pao sue reddit now?
55
thirdreplicator 2 hours ago 0 replies      
That's what you get for trying to turn the internet into Disneyland.
56
post_break 6 hours ago 0 replies      
I think most people just let out a sigh of relief.
57
tosseraccount 6 hours ago 0 replies      
There's a fine line between being polite and being so boring that it's stifling. Every commenting site has its herd mentality and punishment of thought crimes.

Reddit users just wore it on their sleeves, and trying to suppress them was silly.

It might turn into DIGG 2 pretty fast and might not recover.

The investors and the "community" are just too far apart on this.

58
kaa2102 6 hours ago 0 replies      
That was quick but the corporate agenda being pushed flew in the face of the Reddit community. Power to the people!
59
rocky1138 3 hours ago 0 replies      
We keep hearing over and over again about how it's a small minority of vocal people who spew vitriol in any community, but how about providing some real, hard data?

Reddit has enough data and skill to identify, at the very least, the approximate percentage of users who engaged in this type of behaviour.

I'd rather see the numbers myself than read a press release simply stating something and being asked to believe it.

React.js Introduction for People Who Know Just Enough jQuery reactfordesigners.com
593 points by chibicode  1 day ago   225 comments top 34
1
michaelbuddy 1 day ago 3 replies      
I haven't gone through this page yet, but thank you for at least the attempt at reaching somebody in my state of affairs. Front End work is very hard. The front end design & what I would call the back-end front-end disciplines are supposed to meld together according to job postings and over-zealous recruiters, but it's a good deal more complex than I have been willing to put up with, mostly because of the materials to get started and the unintuitive tools that people who know what they are doing swear by.

Imagine being solid at design, pushing pixels, using HTML, CSS. Then suddenly these frameworks, created by back-end programmers, not visual designers, appear, and the ramp-up for the average technical person is immense. For me it's been enough to turn me off them altogether and to make it a talking point that "my javascript skills fall off at X, Y or Z" so recruiters and employers know that despite being able to do a lot, I can't suddenly become everything UI and everything back end just because the title of anything "front end" is so versatile.

2
vezzy-fnord 1 day ago 4 replies      
Speaking as someone not intimately involved in the web development community, React looks decent but absolutely not worthy of the obscene amounts of hype it received... unless what web devs had before was really that horrible, because that's the impression I got. Though from what I can tell, jQuery and React are somewhat orthogonal tools, as I have used the former and it goes beyond rendering views.

What I cannot figure out is all the harping on about "functional style". The React examples all felt more Smalltalk-ish to me than anything else (I suppose unsurprising given JavaScript's vague influences from Self). The use of closures doesn't really change the conceptual patterns much. Is it because the tutorial can only cover so much ground, or is it that, since so much of the primary buzz around FP was about the management of mutable state, people now instinctively associate the idea with FP?

3
dahjelle 1 day ago 1 reply      
Shameless plug: I wrote my own React introduction, geared more for people that know some JS and HTML, but not necessarily any other JS libraries. More specifically, it very gradually builds up from rendering a single JSX tag to enough components, state, and props to build React's to-do example. I found React has an unusually gradual learning curve: you can really build up concepts bit-by-bit.

EDIT: As per usual, I forgot the link: https://github.com/dahjelle/Programming-Worksheets/blob/mast...

4
ifdefdebug 1 day ago 4 replies      
Maybe slightly off-topic, but the tutorial page adds entries to my (Firefox) back button as I scroll down. So when I tried to "back" to HN, nothing happened. Second, third back, nothing happened. Opening the back list, a dozen or so entries and they all do nothing.

I think this kind of design should really be avoided, it breaks my user experience.

5
philliphaydon 1 day ago 3 replies      
As someone who knows Angular and Knockout, I found this great. React is a different way of thinking, IMO, and most tutorials I read made it hard to get up and running, so I just threw in the towel and forgot about React. Most people make too many assumptions about the skill level of the reader (myself included when I blog) and it can be frustrating.

Thanks.

6
curiousjorge 1 day ago 8 replies      
Am I the only one fine with using just jQuery and ajax? I've written a chrome extension with 4k lines of code and I never had any problems. Instead of breaking things into separate classes and components, I just put them in separate files or in logical sequence. I'm sure this can get a lot messier with more than one developer, but so far in my projects I almost always just use some jQuery library or plugin that does 80% of what I want to do and hack the rest.
7
zamalek 1 day ago 2 replies      
Consumable content aside, I absolutely love the format of the tutorial. It's significantly more pleasant to dig into compared to what seems to be the more common tutorial format (e.g. [1]).

Good work.

[1]: https://tour.golang.org/welcome/1

8
Omnipresent 1 day ago 6 replies      
Is React meant to replace Ember/Angular-type frameworks? Can React also connect to backend APIs to fetch JSON and present it on the frontend? I've been waiting to nosedive into Angular2 (when it's out); is React a better alternative to get started with?
9
hit8run 1 day ago 2 replies      
Sorry, but nowadays I feel that jQuery is like PHP for the frontend: people get results fast but the average code quality is so shitty. Why on earth is jQuery used for things that normal JS can do instantly? There are so many libs out there based on jQuery even though this dependency is absolutely unnecessary. I might sound arrogant, but people should get an understanding of JavaScript and learn how to use it before stacking layers of abstraction. Ajax with normal JS, for example, is not so hard. One doesn't need all the fancy angular/jQuery/whatsoever helpers. If one knows the basics it's okay to use some convenience stuff from time to time, but I feel that so many low-quality devs are solely relying on their precious xxx kb libs just to display a simple hello world.
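For instance, a plain-JS GET request needs only a few lines. A minimal sketch (the URL and callback names here are made up, not from any particular library):

    // Fetch JSON with plain XMLHttpRequest - no jQuery required.
    function getJSON(url, onSuccess, onError) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url, true);
      xhr.onload = function () {
        if (xhr.status >= 200 && xhr.status < 300) {
          onSuccess(JSON.parse(xhr.responseText));
        } else {
          onError(new Error('HTTP ' + xhr.status));
        }
      };
      xhr.onerror = function () { onError(new Error('network error')); };
      xhr.send();
    }

    // Usage:
    // getJSON('/api/items', function (d) { console.log(d); }, console.error);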
10
paaaaaaaaaa 1 day ago 7 replies      
I have looked at react.js a couple times after reading more and more buzz about how fantastic it is. However I'm instantly turned off of switching to it when I read that HTML (which isn't HTML and is actually something called JSX) goes in your js files.

With angular I can keep all my HTML in my HTML files. Is this normal or am I completely missing something?
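For what it's worth, the "HTML in your JS" is syntactic sugar: a JSX tag compiles down to an ordinary function call. A tiny sketch (the tag and variable names are made up):

    // What you write in the .js file (JSX):
    var hello = <div className="hello">Hi there</div>;

    // What it compiles to (plain JavaScript):
    var helloCompiled = React.createElement(
      'div', { className: 'hello' }, 'Hi there'
    );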

11
orheep 1 day ago 3 replies      
I see the point, but no one in their right mind would write the JavaScript like that. When written better it's a good amount shorter than the React solution.

http://pastebin.com/wbGZZs7U

12
Grue3 1 day ago 2 replies      
Can React use separate HTML templates? I find this style of mixing markup and Javascript super-ugly.
13
sancha_ 1 day ago 1 reply      
Thank you, great intro to react. Easy for me as a backend dev to follow and get a grasp of React.

One thing: the page constantly reloaded itself; after reading halfway through the tutorial (3-4 minutes), I had about a hundred entries for the same page in the tab history. This makes the website awful.

14
moonchrome 1 day ago 8 replies      
OK I may be off point here - but why do people who know just enough JQuery need to know about React ?

Wasn't React developed as a tool to manage complex data flow patterns inside of a big web app like Facebook ? Why does your average JQuery developer need to know or care about this whole new abstraction layer ?

Even though I've seen it a hundred times by now, I'm still amazed by how fad-driven programming culture is, and I feel like this is a perfect example: React - come learn this complex piece of technology to solve problems you never had (and probably never will).

15
blhack 1 day ago 4 replies      
Can somebody explain to me what is so bad about jquery? I use jquery every single day to make web pages do the things that I want them to do.

But my javascript hipster friends all claim that this is wrong and stupid and that I shouldn't use jquery because of some abstract reason that nobody ever seems to be able to articulate to me.

16
marcamillion 23 hours ago 1 reply      
This tutorial is awesome! I love the tone and how it knows EXACTLY who it is written for.

This may be a bit late, but I would LOVE for someone to do this for JS in a Rails App (or even CoffeeScript). I am desperately searching for good tutorials for people that are "borderline-decent" at jQuery but are Rails devs. I can't find any.

I understand, in theory, how to render some JS on a view and all this good stuff - but once you go into creating a UI that feels like a modern UI with lots of lil AJAXy elements...it can become a pain REAL quick with a bunch of `...js.erb`s all over the place with no seeming pattern to them.

So I would love if someone had a tutorial just like this, for how to create a simple and sexy app that looks like an Angular/Ember app but just using CoffeeScript/jQuery/vanilla JS.

Anyone know of any such thing?

17
sgrove 1 day ago 0 replies      
Also interesting to see the pitch on "ClojureScript is a Product Design Tool"[0], written by Precursor's designer. A lot of this stuff can be made more accessible for designers and FE developers who don't want to have to deep dive into all the new-fangled frameworks.

[0] https://precursorapp.com/blog/clojure-is-a-product-design-to...

18
FloNeu 1 day ago 1 reply      
If you must compare Angular with React, then at least compare it to ReactJs+Flux+more or Angular-Templates with React-Templates.

React is more like the view-lib of a framework. I like them both (also ember, knockout, backbone) and unfortunately think the battle isn't decided yet. Will take some time, if ever...

Complain to Steve Jobs (1) for killing Flash/Flex, as you could do client-side apps with desktop performance, without the hassle of cross-browser testing and so on... In my opinion the concepts of Angular and React+Flux are heavily based on the features this platform provided.

(1) IMHO the only good thing he ever did (for the web), as it forced the proprietary software out... but led to the great Framework-War :)

P.S.: Also, ActionScript 3 was ECMAScript (JavaScript) plus classes, types and more... (I wondered about which version that was lately and found out they implemented ECMAScript and extended it with features requested by users.)

19
arenaninja 1 day ago 0 replies      
I'm excited for ReactJS catching fire. I've been using a lot of it lately, and most recently I had to go back to a set of components I built to add functionality. In total, the front-end took maybe 20 minutes: no fishing around for the right selector or clashing functions. Just "here's a new button, attach this event"
20
aprdm 1 day ago 1 reply      
Really well written, as a backend developer who has been playing a little bit with frontend lately this hits the sweet spot :)

Cheers

21
k__ 1 day ago 0 replies      
Good article. Needed it two weeks ago :D

I did 4 years ExtJS and 1 year Ember. React was totally different but rather nice to work with. The API surface is so much smaller.

22
hive_mind 1 day ago 0 replies      
Great concept (tutorials for people who know just enough JQuery). Would love more tutorials (for Angular, e.g.) for such people.

TodoMVC was a great effort. But unfortunately, the actual Todo creations are not well documented (for beginners to figure out how and why certain things were done).

23
toxickg 1 day ago 1 reply      
Great job, man! You make some strong points here that clearly reflect the important differences of these two approaches! Well done!
24
sergiotapia 1 day ago 0 replies      
This is the first React article I read where it actually compares and contrasts the jQuery approach and the React approach. I feel like React makes sense now, even though writing my HTML inside JS still feels off.
25
jarnix 1 day ago 0 replies      
Comparing jQuery to React (or Angular/Aurelia/etc) is obvious, but the article is great for beginners.
26
radiospiel 1 day ago 1 reply      
This is really neat... one thing though: there is nothing magic about 'the "magic" state'; it would IMHO be better to just call it state (or inner state, or so), and to explain that once the state is changed, React re-renders the component automatically.
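A minimal sketch of that point, in the React.createClass style current at the time (component and field names are made up): calling setState is all it takes, and React re-renders the component.

    // Clicking the button changes state; React re-renders automatically.
    var Counter = React.createClass({
      getInitialState: function () {
        return { count: 0 };
      },
      handleClick: function () {
        this.setState({ count: this.state.count + 1 });
      },
      render: function () {
        return (
          <button onClick={this.handleClick}>
            Clicked {this.state.count} times
          </button>
        );
      }
    });

    // React 0.13-era mount call:
    React.render(<Counter />, document.getElementById('root'));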
27
iamflimflam1 1 day ago 3 replies      
Really great tutorial - it would be good in the next steps to talk about getting webpack set up, etc.
28
tracker1 1 day ago 2 replies      
I think this article demonstrates very well why I think React should be the first choice for any new web project... what to pair with it (Ember, Flux (and related), or other) is open to more debate.

Of course there are great alternatives with similar workflows (mercury, etc).

29
rashthedude 1 day ago 0 replies      
Would you be open to the idea of writing something similar for React Native if you are interested and/or find the time?
30
Omnipresent 1 day ago 1 reply      
That is a truly impressive tutorial. Just curious, how much time did you spend putting it together? Additional props for doing it while being a dad!
31
pjmlp 1 day ago 0 replies      
My web experience is mostly custom in-house web frameworks, JSP, JSF, WebForms, jQuery and basic Angular.

It was a nice overview of React to me.

32
jozan 1 day ago 1 reply      
This is neat! Thanks for sharing.
33
akhilcacharya 1 day ago 0 replies      
Wow, this is exactly what I needed.
34
rashthedude 1 day ago 0 replies      
Beautiful article man.
Facebook's Piracy Problem slate.com
435 points by wesd  2 days ago   232 comments top 34
1
plorg 2 days ago 8 replies      
I have a friend/acquaintance who had a similar experience on YouTube itself. He had created a large number of instructional videos on his YouTube channel, and from them he was deriving a significant passive income (admittedly from the overbearing amount of ads he enabled). One day he received a takedown notice suggesting that his videos were illicit copies. His investigations led him to believe that another YouTube user had downloaded all of his videos, re-uploaded them under a different account (with even more, similarly-ripped videos) and then used the YouTube machinery to have the originals flagged. This did, indeed, appear to be the case when I checked out the other channel - there was a block of videos in this other user's history that all clearly originated on my friend's channel, even with the original author identifying himself in the voiceover.

He was unable to get YouTube to reinstate the original videos or to block the illicit new copies. After several months of shouting at the wall that is YouTube administration, he gave up and transferred his energy to creating numerous ad-laden blogs, saturated with Amazon affiliate links and embedded affiliate stores. On the one hand, it is possible that his original channel looked more like the channel of a spammer than the one that stole his videos (from my recollection of the old channel, not entirely implausible). On the other, it is possible that YouTube itself doesn't care much for its content creators outside of the few super-rich/popular/powerful users with enough influence to get their attention.

2
hackuser 2 days ago 19 replies      
A radical idea: Maybe our model of intellectual property is wrong, or outdated. When IP was tied to a physical object, it made some sense to restrict and explicitly license each reproducer.

Now we have incredible machines that can reproduce intellectual property almost infinitely, distribute it anywhere on Earth, and find it almost anywhere on Earth. Wow! Maybe we should embrace that innovation, and find a model that encourages the spread, use and re-use of IP, for the betterment of society. Yes, motivating creators is a problem, but there are many possible solutions.

Another radical thought: The notion of IP created from whole cloth obviously was always a fallacy; we all "stand on the shoulders of giants", "good artists borrow, great artists steal", etc. Now that our IP machines make finding, copying, and distributing IP so easy, we can expect even more of that wonderful, creative larceny. As IP creators are benefitting from these amazing IP finding/copying/distributing systems and so much of their own product is stolen, perhaps they have less claim on the profits from those things they put their names on.

A third: Many creative people are motivated to do great things without payment. Remember all those FOSS creators, from RMS to Linus Torvalds to Tim Berners-Lee to every little FOSS project on Github. Remember also Van Gogh and millions of other starving artists you have and haven't heard of (quick, name a poet who cashed in on their life's work). Perhaps financial remuneration, while fair, isn't entirely necessary (and perhaps we'd have less crap with less of it).

3
willlma 2 days ago 7 replies      
Ironically, at the end of the tattoo video that was pirated, Destin Sandlin, the host, is seen wearing [a shirt](http://www.amazon.com/Stand-Going-Science-T-Shirt-Scientists...) that is a blatant rip-off of Randall Munroe's (XKCD) [shirt](http://store-xkcd-com.myshopify.com/products/try-science).
4
AdieuToLogic 1 day ago 3 replies      
All of this outrage at Facebook doing what Youtube already did[1] is difficult to fathom. Don't get me wrong, Facebook is not a site I use, am affiliated with, condone, or recommend.

But screaming bloody murder because they are doing to the site what that site did to others seems a bit of a stretch.

If hating on Facebook needs to happen (probably does, BTW), how about it's done for reasons such as their laughable "Terms of Service"[2] and how it blatantly states they instantly own anything passing through them without any consideration?

Now that's a bitterness I can get behind.

1 - https://en.wikipedia.org/wiki/Viacom_International_Inc._v._Y....

2 - https://www.facebook.com/legal/terms

5
mosquito242 2 days ago 4 replies      
This is super interesting - it seems like facebook's more likely to get away with it too because the people being ripped off are smaller independent YouTube Channels.

It seems like YouTube's main incentive to build out copyright infringement tools was all of the record labels that had songs being uploaded and re-uploaded on the platform.

I can see YouTube getting really aggressive in fighting FB on this legally, because they need to defend their own content providers before they move to Facebook (or start uploading to both facebook/youtube).

6
finnyspade 2 days ago 1 reply      
I don't understand what Facebook is supposed to be guilty of...

Is Facebook supposed to somehow know the video was uploaded to YouTube before? That would require Facebook to have an index of all the content on YouTube (an unreasonable proposal).

The next best thing is to allow takedown requests which they do!

The same thing can be done by reposting on Vimeo or any of a million sites. There just is no technologically and legally sound method to detect this sort of behavior.

If you don't want your video to be reposted by someone else, post it yourself. That's not to say it's okay for pirating to happen, but this isn't Facebook being evil, it's people.

One might blame Facebook for prioritizing its native videos over embedded YouTube content, but their policy on that is very public and the user experience IS better.

Why is Facebook being demonized?

7
rwmj 2 days ago 2 replies      
This should be a fairly open and shut piracy case for the video maker. Zoo is a British magazine with a real company behind it. Facebook is a US-based company and is redistributing the video (and likely making money from the adverts). The video maker is based in the US. He can start with filing a case against FB (seems he has a US lawyer lined up already), and once he collects from that, he pays a UK lawyer to follow up against Zoo's parent.
8
brokentone 2 days ago 1 reply      
This seems to be shades of aggregate content farm business models. Is HuffingtonPost Comedy's recutting of cute cat videos without credit okay? How about BuzzFeed's use of stock / flickr images? What about memes -- someone took those photos once upon a time, now they don't even get a credit. How about the dumb radio station's non-original video clips circulating on FB?
9
stevenh 2 days ago 1 reply      
Facebook not only added native video support, but their news feed algorithm blocks YouTube links in favor of native videos all of the time. They created the freebooting problem themselves on purpose to keep people (and ad revenue) locked into their own site.

As far as piracy is concerned, I fail to see how Facebook is any better than MegaUpload. You might even say it's 1000 times worse, considering its alexa ranking is 1000 times better than MegaUpload's ever was.

Why hasn't Zuckerberg's house been raided yet, and all Facebook servers confiscated?

10
austenallred 2 days ago 5 replies      
This is absolutely rampant in the blackhat marketing world, (where I used to dabble, but still stay up on mostly out of curiosity).

The model goes like this: Watch a few different pages and try to identify something that's bubbling up - that can be different viral facebook pages, reddit, whatever... there are a few different ways, but basically you constantly ping and scrape and try to identify stuff that's going viral as early as possible.

Once you've done that, you have an automated script that downloads the video and uploads it to your Facebook page or scrapes the content and throws it on your wordpress site with a really weak "link back" to the original content. You build up a Facebook page that has a few million likes, cover your wordpress pages in ads, and profit. It's not incredibly difficult to create an automated-if-unethical Buzzfeed.

It's incredible to watch one viral post or video spread throughout the web. It spreads out on different sites and platforms as quickly as it spreads on social media. Few end users really care what the original source was (especially since it's usually click-baity BS anyway), and the winners are the ones who can find the content the quickest and have the biggest reach.

I talked to a guy a couple of days ago who is making $60,000/month using this exact process, and has very little programming ability. He is, however, an absolutely shrewd and ruthless marketer with no ethical qualms about much of anything.

The content producers send him DMCA requests on occasion, and when that happens he or Facebook takes it down. But that's just a cost of doing business, and 95% of the content stolen never sees a DMCA request, so who cares? (Assuming you have no ethical compass). Content creators aren't constantly searching and scraping and trying to find other places where their content is hosted. That's hard enough to do on one platform alone (i.e. YouTube), let alone monitoring other platforms (Facebook) and a bunch of wordpress sites.

It's a game of content creators vs. "marketers."

It gets even more difficult for the content creators as the "marketers" get smarter - heighten the pitch of a video a little bit so sound matching software can't find it, reverse the video and choose different thumbnails so reverse image/video searches don't find it, spin the text content (visitors aren't really there for the great writing anyway), and you beat the vast majority of software. It's up to the individual content creators to play the same game Google is playing to kill the spammers, which is not their core competency. Unless Facebook does something on its own platform, this won't change. And even if they do, the best spammers will continue to outsmart the system.

The only way Facebook (and the content creators) win is if it becomes a core competency, much the same way defeating spam is for Google. I still know guys who can beat Google, but the level of sophistication is high enough that 99% of people can't keep up.

There are a few simple ways you can beat the vast majority of the content theft though; if anyone is interested feel free to email me and I'll point out some of the breadcrumbs the marketers leave behind.

11
scotty79 1 day ago 0 replies      
Why don't they post to Facebook as well? If they have some good content and they don't push it on some market, then somebody else will.

I know, I know. Piracy, copyright, "I made it, they just remixed it by cutting me out", "It's mine, where is my money?!" The internet apparently doesn't care.

Either you serve the market your content or the market gets served your content without ever knowing about you.

Is Facebook profiting from this? Sure. Was/is YouTube profiting from the same thing? Yes. Do internet providers profit from piracy in general? Of course. Same as writable DVD manufacturers did, XEROX owners and whoever.

Is that bad? It's way better than going to great lengths to make absolutely sure that they don't.

12
solidpy 2 days ago 3 replies      
But is it Facebook, or the magazine that ripped and modified the video, that is at fault? Issue a DMCA takedown to Facebook and sue the magazine for copyright infringement. And next time, post it yourself.
13
rythie 2 days ago 3 replies      
This seems pretty common in British newspapers, even outside Facebook. Videos often appear on newspapers' own sites instead of embedding the YouTube video. For example, this from the Guardian (http://www.theguardian.com/politics/video/2015/apr/29/ed-mil...). Others are similar and often even have their own pre-rolls.

Channel 4 has a whole TV programme, Rude Tube (https://en.wikipedia.org/wiki/Rude_Tube), devoted to internet clips, which are all from YouTube AFAIK. I wonder if the creators get any money from that.

14
yalogin 2 days ago 1 reply      
YouTube became big exactly because of pirated videos in its early days. Viacom and others fought for a long time to keep their videos off of YouTube but eventually gave in. Now FB is doing the same thing.
15
jokoon 1 day ago 0 replies      
Every company's success is paid for with shitty practices. It is a standard. I even wonder if this is correlated with social Darwinism: that, for some cultural reason, you must be bad to be successful.

Maybe the technologies of the internet will improve up to the point where there will be no multinational group able to reap the benefit (a more decentralized internet with bitcoin-like architectures), but I don't see it happening very soon. I guess consumers can only understand the complexity of technology so much; I wonder if improving those technologies while keeping them just as accessible is really possible.

Facebook is really the low-hanging fruit of the web. They might have open-sourced stuff, but they're really evil in the Google sense of the word. I can already remember 2 examples: the click farms and internet.org, and I'm sure there are so many other examples.

All of this makes me really sad, because the internet is the #1 tech tool that is improving the lives of so many people, and there is already so much greed involved.

16
Mithaldu 2 days ago 4 replies      
It sounds to me like part of the problem is that the people making popular videos don't share them on Facebook themselves, which triggers the modern internet native's primary reaction to media being unavailable in their preferred venue and at their preferred comfort level: piracy.

Sure, you can fight back by appealing to the public about Facebook's evilness, or by spending lots of resources on legal action. Or you can roll with the punches and figure out the myriad ways in which even the currently broken Facebook system can work for you.

17
kevando 2 days ago 5 replies      
Didn't youtube start the same way?
18
msoad 2 days ago 0 replies      
Yes, Facebook is a piracy haven. It's not just small video producers. Sports videos (which are really expensive) can be found for free on Facebook.

Example:

https://www.facebook.com/SuperHighlightsCom?fref=ts

19
BillyParadise 2 days ago 0 replies      
So what Facebook needs to implement is an ultra simple content claiming mechanism.

Offending video:
My video (can include a YouTube/Vimeo/other link):

No, not automatically scalable. Facebook started this, they're gonna have to staff up to handle the problems associated with it.
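Sketching that intake as a hypothetical Express endpoint (the route, field names and reviewQueue are all invented for illustration, not anything Facebook actually exposes):

    var express = require('express');
    var app = express();
    app.use(express.urlencoded({ extended: false }));

    var reviewQueue = []; // stand-in for a real moderation queue

    // Accept a claim: the allegedly stolen copy plus the claimant's original.
    app.post('/claims', function (req, res) {
      reviewQueue.push({
        offendingVideo: req.body.offendingVideo, // URL of the reposted copy
        originalVideo: req.body.originalVideo,   // YouTube/Vimeo/other link
        claimantEmail: req.body.email
      });
      res.status(202).send('Claim received; queued for human review.');
    });

    app.listen(3000);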

I see Facebook has only recently started allowing people to monetize their uploads, so the primary benefit the "freebooters" got was traffic to their site. Maybe Facebook should take a page from porn and link-skim an equivalent number of page views from the freebooter to the victim. Talk about restitution!

(incidentally, I've found Facebook far more responsive than Twitter. Imagine what happens when the big T gets into the video game in a big way)

20
amelius 1 day ago 1 reply      
It is actually quite simple to combat this, technically. Embed an (invisible) watermark into the video that encodes the domain on which the video may be played, and have all browsers refuse to play content whose watermark names a different domain than the one shown in the URL bar (throw a security exception or something).
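Purely as an illustration of that proposal (decodeWatermark is imaginary; robust invisible video watermarking is a hard problem and no such browser API exists), the browser-side gate might look like:

    // Refuse to play a video whose embedded watermark names a different
    // domain than the page serving it.
    function assertPlayable(videoFrame, pageHostname) {
      var boundDomain = decodeWatermark(videoFrame); // imaginary decoder
      if (boundDomain && boundDomain !== pageHostname) {
        throw new Error('SecurityError: video is bound to ' + boundDomain +
                        ' but served from ' + pageHostname);
      }
    }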
21
runn1ng 1 day ago 0 replies      
Let's not pretend YouTube didn't become big the same way; it was from the start a giant platform for "illegally" sharing copyrighted videos. It got better in this regard lately, but the fact that it was full of copyrighted stuff helped them bootstrap.
22
jakejake 2 days ago 0 replies      
Would there be any reason why the content creator couldn't sue Facebook and/or Zoo RIAA style using the same copyright laws? Unlike music sharing, this situation probably does have a specific, tangible amount of revenue losses to the videographer.
23
phkahler 2 days ago 0 replies      
Don't the YouTube terms require that you allow some reuse of your video as long as it's on there? The same thing goes on on YouTube itself, where people create channels and aggregate other popular videos.
24
benhamner 2 days ago 0 replies      
A more accurate title: Youtube's Facebook Piracy Problem
25
personjerry 2 days ago 2 replies      
I think the issue that the article points out is valid. But it expands to something bigger than just Facebook. Namely, any content on the Internet, by virtue of being easily accessed, is easily duplicated. Sites like 9gag and Buzzfeed are full of reposts from Reddit and 4chan. "But wait!" you say, "those aren't the same. Those aren't making money like videos!" But the value of any content is to drive views and growth, and in that sense text or image posts are worth just as much as video. We don't generally make the same big deal out of these as out of music or video, because it is more difficult for creators of small content to complain.
26
ericras 2 days ago 0 replies      
Like usual, currently Facebook doesn't think they have much of a "problem" at all. It only becomes a problem if someone calls them out on it - probably in a court.
27
superuser2 2 days ago 3 replies      
I don't understand how this is "Facebook's" piracy problem. It sounds like the British newspaper's piracy problem.
28
dimino 2 days ago 2 replies      
Why can't YouTube users just send a DMCA takedown notice to Facebook? If a video makes someone a decent amount of money, then just sue for damages as well.

I'm not sure I understand what the problem is.

29
sourthyme 2 days ago 3 replies      
It seems like it would be easy for Facebook to support takedowns since it could remove the video from feeds.
30
TerryCarlin 1 day ago 0 replies      
I think a better term for this would be "facebooting".
31
habitue 2 days ago 1 reply      
It seems like the major issue is that attribution was removed. The reposting stripped credit for the guy.

I think everyone can agree that putting a movie up on the pirate bay, and putting a movie up on the pirate bay after stripping out the credits and putting your own name on it are two different kinds of things.

32
001sky 2 days ago 1 reply      
This title seems (a bit) like blaming the stock exchange for insider trading.
33
scrame 2 days ago 1 reply      
Wow, Facebook is a bunch of assholes, screwing people over for a quick buck. Who would have thought...
34
sparkzilla 2 days ago 1 reply      
Why is this a surprise? Facebook was founded on theft -- starting from when Zuckerberg stole the business from the Winklevoss twins.
The Coder Who Encrypted Your Texts wsj.com
454 points by eas  1 day ago   178 comments top 29
1
moxie 1 day ago 18 replies      
I get a lot of credit for the stuff that Open Whisper Systems does, but it's not all me by a long shot. Trevor Perrin, Frederic Jacobs, Christine Corbett, Tyler Reinhard, Lilia Kai, Jake McGinty, and Rhodey Orbits are the crew that really made all this work happen.
2
sergiotapia 1 day ago 7 replies      
>Unfortunately, if Mr. Marlinspike's encryption scheme can be applied to imagery, then child-porn collectors thank him too.

And there we go, highest voted comment on the article: a strawman about child pornography. Think of the keeeds

3
abalone 1 day ago 1 reply      
I've had a ton of respect for Marlinspike ever since he published sslstrip, an incredibly simple defeat of HTTPS.[1]

It's a perfect demonstration of the fundamental insecurity of the web thus far. When an insecure communication mode (HTTP) is the default and perfectly ok most of the time, the browser has no idea when you are supposed to be operating on a secure channel (HTTPS) but have been tricked into downgrading by a man in the middle attack.

I can't prove it but I believe his work is a significant factor behind the shift towards deprecating HTTP in favor of HTTPS all the time. That is the only real solution.

[1] http://www.thoughtcrime.org/software/sslstrip/
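One concrete piece of that shift is HTTP Strict Transport Security (HSTS), which closes exactly the downgrade window sslstrip exploits, after the browser's first HTTPS visit. A minimal sketch with Express, assuming TLS is terminated in front of the app:

    var express = require('express');
    var app = express();

    // After seeing this header once over HTTPS, the browser refuses plain
    // HTTP to this host (and subdomains) for a year, defeating downgrades.
    app.use(function (req, res, next) {
      res.setHeader('Strict-Transport-Security',
                    'max-age=31536000; includeSubDomains');
      next();
    });

    app.get('/', function (req, res) { res.send('hello over HTTPS'); });
    app.listen(8443);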

4
Strilanc 21 hours ago 1 reply      
Moxie and Frederic and Christine and the rest definitely deserve a lot of credit.

Half of me is really happy every time I see Signal getting more popular. The other half is more like OH GOD THE STAKES ARE HIGHER NOW WHAT IF I MADE AN EXPLOITABLE MISTAKE BETTER RE-READ SOME CODE.

But seriously, you should read the code. It's there, open for anyone to audit after all. Maybe start somewhere random in the guts [1][2][3] and check for things like "ereh 2# roodkcab"?

1: https://github.com/WhisperSystems/Signal-iOS/blob/master/Sig...

2: https://github.com/WhisperSystems/Signal-iOS/blob/master/Sig...

3: https://github.com/WhisperSystems/Signal-iOS/blob/master/Sig...

5
nathan_long 13 hours ago 0 replies      
Interesting quotes:

> President Barack Obama called [protected-messaging apps] a problem.

but

> Encrypted messaging was viewed [by the U.S. State Department] as a way for dissidents to get around repressive regimes. With help from Mr. Schuler, Radio Free Asia's Open Technology Fund, which is funded by the government and has a relationship with the State Department, granted Mr. Marlinspike more than $1.3 million between 2013 and 2014, according to the fund's website.

6
hookshot 23 hours ago 0 replies      
The sailing documentary they briefly mention in the article is called Hold Fast. If there are any HN readers that are into sailing I highly recommend it.

You can watch it here: https://vimeo.com/15351476

7
glogla 18 hours ago 3 replies      
I still can't get over Moxie wanting Google and Apple and Microsoft to be gatekeepers of what you can and can't do with your device and calling sideloading "that old broken desktop security model".

I admire your work Moxie, but sadly we stand on different sides of war on general purpose computing. I can't help but be saddened that "the other side" got someone so talented and dedicated.

8
nickpsecurity 22 hours ago 2 replies      
Interesting article and interesting guy. I like the work he and his team do on these apps. Unfortunately, they typically run on the type of endpoints that everyone from script kiddies with money to High Strength Attackers can hit, usually alongside apps not as strong as theirs, on TCB's that can at best be described as insecure foundations.

I recommend against such apps and platforms for anything other than stopping the riff raff. That's what I use them for. I pointed out the difference between secure code and secure systems in this [1] writeup. Shared much of my framework for analyzing or designing-in security in the process. The TCB of most solutions today is ridiculous: people are building on foundations of quicksand. There's only a few exceptions I've seen such as GenodeOS (architecturally) or Markus Ottela's Tinfoil Chat. Markus has been unusually alert to our concerns and updated his app appropriately, even for covert-channel suppression. Quick question: which of the many crypto apps on the market can deliver a covert-channel analysis to you at the app and system level? Answer: few to none, despite its importance over decades, with a rediscovery in the past 5+ years in mainstream security.

Strong security is hard. Moxie seems awesome as a coder and good to great in both crypto and OPSEC. Thing is, his offerings break the decades-old rule of having a strong TCB. Just like most of the rest. It's why they're usually bypassed or broken by strong attackers. Gotta do the whole thing with concern for each aspect of the system. TFC is a clever cheat on that even more than my MILS scheme with a KVM and a highly-assured guard. If you don't cheat around it, you better do it right or your users will suffer the consequences. Those trying to contain vulnerabilities of mainstream OS's and components with any success are expending literally hundreds of thousands of dollars' worth of labor per year. It's why I push for clean-slate hardware and software platforms like DARPA and NSF have been funding recently (eg SAFE, CHERI processors). Alternatives using COTS tech are pretty complex and most users will probably fail to secure them, to be honest.

[1] https://www.schneier.com/blog/archives/2013/01/essay_on_fbi-...

9
briandoll 11 hours ago 0 replies      
Moxie gave a great high-level talk on cryptography and Open Whisper Systems at Webstock this year too, for anyone that's interested: https://vimeo.com/124887048
10
dates 14 hours ago 0 replies      
Sweet article! The movie about Moxie fixing up and sailing a boat was actually super fun to watch! I'm feeling grateful the comments section hasn't turned into a massive argument over TextSecure dropping SMS support like the whisper systems mailing list alwayssss is...
11
nly 1 day ago 1 reply      
Didn't TextSecure stop encrypting SMS a while back? If you lose data connectivity you're sending in the clear, right?
12
yuhong 1 day ago 1 reply      
I am thinking about why encryption was only used by the military in the first place, back when the infamous Bell monopoly on phone service existed. I think cracking encryption was one of the reasons computers were created in the first place, right?
13
JoachimSchipper 14 hours ago 0 replies      
Note that Open Whisper Systems is hiring: https://news.ycombinator.com/item?id=9813309.
14
chinathrow 21 hours ago 1 reply      
So it looks like I might have understood something wrong regarding TextSecure.

Installed it, used it, uninstalled it.

Years later, a contact tells me that he "saw me in TextSecure" and sent me a message.

Obviously, I didn't get that message.

Why - o why - was/is TextSecure pretending to not know about metadata when it does? Why could that happen? Moxie?

15
justcommenting 1 day ago 0 replies      
Kudos to moxie and team for their work and their example of positively enabling others to speak freely, for inspiring others to build better alternatives, and for being the change they wish to see in the world.

Also wanted to share one of the most provocative moxie-isms I've heard in recent years from him, in reference to WL:

"What about the truth has helped you?"

16
lisper 1 day ago 2 replies      
Not that I really want to steal any of Moxie's thunder, but if you're reading this comment thread you might also be interested in SC4:

https://github.com/Spark-Innovations/SC4

Strong encryption that runs in a browser. Recently completed its first security audit.

17
teaneedz 13 hours ago 0 replies      
It's awesome seeing so many privacy and secure messaging apps spring up. The tough part is getting people to use them. I've been using Wickr (I know the black box arguments, but they have a reasonable bounty in place) and it doesn't require number, contact info or addy. The phone call feature of Signal sounds interesting so I'll check it out.
18
ianopolous 19 hours ago 1 reply      
I was a great fan of TextSecure until a few days ago. I had encouraged a bunch of friends to install it. One of them couldn't get rid of a notification from TextSecure about an unread message despite there being none, and eventually they uninstalled it. Then, for the next 4 months TextSecure blackholed every message I sent this friend without warning either them or me. They never received a single message from me. After discovering that, I uninstalled it.
19
mahyarm 20 hours ago 0 replies      
Address book based social networks are nice to get a bit of bootstrapping, but become pretty bad when you want to add someone as a TextSecure contact, or you want to run a version without using SMS gateways. It gets pretty complicated pretty fast compared to 'what is your username'.

I hope TextSecure gets usernames one day that you can associate with phone numbers & emails.

The web-browser version is a good development, it shows that desktop and multi-device versions are on the way.

20
eloy 15 hours ago 1 reply      
I already knew this would be an article about Moxie before clicking the link.
21
PhantomGremlin 1 day ago 0 replies      
Great article, not paywalled.

Here's the thing that Moxie recognizes, that many other programmers don't (in any domain):

 He says he wants to build simple, "frictionless" apps, adopting a Silicon Valley buzzword for easy to use.

22
iamthebest 1 day ago 2 replies      
I tried installing TextSecure recently but it wouldn't work without the Google Play services.

I hadn't heard of their new app Signal. Has anyone tried it? I'm really interested in hearing anyone's experience using it.

BTW, I ended up installing Telegram ...and it may be mere coincidence, but I started noticing some weird things happening that I've never seen before. I connect to the internet exclusively via tethering to my phone, and while tethered I started seeing messages in Firefox on my desktop machine giving warnings that were something like "Could not establish secure connection because the server supports a higher version of TLS". My guess is that it was some sort of MITM attack... and I was possibly targeted due to the traffic to Telegram servers.

One other thing regarding Telegram: I really don't like that it reads my contact list and uploads it to their server to check if my contacts have a Telegram account. I've blocked the permission for now.

23
patcon 1 day ago 2 replies      
Thank god this man exists.
24
btczeus 5 hours ago 0 replies      
Where's the authentication process in TextSecure? Totally MITM'able. Not secure at all.
25
em3rgent0rdr 18 hours ago 0 replies      
Obama's "problem" is a "solution".
26
btczeus 1 day ago 3 replies      
There is no evidence of encryption on WhatsApp; the source code is closed, so you can never be safe.
27
tedunangst 1 day ago 0 replies      
Four comments in as many minutes. You're on a roll!
28
btczeus 1 day ago 4 replies      
This guy is not part of the solution. He is part of the problem. https://f-droid.org/posts/security-notice-textsecure/
29
mayneack 1 day ago 0 replies      
Whisper the app is unrelated to Whisper Systems.
Revised and much faster, run your own high-end cloud gaming service on EC2 lg.io
463 points by SG-  12 hours ago   184 comments top 36
1
halotrope 10 hours ago 1 reply      
I have followed the original instructions and after a couple of days of tinkering around it is now my go-to service for gaming. I can play AAA titles on my Mac without having them consume precious SSD space, nor does the computer get anywhere near as hot as when I was running them in Boot Camp. The cost is quite affordable when making your own AMI with Steam and everything preinstalled. Since booting the machine and setting everything up takes around 10 minutes, I also don't get tempted to play when I would have to work. It is a much more conscious decision. I only had to get an ethernet cable because wifi was too flaky. But now it is very solid with a 50M DSL line and an average ping of 60ms to Ireland.
2
Wilya 10 hours ago 2 replies      
The guide advocates an EC2 security group that allows everything, plus disabling the Windows firewall. That's quite insecure, and unnecessary.

It's probably better, and not more work, to create a security group that only allows:

* UDP on port 1194 (OpenVPN server)
* TCP on port 3389 (Remote Desktop)
* ICMP (for ping)
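For what it's worth, a boto3 sketch of that security group (the group name, region, and wide-open 0.0.0.0/0 ranges are placeholder choices; you would likely restrict the CIDRs to your own IP):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    sg = ec2.create_security_group(
        GroupName="cloud-gaming",
        Description="OpenVPN, RDP and ICMP only",
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[
            {"IpProtocol": "udp", "FromPort": 1194, "ToPort": 1194,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},  # OpenVPN
            {"IpProtocol": "tcp", "FromPort": 3389, "ToPort": 3389,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},  # Remote Desktop
            {"IpProtocol": "icmp", "FromPort": -1, "ToPort": -1,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},  # ping
        ],
    )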

3
z3t4 9 hours ago 4 replies      
We might just have seen the future of PC gaming DRM: you pay per hour instead of a one-off payment.

There's one problem though, and it's latency; even 50ms will feel very laggy. We need more decentralized data centers! With a data center in each city you could get latency down to less than a millisecond.

I think the next digital revolution will be low latency, and a flora of new services coming with it.

4
TheGRS 11 hours ago 1 reply      
I really appreciate a guide that takes you through the process, giving me a chance to understand all of the steps, before sharing the pre-packaged solution at the bottom.

I was a little surprised by the cost as well. At the rate that I'm gaming these days it would be like $10-20 per month, that's pretty damn good (price of games not included obviously).

5
feld 11 hours ago 1 reply      
This is impressive, but you should probably not use his AMI unless you use your own uniquely generated OpenVPN certificates/keys
6
lectrick 10 hours ago 0 replies      
Somewhere, someone at Valve is noticing this and pitching it around as a new service idea :O
7
zachlatta 11 hours ago 0 replies      
Big fan of this approach.

I wrote a simple script (https://github.com/zachlatta/dotfiles/blob/master/local/bin/...) to really easily spin up and down the machine I set up for game streaming.
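The spin-up half of such a script looks roughly like this in boto3 (the AMI id, key pair name, bid price and security group are placeholders for whatever your own gaming image uses, not values from the linked script):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    resp = ec2.request_spot_instances(
        SpotPrice="0.50",                   # max bid in $/hour
        InstanceCount=1,
        LaunchSpecification={
            "ImageId": "ami-12345678",      # hypothetical prepared gaming AMI
            "InstanceType": "g2.2xlarge",
            "KeyName": "my-key",            # hypothetical key pair
            "SecurityGroups": ["cloud-gaming"],
        },
    )
    print(resp["SpotInstanceRequests"][0]["SpotInstanceRequestId"])

Spinning down is the same idea in reverse: cancel the spot request and terminate the instance.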

8
dsmithatx 3 hours ago 1 reply      
I read the entire thing and I think it is cool that nowadays this is possible. However, I got to thinking: at $.50+ per hour, if I play 6 hours a night, that's $3.00 a day on weeknights. If I only spend $8 on weekends, it adds up to $23 per week. This would equate to $1196 per year at (23*52). Basically I'd much rather invest in a gaming rig. My CPU and GPU haven't required upgrades for years now. At least if I invest in a gaming rig I actually have a gaming rig.

While I respect and find the technology fascinating and cool, it feels like leasing a car I'll never own versus owning one and even buying a new one every few years at a much lower price. For those who game a few hours a week, however, I can see this being a cheap alternative to a gaming rig.
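For what it's worth, the weekly figure in the first paragraph checks out under the stated assumptions ($0.50/hr, 6 hours on each of 5 weeknights, $8 of play across the weekend):

    weeknights = 0.50 * 6 * 5        # $15.00
    weekend = 8.00
    per_week = weeknights + weekend
    print(per_week, per_week * 52)   # 23.0 1196.0 -> ~$1,196/year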

9
lewisl9029 1 hour ago 0 replies      
Has anyone tried SoftEther VPN in place of OpenVPN for something like this?

http://www.softether.org/

I've been using it to set up a site-to-site VPN between my home network and my Azure VMs, and in my experience the performance has been quite good. They also claim to have higher max throughput than OpenVPN, but I haven't yet verified those claims myself.

10
rl3 7 hours ago 2 replies      
One of the more exciting possibilities afforded by DIY cloud game streaming is the ability to interactively share single-player gaming experiences with people, in games that otherwise do not support co-op. Games like FTL and XCOM: Enemy Unknown come to mind.

However, one thing I would be extremely wary of is running your Steam account from AWS or any other server environment. The last thing you want is to get flagged as a bot or VPN abuser and banned; Valve customer support isn't exactly known for being particularly understanding or responsive. Personally I would just load up a throwaway Steam account with a few games and use that.

11
Arelius 9 hours ago 1 reply      
Something that I'm a bit worried about: I used to try to run performance-sensitive game servers on a Xen-based virtual machine, and no matter how many resources I tried to dedicate to the virtual machine, the Xen scheduler would give hitchy performance, sporadically introducing delays large enough to make playing the game a little painful.

Does anybody know much about the EC2 hypervisor schedulers, or, in the case of large instances, does it even run with a serious hypervisor?

12
glogla 9 hours ago 0 replies      
It seems that the reason why this makes sense is spot instance pricing - it wouldn't be economical with a normal instance.

But don't they pull the instance out from under you if someone outbids you? Does anyone have experience with that?

And one more question - how is the performance? The OP shows a screenshot of a game running at 1280*800, but that might be because of the MacBook resolution. Can it do full HD or 4K?

13
sunsu 2 hours ago 0 replies      
Does Google Compute Engine have the instances needed for this as well? Their datacenter is centrally located, so ping times are much faster for me than to the east or west coast.
14
pdeva1 6 hours ago 1 reply      
I tried using the prebuilt AMI. However, after installing and configuring Tunnelblick on my Mac, when I connect to the VPN I get: "This computer's apparent public IP address was not different after connecting to <<hostname>>". Now Steam cannot detect the Windows server. What am I doing wrong?
15
kayoone 6 hours ago 1 reply      
We really need a focus on low latency instead of bandwidth, but I guess that's even worse in terms of marketing than upload bandwidth. It's frustrating to know that <10ms latencies are easily achievable with current technology but ISPs just don't care. Lower latency even improves web browsing a ton, plus voice/video calls and basically any kind of realtime interaction. Then again, with 4K gaming becoming more popular, even today's bandwidth will not be enough.
16
philtar 10 hours ago 11 replies      
Anyone wanna team up and build something like this?
17
jordanlev 5 hours ago 2 replies      
So you can install MS Windows on an EC2 instance without having to pay for a license? How does that work?
18
TD-Linux 8 hours ago 0 replies      
I wonder how the hardware encoders and decoders compare to software implementations. They of course use less CPU, but also generally tend to compress worse and have higher latencies than software implementations. Is nVidia's hardware specially optimized for this use case?
19
rroriz 11 hours ago 1 reply      
Amazing idea! If this could be set up for multiplayer games without much trouble (lag, cheating, licenses), this could be The Next Big Thing :)
20
thedaemon 10 hours ago 1 reply      
Has anyone tried this with Nvidia GeForce Experience and a Shield TV? I might try this instead of upgrading my aging desktop.
21
annon 9 hours ago 0 replies      
This would work fantastic with Steam Link that they have coming out: http://store.steampowered.com/universe/link

It uses the same in-home streaming feature of Steam.

22
ortuna 11 hours ago 2 replies      
I wonder why this works better than the Steam In-Home Streaming. I could never get it to be close to 60fps. The video suggests 60fps.
23
dharma1 6 hours ago 1 reply      
I did this about a year ago to run 3dsmax/vray on an EC2 GPU instance via RDP. Worked ok-ish, but I found it quite clunky to mess about with the AWS interface to start and turn off an instance every time I wanted to use it.

Has anyone managed to script something where you just press a button/run a local script and it does all the work, including saving your image to EBS before you turn the thing off and stop paying for the instance?

24
Procrastes 9 hours ago 0 replies      
This can work very well for some applications. I have a startup doing something similar to this with the Second Life Viewer with good results. The most painful parts turn out to be in the plumbing around state and change management as you might expect.
25
WA 11 hours ago 4 replies      
Anyone tried to play competitive multiplayer games like CS or Heroes of the Storm with such a setup? I can imagine that streaming adds a bit of latency, which isn't a problem in singleplayer games, but could add too much lag for fast-paced multiplayer games. Any experiences?
26
xhrpost 8 hours ago 1 reply      
This is crazy awesome. Since it uses h264, I wonder how well a Raspberry Pi would work as a client machine. Heck, you might be able to do a whole LAN party just with PI's.
27
programminggeek 11 hours ago 4 replies      
Am I ridiculous for wanting this to be a sort of on demand service with a small markup?
28
dogas 10 hours ago 5 replies      
What is the latency of a setup like this? Could I play an intensive FPS and be competitive?
29
nickpsecurity 9 hours ago 0 replies      
Cool experiment. I thought about trying this for streaming video or music to cheap devices in various places as well. For now, I just use my smartphone and WiFi as it's cheaper. :)
30
skellington 10 hours ago 2 replies      
Just curious how he got $0.11/hr for a "spot" instance of g2.2xlarge? Amazon's "on demand" pricing of that config w/ Windows on their website is $0.767/hr.
31
zubspace 10 hours ago 4 replies      
Does someone have experience hosting a dedicated server on EC2 24/7? How's the performance and is it cost effective? Or is it preferable to host on digital ocean/linode?
32
bhear 11 hours ago 6 replies      
Has anyone thought of selling preconfigured cloud gaming services?
33
bsharitt 10 hours ago 0 replies      
I'm going to set this up to see how it compares to my Windows Steam on Wine that sits next to my native Steam on Linux with its smaller library.
34
spydum 8 hours ago 0 replies      
I hate to admit it: there are a lot of places I thought cloud services could be leveraged, and this just wasn't one of them (keep in mind, I say useful, not necessarily best fit).

This is such a cool idea, makes me realize what other creative solutions are just lurking, ready to slap me across the face.

35
mullen 10 hours ago 1 reply      
This is actually a cost savings. Windows games are much cheaper than their OSX versions and they are available much sooner on Windows than OSX.
36
mo1ok 8 hours ago 1 reply      
This is really important as virtual reality begins to take center stage, but most people don't have the rigs to run it.
Amazon API Gateway Build and Run Scalable Application Backends amazon.com
399 points by strzalek  1 day ago   181 comments top 36
1
fosk 1 day ago 4 replies      
This is very interesting, and I am surprised it didn't happen a long time ago. The Lambda function integration opens up lots of new ideas when building API backends ready to be consumed by client apps like, for example, a client-side Javascript application.

On the other side it seems like other extra functionality is limited and very AWS-oriented. If you are looking for an open source gateway that can be expanded with extra functionality, and potentially sit on top of the AWS -> Lambda integration, take a look at https://github.com/Mashape/kong (I am a core maintainer too, so feel free to ask me any question). Kong is accepting plugins from the community (http://getkong.org/plugins), which can be used to replace or extend functionality beyond any other gateway, including AWS Gateway.
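To make the Lambda integration concrete, here is a minimal sketch of the kind of handler you would put behind an API Gateway endpoint (the event fields and response shape assume a proxy-style integration that passes the raw request through, rather than a custom mapping template):

    import json

    def lambda_handler(event, context):
        # Echo a query parameter back as JSON.
        params = event.get("queryStringParameters") or {}
        name = params.get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"hello": name}),
        }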

2
andybak 1 day ago 7 replies      
I've not tried very hard but I'm not sure I get it.

I've got an API already running. What does this buy me?

Caching? I can see some benefit there if it's read heavy.

Auth and access control? Feels like that's part of my app code but maybe there's a benefit I'm missing

A lot of the other benefits also feel like it would be hard to cleanly separate them from my app code.

What's the elevator pitch and who's the target market?

3
jamiesonbecker 1 day ago 2 replies      
AMAZON DEPRECATES EC2

November 3, 2017, SEATTLE

At AWS Re:Invent in Las Vegas today, Amazon Web Services announced the deprecation of Elastic Compute Cloud as it shifts toward lighter-weight, more horizontally scalable services. Amazon announced that it was giving customers the opportunity to migrate toward what it claims are lower cost "containers" and "Lambda processes".

"We believe that the cloud has shifted and customers are demanding more flexibility," tweeted Jeff Barr, AWS Spokesperson. "Customers don't want to deal with the complexities of security and scaling on-instance environments, and are willing to sacrifice controls and cost management in order to take advantage of the great scale we offer in the largest public cloud."

Barr went on to add that since their acqui-hire of Heroku last year, AWS has decided that the future of the cloud was in Platform as a Service (PaaS) and is now turning its focus to user-centric SSH key and user management services like https://Userify.com.

Amazon (AMZN) stock was up $0.02 on the latest announcements.

4
cdnsteve 1 day ago 2 replies      
There goes the rest of my workday :D

I was looking for something like this. Lambda functions are amazing but restricted, because they weren't easily consumable externally. This is the key.

5
vlad 1 day ago 2 replies      
I'm working on ApiEditor.com, here's a screenshot:

http://i.imgur.com/wSEKeVb.png

I originally built it for the Swagger Spec 15 months ago, as the first visual Swagger Editor. Let me know if you guys are interested in using something like this.

Notice it has the code side-by-side. Also, it scrolls to and highlights the section of the spec you're editing. Notice the dropdowns are aware of custom models (e.g. "User", "Order") in the documents and suggest them to make it easy to use.

6
ecopoesis 1 day ago 4 replies      
This looks incredibly slick. Speaking as someone who is implementing all the ceremony (security, logging, etc) around a new API right now I would use this in a heartbeat.

Of course, in a couple years, assuming success, the AWS lockin will suck. But given the odds of success I think I'd take the chance.

7
pea 1 day ago 0 replies      
This looks really interesting -- I think the abstraction of the server away from a bunch of development cases is gonna happen pretty quickly.

We're hacking on something really similar to this at https://stackhut.com, where we are building a platform for Microservices which are powered by Docker, can be run wherever through HTTP/JsonRPC, and are really simple to build/deploy. Think "Microservices as a service"... ;)

To give you an example, here is a task which converts PDFs to images: http://stackhut.com/#/services/pdf-tools, and here is the code powering it https://github.com/StackHut/pdf-tools which we `stackhut deploy`d.

We're about to open up the platform and would love to hear what, if you had a magic wand, you'd wish upon the product.

8
jstoiko 1 day ago 2 replies      
I am surprised that Amazon did not add support for more API description formats like RAML or API Blueprint. It is such a key feature. If I wanted to use this service in front of existing APIs, even only one API, I would not want to go through the work of having to redefine all my endpoints through a web form!

Shameless plug: after working on several API projects, I have been researching ways to not have to "code" over and over again what goes into creating endpoints, it became so repetitive. Lately, I turned to RAML (Yaml for REST) and, with 4 other developers, we created an opensource project called Ramses. It creates a fully functional API from a RAML file. It is a bit opinionated but having to "just" edit a Yaml file when building a new API simplified my life. As a bonus, I also get a documentation and a javascript client generated from the same file.

EDIT: forgot the url https://github.com/brandicted/ramses and url of a quick start tutorial: https://realpython.com/blog/python/create-a-rest-api-in-minu...

9
estefan 1 day ago 4 replies      
Please can people give examples of what they're using lambda for. Everything I've seen has been really basic (like image scaling), but most things I think of require a database.
10
tootie 1 day ago 4 replies      
For heavy users of AWS services (not just EC2, but fancy SaaS/PaaS stuff) do you ever regret being locked in to a hosting provider? Does it restrict your ability to develop locally? Have you been bitten by problems that you can't resolve because you don't own the product? Or do you pretty much just love it?
11
traek 1 day ago 1 reply      
Google has a similar offering for apps running on App Engine, called Cloud Endpoints[1].

[1] https://cloud.google.com/endpoints/

12
mpdehaan2 1 day ago 3 replies      
I get the web UI for understanding it, but this is often not how people want to work...

What tools are there to allow me to keep my code/API layouts in source control when uploading to this?

I'm sure they exist somewhere, so mostly curious about pointers. (A sample node project using this would probably go a long way)

13
joeyspn 1 day ago 1 reply      
Seems like a great product to quickly get started with an mBaaS in a powerful cloud like AWS. The concept looks really similar to StrongLoop's loopback [0], with a big difference: vendor lock-in. I like the openness that StrongLoop is bringing on this front... IMO the best solution is one that allows you to move your containerised API from one cloud to another.

That being said, having this as an option in AWS is pretty cool and potentially time-saving. I'll probably give it a shot soon.

[0] https://strongloop.com/node-js/api-platform/

14
brightball 1 day ago 0 replies      
One of the other benefits of using Cloudfront based endpoints is that your app servers behind it can avoid the TCP handshakes that add some latency. Amazon did an interesting presentation at re:Invent on the performance improvement from using Cloudfront ahead of dynamic requests that was eye opening.
15
clay_to_n 1 day ago 0 replies      
For those interested, the creators of Sails.js (a node-encapsulating framework) have created a sorta similar product called Treeline[1].

[1] https://treeline.io/

16
acyacy 1 day ago 2 replies      
I wish Lambda would allow listening to a socket [it helps binaries communicate with node]. This would move our team to use this without any further doubt.
17
jonahx 1 day ago 0 replies      
Would someone knowledgeable mind answering a few questions:

1. What are the differences between this + AWS Lambda and Parse? Is there additional functionality or freedom with this route? Is it cheaper?

2. What kind of savings could one expect hosting an API with this vs a Heroku standard dyno?

18
jakozaur 1 day ago 2 replies      
Isn't it yet another case of AWS doing a cheap replacement of an existing company: https://apigee.com

I don't have experience with their product, but on the surface they look similar.

19
zenlambda 1 day ago 1 reply      
I just tried this out with a Lambda function; I was wondering why you can't serve HTML with this (yes, I know this product is aimed at enterprise REST API stuff... one can try at least).

Well, it seems that authentication for the client is mandatory. This makes it unsuitable for rendering markup and serving it directly to clients.

Can anyone confirm this can only serve JSON content? I suspect were anonymous requests allowed, I'd see my markup rendered as a JSON string.

20
serverholic 1 day ago 1 reply      
How do you develop for these kinds of services? It seems like you'd need to setup a whole development cluster instead of developing locally.
21
agentgt 1 day ago 1 reply      
I can't help but notice that this looks more like an enterprise integration tool (think MuleSoft) than API management (think Apigee... or I think that is what they do).

Speaking somewhat from experience (webmethods, caml, spring-integration and various other enterprise integration tools) they always want you to use "their" DSL which is often not even stored on the developers filesystem (ie webmethods... you submit the code to the "broker" or "rules engine" or "router"... lots of words for thing that does the work). Which leads to very awkward development.

Consequently I wonder if they will have git integration because writing code even snippets in a web form no matter how nice gets old fast.

22
avilay 1 day ago 0 replies      
I had built a command line tool for a similar purpose to generate REST APIs running on the MEAN stack. It creates "user" and "account" resources by default with user and admin authn/authz built in. It then deploys to heroku - creating a mongodb instance and a web dyno. Putting this out here in case anybody finds it useful.

https://bitbucket.org/avilay/api-labs/

23
culo 1 day ago 1 reply      
Smart move for AWS, but they are not innovating anything here, just following. Late.

Companies like Apigee/Mashape/3scale/Mulesoft have been doing cloud API management in various forms since 2008. Even Microsoft Azure has had an API management offering for two years.

Nowadays all those API gateway features are commodities and it doesn't make sense to pay for them anymore. Indeed, open source projects such as Kong [1] are getting tremendous traction. The same thing happened in search with all the cloud solutions: then ElasticSearch came out and it was game over.

[1] https://github.com/mashape/kong

24
jwatte 1 day ago 0 replies      
Service as a Service - we've reached Serviceception!
25
dangrossman 1 day ago 2 replies      
API is an acronym, and the product is called "Amazon API Gateway". This submission title is bugging me more than it should. Sorry for the meta-comment.

Edit: The submission title has been changed since this comment was written.

26
ramon 1 day ago 0 replies      
27
vlad 1 day ago 0 replies      
I posted a 10 minute video overview for those who'd rather listen than read:

http://ratemyapp.com/video/VIqZFaU1PhQ/Product-Preview-Amazo...

28
intrasight 1 day ago 2 replies      
They say "If you already utilize OAuth tokens or any other authorization mechanism, you can easily setup API Gateway not to require signed API calls and simply forward the token headers to your backend for verification." It would be nice if AWS would stand up an authentication service that could handle oauth. Or do they already have such a thing?
29
adamkittelson 1 day ago 0 replies      
It'd be cool if they'd use this to wrap their own XML-only APIs to provide JSON wrappers.
30
anoncoder 1 day ago 0 replies      
Do some math. It's expensive. 100 r/s for one month is about $900. Plus your bandwidth and EC2 charges (unless you're using Lambda). For simple Lambda functions, you can get 100 r/s on a micro for $9.
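That figure is consistent with the launch price of $3.50 per million API calls; a quick check, assuming that price and ignoring data transfer:

    calls = 100 * 3600 * 24 * 30          # 100 r/s for 30 days = 259.2M calls
    print(round(calls / 1e6 * 3.50, 2))   # 907.2 -> "about $900" per month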
31
dougcorrea 1 day ago 1 reply      
32
kordless 1 day ago 0 replies      
Given the current market direction with containerization and decentralization, I think using something that is vendor specific is probably a bad idea.
33
machbio 1 day ago 0 replies      
I was trying this API Gateway out; unfortunately, there is no way to delete the API created.
34
kpennell 1 day ago 1 reply      
For someone who doesn't understand this that well, is this similar to firebase?
35
fiatjaf 1 day ago 0 replies      
How do you test these things?
36
graycat 1 day ago 4 replies      
Okay, at Google I found that IAM abbreviates Amazon's Identity and Access Management.

So, the OP has an undefined three letter acronym.

Suspicions confirmed: The OP is an example of poor technical writing.

Why do I care? I could be interested in Amazon Web Services (AWS) for my startup. But so far by a very wide margin what has been the worst problem in my startup? And the candidates are:

(1) Getting a clear statement of the business idea.

(2) Inventing the crucial, defensible, core technology.

(3) Learning the necessary programming languages.

(4) Installing software.

(5) System backup and restore.

(6) Making sense out of documentation about computer software.

May I have the envelope please (drum roll)? And the winner is,

(6) Making sense out of documentation about computer software.

And the judges have decided that uniquely in the history of this competition, this selection deserves the Grand Prize, never before given and to be retired on this first award, for the widest margin of victory ever.

No joke, guys: The poorly written documentation, stupid words for really simple ideas, has cost me literally years of my effort. No joke. Years. No exaggeration. Did I mention years?

At this point, "I'm reticent. Yes, I'm reticent." Maybe Amazon has the greatest stuff since the discovery and explanation of the 3 degree K background radiation, supersonic flight, atomic power, the microbe theory of disease, electric power, mechanized agriculture, and sex, but if Amazon can't do a good job, and now I insist on a very good job, documenting their work, which is to be yet another layer of documentation between me and some microprocessors, then, no, no thanks, no way, not a chance, not even zip, zilch, zero.

What might it take me to cut through bad Amazon documentation of AWS: hours, days, weeks, months, years, then from time to time, more hours, days, or weeks, and then as AWS evolves, more such time? What would I need to keep my startup progress on track, 500 hours a day? More? All just to cut through badly written documentation for simple ideas, and worse documentation for complicated ideas?

First test: Any cases of undefined terms or acronyms?

Result: About three such cases, and out'a here. Gone. Done. Kaput. I don't know what AWS has, but I don't need it.

Sorry, AWS, to get me and my startup as a user/customer, you have to up your game by several notches. The first thing I will look at is the quality of the technical writing in your documentation. And, I have some benchmarks for comparison from J. von Neumann, P. Halmos, I. Herstein, W. Rudin, L. Breiman, J. Neveu, D. Knuth.

Amazon, for now, with me, from your example of writing here, you lose. Don't want it. Can't use it. Not even for free. I'm not going to invest my time and effort trying to cut through your poor technical writing. And, the next time I look at anything from AWS, the first undefined term and I'm out'a here again.

Yes, I'm hyper sensitive about bad technical writing -- couldn't be more sensitive if my fingers and arms were burned off. Whenever possible, I will be avoiding any chance of encountering bad technical writing, ever again. Period. Clear enough?

More generally, my view is that bad technical writing is the worst bottleneck for progress in computing. Come on, AWS, up your game.

I don't want to run a server farm and would like you to do that for me, but neither do I want to cut through more bad technical writing -- for that work, my project is already years over budget.

Is Kickstarter covering up a scam? An open letter to CEO Yancey Strickler joanielemercier.com
342 points by nallerooth  19 hours ago   71 comments top 23
1
ChrisGranger 19 hours ago 4 replies      
I was surprised to learn that the familiar "Kickstarter Staff Pick" badge can be used by any project and is sometimes an outright lie. That's ridiculously deceptive.
2
aikah 15 hours ago 0 replies      
For those who don't know Joanie Lemercier, he is a famous artist who has been working with unusual displays, projections and holograms for the last 15 years. I think he should be taken very seriously by anybody who is willing to pay for Holus.

493 backers have paid $600 on average for something they might not expect. If you have a friend that backed that project, at least make him read the article. If he still wants to spend money on this, so be it.

3
braythwayt 15 hours ago 0 replies      
I think what we're seeing here is that well-funded projects get a "most favoured nation" pass to flout the already weak oversight that Kickstarter exercises.

A true indie shop running a crowd-funding campaign has very little money to buy marketing and PR, so they're essentially consuming Kickstarter's brand and traffic.

Whereas, some of these new-breed well-heeled campaigns spend more money on marketing than the campaign is designed to raise. These campaigns are not consuming Kickstarter's traffic and brand, they're contributing to it.

And whatever we might think of using words like "hologram" for this magic pixie-dust product, the odds are reasonable that something will ship to the backers, so Kickstarter's reputation is less likely to be tarnished by a VC-backed campaign than by a truly independent project where they don't have the budget to hire an experienced team.

So from Kickstarter's perspective, well-funded crowd-funding campaigns that are actually marketing stunts are very good business, and I predict they are going to shift more and more of their emphasis this way, just as consumer-facing companies often gradually shift to becoming enterprise-facing companies.

As a prospective backer, this is far less attractive to me, so I am not advocating for this shift. But I do believe it is happening, and I believe that this explains why Kickstarter seem to have done little more than send an email saying, "Hey, bro, cut out the CGI, this gadfly is turning into an embarrassment for both of us. And good job, please bring us more campaigns, we love doing business with you."

p.s. I'm curious as hell about their pricing model. I wonder if VCs can secretly negotiate lower fees than indies.

4
raesene4 19 hours ago 3 replies      
I'd suggest that unfortunately the incentive for sites like Kickstarter/Indiegogo/GoFundMe etc. is to get as many projects successfully funded as possible, and that predicates against rigorous scrutiny of them and exclusion of dubious projects.

There have now been quite a few examples of crowd-funded projects that just haven't delivered and never will (Eyez was a good early example https://www.kickstarter.com/projects/zioneyez/eyeztm-by-zion... $343,000 taken in with nothing to show for it)

5
pbhjpbhj 14 hours ago 0 replies      
The device proper appears to have some major flaws - https://vimeo.com/133052667.

It is deceptive: at 3m in that video the co-founder says "we try to bring this holographic experience to families" - weasel words for sure. The word "experience" is there in anticipation of a future claim of fraud. He continues, saying "most of the holographic displays and similar device, products, are focussed on [...]".

At 4:02 you can see that the system uses 4 separate images projected and is not 3D holography. In views on an angle you can see the image doesn't wrap (eg 2:51).

Flaws to my mind include the massive opaque bezel and the poor viewing position - all viewers need to be the same height. In the other use videos you can see people craning to sit at the same height as the device - seems adults and children or different height people of any age would struggle to use this together. Also you can't view from above.

6
melling 19 hours ago 1 reply      
This story had 7 points and was #3 on HN 7 minutes ago, then it dropped 20 spots. Are people flagging it?

Anyway, the Kickstarter project looks impressive. There's an entire company built around it? If it's a fraud, it ends at 8am EST.

7
jobigoud 13 hours ago 1 reply      
I'm still very confused about the staff pick badge. Is there anyone here who ran a campaign that could confirm?

There are numerous articles dedicated to maximize one's chances to be a staff pick, and numerous advantages like being in the newsletter, promoted on their social network accounts, etc.

That would mean that there really is a staff pick category with hand-picked projects, but that the badge itself can be added by any campaign. Is this really true? I need more evidence, it doesn't seem reasonable.

edit: found some more info about it from a campaigner http://stonemaiergames.com/kickstarter-lesson-140-the-kickst.... It does seem that being a staff pick means that one staff member flagged the project, and the badge icon usage is largely unrelated.

8
snarfy 14 hours ago 0 replies      
9
bhouston 18 hours ago 2 replies      
To be clear, this device cost a couple million in VC to create. This Kickstarter is a market launch activity, not raising useful money for this project.
10
empressplay 18 hours ago 2 replies      
It's a little strange that they share their offices with an investment bank? https://www.bnymellon.com/ca/en/index.jsp#ir/offices
11
JacobEdelman 14 hours ago 0 replies      
I wouldn't have much hope that it's going to be addressed. I've been following another Kickstarter scam, https://www.kickstarter.com/projects/181239886/jaesa/descrip..., promising legitimate AI far better than anything that actually exists, which appears to be little more than ELIZA plus ads. It's had over 50k USD in funding and is 9 days away from its one-year funding anniversary. Obviously, no real progress towards an AI has been made, but the group has managed to keep the scam going, recently having claimed to be acquired by one of their backers who is in talks with multiple investors for funding "to the level of a global competitor".

Another example was the Goblins Comics board game, which featured the web comic writer teaming up with a supposed board game maker who then ran away with the money, scamming both the backers and the comic writer. See more in the second-to-last post here on the writer's continued attempts to help his backers despite being scammed by the board game maker: http://www.goblinscomic.org/the-blog/

12
typicalrunt 11 hours ago 0 replies      
I got the chance to play with Holus last month during one of H+'s open houses. The unit itself is pretty neat and the holograms (yes, they are just Pepper's Ghost) are of good resolution. The use of iPhones and iPads to control the perspective is really cool, but still glitchy.

However, my friends and I found no practical non-game application of the technology yet; maybe this is the case with anything new, where it answers a question nobody has asked yet.

I do agree with the OP that the images in the KS campaign aren't real (or even possible). The Holus simply doesn't look like that and I wouldn't be comfortable if my company was promising images in a KS campaign like that. At the very least, H+ should have added an asterisk on each image that said "CGI rendered" or something to that effect to inform potential backers.

13
alvarosm 19 hours ago 0 replies      
Kickstarter will promote anything that has the chance to make it big. Their only goal is to make sure most pledged dollars go towards ultimately funded projects, and that hype and funding grow in them to the maximum extent possible.

Documentaries and so on, that's another story; I guess their cash cow is non-art projects while what they enjoy are art projects. So don't expect a tech project to undergo minimally rigorous analysis if it looks like it will make it big. They only question your project to the extent it could be a waste of real estate on their website for not getting funded.

14
Grue3 14 hours ago 1 reply      
A scam? On Kickstarter? You don't say! There are a ton of those, and some of them have bigger budgets than this thing. Like the infamous Arist coffee machine [1].

[1] https://www.kickstarter.com/projects/236195807/arist-brews-c...

15
phkahler 13 hours ago 1 reply      
"Holus is a 2D display based on Peppers Ghost. The reflected image has no depth, and has nothing to do with the term hologram. "

Wrong and Right. The reflected image will have depth and can be viewed from a (small) range of angles. Just like a reflection in a mirror is actually 3d. But NO, this has nothing to do with holography as stated.

16
norea-armozel 13 hours ago 0 replies      
This is why I've never pledged any money on Kickstarter or any equivalent site. It's too easy to make something look too good to be true in an age of CGI and clever pitches. I'm all for micro-financing, but I see sites like Kickstarter as just another way to get around the known regulations for financing (some of which seem dumb, but some of which are very effective imo).

If folks really want to help creators and artists make cool stuff, then I suggest doing your homework and giving directly to those people (and avoiding Kickstarter).

17
yitchelle 16 hours ago 1 reply      
Isn't the startup mantra "fake it til you make it"?

\sarcasm.

18
binoyxj 11 hours ago 0 replies      
The Kickstarter bubble is finally starting to burst, for better or worse. That's a bit troubling for the creative and tech community. Bad press due to these failures/scams is a scary thing. I had personally written to KS about another scam that unfolded right in front of my eyes, but they didn't respond. I even pinged the founders on Twitter but they didn't care a bit. So much for trusting this system.
19
swozey 14 hours ago 0 replies      
I quit Kickstarter after that Potato Salad kickstarter crap got popular. It lost all legitimacy to me at that point. I can't believe more people didn't put up a fuss.
20
zodPod 12 hours ago 0 replies      
Sorry, but I can't help but chuckle. The low-information populace just spent $600 a pop on this thing because the pictures were pretty? Good for them. I hope they enjoy their angled pyramidal screen. Someone probably shared it on Facebook, their friends clicked it, looked at the pictures and clicked to back them, then clicked share. Just like seems to be the norm with satirical stories anymore.
21
gregjwild 15 hours ago 0 replies      
While Kickstarter clearly need to get sharper about identifying fraudulent pitches, I think as a whole Kickstarter is a "hate the player, not the game".
22
eugeneross 9 hours ago 0 replies      
I don't know man. No one got this upset with the Potato Salad kickstarter.
23
elcct 18 hours ago 2 replies      
To be fair, why don't you let people spend money how they want? People should have a right to be fooled if they wish so...
NYSE/NYSE MKT has temporarily suspended trading in all symbols nyse.com
326 points by mmastrac  2 days ago   136 comments top 24
1
chollida1 2 days ago 9 replies      
This isn't much of an issue for traders. That's a nice benefit of having 40+ trading venues in the US. They had problems at the open as well, with connectivity on one of their gateways.

Issues like this crop up all the time and most of the time they are resolved before the open. The good news is that they aren't reporting any lost trades or trade busts yet, so this isn't as bad as the BATS open at BATS :)

Having said that, they've announced that they will cancel all open orders; that is a huge deal. I can't remember the last time they did that.

To put that into perspective, cancelling all open orders would be the Silicon Valley equivalent of eBay losing all bids on their current auctions.

NYSE has always been considered second rate in their IT compared to the NASDAQ and BATS and this won't do much for their reputation.

EDIT: meant BATS not FB, thanks!

One other point to keep in mind when throwing around hacking conspiracies. The exchanges aren't running on public networks. You can't DDOS them or hack directly into the matching engines. Though I'm sure you can break in via some other NYSE owned network and make your way to the matching engine somehow.

To hook into the exchange you either go through a blessed intermediary like GS or you plug directly in via colocation. You just can't keep pinging the NYSE on port 80 to bring it down.

2
minimax 2 days ago 3 replies      
This doesn't mean all trading is suspended, it just means you can't trade specifically on NYSE or NYSE MKT (formerly Amex). You can still trade NYSE listed stocks at all the other exchanges (BATS, NASDAQ, NYSE Arca, etc). Obviously a black eye for NYSE but it's not as big a deal as the media are making out.
3
efuquen 2 days ago 1 reply      
Pretty bad timing, markets are already rattled because of China:

http://www.bloomberg.com/news/articles/2015-07-08/u-s-index-...

Regardless of the technical cause, I have a feeling this will make things worse and people more nervous.

4
thrownaway2424 2 days ago 1 reply      
Maybe their network administrator is stuck on the ground in a UAL plane.
5
swasheck 2 days ago 10 replies      
UAL went down. WSJ down. NYSE halted. Coincident?

edit: based on everything that i've heard/read, the incidents are most likely not related. sorry for asking the question.

6
littletimmy 2 days ago 1 reply      
It looks like Anonymous tweeted something about this yesterday...

https://twitter.com/YourAnonNews/status/618626955433349120

7
ddeck 2 days ago 1 reply      
From the NY Times:

"A trader on the floor of the exchange in lower Manhattan, who spoke on the condition of anonymity, said that after the suspension began, traders were told that the problem was related to updated software that was rolled out before markets opened on Wednesday.

According to the trader, the exchange said that the new software caused problems soon after trading began on Wednesday and the exchange decided to shut down trading altogether to fix the problem.

A representative for the exchange did not respond to a request for comment on the traders account."

http://www.nytimes.com/2015/07/09/business/dealbook/new-york...

8
ablation 2 days ago 1 reply      
Well, one thing this has done is bring the paranoid out of the woodwork. I've rarely seen such a display of tinfoil hattery on HN as I have done in the last few days.
9
stygiansonic 2 days ago 0 replies      
Seems like they were having issues earlier today, since before market open. Looks like a network issue based on the previous reports. Those were marked as resolved; unsure if the current issue is related to the previous but it seems likely.

Other exchanges and trading venues appear to be operating normally.

10
ratsimihah 2 days ago 1 reply      
It's just another advertising campaign for Mr. Robot.
11
justinzollars 2 days ago 1 reply      
12
noname123 2 days ago 0 replies      
Very interesting, obviously matters very little to retail traders as most retail brokerages either sell order-flow to marketmakers (e.g., Ameritrade) or have their own smart-router that looks at the liquidity of all exchanges/ECN and decides how to route their customers' orders (Interactive Brokers).

Quick question for trading peeps out there, is there a reason why one would want to direct orders directly to NYSE? Is it because it's still the place to trade bulk orders? (vs. say BATS or ARCA).

13
naqeeb 2 days ago 0 replies      
Systems have bugs all the time. Unfortunately, it was bad timing that all of these systems were affected by different issues.

You're better off using a jump to conclusions board rather than speculating on the correlation of these events.

14
themeek 2 days ago 0 replies      
This is a 'networking issue' not yet attributed to an adversarial compromise.

There is also an outage at United Airlines and the Wall Street Journal - none so far attributed to an attack.

15
davidf18 2 days ago 0 replies      
Time to make the market makers fully electronic. No point in having to physically go to them. Why is NYSE so backwards compared with NASDAQ?
16
ExpiredLink 2 days ago 2 replies      
BTW, is NYSE still run by a Tandem?

Edit: I hope so! Great platform.

17
ianhawes 2 days ago 1 reply      
Possibly related: wsj.com is also down.
18
bra-ket 2 days ago 0 replies      
Probably it was a huge volume of trades at the open due to the Chinese market halt.
19
ocschwar 2 days ago 0 replies      
Good thing I still have all the MREs I bought for Y2K
20
briandear 2 days ago 3 replies      
Combining this with the United Airlines "glitch" -- it is definitely suspicious. What are the odds of two high-profile failures happening at the same time?
21
briandear 2 days ago 6 replies      
And combined with the United Airlines ground stop happening now.. Something is certainly happening.
22
ocschwar 2 days ago 1 reply      
Interesting point. Every 7 years, all the Jewish farmers in upstate New York and Long Island have to either leave their lands fallow or lease them out to non-Jews. Clearly that's what's shaking up the market.
23
PhoenixWright 2 days ago 1 reply      
This is what happens when you pay ENGINEERS less than 100k in NYC! This is what happens when you hire a bunch of H1Bs! This is what happens when you don't invest in tech infrastructure!

Why aren't Vladimir Tenev and Baiju Bhatt being acquihired? Why isn't David Byttow making 1 million plus as a full stack engineer for the NYSE? It's because companies, even those heavily dependent on their systems, see tech workers as a cost instead of THE business.

This will more than likely never change. Underpaid engineers vastly outperform their salaries. But when things like this happen I can't help but feel a little glee.

24
curiousjorge 2 days ago 1 reply      
WSJ goes down, NYSE goes down due to a technical glitch. Seems like a crazy coincidence, or some very large state with a track record of infiltrating and disrupting America electronically, at a time when the leaders of that state are feeling the heat and need to stop speculators.
Monopoly Man the same image uploaded to Imgur 3.2M times? kosiru.com
320 points by kolbe  2 days ago   69 comments top 22
1
geofft 1 day ago 3 replies      
Interesting, but the analysis is a bit short. This was posted to Reddit five months ago with more discussion:

https://www.reddit.com/r/self/comments/2uzas7/i_found_out_th...

They found (as I did, with a simple Google reverse image search) that the image was used to illustrate a Forbes article in 2011:

http://www.forbes.com/sites/venessamiemis/2011/04/04/the-ban...

https://twitter.com/EricaGlasier/status/563770191440404480

Doesn't actually answer the question, but the source of the image is, I think, super relevant.

I could sort of imagine some automated process making Imgur thumbnails of images from links shared on some social media site, and going awkwardly self-recursive and getting to 3.2M.

2
panic 1 day ago 3 replies      
This chart is hard to read: http://i.imgur.com/HJYmCLN.png

Since the values are percentages, it's weird that the scale is out of 0.9 -- why not 100? Why leave out the other ~98% from the visualization? The radial arrangement makes it even harder to read and compare the values.

The effect (at least for me) was to make it seem like these images appeared much more often than they actually did. I actually found the raw numeric table below more illustrative!

3
arihant 1 day ago 1 reply      
What are the chances that imgur is throwing this image as an exploit prevention technique? Maybe the author triggered the rate limit 3000+ times, or some other metric?

The next one called "White Line" appeared 2000+ times too, maybe doing the same job.

In the early days of reCAPTCHA at CMU, Luis von Ahn did a similar thing. There are a lot of sweatshops that type in CAPTCHAs; they detected that, and instead of blocking them, threw longer CAPTCHAs (sometimes entire sentences) at them to get their help digitizing text.

4
the_mitsuhiko 1 day ago 0 replies      
It's an easter egg that you get when accessing an imgur album as an individual image. Because the original user was scraping, he basically found 3.2M galleries.

It only happens with older albums that do not have /a in the URL.

//EDIT: source: https://www.reddit.com/r/programming/comments/3clho3/1_out_o...

5
buddydvd 1 day ago 1 reply      
Clicking the link below creates a new imgur URL:

http://imgur.com/upload?url=https%3A%2F%2Fnews.ycombinator.c...

Perhaps someone had included a URL like the one above in some wordpress template.
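Constructing that kind of link is a one-liner, which makes the template theory plausible (the wrapped URL here is just an example):

    from urllib.parse import quote

    target = "https://news.ycombinator.com/"
    print("http://imgur.com/upload?url=" + quote(target, safe=""))
    # -> http://imgur.com/upload?url=https%3A%2F%2Fnews.ycombinator.com%2F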

6
JohnyLy 1 day ago 0 replies      
This is an old story. The answer is: this is a test image used in integration tests of Imgur itself.
7
morgante 1 day ago 1 reply      
I'd bet money that it's their standard test image.
8
megapatch 1 day ago 1 reply      
Why is the author assuming that different URL ids necessarily translate to different images/uploads? The character-based id could simply contain redundant parts, which favour certain images over others.
9
mosselman 1 day ago 1 reply      
What I find strange is the use of a hash of the image. There are other methods to detect whether images are visually the same even when they have different resolutions, etc.

From what I remember, you'd resize all images to a fingerprint of x by y (let's say 100x100 pixels), apply some filtering to normalize lighting, and then XOR two fingerprints to see how similar they are.
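That is essentially average hashing. A minimal sketch with Pillow (using 8x8 rather than 100x100, so the fingerprint fits in 64 bits):

    from PIL import Image

    def ahash(path, size=8):
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        # One bit per pixel: brighter than the mean -> 1
        return sum((p > mean) << i for i, p in enumerate(pixels))

    def distance(h1, h2):
        return bin(h1 ^ h2).count("1")  # Hamming distance; small = similar

A small distance (say, under 10 of the 64 bits) usually means the same picture at a different resolution or compression level.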

10
imgurthing 1 day ago 0 replies      
I too noticed that imgur URLs were short some time ago, so I put some JS code together and let it run in a background tab for a while, and I noticed that every so often certain images reappear. There is at least that Coke/Pepsi logo history image, and then there is that man with a big slash on his face.

Never seen that facebook man before for some reason.

If you have a hosting place I can share JS. But basically it just randomizes urls and if the resulting image is "if(e.naturalWidth == 161 && e.naturalHeight == 81)" it removes that image as that is the default size of the "image not found" image.
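
A rough reconstruction of that idea, not the commenter's actual code; the 5-character ID length, the alphabet, and the .jpg extension are assumptions, while the 161x81 placeholder check comes from the comment above:

  var CHARS = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';

  function randomId(len) {
    var id = '';
    for (var i = 0; i < len; i++) {
      id += CHARS.charAt(Math.floor(Math.random() * CHARS.length));
    }
    return id;
  }

  function probe() {
    var id = randomId(5);
    var e = new Image();
    e.onload = function () {
      // imgur serves a 161x81 placeholder for missing images
      if (e.naturalWidth == 161 && e.naturalHeight == 81) return;
      console.log('hit: http://i.imgur.com/' + id + '.jpg');
    };
    e.src = 'http://i.imgur.com/' + id + '.jpg';
  }

  setInterval(probe, 1000); // keep the request rate polite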

11
mahouse 1 day ago 0 replies      
For those wondering, "TARINGA" is an Argentinian forum full of piracy. I assume some of these images were uploaded by users to illustrate their posts.
12
anotheryou 1 day ago 0 replies      
Monopoly mystery solved, but what's up with DJ David?
13
dluan 1 day ago 2 replies      
I wish more of these little experiments were being done! With just a little more time and resources, imagine what other weird, quirky discoveries are out there.
14
thebrettd 1 day ago 1 reply      
Is this your site?

Have you tried any steganalysis (https://en.wikipedia.org/wiki/Steganalysis)?

15
kaivi 1 day ago 0 replies      
Yeah, I stumbled upon this Monopoly image about 2 years ago, when I was developing a small JS-based website. It dynamically embedded random pics from image-hosting sites like Imgur, and this Monopoly guy was everywhere.

I guessed that it must have been some sort of massive Facebook spam at some point in time, or something like that. After all, the guy waves dollars in your face in exchange for credentials.

16
joshu 1 day ago 0 replies      
Apparently this is the result of exercising a bug in imgur.
17
shmerl 1 day ago 1 reply      
Do they have some kind of data deduplication by the way?
18
fugyk 1 day ago 0 replies      
There is a link to the DB attached (https://mega.nz/#!WBg1zAIA!dg0g2Q0kDm1q6r1WBMPe0-nrP3wxvokeF...). Can anyone verify whether the images actually point to "The Monopoly Man" or to a test image?
19
chavesn 1 day ago 0 replies      
I tried this, and the second random string I tried was the image in question:

http://imgur.com/52nu1

20
hudell 1 day ago 0 replies      
That was actually the first image I got when trying 5 random characters in a URL.
21
crucifiction 1 day ago 0 replies      
Probably some kind of production canary or integration test in their deployments.
Wealthfront: Silicon Valley Tech at Wall Street Prices medium.com
306 points by prostoalex  1 day ago   183 comments top 28
1
frinxor 1 day ago 5 replies      
The short of it: Wealthfront markets itself as cheaper than Vanguard ETFs, but it actually charges 0.25% plus the underlying ETF fees on top of that.

one of my favorite quotes:

"Here it is: If you open a retirement account, and you invest some of your paycheck each month into a Vanguard Target Retirement Fund, and you justleave ityou just leave it right there until retirement

you dont do anything when the folks on CNBC announce that the sky is falling; you dont do anything when Cousin Eddy calls from a secure underground bunker in the badlands and says that the fed is printing money and its time to liquidate and ammo up; you dont think its a sign that your parrot said fuhgeddaboutit but you thought she said get a nugget and surely that must mean a gold nugget? and you looked online and noticed that the price of shiny yellow metal was crashing and wait your parrot is also yellow and Ill be damned if that isnt a sign to buy

no, if you just leave it there to compound over decades"

2
blufox 1 day ago 2 replies      
From "The Little Book of Common Sense Investing" by John Bogle:

"Thus, the recent era not only has failed to erode, but has nicely enhanced the lifetime record of the world's first index fund, now known as Vanguard 500 Index Fund. Let me be specific: at a dinner on September 20, 2006, celebrating the 30th anniversary of the fund's initial public offering, the counsel for the fund's underwriters reported that he had purchased 1,000 shares at the original offering price of $15.00 per share, a $15,000 investment. He proudly announced that the value of his holding that evening (including shares acquired through reinvesting the fund's dividends and distributions over the years) was $461,771."

3
somberi 1 day ago 0 replies      
A father who is a banker shows his son: "This is our yacht, and those are our brokers'." The son asks: "Dad, where are the customers' yachts?"

A book written in the 1930s based on this quip remains one of my favourite books about Wall Street. Link: http://www.amazon.com/Where-Are-Customers-Yachts-Street/dp/0...

4
jycool 1 day ago 2 replies      
As much as I think the article makes great points (tax-loss harvesting seems oversold, and Vanguard is a much cheaper way to get a target-date fund than Wealthfront), I gotta raise my hand here and point out the absurdity of this argument against proportional fees. Wealth management is like any business: a combination of fixed and variable costs. And like many businesses, it amortizes those fixed costs against a basically variable fee structure, because the value of the service to the customer (debatable as it may be for some investment products) is proportional to the account size.

It's not an SV vs WS thing. Nobody charges according to the marginal cost of the service. Why does Airbnb charge a percentage of the rental cost? Why does Uber get a percentage of the fare? Why does the restaurant charge me the same for a burger at 10pm as for a burger at 7pm (after all, the staff are already being paid for the day and the ingredients already purchased)?

It might sound good to say "hey, the software's not working any harder so the marginal cost is zero", but that doesn't work in almost any business.
5
arbitrage314 1 day ago 1 reply      
Wealthfront is indeed a ripoff for the well-informed.

For most folks, though, something that makes saving simple is something that will lead to HUGE gains over not having that thing in the first place.

Without Wealthfront, most people would simply let that money sit in a checking account or in a user-made nondiversified portfolio (most people don't understand the incredible benefits of diversification, including myself most days, which is why we work at startups :) ).

Therefore, I still believe the value of Wealthfront is positive, and likewise, I believe the value of most financial planners is positive.

6
troydavis 1 day ago 0 replies      
If you do want financial advice as an hourly service, it exists. In the US, look at the National Association of Personal Financial Advisors: https://www.napfa.org/.

Many provide hourly consulting, just like you'd pay an attorney or accountant.

By far the biggest risk of doing this is not having a constant presence during bear markets to coach you to stay the course. If you're actually comfortable enough in the amount of risk you've taken, this is less of an issue.

The best doc I found on where or how wealth managers can add value was written for wealth managers (by Vanguard, which provides custodial services): http://www.vanguard.com/pdf/ISGQVAA.pdf

Page 4 has the table. The biggest benefit - in their analysis, up to 1.5% of possible value added - is coaching. Deciding what's right for you depends on understanding what parts of planning and executing you're actually comfortable taking primary responsibility for.

7
Kiro 1 day ago 4 replies      
I don't understand the first three paragraphs (about A/B testing). I install an app costing me 1 dollar, and then the developer uses this profit to install an app ruining him/her? What?
8
dataker 1 day ago 3 replies      
Wealthfront markets itself as an innovative fintech startup, but I see it as nothing but an asset management company that hired some developers to "prop it up".

Not to commit an ad hominem, but I don't see tech-driven founders and/or a very strong engineering culture.

https://www.wealthfront.com/management

9
rgarcia 1 day ago 2 replies      
Here's my understanding: if you use Wealthfront direct indexing, you mostly hold stocks, so you pay a minimal amount in ETF fees. This means your expense ratio is pretty close to the Wealthfront fee of 0.25%.

If you put everything in a Vanguard target retirement fund, your expense ratio is something like 0.18%. [1]

So as long as tax loss harvesting adds a tiny fraction of a percent (~0.07%) to your returns, it seems optimal to use Wealthfront, no?

[1] https://personal.vanguard.com/us/funds/snapshot?FundId=0699&...
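
To make that break-even concrete, a quick sketch in the same spirit as the fee calculation further down the thread; the 8% gross return, 35-year horizon, and $100k principal are assumptions for illustration, not figures from the parent comment:

  var gross = 0.08, years = 35, principal = 100000;

  function finalValue(annualDrag) {
    // annualDrag = expense ratio plus fees, minus anything TLH adds back
    return principal * Math.pow(1 + gross - annualDrag, years);
  }

  console.log('Vanguard (0.18% ER):         ' + finalValue(0.0018).toFixed(0));
  console.log('Wealthfront (0.25%, no TLH): ' + finalValue(0.0025).toFixed(0));
  // If TLH really adds ~0.07%/yr, the two come out essentially identical:
  console.log('Wealthfront (0.25% - 0.07%): ' + finalValue(0.0025 - 0.0007).toFixed(0));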

10
throwaway3830 1 day ago 4 replies      
So about a month ago I inherited just short of $30k. I don't need it right now, and don't plan on using it until either I buy a house (maybe 5 years away?) or retire.

My plan was to invest it with Betterment, because I felt like that was the easiest and cheapest "set and forget" solution. But would there be substantial upside in going directly with a Vanguard fund instead? I'm really new to this and the convenience of a dedicated interface like Wealthfront/Betterment feels valuable to me, but if I'm leaving substantial money on the table doing that I'd rather go to the extra effort of figuring something else out.

11
swang 1 day ago 10 replies      
OK, I don't know shit about investing/finance and would like to change that. Where do I start? I read the article and nod my head and go, "OK, ETFs!" But then I go looking and there are tons of them. Is the one he suggested the one I'm supposed to go with regardless?
12
vasilipupkin 1 day ago 3 replies      
Let's say we kill the proportional fee and replace it with a fixed fee. Well, then people with lower balances will most likely end up paying more than they do now. Another issue: it is proportionally harder to execute a larger portfolio than a smaller one. So a proportional fee makes sense. Having said that, yes, it is true that it is not clear to me why, as a Vanguard customer, I should switch part of my portfolio to Wealthfront.
13
pbreit 1 day ago 0 replies      
Key point: 99.9% of US individual investors would be wise to make a Vanguard Life Strategy or Target Retirement fund a core holding.

Otherwise, the screed was a little nasty (a bit nastier than the Wealthfront post).

14
icedchai 1 day ago 0 replies      
So Wealthfront is for people who can't open their own Vanguard account and set up automated transfers/investments?
15
caseyf7 1 day ago 6 replies      
Has anyone seen actual performance of Wealthfront portfolios? You would think people would publish their results, but they've been hard to find.
16
digitalzombie 1 day ago 1 reply      
There's a PBS video about this. It also talks about Vanguard being a very good mutual fund company, while nearly every other mutual fund is making bank off of you through fees.

The video also advocates either Vanguard mutual funds or ETFs.

17
w23j 1 day ago 0 replies      
Here is a shot at a back-of-the-envelope calculation of the sum of fees paid in his first example. I hope I haven't misunderstood.

 var gain = 1.08;
 var fee = 0.0025;
 var feeSum = 0;
 var s = 100000;
 for (var i = 0; i < 35; i++) {
   s *= (gain - fee);
   feeSum *= gain;
   feeSum += s * fee;
   if ((i + 1) % 5 == 0)
     console.log('year: ' + (i + 1) + ', saved: ' + s + ', fees: ' + feeSum);
 }

 year: 5, saved: 145240.0514760841, fees: 1823.9448097195082
 year: 10, saved: 210946.72552775557, fees: 5329.071699986484
 year: 15, saved: 306379.13274362596, fees: 11677.706523607205
 year: 20, saved: 444985.2101088223, fees: 22746.568357507604
 year: 25, saved: 646296.7482230144, fees: 41538.4561823357
 year: 30, saved: 938681.7298073637, fees: 72821.71593023202
 year: 35, saved: 1363341.8275688116, fees: 124120.0285076505

18
akeefer 1 day ago 3 replies      
This post kind of misses the main point of services like Wealthfront. You're paying 0.25% in exchange for automatic rebalancing, asset class diversification, and tax optimization. If the combination of those factors is going to increase your yearly return by 0.25% or more (say, from 6.8% to 7.2%), it's worth it. If they won't, it's not.

It's silly to focus entirely on the fee aspect: the point of using Wealthfront is not because it's lower-cost, it's because you expect it to be better-performing from a total-return perspective. They may oversell themselves, and that's a valid criticism, but the OP fails to really analyze the raison d'etre of Wealthfront as a service. Comparing it to a single Vanguard ETF is not a proper comparison.

The Vanguard target retirement fund for 2035, for example, includes four underlying ETFs (US stock, international stock, US bonds, international bonds), whereas Wealthfront portfolios typically have more like six or seven asset classes (differentiating between developed and emerging international markets, and adding in natural resources and real estate). Left to my own devices, I don't have the time or inclination to do the research necessary to do that sort of additional asset diversification myself, determining the ideal allocation and then avoiding too much drift while not incurring too much tax.

I also don't have the time to deal with tax loss harvesting, which might not matter for retirement accounts, but does make a difference for taxable accounts: you're likely incurring some taxation issues when it comes to portfolio rebalances, for example, since that necessarily involves asset sales. If you have $100k in Wealthfront, you're paying $250/year in fees. If tax loss harvesting can only harvest $1k in losses during the year that offset (or at least avoid) $1k in capital gains, that still pays for the Wealthfront fees by itself (if you assume 15% federal and 10% CA state tax on long-term capital gains).

So, sure, the fees you pay Wealthfront compound over time, but so does the money you pay in taxes. (You do eventually pay the taxes on that gain, of course, but the money you save now compounds over time, so you still come out significantly ahead.)

So the question is: does additional asset class diversification plus tax optimization yield at least a 0.25% increase in your total return (net of taxes and fees) in an average year? I believe it does in my case, hence why I have my money with them. It's not because I'm some idiot taken in by slick marketing who can't do math and doesn't know about Vanguard ETFs: Wealthfront's core market is really people who can do math and understand what they're getting for their 0.25%.

Similarly, you can quibble over asset-based fees over fixed fees, but that also misses the point: as long as they make me more money than I pay them, I come out ahead. If I come out ahead, why would I be complaining? If I don't come out ahead, then there's still no point in complaining: just don't use the service. Capitalism at its finest.

Again, it's sad that the OP and most of the comments in this thread don't even attempt to tackle the real value proposition here. Just saying "Stick it in a Vanguard ETF and you'll pay less in fees" is not at all addressing whether or not the core value proposition is valid.

19
kenferry 1 day ago 4 replies      
I've often wondered about precisely the author's issue: why do Wealthfront, Betterment, et al. charge a percentage of assets under management? Is it just that no competitor has offered a flat rate yet, or is there intrinsically more work involved in trading larger sums?

To this one point:

> All these cases neglect to mention that you will probably only see the maximal gain if you are maximally messing up already, by needlessly churning your account to generate capital gains. As Vanguard's founder advises: "Don't just do something; stand there."

The author is listed as "Former Director of Product @ Facebook". If he received a portion of his compensation in Facebook stock, he needed to sell out of that position regularly to avoid being over-invested. That should have generated more in capital gains than tax-loss harvesting could offset.

20
rgbrgb 1 day ago 0 replies      
> the financial advisor who treats his esteemed clients to FREE! luxury box seats at the big game (* when you pay him $10,000 a year).

Always be suspicious of lavish gifts from someone whose services you employ... it often means you're overpaying. We see this in real estate all the time. A broker who just made $80k on a week's work will often send their client a $200 gift card or take them out to a fancy dinner to "celebrate" (and alleviate guilt?).

21
hundt 1 day ago 0 replies      
The author's points about advertising and hidden fees are well-taken.

Regarding "Stop charging proportional fees": if Vanguard, who charges proportional fees, is operating "at cost" as the author claims, then what is all that extra money paying for? The implication is that technology could be a game-changer because it unlinks the size of the effort from the size of the effect, but surely that is already happening with Vanguard's 12-digit funds.

22
hchenji 1 day ago 0 replies      
Does anyone here use wisebanyan? They claim $0 fees and still do the robo advising (i.e., solving the portfolio allocation problem).
23
vasilipupkin 1 day ago 4 replies      
Wealthfront should compete not just on trying to execute the same old indices, but on creating new indices on the fly. For example, REIT Index Sans Illinois, Emerging Markets Sans Russia, etc. Infinite number of possible index portfolios. Anyone want to build that? send me a message
24
mooreds 1 day ago 2 replies      
I would love to see the proportional fee killed. Even Vanguard charges this (though their fees are smaller, they are still a percentage). But the fact is the effort required to manage $10M is not 10x the effort required to manage $1M (especially with software). So why do we pay 10x as much? Because we always have!

That's the opportunity in front of Silicon Valley with regards to wealth management.

25
jsprogrammer 1 day ago 0 replies      
>There are other options available that would enable you to stop working years earlier

The only real option that enables people, on average, to stop working is automation.

26
spacecowboy_lon 1 day ago 0 replies      
Not sure that a $100k account for $20 a month is that good a deal. I have about that much in my UK TD ISA and don't pay any account charges.
27
Bostonian 1 day ago 0 replies      
Has someone created a web site where you upload your portfolio and it makes tax-loss-harvesting suggestions for a fixed fee? Competition is what brings prices down.
28
gohrt 1 day ago 2 replies      
> the kindly rabbi who performed your bris

A mohel performs a bris, not a rabbi.

> It's why you're receiving 2% cash back on your credit card while your neighbor pays 12% on his.

This is mostly because merchants pay a fee, and the CC company kicks it back to you.

> I'm not asking why Wealthfront helps itself to such margins, which is obvious and perfectly normal, but rather why the market bears it

AirBnB and Uber and real estate agents do the same thing [price-proportional fees, not cost-proportional fees or flat fees]...

Anyway, it's not as bad as Blake's rant makes out. Flat fees hurt the least-wealthy customers.

1. It's progressive. Wealthier customers pay more. That's good for society.

2. It's not inherently a scam, any more than airline tickets or home sales are. Ultimately, they compete in the market and have to price competitively. Companies will lower prices to attract customers (if competition arrives), but maintain their profit to stay in business.

Why We Shut Down Reddits Ask Me Anything Forum nytimes.com
302 points by uptown  2 days ago   303 comments top 23
1
imjk 2 days ago 4 replies      
"We feel strongly that this incident is more part of a reckless disregard for the companys own business and for the work the moderators and users put into the site. Dismissing Victoria Taylor was part of a long pattern of insisting the community and the moderators do more with less."

I think this really gets to the heart of it. The moderators of the site only learned of the termination after a celebrity flew out to NY to meet with Victoria and was told that the meeting was cancelled. As expected, panic ensued among the subreddit's moderators. Whether the firing was justified or not, the fact that Reddit's leadership didn't immediately see the consequences of their action on one of their most popular communities just shows their disregard. Or even worse, they realized the consequences and just didn't bother to help mitigate them. I mean, these are real people with real meetings spending real dollars for the community, which is all run by volunteers, and Reddit's leadership didn't feel it was important enough to communicate with them to help avoid unnecessary fallout. I understand the frustration.

2
zedpm 2 days ago 4 replies      
Maybe this will be enough to silence the folks on here who insist that nothing is wrong with Reddit management and that it's just a bunch of angry children complaining without cause. The mods in question are adults and professionals, and they've clearly and succinctly explained their grievances with Reddit management.

This piece doesn't touch on some of the other issues that have angered users, particularly the matter of heavy-handed censorship that appears to be applied inconsistently. That too is a legitimate complaint, one that shouldn't be shouted down or conflated with shameful behavior on the part of relatively few individuals in the community.

3
spodek 2 days ago 1 reply      
Never surprise your team.

This incident looks like it comes from inexperienced, counterproductive (or maybe authoritarian) leadership, as do the several incidents leading up to it and those that will follow. Several days after a firing, people are complaining in the NY Times that they are still surprised.

Not surprising your team is one of the top principles I've learned in teamwork. If, as a manager, your firing someone surprises them or their team, you almost certainly mismanaged the process. If you have a reason for firing someone, you should be able to create a process everyone understands even if they don't agree to it. The people left should certainly not be surprised, especially after the firing.

If your team is surprised by your strategy, if your customers are surprised by your product, and so on, you probably managed poorly. You should only surprise your competition.

At least the rest of us can learn from Reddit's management what not to do: motivating competitors to satisfy their users and customers while they're alienating them.

4
embik 2 days ago 1 reply      
What really baffles me is the way reddit management handled this situation. There were many things they simply fucked up: they did not install the new AMA team properly (hell, the subreddit mods did not know about it), they reacted childishly to the shutdown (something along the lines of "popcorn tastes good" -- one of your biggest communities just shut down because you failed at basic management; in what universe is this a proper response?), and they won't address the real issues; it's all PR speech (take a look at the "we're sorry" post: a lot of the important questions raised are not answered).

This is just horrible. You cannot anger your community with such a "business model": reddit depends 100% on its users, and especially on the content creators and moderators. They do very little by themselves, and most of it is stuff around the core functionality that only a small percentage is even using.

5
nanny 2 days ago 7 replies      
When you consider exactly how much information we have to go on, I think people are overreacting by immense proportions.

Do we even know why Victoria was fired yet? Maybe she was about to blow the lid off a vast internet conspiracy. Or maybe she's actually the bad guy, and she went crazy and tried to destroy the office. We just don't know, and we might never know.

Besides, what is reddit corporate supposed to do when they want to fire an employee? Message the mods and say, "Oh, btw, we're going to fire Victoria in a couple days and Xyz will take over her duties, just thought you'd like to know."? That's simply out of the question, and not enough people even thought about this scenario.

The responses to this event are entirely unjustified, because there is no information at all to base a reaction on.

6
gesman 2 days ago 3 replies      
>> ...We are disheartened by the dismissal of Victoria Taylor, who was one of the most high-profile women at the company and in the technology field. We hope Reddit recruits someone with the talent and necessary background to fill her position in a similar capacity...

May I propose a good candidate? Victoria Taylor.

7
mildweed 2 days ago 0 replies      
People often say that the users of a site like Reddit are the product, not the customers. In the case of Reddit moderators, they are also essentially employees. Employees who need to be treated more as customers, due to the fact they're volunteers.

Anger the masses of Reddit all you want, but don't piss off the volunteers that hold your product together.

8
jonknee 2 days ago 0 replies      
The problem with volunteers is that you can't easily fire them. It's great that reddit gets a lot of free labor from moderators, but their sense of entitlement is a huge drag. Regardless of why Victoria was let go, it's very difficult to run a business when you can't make staffing changes without threat of open revolt.

I would hire a replacement for Victoria and swap out every moderator of /r/IAmA who was part of this coup. It's not their property, it's reddit's. It will be ugly (not that it isn't already), but in the end there are a few moderators and millions of users who literally could not care less and just want to read interesting stories.

9
xaa 2 days ago 0 replies      
Apparently Reddit has exactly 2 board members: Alexis and Sam Altman [1]. Why would Sam Altman, and by extension YC, have thought that Ellen Pao would make a good ("interim") CEO for Reddit?

This scenario should really be causing shareholders/board members to think about how dependent Reddit really is on moderator goodwill [2]. There needs to be a CEO and leadership team that can at least create a credible perception, if not the reality, that Reddit values moderators who donate their time to make the business viable.

For this purpose, it would seem you want a CEO who is going to be non-inflammatory (i.e., not Pao) and perceived as a relatively neutral arbitrator between the needs of shareholders for monetization and the needs of moderators for adequate support (i.e., probably not someone from a VC background). Considering how little it should actually cost the company to provide a reliable support network for mods, including honest, non-HR-speak communication, this hardly seems like a demanding task, but somehow they continue to manage it very poorly.

Does anyone know if there have been public comments by YC about any of these issues?

[1] http://www.bloomberg.com/research/stocks/private/board.asp?p...

[2] https://www.reddit.com/r/announcements/comments/3cbo4m/we_ap...

10
throwaway_97 2 days ago 0 replies      
I think the management is doing the right thing for the long term. They will try to make it more and more social and maybe even try to act as a news portal; things that bring in profits. It should attract a lot of new people. It won't be the reddit you remember, but it will be a more profitable reddit.

On a personal note, I feel nothing, as I find reddit to be quite a distraction from my productivity, and many subreddits of interest to me are dead.
11
jmount 2 days ago 2 replies      
Rambling and self-contradictory (presumably even after editing).

"We did not anticipate or intend for other communities to follow our lead as part of a protest."

"The secondary purpose of shutting down was to communicate to the relatively tone-deaf company leaders that ..."

Own up to one or the other.

12
robrenaud 2 days ago 6 replies      
How hard would it be to fork reddit AMA on a third party site? I'd imagine if you got Victoria on board, a lot of the mods/community would follow.

Certainly some custom support for the AMA format would be nice, like cleanly summarized final outputs, highlighting direct conversation with the askee, and making it easy to find highly upvoted unanswered questions.

13
sparkzilla 2 days ago 0 replies      
From the article: "The issue goes beyond Reddit. We are concerned with what a move like this means for for-profit companies that depend on the free labor of volunteers and whether they truly understand what makes an online community vibrant."

It's time for companies to stop treating the free work of contributors as a given, and pay them for their contributions: http://newslines.org/blog/reddit-and-wikipedia-share-the-sam...

14
kjs3 2 days ago 0 replies      
There seems to be an incredulous "we gave our time to a for-profit company for free and got fucked" attitude, and I can't help (undoubtedly because I'm a bad person) but think 'duh!'.
15
paulhauggis 2 days ago 5 replies      
This article makes it seem like the moderators actually work for Reddit, which isn't the case. This is the problem, actually. Because it's not a paid position, the company can't use that as leverage in situations like this. The moderators essentially have nothing to lose.

Even the article states that the subreddit was shut down because of the abrupt termination of Ms. Taylor. Anybody who makes these sorts of emotional decisions shouldn't be anywhere near a position of power.

If I were the CEO of Reddit, I would make it my next goal to slowly take power away from these moderators.

16
theklub 2 days ago 2 replies      
I can't believe people take reddit so seriously. It's amazing to me.
17
hoopd 2 days ago 1 reply      
A discussion between an admin and the science mods leaked: http://www.reddit.com/r/Blackout2015/comments/3c4x6h/leaked_...
18
zxcvcxz 2 days ago 0 replies      
Why is it okay for a corporation to do it, but when mods do it everyone accuses them of pushing their own agenda?
19
pasbesoin 2 days ago 1 reply      
Based upon the facts, if and as described here, I would have to consider Pao entirely incompetent in her current role, regardless of how one feels about her as a person.

I regret a bit jumping on the bandwagon, here, but if things occurred as described, the facts alone are damning.

P.S. I also have to question what the hell Alexis is up to. I've read elsewhere that he conducted the actual termination. Is he really so clueless at this point about his own site? (Even if he agreed with the termination, for whatever reason, its manner and fallout is just simply unacceptable.)

There has to be some serious dollar play behind this, which does not speak well for the future of reddit. It may make it to the other side, but only on the gigantic momentum it has built, which may sustain it through to some better policy -- if management has the brains to see the light.

P.P.S. Is Voat back up now?

20
theklub 2 days ago 0 replies      
It's really not important at all. Just a lot of people use it, and it's kinda going down in flames. Pretty funny to watch, honestly. The AMA section was a great advertising tool for Reddit, IMO; they definitely should take better care of that one area.
21
alecco 2 days ago 1 reply      
An article critical of reddit? Let's see how fast this gets taken off the HN front page.

Edit: Same age (1h) and same votes as the top post (NYSE), and it's at 9th position... Bring me your downvotes. Truth hurts?

22
throwaway_97 2 days ago 1 reply      
23
morganvachon 2 days ago 0 replies      
You need a proper SSL cert for edenboards.com, it's flagged as untrusted in Firefox.
Why does Gmail hate my domain? bitbin.de
301 points by stbenjam  2 days ago   141 comments top 34
1
ciaoben 1 day ago 5 replies      
Hi, I'm Nicol and I work in the deliverability team at Qboxmail.com. We have run into similar problems in the past. The reason for Gmail's behavior with your domain is not easy to pin down; one theory is that someone used the domain for sending spam in the past, and even though it was not your server, you are paying for it.

The solution we have used is to get Gmail to recognize that your emails are sent by a real user. You can simulate a conversation between your domain and other Gmail accounts: send a first email with realistic text to a Gmail account; if it lands in spam, mark it as safe (remove it from spam) and reply to it, again with realistic text. Continue simulating a normal conversation, sending 4-5 emails between the two accounts. Then redo the operation with another Gmail account, but this time in the opposite direction: start from the Gmail one.

It's not bulletproof, but it has worked for us in the past.

2
Animats 2 days ago 1 reply      
Notice the comment on the story:

"I had the exact same problems in the past. For me it helped just to register and immediately cancel a trial of Google Apps for the domain. Its annoying to have this as a necessary step but at least it was like that done in just a few minutes (after I tried to find a contact email of Google for a much longer time). I tried it because I had other domains on the same server which didnt had this issue. All domains which werent filtered had a Google Apps account in the past. So I thought its worth a try, and yeah it was solved."

3
ceejayoz 2 days ago 5 replies      
If the domain is bitbin.de and the mail server is hosted on the same box, it's probably Hetzner being scorched earth for deliverability, just like any other major hosting provider would be. Email from AWS, Rackspace, Heroku, etc. gets a drastically higher starting spam score because people fire up servers there to spam.

Running the email through something like Amazon SES or Mandrill would probably be helpful. Both have generous free tiers.
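
For what it's worth, a minimal sketch of relaying through such a service with Postfix; the SES endpoint, port, and credential file below are illustrative placeholders, not a vetted production setup:

  # /etc/postfix/main.cf -- send all outbound mail via an authenticated relay
  relayhost = [email-smtp.us-east-1.amazonaws.com]:587
  smtp_sasl_auth_enable = yes
  smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
  smtp_sasl_security_options = noanonymous
  smtp_use_tls = yes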

4
petercooper 2 days ago 2 replies      
I send a LOT of emails each month (email newsletter business -- yes, legit!) and ran into a separate but topically related and amusing problem recently.

My newsletters are aimed at developers, and one issue went out and was considered by Gmail to be a 'phishing' attempt. I couldn't figure it out. Several issues later, another one was picked up the same way, and I figured it out... In both issues, one of the items linked to domains that looked a bit like this: "www.0x10abcdef.com" (this is NOT the actual domain) -- basically a domain that looks like a hexadecimal number. I ran numerous tests, and Gmail always considered mails with links to domains like this to be phishing attempts.

I reported this as a bug (since nothing was wrong or reported with the domains in question, it was basically Gmail's filter being in error) but have no idea if it was ever resolved.

5
cvs268 1 day ago 0 replies      
Part of Gmail's spam filtering appears to be "crowd-sourced": people clicking "Report Spam" or "Not Spam" on your emails in their Gmail inboxes.

How about asking a few of your friends to select your emails in their Gmail spam folders and click "Not Spam"? Hopefully that gets the ball rolling and the situation improves...

6
runin2k1 2 days ago 2 replies      
That's all a bit presumptive and inflammatory for what amounts to pure speculation on the part of the author.

Google stands to lose a lot more from a potential PR disaster for burning former customers who move away from hosting than they do from trying to convert a tiny portion of users to a free mail hosting service.

7
codezero 2 days ago 0 replies      
I've had the same problem myself.

My best guesses as to why my domain has been dinged:

1) It's on a VPS, that IP may be flagged already

2) I often use a VPN, and sometimes send emails out through my server using it. This probably raises the red flags.

3) It's a non-standard TLD, .co

4) I don't use any Google services with that email address/domain (I assume doing so adds some level of measurable trust)

5) I'm not in the address book of a lot of gmail users, because this is my private server, that I use only for job seeking, and personal communication.

This has really done damage in the past. I've applied for jobs and heard back weeks after they hired someone, letting me know that, whoops, I ended up in their spam folder.

Trying to meet with a friend? I can't email them, I need to use Facebook or Gmail... welp.

This is super annoying. I kind of understand why it happens, but it's just a little sad that building your own fort, so to speak, is so impossible.

Things I tried to do to mitigate this:

1) Made my web domain https only (why not?)

2) set up DKIM and SPF (didn't seem to have any effect; a sketch of the records is below the list)

3) proper SMTP authentication, secure port only

4) Reached out to Google via the typical forms they offer, and heard nothing, obviously.
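
For anyone setting this up, roughly what those records look like in DNS; the domain, selector, and truncated key below are placeholders (the real DKIM public key comes out of your signing tool):

  example.co.                  TXT  "v=spf1 a mx -all"
  mail._domainkey.example.co.  TXT  "v=DKIM1; k=rsa; p=MIGfMA0G..."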

8
lm741 2 days ago 2 replies      
I hit a similar problem when sending automated internal emails to a Google Groups address at my company. The problem was fixed by adding the following footer: "To unsubscribe, email <my-email-address>."
9
raintrees 2 days ago 1 reply      
Anecdotal information:

I run my own mail server and manage a number of small business clients who have mail servers. Email trust is getting more and more tedious.

Recently I was resolving a domain registrar issue with Network Solutions. They required forms filled out and signed, a copy of a utility bill from my client, a copy of my ID...

I bundled everything up and emailed them the scans. I contacted them 5 business days later; they claimed to never have received them.

I sent it again while I had a rep on the phone; it went into their spam hole, probably due to the size of the attachment.

They helpfully suggested I get a GMail account to send the same message.

They are my registrar and they host my DNS, including my MX record. I have an SPF record...

I thought it was pretty farcical, and a sad statement of digital trust/authenticity.

Some of my clients are giving up and just going with the flow, I have had several conversions to Google/Microsoft cloud-hosted solutions for email...

10
dasil003 2 days ago 0 replies      
I'm very curious about that as well, as I am thinking of moving my personal domains away from Gmail, and it would really suck to start landing in spam simply on the basis of not being with a major mail provider.

I hope this story gets traction and someone on the Gmail team finds it and comments.

11
thrownaway2424 2 days ago 3 replies      
The author jumps to an unfounded conclusion, which is pretty irritating and will probably make everybody who could help him not want to help him.

That said, I've never seen a DSN like the one in the screenshot. It certainly is not generated by the Gmail spam-checking system, because Gmail does not bounce spam: it either rejects spam at SMTP DATA time or delivers it to spam folders.

12
krick 2 days ago 0 replies      
I'm still wondering how it is that I've never had as many problems with spam as I have with anti-spam systems, and goddamn Google in the first place. Actually, at this point it may well be only Google. And it's hard to ignore when more and more emails in your contact book have "gmail" in them. It makes it seem that something as simple and fundamental as e-mail now belongs to Google. Just ridiculous.
13
y0ghur7_xxx 1 day ago 1 reply      
I know that on HN "me too" comments are not well seen, but I just have to this time, because I am really frustrated by this as well (as said before: https://news.ycombinator.com/item?id=9812157).

This part sums up how I see it as well:

I can only think this is intentional on Google's part: they have a near monopoly; the vast majority of mail I send these days goes to Google, and if running their own mail server is too much of a hassle for a small company, then maybe they'd buy Google Apps. It's bad, anti-competitive behavior on Google's part. Shame on them if it's true. I don't know if it is, I can only guess, but they certainly have an incentive to make it difficult for the little guy.

I'm just a geek that likes running my own servers. My pleas to Google's impersonal forms fall on deaf ears, and I'm getting tired of telling everyone I e-mail to check their spam folders.

14
_cbdev 2 days ago 1 reply      
I've had mail sent via my own server rejected by Gmail because of a missing Message-ID header. The 550 reject message was the standard "Unsolicited Mail detected" text; the same mail was accepted without any fuss once the Message-ID was added.
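
For anyone hitting the same thing, a well-formed Message-ID looks something like the line below; the local part is arbitrary but should be globally unique, and the domain (a placeholder here) should be one you control:

  Message-ID: <20150710123456.GA2342@mail.example.com>
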
15
bitJericho 2 days ago 2 replies      
Seems like the author hasn't done enough troubleshooting and has jumped to a conclusion. My recommendation is to run through Google's troubleshooting steps on the issue:

https://support.google.com/mail/troubleshooter/2920052?hl=en

Also make sure a DMARC record is set up.

16
tedunangst 2 days ago 0 replies      
Spam filtering/tagging is more annoying than outright bans. Microsoft banned my IP from their email services, but helpfully sent bounces so I could appeal. This is the email equivalent of hellbanning.
17
sashk 1 day ago 0 replies      
I've had, and still have, a similar issue. I have my own SMTP server, which sends some alerts to my Google Apps account. I've set up my Google Apps account to whitelist all email arriving from my mail server, whitelisted the source domain, added SPF records, etc., but my mail is still being blocked. First, Google told me that's because I use IMAP and I should stop using it -- only webmail, or Exchange on iOS devices -- fine, still blocked. Then I was told that I can't attempt a connection to the second MX server if the first fails -- done, still have problems. Two or three months later, they confirmed an issue on their end, with a firewall blocking a whole /24 subnet in response to a DDoS, with no fix in sight.

Remember, I am a paying user, and I went through rounds of BS with their support.

I feel sorry for those who have to send email to Google Apps/Gmail as part of their business.

(I've posted this as a comment on original story, but decided to duplicate here, if someone will find this info helpful)

18
teekert 1 day ago 0 replies      
I have the same with my own server and Yahoo: after my first announcement of switching from Gmail to my own server, Yahoo sent me a message that I was blocked forever. I have SPF set up, but not DKIM. I also don't have a valid cert (I'd have to renew it every year). Gmail never gave me any problems, though.

Yahoo apparently has procedures to deal with this, but they are difficult to find. The annoying thing is that now I'm not even receiving any notification of the blocking anymore. The mail just disappears.

Btw, mail from my Drupal system also always ended up in spam. But I kept removing the spam tag, and now it just goes into regular mail.

19
EmployedRussian 23 hours ago 0 replies      
Google just posted http://gmailblog.blogspot.com/2015/07/the-mail-you-want-not-...

"but sometimes these wanted messages are mistakenly classified as spam. When this happens, you might have to wade through your spam folder to find that one important email (yuck!). We can help senders to do better, so today were launching the Gmail Postmaster Tools."

https://gmail.com/postmaster/

20
pbhjpbhj 1 day ago 0 replies      
I've had similar problems over the last year or two with Gmail and with Outlook/Live Mail.

Our 2-person, small-time company sends a few hundred mails a month at most. We reply to someone on Gmail and get spam-binned. They whitelist us, we reply again, and we get spam-binned. We send the mail via a major provider and it gets through.

On Outlook, a website mail form sending emails to a hotmail.co.uk address was getting blocked - the server has the same IP it's had for years, the form has been used for years, and the recipient has whitelisted the email address. I forget what eventually fixed it; I think it was the addition of a Reply-To address. Quite ridiculous.

In both cases they are long-term domains with real ID info that hasn't changed; the domains have been on the same IP held by the same ISP for at least 3 years, and have been owned by the same owner and used for the same businesses for at least 10 years. Both domains are long-term registered in (Google|Microsoft) analytics.

Yes, I can see that such domains could be purchased by spammers, and the prior owners may not change their ID info, and the new holders may be able to purchase space on the old server and so keep the IP address (despite the established ISP having strong anti-spam policies) and may then be able to send out spam emails - but who would whitelist those emails???

IMO, on either Outlook or Gmail, if you whitelist something, even spam from a known spammer, then they should let it through (sanitised if need be). If they wanted to, they could add a "99.999996% of others blocked this but you have whitelisted it; do you want to block emails from YourBestFriendWhoSendsSpam@theirISP.com in the future?" prompt.

/rant

21
jmount 2 days ago 1 reply      
Network effect. Google blocks non-GMail email, eventually you are forced to use them as your service provider. Can either be deliberate or happy unexpected outcome for Google.
22
trusche 1 day ago 0 replies      
I've had similar, unfathomable problems with Gmail delivery for a long time, until I added a PTR record for reverse IP lookup for the IPv6 version of my IP address. That did the trick, I haven't had delivery trouble since, even without DKIM (but with SPF).
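
For reference, IPv6 reverse DNS is written nibble-by-nibble under ip6.arpa; a sketch of such a PTR record for the made-up address 2001:db8::25 and a made-up hostname:

  5.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. PTR mail.example.com.
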
23
cnst 1 day ago 0 replies      
My 2c: I know it sucks when your mail doesn't get through, I've had the same issue with att.net, but gmail.com has never been a real problem for me.

I have a lot of mail that gets forwarded to gmail, including from cron that goes to my own mailbox at gmail. I sometimes have to unmark it as spam, but not too often. My IPv4 doesn't even have a custom rDNS -- only provider-specific one -- nor have I bothered to implement DKIM, although I do have SPF and am also registered for Webmaster Tools (although I somehow doubt that really matters).

24
z3t4 1 day ago 0 replies      
Some years ago I ran a mail server on a dynamic IP. I basically contacted Google and had them white-list me. And at the same time, some small providers totally refused to accept mail from me because I used a dynamic IP :P

Checklist before you attempt to get whitelisted/ban-lifted on Gmail: send bulk mail with a "Precedence: bulk" header. Use SPF and DKIM (optional). Always send from the same domain. Have your users opt in to receive mail. Have a public e-mail policy. Basically, don't require users to enter their e-mail address.
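
A hedged example of what such headers can look like on an outgoing bulk message (all addresses are placeholders):

  From: Example Notifier <notify@example.com>
  To: subscriber@example.org
  Subject: Weekly digest
  Precedence: bulk
  List-Unsubscribe: <mailto:unsubscribe@example.com>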

25
dools 1 day ago 0 replies      
I had a problem when I sent a message to about 20 associates using BCC. Google Apps for Business then just marked me as spam: I would get lots of rejected-email messages. The people I sent to were all people I knew who were attending an event. Just Gmail being a dick about BCC. So now I send email through SendGrid from within Gmail. Have you tried using an external SMTP relay? It's more likely your IP address than your domain that is flagged.
26
kazinator 1 day ago 0 replies      
> A few days ago, I attempted to e-mail a company regarding an online e-commerce order I had placed, from my personal address.

What? How does e-mailing a company end up with a bounce from ... Google Groups? (See attached screenshot of the bounce.)

Maybe the original outbound message had something funny in the To: or Cc: recipient lists.

27
fahadalie 1 day ago 0 replies      
Please make sure that your domain or email address is not blacklisted by Spamhaus. You can check the status here: https://www.spamhaus.org/

It is the blacklist removal center. I removed my email address from this list, and since then everything seems to be working perfectly.

28
mortenlarsen 1 day ago 0 replies      
I feel your pain. I have had similar problems over the last 15 years that I have been running my own mail server, mostly with Hotmail when they were the Gmail of their time (they would accept the mail and then silently drop it).

I would maybe change the SPF record to fail instead of soft-fail (~all to -all).
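
Concretely, that's a one-token change at the end of the TXT record; the domain and mechanisms here are illustrative:

  example.com.  TXT  "v=spf1 mx ~all"   ; soft fail
  example.com.  TXT  "v=spf1 mx -all"   ; hard fail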

I have a @gmail.com account that I use for testing whenever I change something.

29
loupgarou21 2 days ago 0 replies      
I never had an issue with Gmail blocking my mail server, but I was getting flagged as spam by AOL every six months or so because my server wasn't sending them enough mail... The company I work for resells an antispam solution with outbound filtering, so I just relay through that now.
30
Tepix 1 day ago 0 replies      
What network are you hosted at? Perhaps your netblock is notorious?
31
xedarius 1 day ago 0 replies      
Of course it's deliberate, it's part of Google's ongoing business model of ring fencing the internet.

People tell you not to run your own mail server as it's a nightmare to keep on top of all the security aspects, and yes, that's a thing. The greater problem is getting your mail delivered to the vast majority of people on gmail/hotmail/yahoo mail.

32
amelius 1 day ago 1 reply      
I always wonder how companies like Mailchimp stay out of Google's blacklists.
33
logicallee 2 days ago 1 reply      
>What can I do except move to a hosted provider ?

To answer your question about what you can do: you can send mail to Google legal asking them to accept your mail to gmail users - just copy your whole blog post; I'm sure they'll action it right away.

Antitrust is serious business. (They already have trouble in the EU for it - https://www.google.com/search?q=antitrust+eu+google - which, by the way, I think is completely unfounded.) It takes them seconds to whitelist you, and this really is a "15 seconds could save you $15 million on your next antitrust case" situation; anyone in Google legal can probably see that.

EDIT: I don't see how this got downvoted. This literally answers OP's question about what else he can do.

34
arihant 2 days ago 2 replies      
"The needs of the many outweigh the needs of the few." -- Spock.

I'd rather have a few geeky people like me lose the fun of self-hosting an e-mail server than waste millions of man-hours around the world on dealing with spam.

If you must hack, however, use something like Mailgun. It is hacky in a better way: you can program your incoming mail the way you want, rather than just install a mail server with a few commands.

DigitalOcean Raises $83M in Series B Funding digitalocean.com
323 points by beigeotter  2 days ago   180 comments top 19
1
dotBen 2 days ago 8 replies      
DO has really benefitted from Linode stalling over the past few years. I admire Linode for remaining a bootstrapped business, but it feels as though the owners lost their fighting spirit and energy... perhaps because the small pool of Linode owners feel they have made enough money already.

DO's announcement talks about a storage product, which is strategically important and, crucially, something Linode has sorely needed for a long time. And yet the biggest development in recent years at Linode has been a proprietary stats and monitoring system built as an upsell, which doesn't really do anything distinctive that Nagios or another package couldn't provide.

Instead, Linode is now switching its entire platform from Xen to KVM, a curious move which will create risk and cost them velocity that could have been spent on product development.

I have been a huge supporter of Linode over the years, and the startup I co-founded is one of their biggest customers, but at this point DO seems like the winning horse to back.

2
chrismarlow9 2 days ago 3 replies      
I used DigitalOcean for a while. My experience was bad reliability and random technical issues. I had various very experienced ops people verify that it wasn't an issue I had introduced into the running systems.

I went back to dedicated servers at a smallish provider and was reminded how nice it can be to not have all the cloud virtualization stuff get in the way. The way providers set things up is just too fragmented for me to use such a service without fear of lock-in. Does it take me 3 or 4 days to get new boxes? Yes. Is that causing me a massive headache? No, because I plan things and order them ahead of time.

Just my 2 cents, I know others who use DO and love it.

3
icpmacdo 2 days ago 2 replies      
I wonder if this pushes their valuation over $1B; if so, I think that means Techstars is the first accelerator outside of YC to produce a unicorn.

Personally, I hope so. DigitalOcean is a great product, and I think one of the really smart things they did was be generous with their free credits: it was a great way to get me onto their platform, and later on I dropped a fair amount into hosting with them.

4
buckbova 2 days ago 4 replies      
I've got two droplets now, one for email/owncloud and another for personal projects with automated backups. It's pretty easy to use, but I worry I don't have the sysadmin chops to keep it secure.

Edit: I followed tutorials on auto-updating packages through cron, securing ssh, and setting up ufw for only services needed when I set it up. It's been about 2 years now so maybe I shouldn't worry.

5
joeyspn 2 days ago 0 replies      
I've been with many VPS providers: KnownHost, RackSpace Cloud, OVH, Linode, etc., and DO has been a pleasure to work with because of all the integrations and tooling it has due to its growing popularity/community.

I think this is a great step in a transition from a "developers' cloud" to a "production cloud". I hope they continue in the same direction and soon offer multi-container blueprints as easy to deploy as their pre-built images.

My $0.02

6
NiftyFifty 2 days ago 0 replies      
My only question: are investment rounds the new form of private equity bubble fixing? How diversified are these investments, and how do the internal operations of a company change to turn the revenue influx into ROI? I never really got the gist of this, nor how culture DOES change with these rounds. The pitches must be damn near money-printing stuff, made of Magic Mike XXL and pixie dust, to stick.
7
vruiz 2 days ago 1 reply      
> The $83 million is going directly into growing our team and expanding our product offerings with networking and storage features.

Great to hear. Real private networking, object/shared storage, and most importantly HA (IP failover/load balancing) are all DO is missing to start really competing with AWS for "big business".

8
3pt14159 2 days ago 0 replies      
> expanding our product offerings with networking and storage features

I'm so excited for this. I'd previously commented about how the lack of non-SSD storage meant I had to screw around with S3 when I really just wanted to keep everything on DO.

Great company. Been with them for two years now, and couldn't be happier. Combined with Cloud66 I worry less about deployments and servers and backups, and more about just getting the code out.

9
alberth 2 days ago 9 replies      
Has anyone used Vultr.com?

I ask because they have all the same features as DO + way more (e.g. dedicated hosting w/ same great panel, BYO ISO, etc).

10
usaphp 2 days ago 0 replies      
I used RackSpace for a couple of years before moving to DigitalOcean. I had a good experience with Rackspace when I started, but my bills kept growing and the server started having issues every now and then, so I decided to move to DigitalOcean a couple of years ago. My traffic has since grown quite a lot, from 100K/month to around 1 million visitors/month, yet my bills from DigitalOcean are still not much higher than they were in the later stages on RackSpace, and performance is much better for me with DigitalOcean.

The only thing I don't like about DigitalOcean droplets is the requirement to shut down the server before resizing; Rackspace allowed me to do it without shutting the server off.

11
Killswitch 2 days ago 0 replies      
Great work Ben and team! I've been a customer for 2 years now, absolutely love the service and see no reason to leave it.
12
ape4 2 days ago 0 replies      
I'd like a hard-drive option. Getting a large amount of storage is way too expensive.
13
lsc 2 days ago 2 replies      
hm. Interesting. From what I know of the industry, their size and pricing, I would have thought they would be profitable enough that raising this sort of money wouldn't be particularly interesting.

Does this mean that they are operating at a loss?

14
r0naa 2 days ago 1 reply      
I love DO, they are doing great work. The only thing I regret is the relatively poor choice of platforms they support.

For example, there has been really big demand for NixOS for two years now, but still no announcement whatsoever.

15
curiousjorge 2 days ago 1 reply      
It's pretty amazing what they've managed to do: in what was essentially an oversaturated market, they've managed to pull ahead of incumbents like Linode.
16
bpg_92 2 days ago 0 replies      
Well, to be sincere, DO is implementing features most people actually care about. It works like a charm, but it still has a long way to go :)
17
ablation 2 days ago 0 replies      
I'm pleased for DO. Seems like a decent company doing things well. I've never had a complaint with their services.
18
arca_vorago 2 days ago 1 reply      
Slightly offtopic, but I am curious if anyone has any insights into the legal side of hosting profit seeking services on top of VPS's in general. Is the boilerplate contract(s)/eula/tos good enough generally or do you seek to actually make changes to a custom one?

What about hosting websites vs reselling access for some other purpose (eg. similar to game hosting services that allow full customer control of the instance?)

It seems to me like there is a lot of room for a tool that can spin up an instance over multiple VPS providers, because sometimes one will have a colo close to where you want and sometimes another will.

Is anyone aware of comprehensive location-based benchmarking of all the VPS providers?

19
gshakir 2 days ago 4 replies      
If all goes well, looks like they might be competing directly with AWS soon.
OpenSSL Security Advisory openssl.org
281 points by runesoerensen  1 day ago   136 comments top 27
1
pilif 1 day ago 6 replies      
> OpenSSL will attempt to find an alternative certificate chain if the first attempt to build such a chain fails

I think the latest big thing I've learned in my career is that trying to fix broken input data silently is always bad. Fixing stuff silently isn't helpful for the callers, it's very difficult to do and it produces additional code which also isn't running in the normal case, so it's much more likely to be broken.

Additionally, your callers will start to depend on your behaviour and suddenly you have what amounts to two separate implementations in your code.

I learned that while blowing up (though don't call exit if you're a library. Please.) is initially annoying for callers, in the end, it will be better for you and your callers because code will be testable, correct and more secure (because there's less of it)
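
A minimal Python sketch of the contrast (hypothetical names, nothing here is from OpenSSL):

    # Silently "fixing" broken input: callers can't tell anything went wrong,
    # and they soon start depending on the repaired behaviour.
    def parse_port_lenient(value):
        try:
            return int(value)
        except ValueError:
            return 80  # quietly fall back to a default

    # Failing fast: initially annoying, but testable and unambiguous.
    def parse_port_strict(value):
        try:
            return int(value)
        except ValueError:
            raise ValueError("invalid port: %r" % value)

    print(parse_port_lenient("8o80"))  # prints 80, the bad input is hidden
    # parse_port_strict("8o80") would raise immediately, surfacing the bug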

2
judemelancon 1 day ago 3 replies      
I am hardly astonished that a 319-line function that opens by declaring x, xtmp, xtmp2, chain_ss, bad_chain, param, depth, i, ok, num, j, retry, cb, and sktmp variables had a bug.

Before someone provides the standard "submit a patch" retort, I'll note that the variable naming is in full compliance with https://www.openssl.org/about/codingstyle.txt even if the function length isn't. A quick sample of other files suggests the function length matches actual practice elsewhere, too.

3
jgrahamc 1 day ago 3 replies      
The interesting part is that the bug was introduced in the latest versions and has been fixed by the person who inserted it :-)

Bug added: https://github.com/openssl/openssl/commit/da084a5ec6cebd67ae...

Bug removed: https://github.com/openssl/openssl/commit/2aacec8f4a5ba1b365...

Although that's just the committer: https://twitter.com/agl__/status/619129579580469248

4
acqq 1 day ago 1 reply      
We probably don't need to worry this time:

https://ma.ttias.be/openssl-cve-2015-1793-man-middle-attack/

"The vulnerability appears to exist only in OpenSSL releases that happened in June 2015 and later. That leaves a lot of Linux distributions relatively safe, since they haven't gotten an OpenSSL update in a while.

Red Hat, CentOS and Ubuntu appear to be entirely unaffected by this vulnerability, since they had no OpenSSL updates since June 2015."

5
mykhal 1 day ago 1 reply      
from test/verify_extra_test.c:

  Test for CVE-2015-1793 (Alternate Chains Certificate Forgery)

  Chain is as follows:

  rootCA (self-signed)
    |
  interCA
    |
  subinterCA       subinterCA (self-signed)
    |                   |
  leaf ------------------
    |
  bad

  rootCA, interCA, subinterCA, subinterCA (ss) all have CA=TRUE
  leaf and bad have CA=FALSE
  subinterCA and subinterCA (ss) have the same subject name and keys

  interCA (but not rootCA) and subinterCA (ss) are in the trusted store (roots.pem)
  leaf and subinterCA are in the untrusted list (untrusted.pem)
  bad is the certificate being verified (bad.pem)

  Versions vulnerable to CVE-2015-1793 will fail to detect that leaf has CA=FALSE, and will therefore incorrectly verify bad

6
d_theorist 1 day ago 1 reply      
So, updating server side OpenSSL will not close this vulnerability (for servers offering https-protected websites)? Is that correct?

If I understand the advisory correctly then this means that somebody could set up a webserver with a specially-crafted certificate and pretend to be somebody else, assuming that the client is running a vulnerable version of OpenSSL.

Is that right? I wish they would write these advisories in a slightly more helpful fashion.

7
jarofgreen 1 day ago 0 replies      
In case it's slow:

OpenSSL Security Advisory [9 Jul 2015]

=======================================

Alternative chains certificate forgery (CVE-2015-1793)

======================================================

Severity: High

During certificate verification, OpenSSL (starting from version 1.0.1n and 1.0.2b) will attempt to find an alternative certificate chain if the first attempt to build such a chain fails. An error in the implementation of this logic can mean that an attacker could cause certain checks on untrusted certificates to be bypassed, such as the CA flag, enabling them to use a valid leaf certificate to act as a CA and "issue" an invalid certificate.

This issue will impact any application that verifies certificates including SSL/TLS/DTLS clients and SSL/TLS/DTLS servers using client authentication.

This issue affects OpenSSL versions 1.0.2c, 1.0.2b, 1.0.1n and 1.0.1o.

OpenSSL 1.0.2b/1.0.2c users should upgrade to 1.0.2d
OpenSSL 1.0.1n/1.0.1o users should upgrade to 1.0.1p

This issue was reported to OpenSSL on 24th June 2015 by Adam Langley/David Benjamin (Google/BoringSSL). The fix was developed by the BoringSSL project.

Note

====

As per our previous announcements and our Release Strategy (https://www.openssl.org/about/releasestrat.html), support for OpenSSL versions 1.0.0 and 0.9.8 will cease on 31st December 2015. No security updates for these releases will be provided after that date. Users of these releases are advised to upgrade.

References

==========

URL for this Security Advisory: https://www.openssl.org/news/secadv_20150709.txt

Note: the online version of the advisory may be updated with additional details over time.

For details of OpenSSL severity classifications please see: https://www.openssl.org/about/secpolicy.html

8
coolowencool 1 day ago 1 reply      
"No Red Hat products are affected by this flaw (CVE-2015-1793), so no actions need to be performed to fix or mitigate this issue in any way." https://access.redhat.com/solutions/1523323
9
aninteger 1 day ago 5 replies      
Why has the adoption of alternative SSL software been so low? We have LibreSSL, BoringSSL, something from Amazon? Very few Linux distributions seem interested in shipping alternative SSL software.
10
Mojah 1 day ago 1 reply      
11
0x0 1 day ago 1 reply      
Debian stable/oldstable is not affected. Only in unstable: https://security-tracker.debian.org/tracker/CVE-2015-1793
13
mykhal 1 day ago 0 replies      
Changes between 1.0.2c and 1.0.2d [9 Jul 2015]

  *) Alternate chains certificate forgery

     During certificate verification, OpenSSL will attempt to find an
     alternative certificate chain if the first attempt to build such a
     chain fails. An error in the implementation of this logic can mean
     that an attacker could cause certain checks on untrusted
     certificates to be bypassed, such as the CA flag, enabling them to
     use a valid leaf certificate to act as a CA and "issue" an invalid
     certificate.

     This issue was reported to OpenSSL by Adam Langley/David Benjamin
     (Google/BoringSSL).
     [Matt Caswell]

14
benmmurphy 1 day ago 1 reply      
An interesting coincidence is that I noticed what I thought was (and maybe is) a similar bug in the Elixir hex module on the same day that this bug report was submitted to OpenSSL. If you look at the hex partial chain method (https://github.com/hexpm/hex/blob/master/lib/hex/api.ex#L59-...) you can see it goes through all the certificates the other party supplied, starting from the first one, and tries to find one that is signed by a certificate in the trust store. It then explicitly returns it as the trusted_ca, which effectively means the certificate has the CA bit set on it.

In order to exploit the attack in hex you need to find a CA that will directly issue certificates off of a certificate in a trust store. Apparently, this is not the recommended policy for CAs. So I made this tweet (https://twitter.com/benmmurphy/status/613733887211139072):

'does anyone know a CA that signs directly from their root certs or has intermediate certs in trust stores? asking for a friend.'

And apparently there are some CAs that will do this. In the case of hex I think the chain you need to create looks something like this:

  RANDOM CERT SIGNED BY ISSUER NOT IN TRUST STORE
    |
    V
  VALID_CERT_SIGNED_BY_CERT_IN_TRUST_STORE (effectively treated as CA bit set)
    |
    V
  EVIL CERTIFICATE SIGNED BY PREVIOUS CERT
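
A minimal Python sketch of that flawed logic (hypothetical names and a stand-in signature check, not the actual hex code):

    def is_signed_by(cert, issuer):
        # stand-in for a real cryptographic signature check
        return cert["issuer"] == issuer["subject"]

    def find_partial_chain_root(supplied_certs, trust_store):
        for cert in supplied_certs:
            if any(is_signed_by(cert, t) for t in trust_store):
                # BUG: returned as a trusted CA without checking CA=TRUE
                return cert
        return None

    trust_store = [{"subject": "root", "issuer": "root", "ca": True}]
    supplied = [{"subject": "leaf", "issuer": "root", "ca": False},
                {"subject": "evil", "issuer": "leaf", "ca": False}]
    # The valid leaf is picked up and effectively granted the CA bit,
    # so a certificate chained off it would verify:
    print(find_partial_chain_root(supplied, trust_store))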

15
runesoerensen 1 day ago 1 reply      
It's worth noting that only releases since June 2015 are affected
16
kfreds 1 day ago 2 replies      
The latest version (2.3.7) of the official OpenVPN client is vulnerable, as is Tunnelblick for OSX. No fix has been published yet. The OpenVPN clients for Android and iOS are not affected.

See https://mullvad.net/en/v2/news for more details.

17
0x0 1 day ago 0 replies      
This sounds really similar to the old IE bug that didn't check the CA flag - http://www.thoughtcrime.org/ie-ssl-chain.txt
18
mugsie 1 day ago 0 replies      
Seems to be OK for anyone using non-beta versions of Ubuntu as well:

http://people.canonical.com/~ubuntu-security/cve/2015/CVE-20...

19
eatonphil 1 day ago 2 replies      
I've got a few sites using OpenSSL certs; do I need to do anything?
20
mrmondo 1 day ago 0 replies      
Good work on finding and fixing the bug to those involved. I don't think this is said often enough.
21
ericfrederich 1 day ago 0 replies      
At first I thought this was the result of that Hacking Team dump, but it seems this was reported prior to that.
22
Sharker 1 day ago 2 replies      
It's only some versions. Current Debian, for example, is not affected. https://security-tracker.debian.org/tracker/CVE-2015-1793
23
arenaninja 1 day ago 0 replies      
Well this isn't how I wanted to start my morning
24
ck2 1 day ago 1 reply      
Nothing I can find in yum for CentOS 6 or 7
25
api 1 day ago 0 replies      
A lot of these SSL vulnerabilities show that complexity is an inherently bad thing for security. In general, bugs in a system are exponentially, not linearly, proportional to system complexity. With security, that means that the addition of a feature, option, or extension to the security layer of a system exponentially decreases its trustworthiness.
26
tomjen3 1 day ago 2 replies      
How is it that we still depend on something so broken?
27
sneak 1 day ago 1 reply      
agl++;
Mapping the U.S. By Property Value Instead of Land Area citylab.com
266 points by Thevet  2 days ago   163 comments top 31
1
jbattle 2 days ago 13 replies      
Isn't this just the same as population density? I think there's an XKCD about this ...

https://xkcd.com/1138/

2
davidw 2 days ago 7 replies      
> The demand to live in these places is soaring, but the desire among incumbents to accommodate newcomers is low

Sums up why we ended up in Bend, Oregon rather than Boulder, Colorado. In the latter, there is a small but significant group of people whose idea is that the area needs fewer jobs, not smarter housing.

Edit: http://journal.dedasys.com/2015/06/18/boulder-colorado-vs-be... - more about our choice, for the curious.

3
kpennell 2 days ago 0 replies      
I recommend following Kim-Mai Cutler's Twitter if you want to find interesting articles/takes on SF's housing crisis.

https://twitter.com/kimmaicutler

I especially liked this Vox piece on what actually happens in the process of trying to build more housing in SF:

http://www.vox.com/2015/6/15/8782235/san-francisco-housing-c...

4
personjerry 2 days ago 1 reply      
How can the random large splotches be translated into usefulness or meaning? The gif seems to imply that the areas are resized based on their value, that is SUPER not helpful. And the bucket $40b - $1tr?! Almost everything falls in that bucket! I don't think this map is of great value.
5
aflyax 2 days ago 0 replies      
The fact that people don't all prefer everywhere equally is a "troubling inequality"? Property value is just a reflection of demand vs. supply. People want to live in some areas of the country more than in others, but, since the area is limited, the increased desire drives prices up.

Is it really so shocking that more people would rather live in San Francisco than Alabama?

6
iskander 2 days ago 1 reply      
>Folks who cant afford to live in those places dont get to take advantage of those labor markets. The demand to live in these places is soaring, but the desire among incumbents to accommodate newcomers is low. Hence NIMBYism, high housing costs, severe inequalitythe whole shebang.

NYC has had a massive residential construction boom (see Williamsburg, downtown Brooklyn, Long Island City, &c). Almost all of the housing that goes up is luxury and seems to do very little to bring down the city's extreme housing costs. Maybe severe inequality is driven by factors other than just NIMBYism? The new condos seem to attract wealthy outsiders.

7
tuckermi 2 days ago 0 replies      
From the standpoint of at least one "user", I would have gotten a lot more value out of the animation if I could control it (e.g. with a slider). The pulsing back and forth makes it more difficult for me to pinpoint something of interest (e.g. a less expensive city like Detroit) and then track it.
8
rmxt 2 days ago 2 replies      
Going down into the rabbit hole, here is a JSON file [1] containing the county level data from the Economist. I think that this is the source of the data used by the author.

[1] http://infographics.economist.com/2015/ASBTest/Land/js/count...

9
rwhitman 2 days ago 1 reply      
The conclusions from the article seemed a bit rushed. NYC and the Bay Area are pretty different when it comes to NIMBY policies and commutes from low income areas to high income.
10
jschulenklopper 2 days ago 0 replies      
A well-known prior art of this idea -- and perhaps not even the first -- is Worldmapper at http://www.worldmapper.org/. "The world as you have never seen before" contains striking world maps with their areas proportional to measures like population (even the population in AD 1), income, aircraft flights, toy exports, nuclear weapons, languages, people killed by floods and 600 more.
11
trhway 2 days ago 2 replies      
I kind of understand why people here are frequently against high density: the way it is done in the US ends up with pretty unlivable spaces of towering boxes surrounded by concrete and asphalt (which is just the obvious result of profit maximization while obeying height limits, etc.). I think having a 200-story tower surrounded by a park would be better than a bunch of 20-30 story mid-towers sticking out of concrete/asphalt space.
12
lubesGordi 2 days ago 5 replies      
> "The stubborn unwillingness of incumbent homeowners in highly productive placesnamely San Francisco and New York City, which are barely visible on the land-area map, but dominate the housing value mapis a huge drain on the nations economy." Are they trying to say that if people didn't have to spend as much on housing then there would be greater GDP?
13
galfarragem 8 hours ago 0 replies      
Supply and demand explain most phenomena.
14
shasta 2 days ago 0 replies      
I'm guessing you generate these by fixing three points along the boundary and then finding the conformal map with the prescribed scale ratio at each point. Cool tech. If only this were a useful way to present the information. Color contours work much better.
15
herdrick 2 days ago 0 replies      
This appears to be housing value, not property value. So this is leaving off commercial property, which is probably usually proportionate to housing value, and agriculture land, which isn't.
16
tripzilch 1 day ago 0 replies      
While it looks extremely cool, is this really the best way to visualise this data?

Every method of visualisation has its strengths and pitfalls. One of the pitfalls of this method is that it always looks rather dramatic, regardless of the data. Changing the shape of well-known things gives an uneasy feeling, regardless of what you map.

Only data that is perfectly equal will not result in arbitrary distortions. The amount of distortion, magnitude of the local scale factor, is (or should be) a parameter of the visualisation, just like the decision of using a fiery red-yellow colour gradient.

Linked source has a bit more info on what exactly they did, which is simply substituting area for value in dollars. That only makes sense if the data somewhat follows a normal distribution. And I'm going to guess here: property value does not, at all. It's not even bounded. I'd have picked log value, because an exponential distribution for the value is a much more reasonable assumption.

In case of a visualisation like this, I might actually decide to do something that is generally frowned upon: change the "origin" of the data. That is, add some constant value to the scale factors, to smooth out the severity of the distortions a little. If I were mapping the log value that wouldn't be necessary since it'd be equivalent to scaling dollar values to $1000 or $1M, etc.
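
A made-up Python illustration of how much gentler log values are on the scale factors:

    import math

    # hypothetical property values spanning several orders of magnitude
    values = [2e8, 9e8, 4e9, 8e10, 1e12]

    raw = [v / min(values) for v in values]
    logged = [math.log10(v) / math.log10(min(values)) for v in values]

    print(raw)     # [1.0, 4.5, 20.0, 400.0, 5000.0]  (extreme distortion)
    print(logged)  # roughly [1.0, 1.08, 1.16, 1.31, 1.45]  (much gentler)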

I'm trying to remember other examples where data was mapped to local scale in a non-shape preserving way.

The only thing I can come up with was a sort of homunculus visualisation (I forget if it was just a drawing or actually made into a 3d clay statuette). It scaled our body parts roughly proportional to the volume of our brain dedicated to it. So you'd get a giant head with huge bulging eyes, etc. It looked weird, funny, still somewhat human/cartoonish. It showed things as "this is MUCH bigger than that" or "huh I didn't realise my tongue was that important". It wasn't a very clear visualisation, but I'm also hard pressed to come up with a better way to do it.

In other words, this type of visualisation helps to show the data in a mostly qualitative way, not quantitative. And like the homunculus example, the data doesn't need to be super exact (we can't estimate relative area/volume of irregular shapes very well).

But it looks cool.

17
nickhalfasleep 2 days ago 0 replies      
It would be interesting to see areas cross referenced by job creation and relative affordability over time. Perhaps there are counties whose plans did provide growth without economic isolation?
18
amenghra 2 days ago 2 replies      
I wonder how things will change once self driving cars become the norm.
19
SilasX 2 days ago 0 replies      
Awesome idea! If they could make it less ugly, I would prefer standardizing on this as a way to plot geographic data in certain cases.

Graphing by land area often means spending huge chunks of the map where nothing (relevant to a particular purpose) happens, and cramming all the interesting stuff into a few places on the coasts.

(Note all the hedges and caveats; I don't want to trivialize anyone's home here, but we definitely see this effect a lot.)

20
angersock 2 days ago 4 replies      
It's kind of delightful watching NYC and SF in the gif inflate like gigantic pustules, cysts of real estate.
21
transfire 2 days ago 0 replies      
This situation is exacerbated by government cost of living increases which take location into account. (i.e. New Yorkers get bigger raises)

It is also a consequence of the lack of a quality passenger rail system.

22
bobbles 2 days ago 0 replies      
I'd love to see this for Australia, so much of the population is in like 5 cities it would just look like one of those plastic ball molecule model things
23
lordnacho 2 days ago 1 reply      
How does the transformation work? It must be something that keeps the same borders regardless of what numbers you put in.
24
ggchappell 2 days ago 0 replies      
Interesting.

BTW, there is a glitch in the animation. One city -- Lincoln, Nebraska, I think -- does not expand smoothly.

25
rufugee 2 days ago 1 reply      
Where would one find the source data for property value analysis like this?
26
birk5437 2 days ago 0 replies      
It'd be nice to see the values adjusted for household income.
27
petercooper 2 days ago 0 replies      
So the US mapped by property value looks like China.. :-)
28
sgnelson 2 days ago 1 reply      
I don't think I've ever seen a worse cartogram. At least they did an animation to make it easier to understand, but a regular map with a simple choropleth would be a thousand times better.
29
lubesGordi 2 days ago 1 reply      
Why should land values be homogeneous?
30
vegabook 2 days ago 1 reply      
Talk about big city distortion.

I can't help thinking this trend is at its zenith. Where economic growth is faltering, we're seeing de-urbanization, and I would be long the yellow areas and short the red, because if there is any upset to the JIT way our cities operate (London, for example, is said to have a mere 4 days' worth of food in stock), for reasons of climate change or political upheaval or something else (no more opportunity in overcrowded cities?), the rural areas on which we still enormously depend for food and water may suddenly revalue upwards.

31
michaelochurch 2 days ago 0 replies      
s/value/price/g
The military tested bacterial weapons in San Francisco businessinsider.com
259 points by tomkwok  22 hours ago   127 comments top 20
1
alexggordon 10 hours ago 1 reply      
I grew up in St. Louis and I found out about chemical testing [0] that went on here a while ago.

I think the scariest part was that I'm sure it was purposely tested in a low-income area, because the people there could probably do less about it. A relevant bit from the article:

> Spates, now 57 and retired, was born in 1955, delivered inside her family's apartment on the top floor of the since-demolished Pruitt-Igoe housing development in north St. Louis. Her family didn't know that on the roof, the Army was intentionally spewing hundreds of pounds of zinc cadmium sulfide into the air.

> Three months after her birth, her father died. Four of her 11 siblings succumbed to cancer at relatively young ages.

The fact that there probably are still people in the US government making similar decisions makes me a lot more nervous about life.

[0] http://www.businessinsider.com/army-sprayed-st-louis-with-to...

2
abhinai 17 hours ago 7 replies      
"... the court held that the government was immune to a lawsuit for negligence and that they were justified in conducting tests without subjects' knowledge.".

I sincerely hope there was some rationale to this decision that I do not know or understand. Otherwise it would be very hard for me to have any faith left in our judicial system.

3
bayesianhorse 18 hours ago 7 replies      
In their defense, they seem to have thought that they were using harmless bacteria. The background seems to be that they wanted to test the distribution of the bacteria rather than the effects on humans (which they thought to be zero anyway).

Add to that the fact that bacterial infections often hitchhike on other injuries, infections or conditions, it's not clear that the death was entirely the fault of the particular bacterium.

In any case, it was a violation of the Nuremberg code, and I'd hope no democratic state would do such a thing in the current time.

4
iamben 17 hours ago 1 reply      
The UK did the same between 1940 and 1979 in various tests with zinc cadmium sulphide, e.coli and bacillus globigii. I suspect these kind of things are largely the basis for the Chemtrails conspiracy theories.

More: http://www.theguardian.com/politics/2002/apr/21/uk.medicalsc...

5
akie 17 hours ago 1 reply      
If you get a redirect loop on this article, like I did, click here to access Google's cached version: http://webcache.googleusercontent.com/search?q=cache:9c7coIh...
6
onewaystreet 17 hours ago 2 replies      
The US government did all sorts of crazy tests on people in the 50s: https://en.wikipedia.org/wiki/Project_MKUltra
7
golergka 14 hours ago 3 replies      
They did indeed break the code, in particular provisions 1 and 9 (it is unclear whether they implemented facilities for point 7 or not). However, they reasonably believed that the experiment was harmless (and they were mostly right, given the experiment's scale), and the data they collected would be very usable for saving civilian lives in case of a bacterial attack. So I'm not saying that it's OK, but it's unreasonable to paint this as something completely evil and vicious.
8
fapjacks 8 hours ago 0 replies      
The important takeaway from all of this stuff is that no, the government did not suddenly stop doing all kinds of secret mass-experimentation on civilian populations. If you look at the history of these kinds of experiments and believe anything the government says, you would think they just stopped doing things like MK-ULTRA and Project SHAD thirty or forty years ago. Well, folks... They didn't. Fifty years from now on whatever the new HN is, there will be a post about some kind of massive psychological (or biological, or chemical, or whatever) experimentation the US government conducted fifty years from then on the unknowing civilian population of San Diego, or Portland, or whatever. Mark my words...
9
effdee 18 hours ago 1 reply      
They did the same in NY City and Washington DC [0].

[0] https://en.wikipedia.org/wiki/Project_112

10
igonvalue 14 hours ago 1 reply      
> This is a crazy story, one that seems like it must be a conspiracy theory.

Well, it is a conspiracy theory, isn't it? Just because it's true doesn't make it not a conspiracy theory.

11
Houshalter 16 hours ago 1 reply      
I'm just asking because I am curious. If someone's immune system is so weak they can die of a common "harmless" bacteria, would they probably have died of something else anyway? We are surrounded by and filled with millions of bacteria at all times.
12
brycemckinlay 13 hours ago 0 replies      
The bacteria in the air are what's responsible for the unique, delicious taste of San Francisco sourdough.
13
CognitiveLens 4 hours ago 0 replies      
can we change the title to be a little less click-baity? even just adding "in 1950" would make it less like a tabloid cover article.
14
staunch 17 hours ago 2 replies      
The people of San Francisco had their personalities permanently altered through the Navy's inadvertent distribution of peace juice? The U.S. military created the hippie movement and ended the Vietnam war. Impressive.

What other great wrongs can be righted through careful mass infection of populations?

15
elchief 12 hours ago 1 reply      
So, do the astroturfers get a heads-up when articles like this appear so they can prepare?
16
contingencies 14 hours ago 0 replies      
That's not all: they raised a Nazi flag in SF city hall, too! See John Gutmann's photograph The News Photographer (1935), where he takes the mickey out of the subject photojournalist for missing the real story. https://www.pinterest.com/pin/465559680200258166/

Note that I've read the reasoning behind the event was apparently the visit of a German navy crew and 'protocol' (the US and Germany were not yet at war) rather than the weak line discussed at that URL. Still, a little known fact.

17
dmritard96 15 hours ago 0 replies      
that explains it...
18
briandear 18 hours ago 2 replies      
We're going to test the effect of nuclear weapons on San Francisco so we can understand the effects of nuclear weapons on San Francisco, so we can protect San Francisco from a future attack by nuclear weapons, or so we can understand the effect of the nuclear blast on Vladivostok. Sounds rather like the Tuskegee Syphilis Experiment. To be fair though, the United States is far from the only country that has done this sort of thing.
19
notNow 17 hours ago 5 replies      
Does this have anything to do with the measles outbreak and the ensuing vaccination/anti-vaccination debate in California?
20
crimsonalucard 12 hours ago 0 replies      
Back in 1996, some terrorists almost launched VX gas-armed M55 rockets from Alcatraz into the middle of San Francisco. Fortunately, chemical weapons specialist Doctor Stanley Goodspeed saved the day.
Pirate Bay Founders Acquitted in Criminal Copyright Case torrentfreak.com
240 points by adamnemecek  12 hours ago   7 comments top 5
1
bo1024 10 hours ago 1 reply      
So, no ruling about the legality (in Belgium) of the website itself. It's just that the case concerns 2011-2013 and there's no evidence the founders were involved with the site after 2006.
2
mrebus 8 hours ago 0 replies      
Does anyone know more about this case? Did the state bring any evidence that they were involved in the site? Did this case get thrown out before trial? TorrentFreak is a little biased, as they don't think torrent hosting sites are wrong.
3
jszymborski 5 hours ago 0 replies      
Does this change anything about Gottfrid rotting away in solitary?
4
UserRights 4 hours ago 0 replies      
Meanwhile, Facebook goes big as a video-stealing platform. Zuck must go to prison, too!
5
runn1ng 10 hours ago 1 reply      
Another win for tax-haven offshore companies!
IBM says it has made working versions of 7nm chips nytimes.com
237 points by mattee  1 day ago   120 comments top 16
1
awalton 1 day ago 4 replies      
Let's all realize that all of these research branches have been playing around with 10nm and 7nm chips for years now - the fact IBM cobbled together some working chip isn't surprising. Getting it to production is really the vastly more important part.

This press release is equivalent to "Scientists Cure Diabetes in Mice" - a breakthrough that happens about a half dozen times a year but has still yet to make it from the lab to the FDA.

The timing of this press release is entirely to boost investor confidence in IBM and GlobalFoundries given Intel's recent announcement of delays at the 10nm process node.

edit:

The Ars article is vastly better than the above link: http://arstechnica.co.uk/gadgets/2015/07/ibm-unveils-industr...

2
RoboTeddy 1 day ago 3 replies      
Great talk that describes how modern (as of 2011) computer chips are manufactured: https://www.youtube.com/watch?v=NGFhc8R_uO4
3
jtchang 1 day ago 4 replies      
No mention of Intel anywhere in the article and how far along they are. Also 7nm blows my mind. I mean current CPUs already blow my mind with how tiny the transistors are getting.

And specially stabilized buildings? "NOBODY MOVE! WE'RE ETCHING!"

4
leni536 1 day ago 1 reply      
The lattice spacing of silicon is ~0.54nm, so 7nm is around 13 lattice spacings. It's really impressive. Slowly but surely we will hit atomic limits.
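
A quick check of that arithmetic in Python, taking 0.543 nm as the commonly cited silicon lattice constant:

    lattice_nm = 0.543  # silicon lattice constant, ~0.54 nm as stated above
    feature_nm = 7.0
    print(feature_nm / lattice_nm)  # ~12.9, i.e. roughly 13 lattice spacings
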
5
BinaryIdiot 1 day ago 4 replies      
Wow, 7 nanometer is incredible! I wonder how small they can get silicon / silicon-germanium based chips before we have to resort to other techniques such as light processors (since light can be closer and even cross each other without issue). 10 nanometers that they're introducing next year is also incredible, at least to me since I'm not a hardware engineer and can't imagine how difficult manufacturing these are.
6
JoachimS 1 day ago 0 replies      
Very interesting. It's good to see that the article points out that going from working transistors to a commercially viable industrial process is also a big challenge. There are a lot of technologies and industry players that need to solve big problems before the node can start to deliver. But that is what the ITRS is for.

Also, it's interesting to see how things like e-beam lithography are pushed once again at least a node into the future. We (as in they) are still able to tune and optimize on the same infrastructure.

7
icanhackit 1 day ago 2 replies      
Time to start working on the 7km chip. Fibre everywhere, content delivery servers everywhere, game servers out the wazoo so my crappy media streaming gadget or VR headset can remotely pull in the latest movies and games in 4K with minimal lag. You could outfit a few of the world's major cities for the cost of a new fab.

Unfortunately this won't sell new consumer hardware on a regular basis.

8
nicholas73 1 day ago 0 replies      
The press articles about this are generally misleading in that they use Silicon-Germanium as the catch phrase that represents the breakthrough, whereas in fact SiGe processes have been available for at least a decade. I know this because I developed chips for an IBM SiGe process a decade ago, and in college I did a research paper on semiconductor "superlattices" using an old textbook from our school library. It's not a new technology by any means.

IBM's 7nm is a great accomplishment for sure, but we really don't know anything about how it was made from the articles. Essentially SiGe is a bit more conductive and can switch faster than normal Si chips, thanks to quantum tunneling.

9
LoSboccacc 1 day ago 1 reply      
Still in research phase, from a company known for having 10% yields last time they innovated in the processor space

Also, hasn't IBM just sold its division to GlobalFoundries? So are they double-dipping as usual by licensing them new tech separately?

10
rurban 1 day ago 2 replies      
YES! Kill Intel, PPC64 everywhere :)

It will not happen, I know, but "Wouldn't It Be Nice" was always one of my favorite Beach Boys songs (Pet Sounds!) https://www.youtube.com/watch?v=ofByti7A4uM

This is THE chance.

11
zxexz 1 day ago 0 replies      
I know nothing about silicon fab, but I can't help but wonder how they mitigate the effects of quantum tunneling at such a small scale?
12
nickpsecurity 1 day ago 0 replies      
It's neat but will only benefit the largest companies with the most elite developers. I've learned a lot about hardware development in past year for purposes of imagining clean-slate, subversion-resistant chips. The work it takes to get the 90nm and below chips working, especially inexpensively, is pretty mind boggling with many aspects still dark arts shrouded in trade secrets. Many firms stay at 130-180nm levels with quite a few still selling tech closer to a micron than a 28nm chip. Tools to overcome these challenges cost over a million a seat.

So, seeing another process shrink doesn't excite me given we haven't tapped the potential of what we already have. Lots of technologies help: EDA; FPGA's: S-ASIC's; multi-project wafers; ASIC-proven I.P. And so on. Yet, even 350nm still isn't very accessible to most companies wanting to make a chip because the tools, I.P., and expertise are too expensive (or scarce sometimes). Yet, the benefits are real in so many use-cases (esp security). I'd like to see more companies dramatically bringing the costs down and eliminating other barriers to entry with affordable prices.

Example of the problems and what kind of work we're looking at: http://eejournal.com/archives/articles/20110104-elephant/

Example direction to go in: http://fpgacomputing.blogspot.com/2008/08/megahard-corp-open...

I think the best model, though, is to do what the EDA vendors did: invest money into smart people, including in academia, to solve the NP-hard problems of each tool phase with incremental improvements over time. I'm thinking a non-profit with continuous funding by the likes of Google, Facebook, Microsoft, Wall St firms, etc. A membership fee plus licensing tools at cost, which continues to go down, might do it. Start with simpler problems such as place-and-route and ASIC gate-level simulation to deliver fast, easy-to-use, low cost tools. Savings against EDA tools bring in more members and customers whose money can be invested in maintaining those tools plus funding hardest ones (esp high-level synthesis). Also, money goes into good logic libraries for affordable process nodes. Non-commercial use is free but I.P. must be shared with members.

Setup right, this thing could fund the hard stuff with commercial activity and benefit from academic/FOSS style submissions. With right structure, it also won't go away due to an acquisition or someone running out of money. Open source projects don't die: they just become unmaintained, temporarily or permanently. Someone can pick up the ball later.

Thoughts?

13
chriswilmer 1 day ago 0 replies      
I don't think the diameter of a DNA strand is 2.5nm...
14
graycat 1 day ago 2 replies      
As I recall, there is microelectronics fab work in Taiwan, South Korea, and, in the US, at IBM and Intel, at least. And maybe China and Russia are trying to get caught up in fabs.

I wonder: what organization, really, is mostly responsible for the newer fabs? I mean, does each of Samsung, Intel, IBM, etc. do everything on their own? Or is there a main company, maybe Applied Materials, with some help from, say, some small company for UV sources, some optics from, maybe, Nikon, some mechanical pieces, etc., that does the real work for all the fabs?

7 nm -- what speed and power increases will that bring over 14 nm, 22 nm or whatever is being manufactured now, etc.?

Long live Moore's law! It ain't over until the fat lady sings, and I don't hear any fat lady yet!

15
santaclaus 1 day ago 1 reply      
Return of the PowerPC Mac?
16
warrenmiller 1 day ago 0 replies      
Did someone say ASIC?
Iowa Makes a Bold Admission: We Need Fewer Roads citylab.com
223 points by atomatica  1 day ago   136 comments top 15
1
mholt 1 day ago 16 replies      
I'm from Iowa. There are a handful of population centers, and a sprinkling of homes and small communities between miles and miles and miles of farmland. The thing is, most people don't travel between the small communities - most driving takes people to or from town. If they're not going to town, they're going to visit neighbors or their fields, in which case gravel roads work great. Gravel roads work better than deteriorated pavement and have much lower maintenance costs.

I think "the entire system is unneeded" is a bit of a stretch, but I agree that, outside of cities, most routes don't need to be paved - you can safely travel 50 mph on a flat, straight gravel road. Of course the main arteries - Hwy 52, Hwy 20, I-80, and many others need to stay maintained. But there are so many small roads that, although quaint and a pleasure to drive, are probably unnecessary from a utilitarian/practical point of view.

2
w1ntermute 1 day ago 0 replies      
Charles Marohn of Strong Towns (http://www.strongtowns.org/), who is quoted in the article, did a great podcast interview a while back on "how the post-World War II approach to town and city planning has led to debt problems and wasteful infrastructure investments": http://www.econtalk.org/archives/2014/05/charles_marohn.html
3
cjslep 1 day ago 1 reply      
"The [Iowa] primary highway system makes up over 9,000 miles (14,000 km), a mere 8 percent of the U.S. state of Iowa's public road system." [0]

So while laudable, it would be very nice if North Carolina followed suit with its ~79,000 miles of maintained roads (the largest of any state) [1]. But I doubt that will happen; my friend at NCDOT says the culture emphasizes building new roads (or rebuilding the ones that get wiped out by hurricanes out on the Outer Banks) and changing intersections in a manner that borders on the whimsical.

We like to build roads in challenging places, it seems [2].

[0] https://en.wikipedia.org/wiki/Iowa_Primary_Highway_System

[1] https://en.wikipedia.org/wiki/North_Carolina_Highway_System

[2] https://en.wikipedia.org/wiki/North_Carolina_Highway_12

4
kylec 1 day ago 1 reply      
Per capita driving may have peaked, but as long as the capita is still growing there will still be more and more cars on the road.
5
darkstar999 1 day ago 2 replies      
So how do you let roads "deteriorate and go away"? Wouldn't there be huge unsafe potholes in the transition?

What kind of roads would they abandon? I didn't click through to all the references, but this article doesn't give any solutions.

6
jsonne 1 day ago 0 replies      
The article is referring to Iowa not Kansas.
7
gremlinsinc 23 hours ago 0 replies      
I think a lot of places should focus on expanding major roads/thoroughfares and cities, but look into brick/dirt/gravel for country/side roads. It would be nice if, after self-driving cars, self-flying aerocars come along, because then we won't need roads at all, except in the city, where air traffic would get super bogged down.
8
raldi 1 day ago 1 reply      
The article has a map showing which states have already hit peak traffic; does anyone know of a per-municipality or per-county list?

I'm really curious about whether this has happened in San Francisco.

9
dredmorbius 1 day ago 0 replies      
Related: I need to confirm the trend held, but as of a year or two ago, US FAA RITA data showed peak aviation fuel in 2000. Total departures and passenger miles have been higher since, but due to smaller and more fully loaded aircraft.

By 2010- 2012 or so, actual fuel use was ~50% of year 2000 forecast estimates.

10
programminggeek 1 day ago 1 reply      
At one point in time an extensive road system is a competitive advantage. At another, it makes less sense.

The same thing happened with Railroads during their heyday. I remember seeing an old railroad map with stops at all these small towns in Nebraska. Now, railroads are almost entirely commercial with very few passenger stops in small towns.

It makes sense that at some point you just don't have the need for so many roads. If more people move to urban or even suburban city centers, things like public transportation, ride sharing, Uber, and even self-driving vehicles start to make a lot of sense and cut down a lot on driving volume and the need for roads.

11
dataker 1 day ago 1 reply      
I briefly studied in South Dakota and Iowa without a car and it was a living nightmare.

Relying on friends and "taxis", I had to go through negative temperatures to get a simple can of soda.

After that, I could never complain about BART.

12
mark-r 1 day ago 2 replies      
I've always thought that total vehicle miles are capped by the availability of gas. Since fracking has expanded that supply, at least in the short term, I'd expect those mileage charts to start upticking again.
13
closetnerd 1 day ago 2 replies      
This may make sense in Iowa, but it makes no sense in California. Gravel roads would slow the effective max speed down to a crawl, which would further exacerbate traffic. If anything we need higher driving speeds.
14
ashmud 1 day ago 0 replies      
One thing I learned from the original SimCity, whether accurate or not, is that road maintenance is expensive. I almost invariably ended up with city size peaking as the roads entered a constant state of disrepair.
15
AcerbicZero 1 day ago 3 replies      
A bit old, but still relevant -http://archive.gao.gov/f0302/109884.pdf

I'm no expert on the topic, but it seems to me that if heavily loaded trucks are causing a disproportionate amount of damage they should be taxed at a rate which allows for proper maintenance of those roads.

How We Deploy Python Code nylas.com
246 points by spang  1 day ago   120 comments top 29
1
svieira 1 day ago 3 replies      
Back when I was doing Python deployments (~2009-2013) I was:

* Downloading any new dependencies to a cached folder on the server (this was before wheels had really taken off)
* Running pip install -r requirements.txt from that cached folder into a new virtual environment for that deployment (`/opt/company/app-name/YYYY-MM-DD-HH-MM-SS`)
* Switching a symlink (`/some/path/app-name`) to point at the latest virtual env.
* Running a graceful restart of Apache.

Fast, zero downtime deployments, multiple times a day, and if anything failed, the build simply didn't go out and I'd try again after fixing the issue. Rollbacks were also very easy (just switch the symlink back and restart Apache again).
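
A rough Python sketch of that flow (paths, tool names, and commands here are illustrative, not the exact setup described):

    import os
    import subprocess
    from datetime import datetime

    APP_ROOT = "/opt/company/app-name"   # per-release virtualenvs live here
    CURRENT = "/some/path/app-name"      # symlink the app server points at

    def deploy(requirements="requirements.txt", cache="/var/cache/pip"):
        release = os.path.join(APP_ROOT,
                               datetime.now().strftime("%Y-%m-%d-%H-%M-%S"))
        subprocess.check_call(["virtualenv", release])
        subprocess.check_call([os.path.join(release, "bin", "pip"), "install",
                               "--no-index", "--find-links", cache,
                               "-r", requirements])
        tmp = CURRENT + ".new"
        if os.path.lexists(tmp):
            os.remove(tmp)           # clear any stale link from a failed run
        os.symlink(release, tmp)
        os.rename(tmp, CURRENT)      # rename over the old symlink is atomic
        subprocess.check_call(["apachectl", "graceful"])

Any step that fails raises before the symlink moves, so a bad build never goes out; rolling back is just pointing the symlink at the previous release and restarting gracefully again.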

These days the things I'd definitely change would be:

* Use a local PyPI rather than a per-server cache
* Use wheels wherever possible to avoid re-compilation on the servers.

Things I would consider:

* Packaging (deb / fat-package / docker) to avoid any extra per-machine work + easy promotions from one environment to the next.

2
morgante 1 day ago 1 reply      
Their reasons for dismissing Docker are rather shallow, considering that it's pretty much the perfect solution to this problem.

Their first reason (not wanting to upgrade a kernel) is terrible considering that they'll eventually be upgrading it anyways.

Their second is slightly better, but it's really not that hard. There are plenty of hosted services for storing Docker images, not to mention that "there's a Dockerfile for that."

Their final reason (not wanting to learn and convert to a new infrastructure paradigm) is the most legitimate, but ultimately misguided. Moving to Docker doesn't have to be an all-or-nothing affair. You don't have to do random shuffling of containers and automated shipping of new imagesthere are certainly benefits of going wholesale Docker, but it's by no means required. At the simplest level, you can just treat the Docker contain as an app and run it as you normally would, with all your normal systems. (ie. replace "python example.py" with "docker run example")

3
Cieplak 1 day ago 3 replies      
Highly recommend FPM for creating packages (deb, rpm, OS X .pkg, tar) from gems, Python modules, and PEARs.

https://github.com/jordansissel/fpm

4
doki_pen 1 day ago 0 replies      
We do something similar at Embedly, except instead of dh-virtualenv we have our own homegrown solution. I wish I had known about dh-virtualenv before we created it.

Basically, what it comes down to is a build script that builds a deb with the virtualenv of your project, versioned properly (build number, git tag), along with any other files that need to be installed (think init scripts and an about-file describing the build). It should also do things like create users for daemons. We also use it to enforce consistent package structure.

We use devpi to host our python libraries (as opposed to applications), reprepro to host our deb packages, standard python tools to build the virtualenv and fpm to package it all up into a deb.

All in all, the bash build script is 177 LoC and is driven by a standard build script we include in every application's repository, which defines variables and optionally overrides build steps (if you've used portage...).

The most important thing is that you have a standard way to create Python libraries and applications, to reduce friction when starting new projects and getting them into production quickly.
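
A condensed, hypothetical Python version of such a build driver (the fpm flags shown are real, but all names and paths are made up):

    import subprocess

    def build_deb(name, version, src_dir, deps=("python",)):
        # package a pre-built virtualenv directory into a .deb with fpm
        cmd = ["fpm", "-s", "dir", "-t", "deb", "-n", name, "-v", version]
        for dep in deps:
            cmd += ["--depends", dep]
        cmd.append(src_dir)
        subprocess.check_call(cmd)

    # build_deb("myapp", "1.0-42", "/tmp/build/opt/myapp")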

5
remh 1 day ago 2 replies      
We fixed that issue at Datadog by using Chef Omnibus:

https://www.datadoghq.com/blog/new-datadog-agent-omnibus-tic...

It's more complicated than the solution proposed by Nylas, but ultimately it gives you full control of the whole environment and ensures that you won't hit ANY dependency issue when shipping your code to weird systems.

6
kbar13 1 day ago 1 reply      
http://pythonwheels.com/ solves the problem of building c extensions on installation.
7
tschellenbach 1 day ago 3 replies      
Yes, someone should build the one way to ship your app. No reason for everybody to be inventing this stuff over and over again.

Deploys are harder if you have a large codebase to ship. rsync works really well in those cases. It requires a bit of extra infrastructure, but is super fast.

8
sandGorgon 1 day ago 3 replies      
The fact that we had a weird combination of Python and libraries took us towards Docker. And we have never looked back.

For someone trying out building python deployment packages using deb, rpm, etc. I really recommend Docker.

9
perlgeek 17 hours ago 0 replies      
Note that the base path /usr/share/python (that dh-virtualenv ships with) is a bad choice; see https://github.com/spotify/dh-virtualenv/issues/82 for a discussion.

You can set a different base path in debian/rules with export DH_VIRTUALENV_INSTALL_ROOT=/your/path/here

10
serkanh 14 hours ago 1 reply      
"Distributing Docker images within a private network also requires a separate service which we would need to configure, test, and maintain." What does this mean? Setting up a private docker registry is trivial at best and having it deploy on remote servers via chef, puppet; hell even fabric should do the job.
11
nZac 1 day ago 2 replies      
We just commit our dependencies into our project repository in wheel format and install into a virtualenv on prod from that directory, eliminating PyPI. Though I don't know many others that do this. Do you?

Bitbucket and GitHub are reliable enough for how often we deploy that we aren't all that worried about downtime from those services. We could also pull from a dev's machine should the situation be that dire.

We have looked into Docker, but that tool has a lot more growing to do before "I" would feel comfortable putting it into production. I would rather ship a packaged VM than Docker at this point; there are too many gotchas that we don't have time to figure out.

12
sophacles 1 day ago 0 replies      
We use a devpi server, and just push the new package version, including wheels built for our server environment, for distribution.

On the app end we just build a new virtualenv, and launch. If something fails, we switch back to the old virtualenv. This is managed by a simple fabric script.

13
viraptor 1 day ago 2 replies      
> curl https://artifacts.nylas.net/sync-engine-3k48dls.deb -o $temp ; dpkg -i $temp

It's really not hard to deploy a package repository. Either a "proper" one with a tool like `reprepro`, or a stripped one which is basically just .deb files in one directory. There's really no need for curl+dpkg. And a proper repository gives you dependency handling for free.

14
erikb 18 hours ago 0 replies      
No No No No! Or maybe?

Do people really do that? Git pull their own projects onto the production servers? I spent a lot of time putting all my code into versioned wheels when I deploy, even if I'm the only coder and the only user. Application and development are, and should be, two different worlds.

15
objectified 20 hours ago 0 replies      
I recently created vdist (https://vdist.readthedocs.org/en/latest/ - https://github.com/objectified/vdist) for doing similar things, the exception being that it uses Docker to actually build the OS package. vdist uses FPM under the hood and (currently) lets you build both deb and rpm packages. It also packs up a complete virtualenv and installs the build-time OS dependencies on the Docker machine it builds on when needed. The runtime dependencies are made into dependencies of the resulting package.
16
rfeather 1 day ago 0 replies      
I've had decent results using a combination of bamboo, maven, conda, and pip. Granted, most of our ecosystem is Java. Tagging a python package along as a maven artifact probably isn't the most natural thing to do otherwise.
17
velocitypsycho 1 day ago 2 replies      
For installs using .deb files, how are db migrations handled? Our deployment system handles running Django migrations by deploying to a new folder/virtualenv, running the migrations, then switching over symlinks.

I vaguely remember .deb files having install scripts, is that what one would use?

18
StavrosK 1 day ago 3 replies      
Unfortunately, this method seems like it would only work for libraries, or things that can easily be packaged as libraries. It wouldn't work that well for a web application, for example, especially since the typical Django application usually involves multiple services, different settings per machine, etc.
19
avilay 1 day ago 1 reply      
Here is the process I use for smallish services -

1. Create a python package using setup.py
2. Upload the resulting .tar.gz file to a central location
3. Download to prod nodes and run pip3 install <packagename>.tar.gz

Rolling back is pretty simple - pip3 uninstall the current version and re-install the old version.

Any gotchas with this process?
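
For reference, a minimal setup.py for step 1 might look like this (all names here are placeholders):

    from setuptools import setup, find_packages

    setup(
        name="myservice",               # placeholder package name
        version="1.2.3",
        packages=find_packages(),
        install_requires=["requests"],  # runtime deps pip resolves on install
    )

    # python3 setup.py sdist  ->  dist/myservice-1.2.3.tar.gz
    # pip3 install myservice-1.2.3.tar.gz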

20
webo 1 day ago 1 reply      
> Building with dh-virtualenv simply creates a debian package that includes a virtualenv, along with any dependencies listed in the requirements.txt file.

So how is this solving the first issue? If PyPI or the Git server is down, this is exactly like the git & pip option.

21
compostor42 1 day ago 1 reply      
Great article. I had never heard of dh-virtualenv but will be looking into it.

How has your experience with Ansible been so far? I have dabbled with it but haven't taken the plunge yet. Curious how it has been working out for you all.

22
BuckRogers 1 day ago 2 replies      
Seems this method wouldn't work as well if you have external clients you deploy for. I'd use Docker instead of doing this, just to be in a better position for an internal or external client deployment.
23
ah- 1 day ago 1 reply      
conda works pretty well.
24
theseatoms 12 hours ago 0 replies      
Does anyone have experience with PEX?
25
daryltucker 1 day ago 0 replies      
I see your issue of complexity. Glad I haven't ever reached the point where some good git hooks no longer work.
26
stefantalpalaru 1 day ago 1 reply      
> The state of the art seems to be run git pull and pray

No, the state of the art where I'm handling deployment is "run 'git push' to a test repo where a post-update hook runs a series of tests and if those tests pass it pushes to the production repo where a similar hook does any required additional operation".

27
jacques_chester 1 day ago 0 replies      
Here's how I deploy python code:

 cf push some-python-app
So far it's worked pretty well.

Works for Ruby, Java, Node, PHP and Go as well.

28
lifeisstillgood 1 day ago 0 replies      
Weirdly, I am re-starting an old project doing this venv/dpkg thing (http://pyholodeck.mikadosoftware.com). The fact that it's still a painful problem means I am not wasting my time :-)
29
hobarrera 1 day ago 2 replies      
> The state of the art seems to be run git pull and pray

Looks like these guys never heard of things like CI.

JavaScript developers are incredible at problem solving, unfortunately cube-drone.com
203 points by ScottWRobinson  1 day ago   91 comments top 19
1
blhack 1 day ago 7 replies      
This is absolutely a NIGHTMARE for new developers. People come into the language, and there are what seems like an infinite number of "the only right" ways to do something, all of varying degrees of complexity/usefulness, and all claiming that they are god's gift to computer science.

That last part is the part that is most frustrating to me, and it isn't unique to javascript.

Google, facebook, yahoo, etc. have all gotten along pretty much just fine up until this point using existing web technologies. We should ALWAYS be trying to improve those technologies, and we should ALWAYS being trying to improve the workflows involved in using those technologies, but what comes across as ridiculous is the apple-style "this literally solves everything, and everything before it totally sucks and is useless and you are stupid for using it" marketing behind them.

I like javascript. A lot. I use it every single day, and it does a lot of useful things for me.

But my god the cognitive load involved in swimming through all of the "this library literally saves THE WORLD" marketing fluff is intense.

Javascripters, I get it, you're really excited. I am too. But maybe tone it down just a few notches.

Here's a thing I wrote about this topic a few years ago: http://thingist.com/blog.html?id=21434

2
vectorpush 1 day ago 1 reply      
> the ecosystem around Javascript is so densely layered and frequently changing that maintenance of any project over any significant period of time is going to be a nightmare

I hear this sentiment often, but don't really see any truth in it. Nobody is forcing you to update your code to be in line with the latest JS trends. If it ain't broken, don't fix it, and if it is broken, it was always broken and that fact has nothing to do with how rapidly the JS ecosystem is evolving.

Oh, but you want to leverage the latest and greatest that the ecosystem has to offer... well, that's your problem, not the ecosystem's and this is true no matter which ecosystem you're referring to. JS developers are just spoiled rotten because JS is so easy to refactor relative to every other part of the system. When the NoSQL hype train came barreling through the developer community a few years back, most of us didn't rush out to rebuild our applications on a NoSQL back-end because the database is too critical to mess around with.

If you want the latest and greatest you have to pay for it, but don't imply that there is something wrong with the ecosystem for giving you more options. Choosing the best tool is part of the job, and if you get hypnotized by every shiny new toy that debuts on the front page of HN, that's a professional flaw that only you can fix.

3
Xeoncross 1 day ago 4 replies      
I learned a while ago that it doesn't matter how broken something is if everyone is using it.

Humans are incredibly resourceful, if not also short-sighted.

Look at PHP, look at Javascript, look at Wordpress, look at email, look at the original jQuery etc...

4
robertfw 1 day ago 1 reply      
Problem: browsers are designed for documents, not rich applications

Solution: javascript

5
Cshelton 1 day ago 0 replies      
When I write C# code, it's just that, code. When I write Rust code it's just code. When I write JavaScript... I'm making beautiful pieces of art. Almost like an old school watchmaker... a million ways to reach the same end result.
6
program247365 1 day ago 1 reply      
JavaScript is both easy to get into and difficult to master. There are places on your path to JavaScript mastery where you're literally like, "Did I make the right choices in my life?". Push a little harder and, like anything, the added effort to understand the choices put in front of you, and why something works the way it does, will be rewarding.

Illustrations/points like cube-drone has made here are popular because it's partly an over-exaggeration, and partly true. People on either side of the fence (JS mastery, or not), can find some kind of common ground of, "Right?? Isn't it painful??"

It's painful if you let it be. With the right mentors, and the right motivation, you can make JavaScript do wonderful things.

As a whole I think JavaScript has made the web a more fun and interesting place, no matter how much you want to bitch about the language or the ecosystem. We stand on the shoulders of giants, and I, for one, appreciate it. Thanks Brendan Eich! :)

7
felixgallo 1 day ago 7 replies      
One wonders how we get out of this mess. You'd have thought that Microsoft/Google/Apple/whomever would have at least tried to include a better alternative language in their browser by now. What stops them besides inertia?
8
ciaoben 20 hours ago 0 replies      
Just a little advice for newcomers (like me) who are scared but at the same time fascinated when it comes to using JavaScript: if you choose a framework to work with, stick to it for the entire project, and if you hit an error that makes you struggle, don't immediately invent your own solution to patch it. For all the things said in this article, it's always tempting to drop in a simple piece of code that makes things work and lets you move on when you're stuck. But in my experience, most of the time that's just being lazy and scared of an error you haven't understood. Behaving like this holds you back, doesn't improve your skills, and tends to break the entire logic of the tools you're using.

JS is a scripting language, so it's easy to 'script' your way around a problem and postpone it. But this way you will never improve, and your life with JS will remain frustrating and hard.

9
andrewchambers 1 day ago 0 replies      
The thing I like about clojurescript is the fact that it has a linker and library system which makes sense.
10
wwweston 1 day ago 1 reply      
"Prototypal inheritance is pants-on-head stupid."

Well, he'd be merely wrong if he took out the "prototypal" part.

11
gotofritz 16 hours ago 0 replies      
I actually think the "blame" is not so much Javascript's - after all it has existed since 1996 and it wasn't like that at all for many years.

I wonder how much the rise of GitHub / npm has to do with it. In the last 5 years or so it has become so much easier to create and share projects. Not only that - it has become a requirement; people use GitHub as their CV, and we are all _expected_ to have side projects and stuff.

12
erikpukinskis 1 day ago 0 replies      
This is a double edged sword. Yes, web technologies are sprawling, but they are also constantly improving, which is not always true of the more tightly controlled platforms. You can build on iOS for example, where there are generally only a few ways to do things. But then you're waiting on Apple to release new stuff.

There will always be platforms that are the Wild West, build-your-own-adventure decentralized toolkits and others that have a narrow political agenda and are more stable. Neither is better, it's just a question of what kind of frustration hurts you less.

13
ScottWRobinson 1 day ago 1 reply      
While there's a lot of truth to this comic, I don't necessarily see it as a bad thing. I think it speaks to the flexibility of JavaScript. Sure, all of the tools and libraries can be hard to keep up with, but I'd rather have lots of community support than none.

I have to admit, I was a JavaScript hater for a long time, up until recently when I actually spent the time to learn the language. Now it's quickly becoming one of my favorite server-side languages. Isomorphic JS was too hard to resist :)

14
wwwtyro 1 day ago 1 reply      
"Problem: asm.js is basically unwritable by humans."

I have no idea where this is coming from, but it's simply untrue. See [1] for a ton of examples of it being perfectly human read/writable.

All I can come up with is that people are looking at code compiled to asm.js, which is of course going to have all semantic meaning stripped from it, no different than code compiled to javascript.

[1] http://asmjs.org/spec/latest/

15
javajosh 1 day ago 1 reply      
"Let's put a weak language on a billion screens and see what happens." The browser is Arrakis, and JS devs are the Fremen, hardened and shaped by an environment that is just barely survivable. Let us hope our own Golden Path is less painful than in Dune.
16
drKarl 1 day ago 1 reply      
I wish I could upvote it more...
17
anotherangrydev 1 day ago 2 replies      
Problem: Javascript has almost no standard library. --> Why is this a problem? What does he mean by standard library? Like, "strings", regexes and hashes? Not needed.

Problem: Javascript won't run outside the browser. --> Fixed.

Problem: The DOM is too slow for video games. --> Fixed.

Problem: Javascript is single-thread by design. --> Why is this a bad thing?

Problem: Javascript is too slow for video games. --> [Citation needed]

Problem: Javascript has no packaging or a linker to tie these packages together. --> A linker's not needed, and there are many package managers that work pretty well. Also, since the beginning of javascript you could always <script src=""> whatever you need, pretty cool actually.

Problem: Callback Hell. --> This "problem" only bothers bad/lazy/naive coders (see the promise sketch after this comment).

Problem: asm.js is basically unwritable by humans. --> Nope, and also, you may not need to write asm.js at all.

Problem: Prototypal inheritance is just pants-on-head stupid. --> Clearly a joke to try to use this as an argument.

Problem: Web resources need to be minified and zipped for performance. --> Yeaaaaah right, because Javascript == Web.

Problem: Machine generated output is more difficult to debug. --> As compared to... another machine generated output from a compiled language? Yeah right...

Problem: Async is still a nightmare, huh? --> Same as callback hell.

Problem: Ballooning project size and complexity. --> [Citation needed]

Problem: Javascript still doesn't do everything. --> Is this guy serious? Does he think this is a valid argument for anything?

Problem: Output runs very slowly on mobile devices. --> [Citation needed]

The guy's just a lazy coder. His favorite language would be one where he could just do: "include <what_my_boss_wants>; run();". Good luck waiting for that one.
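On the callback-hell point: a minimal sketch of why promises make it a non-issue. `getUser`, `getPosts`, `render` and `handle` are hypothetical functions, not anything from the comic:

 // nested callbacks:
 getUser(id, function (err, user) {
   if (err) return handle(err);
   getPosts(user, function (err, posts) {
     if (err) return handle(err);
     render(posts);
   });
 });

 // the same flow once those functions return promises:
 getUser(id)
   .then(getPosts)
   .then(render)
   .catch(handle);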

18
Nadya 1 day ago 3 replies      
I'm not sure how to interpret the last part...

"Where I am from, the point of digging is not freedom from digging."

Is it a statement that the point of digging is to reach further and learn more (positive)? Is it a statement more along the lines of "We dig because we must dig to survive." (negative)? Or possibly that digging is simply a way of life (negative)?

19
UUMMUU 1 day ago 1 reply      
I'm lost as to why Grunt is a problem. I tried to put all of my build scripts in the package.json but moving to Grunt has been incredibly useful.
Ask HN: Why don't transistors in microchips fail?
191 points by franciscop  1 day ago   107 comments top 21
1
joelaaronseely 1 day ago 5 replies      
There is another mechanism called "Single Event Upset" (SEU) or "Single Event Effects" (SEE) (basically synonymous). This is due to cosmic rays. On the surface of the earth, the effect is mostly abated by the atmosphere - except for neutrons. As you go higher in the atmosphere (say on a mountaintop, or an airplane, or go into space) it becomes worse because of other charged particles that are no longer attenuated by the atmosphere.

The typical issue at sea level is from neutrons hitting silicon atoms. If a neutron hits the nucleus of an atom somewhere in the microprocessor circuitry, the nucleus suddenly recoils, basically causing an ionizing trail several microns long. Given transistors are now measured in 10s of nanometers, the ionizing path can cross many nodes in the circuit and create some sort of state change. Best case it happens in a single bit of a memory that has error correction and you never notice it. Worst case it causes latchup (power to ground short) in your processor and your CPU overheats and fries. Generally you would just notice it as a sudden error that causes the system to lock up; you'd reboot, it would come back up and be fine, leaving you with a vague thought of, "That was weird".

2
gibrown 1 day ago 2 replies      
As a former hardware engineer who worked on automated test equipment that tested ASICs (and did ASIC dev), there are a lot of different methods used to avoid this.

As others mentioned, most of these problems are caught when testing the chips. Most of the transistors on a chip are actually used for caching or RAM, and in those cases the chips have built in methods for disabling the portions of memory that are non-functional. I don't recall any instances of CPUs/firmware doing this dynamically, but I wouldn't be surprised if there are. A lot of chips have some self diagnostics.

Most ASICs also have extra transistors sprinkled around so they can bypass and fix errors in the manufacturing process. Making chips is like printing money where some percentage of your money is defective. It pays to try and fix them after printing.

Also, as someone who has ordered lots of parts there are many cases where you put a part into production and then find an abnormally high failure rate. I once did a few months of high temperature and vibration testing on our boards to try and discover these sorts of issues, and then you spend a bunch of time convincing the manufacturer that their parts are not meeting spec.

Fun times... thanks for the trip down memory lane.

3
kabdib 1 day ago 2 replies      
Oh, they do fail.

The last time I worked with some hardware folks speccing a system-on-a-chip, they were modeling device lifetime versus clock speed.

"Hey software guys, if we reduce the clock rate by ten percent we get another three years out of the chip." Or somesuch, due to electromigration and other things, largely made worse by heat.

Since it was a gaming console, we wound up at some kind of compromise that involved guessing what the Competition would also be doing with their clock rate.

4
ajross 1 day ago 4 replies      
Yes, they can fail. Lots and lots of them fail immediately due to manufacturing defects. And over time, electromigration (where dopant atoms get kicked out of position by interaction with electron momentum) will slowly degrade their performance. And sometimes they fail due to specific events like an overheat or electrostatic discharge.

But the failure rate after initial burn-in is phenomenally low. They're solid state devices, after all, and the only moving parts are electrons.

5
zokier 1 day ago 1 reply      
A slightly related thing is RAM random bit errors. There was an interesting article published a few years ago where some guy registered domains that differed by one bit from some popular domains and recorded the traffic that hit them. Kinda scary to think what else is going wrong in your RAM... Too bad that ECC is still restricted to servers and serious workstations.

http://dinaburg.org/bitsquatting.html

6
Nomentatus 1 day ago 0 replies      
Nearly all chips experienced transistor failures, rendering them useless, back in the day. Intel is the monster it is because they were the guys who first found out how to sorta "temper" chips to vastly reduce that failure rate (most failures were gross enough to be instant, back then, and Intel started with memory chips.) Because their heat treatment left no visible mark, Intel didn't patent it, but kept it as a trade secret giving them an incredible economic advantage, for many years. They all but swept the field. I've no doubt misremembered some details.
7
nickpsecurity 1 day ago 1 reply      
They're extremely simple, have no moving parts, and the materials/processes of semiconductor fabs are optimized to ensure they get done right. The whole chip will often fail if transistors are fabbed incorrectly, and the rest end up in errata sheets where you work around them. Environmental effects are reduced with Silicon-on-Insulator (SOI), rad-hard methods, immunity-aware programming, and so on. Architectures such as Tandem's NonStop assumed there'd be plenty of failures and just ran things in lockstep with redundant components.

So, simplicity and hard work by fab designers is 90+% of it. There's whole fields and processes dedicated to the rest.

8
RogerL 1 day ago 1 reply      
Others have answered why, here is the 'what would happen'. Heat your CPU up by pointing a hair dryer at it (you may want to treat this as a thought experiment as you could destroy your computer). At some point it begins to fail because transistors are pushed past their operating conditions. Another way to push it to failure is to overclock. The results are ... variable. Sometimes you won't notice the problems, computations will just come out wrong. Sometimes the computer will blue screen or spontaneously reboot. And so on. It just depends where the failure occurs, and whether the currently running software depends on that part of the chip. If a transistor responsible for instruction dispatch fails, it's probably instant death. If a transistor responsible for helping compute the least significant bit of a sin() computation fails, well, you may never notice it.
9
mchannon 1 day ago 1 reply      
Generally, yes, a failing transistor can be a fatal problem. This relates to "chip yield" on a waferfull of chips.

Faults don't always manifest themselves as a binary pass/fail result; as chip temperatures increase, transistors that have faults will "misfire" more often. As long as the temperature at which a fault appears is high enough, these lower-grade chips can be sold as lower-end processors that in practice never reach it.

Am not aware of any redundancy units in current microprocessor offerings but it would not surprise me; Intel did something of this nature with their 80386 line but it was more of a labeling thing ("16 BIT S/W ONLY").

Solid state drives, on the other hand, are built around this protection; when a block fails after so many read/write cycles, the logic "TRIM"s that portion of the virtual disk, diminishing its capacity but keeping the rest of the device going.

10
intrasight 1 day ago 2 replies      
When I was studying EE, a professor said on this subject that about 20% of the transistors in a chip are used for self-diagnostics. Manufacturing failures are a given. The diagnostics tell the company what has failed, and they segment the chips into different product/price classes based upon what works and what doesn't. After being deployed into a product, I assume that chips would follow a standard Bathtub Curve: https://en.wikipedia.org/wiki/Bathtub_curve

As geometries fall, the effects of "wear" at the atomic level will go up.

11
greenNote 1 day ago 0 replies      
As stated, two big variables are clock rate and feature size, which both affect mean time between failures (MTBF). Being more conservative increases this metric. I know from working in a fab that there are many electrical inspection steps along the process, so failures are caught during manufacturing (reducing the chance that you see them in the final product). Once the chip is packaged, and assuming that it is operated in a nominal environment, failures are not that common.
12
tzs 1 day ago 1 reply      
Speaking of the effects of component failure on chips, a couple years ago researchers demonstrated self-healing chips [1]. Large parts of the chips could be destroyed and the remaining components would reconfigure themselves to find an alternative way to accomplish their task.

[1] http://www.caltech.edu/news/creating-indestructible-self-hea...

13
wsxcde 1 day ago 0 replies      
Others have already mentioned one failure mechanism that causes transistor degradation over time: electromigration. Other important aging mechanisms are negative-bias temperature instability (NBTI) and hot carrier injection (HCI). I've seen papers claim the dual of NBTI - PBTI - is now an issue in the newest process nodes.

This seems to be a nice overview of aging effects: http://spectrum.ieee.org/semiconductors/processors/transisto....

14
2bluesc 23 hours ago 0 replies      
In 2011, Intel released the 6 series chipset with an incorrectly sized transistor that would ultimately fail if used extensively. A massive recall followed.

http://www.anandtech.com/show/4142/intel-discovers-bug-in-6s...

15
spiritplumber 1 day ago 1 reply      
This is why we usually slightly underclock stuff that has to live on boats.
16
jsudhams 1 day ago 0 replies      
So would that mean we need to ensure that systems in critical areas (not nuclear or the like, but banks and other transaction-critical systems) get a mandatory tech refresh every 4/5 years? Especially when 7nm production starts.
17
Gravityloss 1 day ago 0 replies      
They do fail. Linus Torvalds talked about this in 2007: http://yarchive.net/comp/linux/cpu_reliability.html
18
msandford 1 day ago 0 replies      
> Considering that a Quad-core + GPU Core i7 Haswell has 1.4e9 transistors inside, even given a really small probability of one of them failing, wouldn't this be catastrophic?

Yes, generally speaking it would be. Depending on where it is inside the chip.

> Wouldn't a single transistor failing mean the whole chip stops working? Or are there protections built-in so only performance is lost over time?

Not necessarily. It might be somewhere that never or rarely gets used, in which case the failure won't make the chip stop working. It might mean that you start seeing wrong values on a particular cache line, or that your branch prediction gets worse (if it's in the branch predictor) or that your floating point math doesn't work quite right anymore.

But most of the failures are either manufacturing errors meaning that the chip NEVER works right, or they're "infant mortality" meaning that the chip dies very soon after it's packaged up and tested. So if you test long enough, you can prevent this kind of problem from making it to customers.

Once the chip is verified to work at all, and it makes it through the infant mortality period, the lifetime is actually quite good. There are a few reasons:

1. there are no moving parts so traditional fatigue doesn't play a role

2. all "parts" (transistors) are encased in multiple layers of silicon dioxide so that you can lay the metal layers down

3. the whole silicon die is encased yet again in another package which protects the die from the atmosphere

4. even if it was exposed to the atmosphere, and the raw silicon oxidized, it would make silicon dioxide, which is a protective insulator

5. there is a degradation curve for the transistors, but the manufacturers generally don't push up against the limits too hard because it's fairly easy and cheap to underclock and the customer doesn't really know what they're missing

6. since most people don't stress their computers too egregiously this merely slows down the slide down the degradation curve as it's largely governed by temperature, and temperature is generated by a) higher voltage required for higher clock speed and b) more utilization of the CPU

Once you add all these up you're left with a system that's very, very robust. The failure rates are serious but only measured over decades. If you tried to keep a thousand modern CPUs running very hot for decades you'd be sorely disappointed in the failure rate. But for the few years that people use a computer and the relative low load that they place on them (as personal computers) they never have a big enough sample space to see failures. Hard drives and RAM fail far sooner, at least until SSDs start to mature.

19
MichaelCrawford 1 day ago 0 replies      
They do.

That's why our boxen have power-on self tests.

20
rhino369 1 day ago 0 replies      
Extremely good R&D done by semiconductor companies. It's frankly amazing how good they are.
21
Gibbon1 1 day ago 1 reply      
Transistors don't fail for the same reason the 70 year old wires in my house don't fail. The electrons flowing through the transistors don't disturb the molecular structure of the doped silicon.
AWS CodePipeline amazon.com
217 points by jeffbarr  1 day ago   70 comments top 9
1
saosebastiao 1 day ago 1 reply      
Internally at Amazon, Pipelines (which inspired this service) was a lifesaver. Apollo (which is the inspiration for CodeDeploy) was also helpful, but should probably just be replaced by Docker or OSv at this point.

But if they ever release a tool that is inspired by the Brazil build system, pack up and run for the hills. When it takes a team of devs over two years to get Python to build and run on your servers, you know your frankenstein build system is broken. It could be replaced by shell scripts and still be orders of magnitude better. Nobody deserves the horror of working with that barf sandwich.

2
felipesabino 1 day ago 1 reply      
I wonder why GitHub specifically and not just Git repos in general? Isn't it weird?

It means they don't even support their own new "Git" product AWS CodeCommit [1]

[1]https://aws.amazon.com/blogs/aws/now-available-aws-codecommi...

3
atmosx 1 day ago 1 reply      
This is interesting for lone developers, but I'm not sure about the pricing:

You'll pay $1 per active pipeline per month (the first one is available to you at no charge as part of the AWS Free Tier). An active pipeline has at least one code change move through it during the course of a month.

Does this mean that every time you run a session you pay $1 no matter how many stages the session has (pull, compile/build, test (multiple tests) and deploy)?

4
jtwaleson 1 day ago 0 replies      
Is there any way to integrate this with ECS? That would be a great feature for me.
5
jayonsoftware 1 day ago 1 reply      
Can we build .NET code?
6
maikklein 1 day ago 1 reply      
Could I install Unreal Engine 4 on CodePipeline so that I can build my game remotely?
7
pragar 1 day ago 0 replies      
Thanks. I was eagerly waiting for this :)
8
dynjo 1 day ago 8 replies      
Amazon seriously need to hire some good UI designers. They produce great stuff but it all looks like it was designed by developers in 1980.
9
ebbv 1 day ago 2 replies      
What's up with the Amazon spam? There are 5 different submissions on the front page right now. They could have all been one. Bad form, AWS team.
GitLab raises 1.5M gitlab.com
188 points by jobvandervoort  1 day ago   92 comments top 20
1
jbrooksuk 1 day ago 4 replies      
Firstly, a major congratulations to the gang at GitLab - well deserved!

We'd used GitLab for over a year internally, but as I've mentioned previously, it became a pain to maintain. So we switched to GitHub for our private "important" projects and turned off our GitLab instance (other reasons caused this too mind). Our version was 6.7 or something up until today.

Today we realised we should run GitLab internally again for non-critical repositories - since our networking is a pain to give external access to servers - we can't access it out of the office. I updated us to 7.12 CE and I kind of regret it.

The UI is so complicated now. Whilst there are good features that we like, it's so hard to navigate to where you want to be. I think this is down to the "contextual" sidebar. I really do prefer GitHub's UI for repo administration and usage, which is a shame.

Sure, the colours are nice in GitLab but it's far from functional. My colleagues felt the same way too.

Also (for those at GitLab) your Markdown renderer is still not rendering Markdown correctly in all cases...

Anyway, not to take away from the funding - it's excellent news!

2
general_failure 1 day ago 5 replies      
A feature I miss in GitLab and Github is an issue tracker across multiple repos. For example, our project has 5-10 repos but they are all part of single release/milestone.

Currently, we have to create milestones in each of the repos and assign issues to those milestones. It's really a hassle. We cross-reference commits a lot in the issues, and this is the reason why we don't create an "empty" repo simply for common issues. Unless there is some way to say something like "Fixes commonissuetracker#43".

Thanks, a very happy gitlab user

3
nodesocket 1 day ago 1 reply      
Seems like a small amount to raise from a heavyweight VC like Khosla and super angel Ashton Kutcher. I would imagine trying to compete against GitHub and GitHub enterprise would be a capital intensive thing.
4
Vespasian 1 day ago 1 reply      
Congratulations on the funding!

I have been running a gitlab instance on my personal server for about 2 years and have been very happy with it.

Recently, (finally!) we switched our research group over from (a very very old version of) redmine and you can't imagine my joy when that happened! I think never before in my life has migrating wiki pages and issues felt so good.

Last but not least it is encouraging to see a European software startup thriving and growing like you do. Nothing against the great products from SV but a little geographical competition never hurt nobody. Right? ;)

Keep up the great work. Grüße aus Deutschland / Greetings from Germany

5
BinaryIdiot 1 day ago 2 replies      
I used GitLab at my last company. It was one of the earlier versions before they went to YCombinator. At the time I wasn't a fan; I ran into bugs and just had odd persistence issues.

But I've got to say GitLab is just incredible to use now. It's really nice and I now use it over BitBucket for my private repositories. I still use GitHub for OpenSource (that's going to be a hard barrier to get through if they really want to) but I'm a big fan.

So congrats on the round! This is technically the second seed round, right? Or does YCombinator not really count as a seed anymore?

6
wldcordeiro 1 day ago 1 reply      
I've been using Gitlab now for a few months and really like it, but I've run into some bugs on gitlab.com that I've reported through multiple avenues with zero success in getting them fixed. The main one is that on some repos, making or editing an issue will 500 error on form submit (the submit still occurs; the redirect is broken). It would be beyond nice to see this extra cash go to a more responsive support system.
7
edwintorok 1 day ago 2 replies      
When I visit a project page I usually do it for one of these reason:

* learn about what the project is, a short description on what it is, how to install, where to find more documentation

* look at / search the files or clone the repo

* search bugreports or create a new bugreport

Your default project page looks quite similar to gitorious, which looks more like a place to just host your repository and not a place to interact with the project. Bitbucket's default looks way better for example, and github's is quite good too.

My suggestion to make Gitlab fit better into my workflow:

* the default page/tab for the project root should be configurable, either on a per-project or per-user basis: I'd like to have the README as default, for example; the Activity page interests me less

* there should be a tab for issues on the default page; it's more important than seeing the activity IMHO

* you've got the clone URL in an easily accessible place, good!

* the Files view is quite similar to Github's (good!), but I can't figure out how to search (either fulltext or filename)

* I don't see a button to create a new issue (I'm not logged in, should I login first? Github has a new issue button that takes you to login)

* how do I search in issues (fulltext?)

* how do I search for project names, or inside projects/issues globally?

* the default project page should somehow highlight or focus on making it easy and obvious the main aspects on how you'd interact with the project, if all features are shown in equal style it feels somewhat cluttered and overloaded.

P.S.: should I open a feature request about these on the gitlab site?

8
jobvandervoort 1 day ago 0 replies      
We're very excited with this opportunity. We'll be here if you have any questions.
9
neom 1 day ago 1 reply      
Big fans of GitLab over here at DigitalOcean! Good work and good luck!
10
jtwaleson 1 day ago 1 reply      
Congrats from another Dutch company that expanded to the US! We're using GitLab for all our internal source code at Mendix, and are extremely happy with it.
11
physcab 1 day ago 3 replies      
This is a naive question, but what's the difference between GitLab and Github?
12
mullingitover 22 hours ago 0 replies      
I'm a big fan of GitLab's ability to create custom hooks and protected branches. GitHub doesn't offer those things, and despite their more polished UI it was a dealbreaker.
13
the-dude 1 day ago 2 replies      
But what is the valuation?
14
marvel_boy 1 day ago 1 reply      
Nice! Without doubt GitLab has created a lot of innovation. What are the main new things you will deliver in the future?
15
schandur 1 day ago 1 reply      
Congratulations to the GitLab team! We use a self-hosted version and are very happy with it.
16
ausjke 1 day ago 1 reply      
For some reason I feel Redmine + Gitolite is the best for everything, except for code-review that is.
17
marcelo_lebre 1 day ago 1 reply      
Nicely done!
18
yAnonymous 1 day ago 1 reply      
Congrats and thanks for the great software!
19
joshmn 1 day ago 3 replies      
Paging @sytse; "GitLab CEO here" coming soon... :)

For those who don't get the joke, https://www.google.com/search?num=40&es_sm=119&q="GitLab+CEO...

20
fibo 1 day ago 1 reply      
I don't like the idea of free-as-in-beer software. GitHub is great, but Gitlab seems like a cheap clone, targeting people who want to pay less or nothing. I don't think it is ethical to clone ideas; to build a better world we need new ideas.
You are a kitten in a catnip forest bloodrizer.ru
165 points by madars  1 day ago   54 comments top 21
1
Torn 20 hours ago 4 replies      
So this is A Dark Room with kittens?

Here's two one-liners for the console to help speed up the process

 setInterval(function () { $('span:contains(Gather catnip)').click() }, 5);
 setInterval(function () { $('span:contains(Refine catnip)').click() }, 100);
clearInterval() on the numbers given back will stop the auto-clicking

2
vlunkr 13 hours ago 1 reply      
Oh man, I just had to stop playing this game. It's an incremental game like cookie clicker or candy box. Except it's much more complex. You have to balance more than a dozen different resources as the game goes on. I left it running in a tab at work, but I was going back to it so often during the day that I had to just wipe my progress and leave.
3
frisco 23 hours ago 1 reply      
It feels a lot like A Dark Room [0]. I remember ADR was open source, I wonder if it's built on or inspired by that.

[0] http://adarkroom.doublespeakgames.com/

4
flashman 21 hours ago 1 reply      
5
tammer 8 hours ago 1 reply      
This is interesting and fun! Although from the title and its placement on HN, I immediately hypothesized it to be someone's blog post about the endless distractions ever-present for the digital worker.
6
joshvm 22 hours ago 3 replies      
I feel like the woodcutter bonus should be increased. It's not economically viable once you have even a moderate number of farmers. It takes 100 seconds to produce seven wood; if I have 5 farmers I can do it in a third of the time, and I won't die of starvation in the process.

I fear today's productivity is already lost.

7
unchocked 23 hours ago 0 replies      
I am. Such a sanitized version of resource management addiction, I don't even feel unclean playing it.
8
davidw 22 hours ago 1 reply      
>go east

Behind House

You are behind the white house. A path leads into the forest to the east. In one corner of the house there is a small window which is slightly ajar.

9
ojiikun 3 hours ago 0 replies      
Somewhat broken in Chrome on Android. Tapping a button brings up what looks like a weird tooltip that blocks most of the screen. Pretty much unplayable.
10
thestepafter 22 hours ago 2 replies      
Would it be possible to get a copy of this? I would like to do some HTML and CSS updates to it. Do you have the project on GitHub?
11
th0br0 18 hours ago 1 reply      
Simple means of accessing the currently available resources:

 gamePage.resPool
e.g.

 gamePage.resPool.maxAll()

12
zocoi 2 hours ago 0 replies      
Found this in game source:

 $("#devPanelCheats").show();

13
rodgerd 20 hours ago 0 replies      
Horribly addictive. You have been warned.
14
coldpie 14 hours ago 3 replies      
Can someone explain the appeal of this? I'm not going to spend minutes clicking on a button repeatedly. What changes? Is there a story that gets told or something? Some min/max puzzle to figure out? What compels you to click this button?
15
cheshire137 11 hours ago 0 replies      
Need a newline in there, or this gets grim. http://imgur.com/7JfnPbj
16
mapt 8 hours ago 0 replies      
Metagame: Competitive speedrun videos
17
d_theorist 14 hours ago 0 replies      
All my kittens are dead.
18
egmracer02 21 hours ago 0 replies      
Unlimited catnip:

 function clickn(n) { for (i = 0; i < n; i++) { $("span:contains('Gather')").click(); } }
19
amyjess 7 hours ago 0 replies      
Well, that killed my productivity for the day.
20
PopeOfNope 18 hours ago 0 replies      
Your cat is eaten by a grue?
21
innguest 10 hours ago 0 replies      
Wow, I really enjoyed Cookie Clicker but this is unplayable. I just learned a lesson on how important graphics are for some people.
Hack.chat A minimal, distraction-free chat application github.com
180 points by habi  17 hours ago   65 comments top 25
1
jetpm 11 hours ago 5 replies      
I also made a chat app; it's like the exact opposite of this, it's totally distracting: http://quatshy.com
2
unknownknowns 13 hours ago 2 replies      
The admin password in the config is the password on the live server FYI ;-).
3
idoco 12 hours ago 1 reply      
I posted MapChat (idoco.github.io/map-chat) three weeks ago at about the same time of the day and also got tons of traffic from HN.

HN crowd was brutal, and I wasn't really ready for that. People were trying to crash it in so many ways. But the overall experience was really fun :) and I also got some very valuable feedback.

4
ixwt 12 hours ago 0 replies      
Somebody seems to have found a way to constantly broadcast system messages and is flooding chat.

I really like how simple it is. I do like how it works similar to Mozilla Hello. Just give somebody a link, and they can join.

Also, I'm getting a cert error. Identity cannot be verified.

5
erikb 15 hours ago 2 replies      
I would understand a markdown format for chat, because it's easy, short and fun. What made you use something as powerful as LaTeX? With great power comes great responsibility, right?
6
emehrkay 14 hours ago 1 reply      
I really like how server.js is setup. Very straight-forward and I would imagine adding features would be pretty easy to build in.

edit: things like storing which rooms users are chatting in using Redis so that you can run multiple instances and balance them. Or persisting the messages to a DB. Or adding processes to download images/bring in website snippets when urls are posted (like Slack).
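For context, a generic sketch (not hack.chat's actual server.js - the port and the lack of rooms, persistence and rate limiting are simplifications) of the broadcast loop such a server reduces to, using the "ws" package:

 var WebSocketServer = require('ws').Server;
 var wss = new WebSocketServer({ port: 6060 });

 wss.on('connection', function (socket) {
   socket.on('message', function (msg) {
     // naive broadcast: every connected client gets every message
     wss.clients.forEach(function (client) {
       client.send(msg);
     });
   });
 });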

7
stann 14 hours ago 1 reply      
This is a cool project. Recently, I have had to teach a High School kid some basic science using Whatsapp and found it lacking. Searching on the app stores for a messaging app with Latex support yielded nothing. Already gearing up to roll up my sleeves and build one.
8
snehesht 15 hours ago 0 replies      
9
waynenilsen 15 hours ago 1 reply      
nice work, here is a brief feature request list

* when a formula is copied, the latex should be copied

* support for \begin{align}

10
cbsmith 7 hours ago 0 replies      
A distraction-free chat application seems like an oxymoron...
11
curiousjorge 12 hours ago 0 replies      
I just love how minimal and easy it is. No more waiting for a monolithic javascript front end to load or a "loading please wait", no glossy buttons or emoticons or avatars or github sign-ins.

It even has IP flood detection by default! Multiple line protection should prevent the flood of ascii penis art and other obscenities.

12
codezero 12 hours ago 0 replies      
There was a time when telechat (chat over telnet, mid-late 90s early 00s) was a pretty cool niche chat medium. I really miss it.

It allowed more customization and control than IRC for the host, and a bunch of small communities popped up around them.

13
alfg 11 hours ago 0 replies      
Good work. The minimal design is an example of less design == good design.
14
chuhnk 11 hours ago 0 replies      
Interesting. Was working on something similar but more focused on streams with no identity. http://malten.me/
15
mrcactu5 10 hours ago 1 reply      
the KaTeX doesn't seem to render on mine: http://i.imgur.com/5GMaIAq.png

Personally I had never heard of KaTeX - since Khan Academy didn't consider MathJax fast enough, they optimized it: https://khan.github.io/KaTeX/

16
Tepix 14 hours ago 0 replies      
Great stuff!

Some feature requests:

a) allow ignoring users

b) allow UTF-8 in usernames

17
QuantumRoar 14 hours ago 0 replies      
Would be nice if an error is thrown when the LaTeX can't be parsed instead of just printing whatever TeX-foo you've written down.

Also, I'd like to see Tikz and pgfplots support :)
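For what it's worth, KaTeX does throw on parse failures, so this is mostly a matter of catching it - a minimal sketch (the element id and TeX string are illustrative):

 var el = document.getElementById('math');
 try {
   katex.render('\\frac{1}{2}', el);
 } catch (e) {
   // katex.ParseError on malformed input
   el.textContent = e.message;
 }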

18
nunull 11 hours ago 1 reply      
Somebody could write a CLI based client for this. That would be awesome and it looks like it would be very straight forward, too.
19
personjerry 7 hours ago 1 reply      
Why are so many people trying to make an IRC-like app right now?
20
Vexs 14 hours ago 0 replies      
Seems to work pretty well after messing about on ?HN, lotsa people testing bad LaTeX though.
21
samurailink3 14 hours ago 0 replies      
Parsing usernames in LaTeX could lead to abuse/annoyance. Very cool project.
22
comrh 14 hours ago 0 replies      
> "POLICE.arrest(getAddress(badClient))"

If only

23
maxerize 13 hours ago 0 replies      
someone ban :^[()D]
24
vfvf 12 hours ago 0 replies      
awesomeeeee
25
mykhal 14 hours ago 0 replies      
Nice, but I bet nobody is currently able to write there a proof for

 $\Re(s) = \frac{1}{2}$ for all s where $\zeta(s) = 0$ and $0 < \Re(s) < 1$
.. I mean, where zeta is the Riemann's one :)

Things to Know When Making a Web Application in 2015 venanti.us
194 points by venantius  10 hours ago   125 comments top 23
1
tspike 8 hours ago 5 replies      
First of all, thanks for the nice writeup. I hate that comments tend to home in on nitpicking, but so it goes. My apologies in advance.

> If you're just starting out with a new web application, it should probably be an SPA.

Your reasoning for this seems to be performance (reloading assets), but IMHO the only good reason for using a single-page app is when your application requires a high level of interactivity.

In nearly every case where an existing app I know and love transitions to a single-page app (seemingly just for the sake of transitioning to a single-page app), performance and usability have suffered. For example, I cannot comprehend why Reddit chose a single-page app for their new mobile site.

It's a lot harder to get a single-page app right than a traditional app which uses all the usability advantages baked in to the standard web.

2
balls187 9 hours ago 7 replies      
If you're new to web application development and security, don't blindly follow the advice of someone else who is also new to web application security.

You should instead have a security audit with people who have experience in security, so they can help you identify where and why you're system is vulnerable. If no one exists on your team/company that does, then hire a consultant.

Security is a hairy issue, and no single blog post/article is going to distill the nuances down in an easy to digest manner.

3
joepie91_ 4 hours ago 0 replies      
> If you can get away with it, outsource identity management to Facebook / GitHub / Twitter / etc. and just use an OAuth flow.

The thing that everybody seems to overlook here: this has serious legal consequences.

You are demanding of your users that they agree to a set of TOS from a third party, that does not have either their or your best interests at heart, and that could have rather disturbing things in their TOS - such as permission to track you using widgets on third-party sites.

Not to mention the inability to remove an account with a third-party service without breaking their authentication to your site as well.

Always, always offer an independent login method as well - whether it be username/password, a provider-independent key authentication solution, or anything else.

> When storing passwords, salt and hash them first, using an existing, widely used crypto library.

"Widely used" in and of itself is a poor metric. Use scrypt or bcrypt. The latter has a 72 character input limit, which is a problem for some passphrases, as anything after 72 characters is silently truncated.

4
quadrature 7 hours ago 1 reply      
This is a bit of a pet peeve of mine, but that banner image is 10 megabytes, it can be compressed down to 2mb without any perceptible loss of quality. Heck it could probably be shrunk further if you can accept a bit more loss because most of the image is blurry and noisy anyway.

heres a compressed version: https://www.dropbox.com/s/bw606t7znouxpj1/photo-141847963101...

5
devNoise 8 hours ago 3 replies      
Question about JavaScript and CDN for mobile devices. Should I use a CDN for standard libraries or should I just concat and minify all my JavaScript?

The concat and minify seems better as that reduces the JavaScript libraries and code load to a single HTTP request.

A CDN seems nice in theory. Reality is: does the browser have the library cached? Is the library cached from the CDN that I'm using? The browser is making more HTTP requests, which sometimes take more time than downloading the library itself.

I agree that using CDNs is a good speed boost. I'm trying to figure out if hoping for a library cache hit outweighs the cost of a cache miss.
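One common middle ground, for what it's worth: try the CDN copy and fall back to a local bundle if it didn't load. A sketch with jQuery as the example (the URLs/paths are illustrative):

 <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
 <script>window.jQuery || document.write('<script src="/js/jquery-2.1.4.min.js"><\/script>');</script>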

6
balls187 10 hours ago 1 reply      
> If you can get away with it, outsource identity management to Facebook / GitHub / Twitter / etc. and just use an OAuth flow.

OAuth isn't identity management, it's for authorization.

Each of those platforms does provide its own identity management, but that isn't OAuth.

7
romaniv 8 hours ago 0 replies      
> All assets - Use a CDN

> If you can get away with it, outsource identity management to Facebook / GitHub / Twitter / etc. and just use an OAuth flow.

Questionable advice. At the very least neither of these two are some kind of automatic "best practice" everyone should just follow.

> it can be helpful to rename all those user.email vars to u.e to reduce your file size

Or maybe you should write less JavaScript so the length of your variable names does not matter.

8
vbezhenar 8 hours ago 1 reply      
One thing to note is the login redirect. Please be sure the redirect parameter is a local URI and don't redirect the user to another site.

Maybe even append HMAC signature to that parameter with user IP and timestamp. Might be an overkill, but still be careful with craftable redirects, they might become vulnerability one day.
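A minimal sketch of the local-URI check (Express-style; the `next` query parameter name is hypothetical):

 var target = req.query.next || '/';
 // allow only same-site paths: "/dashboard" passes,
 // "//evil.com" and "http://evil.com" both fail
 if (!/^\/(?!\/)/.test(target)) {
   target = '/';
 }
 res.redirect(target);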

9
shiggerino 9 hours ago 4 replies      
>When storing passwords, encrypt them

Nopenopenopenopenope!

This is terrible advice. Don't do this. Remember what happened when Adobe did this?
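The standard fix - hash, don't encrypt - is a few lines with the widely-used "bcrypt" npm package; a sketch where the cost factor of 10 and the variable names are illustrative:

 var bcrypt = require('bcrypt');

 // on signup: the salt is generated and embedded in the hash
 bcrypt.hash(password, 10, function (err, hash) {
   // store `hash`, never the plaintext
 });

 // on login
 bcrypt.compare(candidate, storedHash, function (err, match) {
   // `match` is true only for the correct password
 });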

10
jameshart 9 hours ago 2 replies      
"You don't have to develop for mobile..."

... well, no. Technically you don't have to. But you almost certainly should.

11
patcheudor 9 hours ago 0 replies      
For mobile apps that use WebView and/or has the capability to execute javascript or any other language provided by any network available resource I'd like to add:

ALWAYS USE CRYPTOGRAPHY for communication! Simply doing HTTP to HTTPS redirects is not sufficient. The origin request must be via HTTPS. Also make sure the app is properly validating the HTTPS connection.

Sorry I had to shout, but I'm growing tired of downloading the latest cool app that is marketed as secure only to find that it doesn't use HTTPS and as a result I can hijack the application UI to ask users for things like their password, credit-card number, etc., all without them having any way to tell if they are being asked by some bad guy.

12
toynbert 8 hours ago 1 reply      
As a web application developer in 2015+ I would argue that developing with mobile in mind should be required, or at least taken into consideration. At bare minimum, have a pre-deployment test: is my app unusable / does it look terrible on the most popular iPhone/Android?
13
Domenic_S 8 hours ago 1 reply      
How to make a reasonably decent webapp in 2015 without having to worry about bcrypt and open redirects and such:

1. Use a widely-accepted framework.

2. Implement your application using that framework's methods.

Why a beginner would implement even 1/3 of this list manually is beyond me.

14
Kudos 6 hours ago 0 replies      
One big omission from this list: gzip. Before you ever think about uglify, make sure you're gzipping your textual assets.
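In an Express app that can be a single middleware - a sketch assuming the "compression" npm package (many deployments do this at the nginx/CDN layer instead):

 var express = require('express');
 var compression = require('compression');

 var app = express();
 app.use(compression()); // gzip compressible responses

Either way, gzip typically shrinks JS/CSS/HTML far more than minification alone.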
15
JikkuJose 58 minutes ago 0 replies      
Hey, uncss supports dynamically added stylesheets too (via running it through PhantomJS).
16
martin-adams 6 hours ago 2 replies      
>> When users sign up, you should e-mail them with a link that they need to follow to confirm their email

I'm curious, why is this good? Sure, sending an email to them so they confirm they have the correct email, but what is the benefit of the verification step? Is it to prevent them from proceeding in case they got the wrong email? It would be nice if this was justified in the article.

I would also add, that changing a password should send an email to the account holder to notify them. Then when changing the email address, the old email address should be notified. This is so a hijacked account can be detected by the account owner.

17
Yhippa 8 hours ago 0 replies      
I like this list.

> Forms: When submitting a form, the user should receive some feedback on the submission. If submitting doesn't send the user to a different page, there should be a popup or alert of some sort that lets them know if the submission succeeded or failed.

I signed up for an Oracle MOOC the other day and got an obscure "ORA-XXXXX" error and had no idea if I should do anything or if my form submission worked. My suggestion would be to chaos monkey your forms because it seems that whatever can go wrong can. Make it so that even if there is an error the user is informed of what is going on and if there's something they can do about it.

18
Quanttek 7 hours ago 2 replies      
> The key advantage to an SPA is fewer full page loads - you only load resources as you need them, and you don't re-load the same resources over and over.

I don't know much about web development, but shouldn't those resources get cached? Isn't the disadvantage of SPAs that you are unable to link to / share a specific piece of content?

19
donmb 7 hours ago 0 replies      
Rails has most of this out of the box. Use Rails :)
20
sarciszewski 7 hours ago 0 replies      
>For all of its problems with certificates, there's still nothing better than SSL.

Yes there is. It's called Transport Layer Security (TLS).

21
andersonmvd 9 hours ago 0 replies      
When using SPA, validate CORS origin instead of allowing *.
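A minimal sketch of that (Express-style; the whitelist is illustrative) - echo back only origins you recognize rather than "*":

 var ALLOWED = ['https://app.example.com'];
 app.use(function (req, res, next) {
   var origin = req.headers.origin;
   if (ALLOWED.indexOf(origin) !== -1) {
     res.setHeader('Access-Control-Allow-Origin', origin);
   }
   next();
 });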
22
anton_gogolev 7 hours ago 0 replies      
> sent to a page where they can log in, and after that should be redirected to the page they were originally trying to access (assuming, of course, that they're authorized to do so).

Smells like an information discolsure highway. I usually 404 all requests that hit "unauthorized" content.

23
stevewilhelm 6 hours ago 0 replies      
Internationalization?
Japans New Satellite Captures an Image of Earth Every Ten Minutes nytimes.com
172 points by revorad  12 hours ago   33 comments top 9
1
Syrup-tan 4 hours ago 3 replies      
I wrote a simple shell script[0] to scrape and output the latest image.

It uses the tiles from their online satellite map[1], and can output images in increments of 1x1, 2x2, 4x4, 16x16 tiles (each tile being 550px by 550px). Here is an example with 2x2 [2]

If you have any suggestions or bugfixes, feel free to fork or comment.

EDIT: Also works for a single tile[3]; also edited for clarity.

[0] https://gist.github.com/Syrup-tan/1833ba1671c7017f0d59

[1] http://himawari8.nict.go.jp/

[2] https://denpa.moe/~syrup/himawari8.png

[3] https://denpa.moe/~syrup/himawari8-single.png

2
johansch 7 hours ago 1 reply      
The resolution of the "full disk" (i.e. whole earth) natural color images appears to be 11000x11000 pixels every 10 minutes. I can't find any realtime access to these images though - could anyone else?

They do have a cloud service for disseminating the imagery, but only for "official use":

http://www.data.jma.go.jp/mscweb/en/himawari89/cloud_service...

"Until Himawari-8 becomes operational, NMHSs wishing to release Himawari-8 data and products to the public are requested to consult with JMA beforehand."

Edit: Here is at least a tile-zoomer with some sort of realtime access to high-res imagery: http://himawari8.nict.go.jp/

3
pavel_lishin 9 hours ago 2 replies      
Will there be a place where they can be downloaded? A live Planet Earth desktop wallpaper would be pretty great.
4
Animats 8 hours ago 1 reply      
Nice. Japan needs better weather data; too many hurricanes and too much coastal development. From geostationary orbit, the resolution has to be low, but it's always on.

The US has two geostationary weather satellites, which are usually parked roughly over Panama and Hawaii. Neither has good coverage of Japan. Korea's COMS satellite does, though. China has several, including one that's usually pointed roughly at Taiwan.[1] Right now, you can see the hurricane that's due east of Shanghai.

[1] http://www.hko.gov.hk/wxinfo/intersat/fy2e/satpic_s_vis.shtm...

5
sosuke 9 hours ago 0 replies      
The Earth is so beautiful.

I saw GOES-R http://www.goes-r.gov/ and the pronunciation I heard in my head made me think of Ghostbusters.

6
state 8 hours ago 1 reply      
I have always hoped that someday Google Earth would just be live.
7
chriscampbell 1 hour ago 1 reply      
Is it common to lock a satellite into a stationary orbit?
8
bargl 9 hours ago 2 replies      
So my first thought was: will this replace the doves by Planet Labs? That was an earlier story on HN. https://news.ycombinator.com/item?id=8158295

It won't because these are geostationary satellites (if I read the post correctly). So you'd need at least 3 of these to get a good image and that's not even considering some of the bigger issues with this. I also don't think the resolution is on par. But the images will be really cool to see.

Link to Planet Labs: https://www.planet.com/story/

9
ChuckMcM 5 hours ago 0 replies      
I find it amazing that we can do this sort of thing. However, on the animation, the fact that the terminator line changes angles is a bit unnerving.
VirtualBox 5.0 officially released oracle.com
167 points by therealmarv  1 day ago   53 comments top 15
1
bane 1 day ago 6 replies      
As a fun experiment, I do my day-to-day computing entirely in a Virtualbox Windows VM guest that I've given 2 cores, 150GB of storage and 4GB of RAM. I'm about a year and a half into the experiment and still chugging along.

It's a surprisingly performant day-to-day system, which I can snapshot to try out things, move to other machines if I need to, make backups etc. About the only thing it doesn't do well is really CPU intensive or GPU intensive operations.

But it works fine for 2 monitors, web browsing, watching videos, etc.

It's kind of surprising actually.

From time to time I'll also spin up some Linux VMs and do various dev activities in a real Linux, which I usually just background and ssh into from my Windows VM. It's kind of nice having a virtual rack of machines to monkey around on.

Less impressive has been trying to get Ubuntu to not feel terrible, but Centos works fine.

If I need better performance, I'll dive back to the host OS and do those things, but it's mostly just for gaming or music production.

Bonus, my host OS has stayed relatively free of junk and stays really snappy, even all this time later.

My only recent problem is that the Windows 10 updater won't qualify the VM guest for the upgrade. So I'll probably have to grab the ISO and try it that way.

2
therealmarv 1 day ago 1 reply      
If you are using Vagrant: Wait for the 1.7.3 release before upgrading https://github.com/mitchellh/vagrant/issues/5572 . How to upgrade Vagrant: Just install newest version. ETA: Tomorrow most likely https://twitter.com/mitchellh/status/619163221992189952
3
comex 1 day ago 2 replies      
One of the biggest features here, the virtual USB 3.0 controller, is closed source (as well as USB 2.0!). See:

https://www.virtualbox.org/manual/ch01.html#intro-installing

While it's available for free, the license only allows personal and educational use. Bleh.

4
beezle 1 day ago 2 replies      
I advise caution if you are on a Windows 8.1 host. Besides still having to deal with https://www.virtualbox.org/ticket/13187#comment:178 this new version completely disabled networking, even after a reboot.

Oracle has done a very good job of ruining VB for quite a few users.

5
yc1010 1 day ago 1 reply      
"HiDPI!support"

Does that mean Virtual Box will no longer look ridiculous on my 3840 x 2160 laptop screen?!

Downloading....

6
cowsandmilk 1 day ago 0 replies      
From the changelog[0]:

Make more instruction set extensions available to the guest when running with hardware-assisted virtualization and nested paging. Among others this includes: SSE 4.1, SSE4.2, AVX, AVX-2, AES-NI, POPCNT, RDRAND and RDSEED

This makes me incredibly happy.

[0] https://www.virtualbox.org/wiki/Changelog

7
abhv 1 day ago 3 replies      
Why do I get this from https://www.virtualbox.org when I access from Chrome? The site works from curl and Firefox.

 Traceback (most recent call last):
   File "/usr/lib/python2.4/site-packages/trac/web/api.py", line 436, in send_error
     data, 'text/html')
   File "/usr/lib/python2.4/site-packages/trac/web/chrome.py", line 803, in render_template
     message = req.session.pop('chrome.%s.%d' % (type_, i))
   File "/usr/lib/python2.4/site-packages/trac/web/api.py", line 212, in __getattr__
     value = self.callbacks[name](self)
   File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 298, in _get_session
     return Session(self.env, req)
   File "/usr/lib/python2.4/site-packages/trac/web/session.py", line 162, in __init__
     self.get_session(sid)
   File "/usr/lib/python2.4/site-packages/trac/web/session.py", line 189, in get_session
     self.bake_cookie()
   File "/usr/lib/python2.4/site-packages/trac/web/session.py", line 170, in bake_cookie
     assert self.sid, 'Session ID not set'
 AssertionError: Session ID not set

UPDATE: same error from www

UPDATE: curl --header "Cookie: trac_session=" https://www.virtualbox.org

The issue seems to be that some plugin in Chrome erases my trac_session cookie, and the website cannot handle this. I am running Disrupt, uBlock, and PrivacyBadger.

(1) Why does vbox need to track my session?

(2) What python framework are they using? App should not fail in this case.

8
snowwindwaves 1 day ago 2 replies      
I've used Xen, VMWare workstation 9/10, but primarily Virtualbox since 2008, on linux and windows hosts and linux and windows guests.

Virtualbox has really never let me down, I don't see any reason to use VMware workstation over it.

9
zurn 1 day ago 1 reply      
Does this still require its own crashy kernel driver on Linux?
10
agumonkey 1 day ago 0 replies      
The detached start option is very nice. Also the VM grouping feature, as small as it is, really helps when you set up multi-VM network labs.
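For reference, both are driveable from the CLI (a sketch; the VM name and group path are made up):

    # detached start: the GUI can be closed while the VM keeps running (new in 5.0)
    VBoxManage startvm "netlab-fw" --type separate
    # put related VMs into a group
    VBoxManage modifyvm "netlab-fw" --groups "/netlab"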
11
whoisthemachine 1 day ago 0 replies      
Hmm, I'm getting a hash sum mismatch when doing an apt-get update. Anyone else seeing this?
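If it's the usual stale package index rather than a bad mirror, clearing apt's cached lists often fixes it (a generic Debian/Ubuntu sketch, nothing VirtualBox-specific):

    sudo rm -rf /var/lib/apt/lists/*
    sudo apt-get clean
    sudo apt-get update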
12
seren 1 day ago 1 reply      
I installed it for USB 3.0 but didn't manage to make it work, even though I configured it.
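One thing worth double-checking: USB 3.0 needs both the closed-source Extension Pack installed and the xHCI controller switched on per VM, roughly (VM name illustrative):

    VBoxManage modifyvm "myvm" --usbxhci on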
13
Koldark 1 day ago 1 reply      
I might need to try this before my next upgrade to Parallels. If it works for what I need, why spend 100 bucks on each upgrade?
14
brizzle 1 day ago 3 replies      
Windows 10 support? Anyone tried it yet?
15
lsllc 1 day ago 2 replies      
Sadly, boot2docker 1.7.0 doesn't [yet?] work with VirtualBox 5
AWS Device Farm amazon.com
185 points by mauerbac  1 day ago   35 comments top 15
1
stickydink 1 day ago 4 replies      
> unlimited testing for a flat monthly fee of $250 per device

Renting remote-controlled Android devices for $250 a month, is this even remotely worth it? There aren't many devices that wouldn't pay for themselves by the end of the 2nd month...

2
jastanton 1 day ago 4 replies      
Correct me if I'm wrong, but doesn't this feel pretty expensive?

> Pricing is based on device minutes, which are determined by the number of devices you use and the duration of your tests. AWS Device Farm comes with a free tier of 250 device minutes. After that you are charged $0.17 per device minute. As your testing needs grow, you can opt for our unmetered testing plan, which allows unlimited testing for a flat monthly fee of $250 per device.
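
For a rough break-even under the quoted prices: $250 / $0.17 per device-minute is about 1,470 device-minutes per month, i.e. roughly 24.5 hours on one device. Below about 49 minutes of testing per device per day the metered rate is cheaper; above that, the unmetered plan wins.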

3
Artemis2 1 day ago 1 reply      
Amazon is coming for Google's Cloud Test Lab (https://developers.google.com/cloud-test-lab/) and Nativetap (https://beta.nativetap.io/).
4
yla92 1 day ago 1 reply      
Pretty interesting. Now we have more choices. The other day I found a project on GitHub called OpenSTF[1] which lets you set up your own device lab. It looks pretty interesting, though I have yet to try it. Generally, I'd prefer to set up my own device lab rather than test with third-party services, but it depends on your target market. The apps we make target our local market (people and devices), and the phones sold in our market differ from those sold in the States, so I don't expect the devices we need to be available on third-party services.

[1] : https://openstf.github.io
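
For anyone curious, OpenSTF's documented quickstart is small (a sketch from memory; it assumes Node.js, ADB, and RethinkDB are already installed locally):

    npm install -g stf
    stf local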

5
martin_tipgain 1 day ago 2 replies      
Check us out at https://www.testmunk.com/; we have both iOS and Android support :) Don't hesitate to reach out if you have any questions.
6
RyJones 1 day ago 0 replies      
The most crucial bit for me, which I don't see on the one-pager, is the ability to test devices in-market on carrier networks. Testing phones for Korean or Japanese carriers (or British, or Brazilian) on simulations or using GSM roaming is not good enough.
7
Wonnk13 1 day ago 1 reply      
Didn't Google introduce something similar to this at I/O a few weeks ago?
8
saurik 1 day ago 0 replies      
I am always really curious to know how these kinds of services deal with security issues: they don't control the security of the devices they are running, and often have only limited ability to reflash them. How do they deal with someone testing an app that uses a kernel exploit to install a persistent backdoor, then watching what everyone else later using that device is testing?
9
nickpsecurity 1 day ago 0 replies      
I had an idea, or more a need, for something like this many years ago. It came back to mind when OpenBSD needed funding partly because of all the different pieces of hardware they test on. Wouldn't it be nice for portability-focused projects to pool resources on hardware to cut the cost down?

Anyway, awesome to see AWS doing it in practice. As usual, it will be more interesting to see what happens when competition turns up. Cloud space has more innovation and cut-throat competition than many IT sectors. Can't wait to see what the competition costs. ;)

10
jakozaur 1 day ago 0 replies      
I wonder if they'll add iOS at some point. Maybe it's the Amazon way of doing things: start with an MVP and iterate.
11
varelse 1 day ago 0 replies      
12
ex3ndr 1 day ago 1 reply      
Why Amazon and not solutions from other mobile-focused platforms?
13
mwcampbell 1 day ago 0 replies      
Curious about what the built-in, no-scripting-required test suite can do. I wonder if this is what the app reviewers for the Amazon Appstore have been using.
14
fudged71 1 day ago 0 replies      
I was hoping this would be a Raspberry Pi farm!
15
fbaptista 1 day ago 0 replies      
ohhh :) take a look at another monkey at www.monkop.com