hacker news with inline top comments    .. more ..    8 Oct 2015 Best
YC Research ycombinator.com
1674 points by sama  20 hours ago   356 comments top 104
aresant 20 hours ago 5 replies      
I love this idea.

Not sure if this is driving your thinking but this would open a desperately needed alternative to Academia if you can scale this idea as successfully as you've scaled startups.

I've watched my genius brother-in-law (PhD Materials Science & Chemistry / Biomimicry) be consumed by a very broken UC system over the past decade.

Started with an exceptionally bright, curious, and inventive man who created breakthrough science in self-healing materials for the "benefit of the world"

10 years of grant fights, personnel struggles, under-served licensing resources, conflicting lab-vs-student priorities, etc. have put him out the other end hopeless and disenfranchised.

The world will probably lose one of its best "public" researchers to the bowels of commercial science as a result.

Go for it Sam, good luck and excited to see what your first projects are.

sama 20 hours ago 4 replies      
I was hoping to answer questions here for a while, but I'm getting pulled into a crisis so I have to go.

I'm going to do an AMA on HN later this week (mostly to answer questions about applying to YC) and would be happy to answer more questions about YCR then.

pavornyoh 20 hours ago 1 reply      
It takes a special person to donate $10 million to a new project. You don't get enough credit Sam Altman. Good job and a very good idea.

EDIT: Why am I getting downvotes for this comment? You must know that in this world not everyone is willing to part with their money and not everyone is that giving. So relax before you downvote me promptly :)

markbao 19 hours ago 1 reply      
Reminds me in some way of this tweet from PG last year:

> Paul Graham (@pg) Markets are usually quite clever but one place they break is in encouraging treatments rather than vaccines. Subscription revenue.


myth_buster 20 hours ago 2 replies      
This thread [0] from a couple of days back on Alan Kay's views on recreating Xerox PARC seems relevant.

[0] https://news.ycombinator.com/item?id=10322929

jxm262 20 hours ago 1 reply      
> We'll especially welcome outsiders working on slightly heretical ideas (just like we do for the startups we fund) and we'll try to keep things small; we believe small groups can do far more than most people think.

What's the process for an outsider to get involved? This sounds like a really cool and much needed movement. Will more information be coming soon on how to apply/join/contribute?

waterlesscloud 20 hours ago 4 replies      
TIL that Sam Altman has considerably more personal wealth than I would have guessed, if I'd ever been inclined to guess.

Anyway, this is very cool. Exactly the sort of thing that needs to happen to push the world forward in deeper ways. I'll be very curious to see what sort of teams they put together and just how fundamental and long term and heretical they go with their research.

hbhakhra 20 hours ago 2 replies      
I like the idea of YC funding research. Bias-free and politics-free research can go a long way. A couple of questions though for @sama:

1. Where will the research be based? Is YC providing space to set up a lab?

2. One of the problems I see with this is the lack of fellow researchers present on site, making casual collaboration harder (compared to a research university campus). Have you thought of that?

3. Are you planning on funding researchers with track records, or people with high-risk/high-reward ideas who are relatively new?

napoleoncomplex 20 hours ago 0 replies      
This is fantastic. Addresses the completely broken aspects of doing research in academia, gives researchers access to top developers through the YC network, gives you a much better environment stress and pay-wise than if you're a researcher, and focuses on meaningful innovation. And the research will be free to use. Nailed it in so many aspects.

@sama - since you mention that startups aren't good at solving certain types of problems, is this something you've always believed, or is it a realisation through your work at YC?

Also, was it inspired by the Shuttleworth Foundation in any way?

lacerta 41 minutes ago 0 replies      
How can we find out whether the area I currently do research in is something that YC Research would focus on in the near future? I am finishing up an MS in ChBE with a focus on designing materials using computational methods (e.g. DFT) and would like to continue with similar projects.
dewitt 20 hours ago 3 replies      
> "To start off, I'm going to personally donate $10 million..."

Holy cow. That's real money. Sincerely very impressed by the commitment.

urs2102 20 hours ago 3 replies      
@sama - are the members of these groups going to be selected by YC or will there be an application process similar to YC?

This is awesome to see, maybe YCR could be the next Bell Labs or something (although Google probably has a hold on that now).

moinnadeem 16 hours ago 0 replies      
Hi, seventeen-year-old high schooler here. I love this idea. (@sama, and anyone else behind YCR:)

All my life I had wanted to go into industry and follow that classic Steve Jobs, Elon Musk, Peter Thiel dream that many CS high schoolers chase.

But recently, I got into research. The College Board opened up a class called AP Research, which I am now in, and I spent my last summer researching machine learning at my local private university. I was surprised by how much I loved it; the math behind it fascinates me (context: I am currently taking Adv. Diff Eq and Linear Algebra my senior year).

I am now trying my own Deep Learning algorithms and working on my paper. I have always been conflicted between industry and academia, since all of my fellow HS CS friends just want to found companies in industry. Things such as YCR make research a bigger dream for youth, whom I would currently argue are too under-exposed to it. It just doesn't seem as glamorous to them.

Thank you for YCR, it reminds me of the cool things research can do. I can't wait to hopefully apply one day.

andyjohnson0 6 hours ago 1 reply      
First, this is really great. I'm looking forward to seeing how it works in practice.

"We plan to do this for a long time. If some of these projects take 25 years, that's perfectly fine with us."

Building an institution that supports this kind of long-term commitment is going to be a challenge. While there are plenty of organisations that work over those timescales, very long-term projects seem to me to be unusual - and not just because results are increasingly expected in the short term, but because 25 years is about half of a long career, and few people (imo) have the commitment and self-belief to embark on such a project. I can think of some, like SETI [1] and Mass Observation [2], but they tend to be highly distributed. The danger might be that you end up with something like the Institute for Advanced Study [3], with its debatable productivity record.

Nevertheless, this is a massively positive announcement from YC. We need more stuff like this.

[1] https://en.wikipedia.org/wiki/Search_for_extraterrestrial_in...

[2] https://en.wikipedia.org/wiki/Mass-Observation

[3] https://en.wikipedia.org/wiki/Institute_for_Advanced_Study#C...

silverlake 16 hours ago 4 replies      
> We think research institutions can be better than they are today.

I hope you'll expand on this. Is there something that YCR can do that academic research can't do? You might have more impact if you use that money to influence US and EU gov't to invest more in basic research. A 1% increase in annual gov't funding will be vastly larger than YCR's total budget.

The Gates Foundation is tackling difficult health issues that aren't profitable for companies and neglected by poor governments. They've found an underserved gap. What's the gap that YCR is uniquely suited to tackle?

netcan 5 hours ago 0 replies      
Wow! This seems cool and potentially very useful.

A lot of "how do we fix academia" discussions might be (kind of) asking the wrong questions. Academia is a very wide cover over a lot of varied things. That cover is an institution. It has values, rules, and ways of doing things. There might not be anything wrong with the institution per se; rather, its success has brought too many things under its cover. For people, ideas, or pursuits that don't really belong there, it seems broken.

Take publishing. Publishing is a great practice and a lot of the conventions around it are really useful. But, it's not the only way to make information available, accessible, trustworthy, etc. There are other ways that might make more sense for some other project.

rjurney 19 hours ago 0 replies      
This is the biggest news since YC started. The YC model works, but it isn't churning out disruptions like Xerox PARC did... ethernet, the laser printer, the PC. It tends to produce Reddits. This could fill that gap. Bravo...
simonebrunozzi 12 hours ago 0 replies      
Sam Altman is personally donating $10 million to YCR. This is one of the greatest things he could do to show some honest generosity towards the world. Bravo.
mrdrozdov 20 hours ago 0 replies      
This article seems to have relevance. The Recurse Center is a YC company that recently began pursuing programming language research. https://www.recurse.com/blog/83-michael-nielsen-joins-the-re...
antognini 20 hours ago 1 reply      
Very interesting! But exactly how fundamental is this research meant to be? Will there be any mathematicians working on projects not at all related to cryptography or physicists working on projects entirely unrelated to fusion?

As an astrophysicist in the thick of applications for jobs in both industry and academia, I strongly identify with the points raised in the article. While I consider my research slightly heretical with the potential to solve many big problems in astrophysics, I don't have any illusions that it will really lead to any practical developments. So is this initiative meant to keep scientists like me working on the problems we're working on, or is it meant to prevent computer scientists working on a new way for cars to drive themselves from jumping ship and joining Google or Uber?

pcmaffey 18 hours ago 2 replies      
"heretical" "dangerous" "free to everyone"

Strong language. Especially coming from an organization that generally touts "ideas as useless." Well here's a model based exponentially upon the promise of "ideas." I sincerely hope you disrupt the current models, which have been severely hijacked by interest groups, short-term thinking, and governance.

So yes Sam, you have some bold ideas (you're not yet sharing). Put $10m down to validate them and bring them to the world. You've earned the chance. And if you succeed, you'll create a new model that touts the power of ideas, takes responsibility for them, but does not own them.

Whatever we can do to help.

vickychijwani 16 hours ago 4 replies      
Surely I'm missing something here. Won't giving YC equity to these researchers (who are working not-for-profit and releasing IP freely) incentivize research that benefits YC startups directly and hence increases the value of the equity held? Can someone explain why not? Honest question.
go1979 20 hours ago 2 replies      
Will scientists at YC Research continue the common pattern of academic research/publications? Or are there some grander plans to disrupt traditional research?
mrdrozdov 20 hours ago 1 reply      
@sama is this targeted towards pre-PhDs who are deciding between pursuing their PhD or working in industry, as well as post-PhDs who are full-time researchers?
geebee 19 hours ago 0 replies      
It's easier to comment on something specific when you disagree than when you agree, but that leaves a very critical trail after a while. Since I've been critical of Sam's Airbnb position and expressed a lot of concern over the amount of power a VC would have in the "founder visa" posts in the past, I would like to chime in to say that YC Research is an intriguing and exciting development with a lot of money behind it. This has the potential to be pretty great.
Fede_V 8 hours ago 0 replies      
I think this is absolutely amazing.

sama has written multiple times about the 'distractions' that pull founders away from building beautiful products. Things like attending conferences, networking, fundraising, etc...

Conservatively, the academic equivalent of those things takes up at least 80% of a professor's time, and the more successful you become, the less time you actually dedicate to research. In the life sciences, it's not uncommon to have PIs with 30+ people in their groups, and they might be barely aware of the kind of research that's being done in their lab.

I cannot emphasize enough how thrilled I am by this. One tiny concern: it's currently much easier to get H1B visas for researchers working in educational institutions than it is for people working in private for profit companies. Is there any way YC can spin this research into a legal status that would allow it to recruit researchers through the 'easier' academic H1B pipeline?

_pius 20 hours ago 1 reply      
This looks amazing ... a modern Bell Labs or Xerox PARC.
CPLX 20 hours ago 3 replies      
> YC has a very high problem flux at this point

What does this phrase mean in English?

staunch 18 hours ago 0 replies      
This is the kind of thing Silicon Valley should be doing all the time. I hope Sam's guts are replicated rather than just admired by other investors. Even if they're not, this is a great thing.
hpvic03 11 hours ago 0 replies      
This is great.

I always considered getting a PhD and becoming a researcher, but decided to remain an engineer for the time being because I've heard so much about how the academic research system is broken, both with politics and low pay.

If this model works, it might not only help existing researchers be more productive, but it might motivate other people considering this career to actually do it instead of other careers.

imajes 20 hours ago 4 replies      
@sama: is one of the intentions for this group to acquire patents, so that you may be able to use them in defense for YC companies?
choppaface 20 hours ago 3 replies      
There are a few YC technical founders who were fired by their non-tech co-founders for rather flimsy culture reasons. So far YC has been pretty hands-off. What makes YC think it can be effective at managing researchers? Wouldn't that be much more a function of the YC director(s) of research than of the program?
mehrdada 20 hours ago 0 replies      
This is great. I am hoping things like this will shake the dynamics of publication-oriented credentialing and ultimately the value proposition of academia/academic research.
S4M 20 hours ago 1 reply      
That's great! Two questions:

- Will the researchers need to be based in the Bay Area - AFAIK: not the best place to raise a family?

- In the short term, how many researchers do you plan to hire?

interknot 16 hours ago 0 replies      
This sounds like a great idea, and I can't help but think of this as a very "Valley" take on Renaissance-style patronage.

Here's hoping this effort has similarly transformative results!

j0e1 10 hours ago 0 replies      
Great job Sam! YC is no longer only about starting new companies and making them profitable, but about improving lives for everyone on the planet. And to do that, 'YCR is a non-profit.' Just brilliant!
liedra 7 hours ago 0 replies      
I hope there will be an independent ethics review committee or similar.

The modern concept and frameworks of Responsible Research and Innovation might be a helpful starting point. https://en.wikipedia.org/wiki/Responsible_Research_and_Innov...

curiousfiddler 19 hours ago 0 replies      
I just passively started looking around for a change. My criteria are exactly what's described here: "work that requires a very long time horizon, seeks to answer very open-ended questions, or develops technology that shouldn't be owned by any one company." [1]. I have been unable to find very many places that will take non-PhDs and let them participate in projects with these characteristics. I really do hope YCR relaxes the PhD constraint a bit. Looking forward to it.

1. https://ycr.org/

quantum_state 4 hours ago 0 replies      
This is simply brilliant and fundamentally good... As a former Bell Labs researcher in fundamental physics, I hope this will allow people with profound ideas to pursue them...
buro9 17 hours ago 0 replies      
I'd love to see research on shaking up governments and the existing political process with technology.

Now, there are flaws to the use of electronic voting systems and the level of transparency required, but that is not what I mean.

What I mean is that every time I have run a large community the social structure that naturally emerges is not the one that we all allow ourselves to be governed by.

I'd like to research how tech could be used to empower lay citizens to shape their society in the way that they imagine it should be.

For example, I was interested in Google's internal experiment with voting transparency and delegation, and how it allows for a different kind of inclusive democratic process that isn't reflected in our existing systems.

One of the moonshot goals of the forum/community startup I created was to start to provide tools to support communities that shaped their own political and social structures, with an express goal of training them in the possibilities of being engaged in their society and then letting them use the tools to shape the real world in the same way that they shaped online societies.

Simply, I'd love to see research and later innovation into how societies and democratic functions could be.

When we talk about changing the world, I really mean it.

thanatropism 20 hours ago 0 replies      
Is this more targeted at semi-cofounders/early-employees that fall by the wayside (like Aaron Swartz, poor soul) or is the plan hiring top researchers (the way Google got Hal Varian and Peter Norvig)?
abalone 19 hours ago 0 replies      
DARPA, the publicly funded agency that generates most of the core tech that sustains Silicon Valley, has an annual budget of $3B. The National Science Foundation, which funds a large chunk of core science research, is $7B.

Where would YC Research fit in this landscape? Would it be seeking government funds, and just offer a different institutional model than universities for hosting the research? Or would it be primarily based in private funding somehow (and how?)

humility 6 hours ago 0 replies      
Congratulations on this great endeavour! I've frequently mulled setting up a website like Patreon to help support and accelerate crucial research, which is seeing less and less progress these days. I'm glad Y Combinator took note of this problem too!
markhelo 11 hours ago 0 replies      
I love this idea. However I am not too surprised that YC is doing this. YC is a different kind of corporation. Instead of hiring engineers, they hire startups. And just like any well established company, they are now investing in R&D. Make no mistake, there will be a commercial bent to the R&D that YC does and there is nothing wrong about that. R&D should not be just about publishing papers but advancing new ideas in the market too. So overall I hope this takes off and they bring ground-breaking research out of this venture.
jondubois 17 hours ago 0 replies      
I agree, making the world a better place doesn't pay well. Screwing the world over pays really well.

I remember when I was in school people would often say that to get money, you have to find something you love doing and the money would come by itself. I don't know if that used to be true, but that is certainly no longer the case.

In general, doing good is at odds with earning money.

trevmckendrick 20 hours ago 0 replies      
This is a logical step in YC's path as the "new" university. It's a great experiment to run, and I can't think of better people to do it.
anigbrowl 19 hours ago 0 replies      
> We'll especially welcome outsiders working on slightly heretical ideas

Are you going to require qualifications, or just sufficiently intriguing proposals?

earlyadapter 13 hours ago 0 replies      
After the collapse of Bell Labs and the inefficiencies at the university level, this is an effort that can truly effect change.

True fundamental innovation takes 15+ years... stable ecosystems (govt policies, strong economies, etc.) are needed to foster this type of innovation. YC is definitely a stable environment to foster fundamental innovation.

Fundamental innovation also creates new methodologies in two or three areas, whether it be technology, market, or implementation. Everything else is incremental innovation (new tech enhancing products in a known market), which can be achieved in 1 to 3 years. Kudos YC!

arrel 20 hours ago 1 reply      
> We're not doing this with the goal of helping YC's startups succeed or adding to our bottom line.

Helping YC startups succeed might not be the primary goal, but it's silly to say the project isn't also aimed at helping YC startups. Having access to a top-quality research team can absolutely help startups succeed, so this is either disingenuous or surprisingly unimaginative.

kfitchard 18 hours ago 0 replies      
It's an interesting point about fundamental research. The old industrial labs are dead and while academic institutions can do a lot someone in the private sector has to pick up the research mantle. Google X is a good example, but I think smaller efforts like this are much more interesting. If it's done right that is...
kevinalexbrown 16 hours ago 0 replies      
To be honest, the freedom to fail alone would make a huge difference. On one hand there's a lot of freedom for intellectual growth in academia - in the last few months I've used deep learning, assembled hardware under a microscope, performed neurosurgery and genetic engineering (this is somewhat standard for my field). On the other hand, I make less than I did as a new cable guy, won't make more for 10 years, and every postdoc and phd student in my lab works 7 days / week standard. And if I fail, I will probably be done career-wise. LOTS of talented people leave because they see startups as the 'safer' bet - at least if your startup fails you're not doomed.
glxc 12 hours ago 0 replies      
DE Shaw gave away his hedge fund empire to start DE Shaw Research. He similarly used his own money to start a privately funded research lab. He is their Chief Scientist.
Geekette 20 hours ago 1 reply      
Interesting. I am curious, though, as to why YCR researchers will also receive equity in Y Combinator as part of their compensation, especially given that both are organized separately and YCR discoveries won't be funneled through or used in YC. I am also assuming that YCR will pay market-rate salaries.
BinaryIdiot 20 hours ago 0 replies      
I like this idea; it's pretty novel. Since there is just one group starting this off, which makes a ton of sense from a funding perspective, is this an MVP or a long term bet with regards to this specific group? Meaning if this group has trouble getting their idea off the ground in the next 1-2 months, do they get scrapped or are they going to have years to conduct their research?

An idea for future research may be to do almost like a round of startups, where a bunch of groups come to SF for 3 months, but unlike startups you find the promising ones, keep them all, and drop any that do not look promising. Basically, thinking of ways to not put all your eggs in one basket. Naturally this becomes quite problematic if you're researching something highly complex that won't show signs of promise for a decade!

skoocda 17 hours ago 0 replies      
Phenomenal undertaking! When you say "Shouldn't be owned by any one company" does that imply that these releases will all be open-source? Do you have plans to develop a new form of IP licensing to go along with the research?
guelo 15 hours ago 0 replies      
This seems weird. They'll fund open ended research and then what? Are there any metrics to guide the outcomes? Why would the average super rich guy want to put any money into this?
SalmoShalazar 16 hours ago 0 replies      
As a researcher, this is very exciting news. I can't wait to see what areas of research YC will be directing their efforts towards. I hope the world of genomics/genetics gets a nod!
koopuluri 18 hours ago 1 reply      
There's repeated mention of academia being broken due to issues such as politics, and mis-aligned interests between researchers and their departments. What have been the changes over time in Academia that led to this?
mpweiher 18 hours ago 0 replies      
Great idea!

I decided not to go into academia many, many years ago, because a short look revealed that it seems to destroy the love of the subject matter more quickly than just about anything.

Now that I am partly back working on my PhD, I am still glad I didn't make it my main source of income, because now I have the freedom to do real research, rather than gaming the system of academia with minimal viable papers and incremental research regurgitating old insights. ("Doing X in Smalltalk"; 10 years later: "Doing X in Java"; 10 years after that: "Doing X in JavaScript". Sigh.)

HorizonXP 20 hours ago 1 reply      
This sounds interesting. I'm curious to see what areas the research will be focused on.

Are there plans to provide laboratory space for areas that require it? Would researchers be housed in some space in MV, or would they work out of their own space like YC startups?

ralucam 9 hours ago 0 replies      
This is such an insanely great idea. This (alongside the YC Fellowship and the classic YC program) really leaves us no excuse for not doing something great that matters for us and for the world. Wow, YC, well played.
micheljansen 6 hours ago 0 replies      
I'm really excited about this. I left academia because of the bureaucracy, and found R&D to often be too short-sighted.

Best of luck!

zoba 20 hours ago 0 replies      
This sounds fantastic; I'm so glad to see it.

I'm curious what areas of research will be chosen, and how. I wonder if there is any room for a voting-style 'the community would like to see resources put behind this idea' decision-making process. (On the other hand, it's not the community's money, so do what you will :)).

I'm guessing all of this will be in the Bay Area?

It'd be great if there was a way for regular folks to help out with this. It seems to be something genuinely intended to help people and I'd love to contribute in some way.

superfx 20 hours ago 0 replies      
Will this be structured in a way where there are groups headed by "PIs" that set the research agenda, and researchers working under them, or will you be pursuing a different model?
alexchamberlain 17 hours ago 0 replies      
Absolutely fantastic! It's nice to see Sam et al putting their money to good use.

I really hope that we see some development of new types of programming languages. We seem to be stuck in a rut with procedural/OO/mostly-functional languages. Some research around flow-based programming, especially related to machine learning, would be welcome. I can't help thinking we're missing something here...

bholdr 13 hours ago 0 replies      
I wrote a blog post awhile back, I wonder whether YC is after a similar thing...http://yansh.github.io/articles/phd-distruption/
lfx 20 hours ago 0 replies      
Well, I always wanted to be in a YC-funded startup. Now my dreams have become bigger: now I want to get into YCR.

Congrats, YC team! I can't wait to see what you come up with for the next announcement!

dschiptsov 11 hours ago 0 replies      
Is there any way to participate remotely?

I am not eligible for any visa in so-called "developed" countries: too old and without higher education (the lack of which does not imply much in some cases, as one might see).

AndyKelley 19 hours ago 0 replies      
This is extremely exciting and I will apply as soon as the application process is announced. I would love to get funded to work on this open source digital audio workstation I've been working on[1]. Its goals are perhaps a little too ambitious and 25 years is not an unreasonable timeline for realizing them.

[1]: http://genesisdaw.org/

PabloOsinaga 20 hours ago 0 replies      
@sama: how are you thinking about evaluating the progress of the groups? It seems all the bureaucracy in the current research ecosystem derives from the fact that it is really hard to evaluate progress in research/science. So you form a committee of experts to evaluate whether a given researcher is doing well, and that eventually leads to politics and bureaucracy. What are your thoughts on evaluation?
subrat_rout 17 hours ago 0 replies      
Would love to know which fields/areas YC will be focusing on for research. Is it only for computer science? Or physics? Biomedical science or behavioral economics? Or combinations of a few fields?

And what kind of ideas/research will be given priority? Will it be bench/desktop-based research projects, or ones applied to people's lives directly?

anonbiocoward 14 hours ago 0 replies      
Would you be willing to let someone just participate? I have multiple lines of research and my physics background allows me to reach further into biology and computation than most biology types. And for a variety of reasons I don't quite have the same funding problems most people have, but I would really like to work with the groups you are able to put together. I would be happy to fill out the same application.

Your log files should be able to unmask me but for additional reasons I'd rather not publicly associate my claims above with my identity.

rw2 20 hours ago 1 reply      
Why not give YC a 7% ownership of the IP and the scientists the other 93%? That would definitely drive the more entrepreneurial scientists to work there and also allow YC to monetize products out of its own research.

I have some ideas on technologies that are harder to create and take time, but I actually want to own the IP at the end.

alecco 20 hours ago 0 replies      
Way to put your money where your mouth is! Kudos to Sam. You are a good man. Hope this delivers amazing results.
evanwarfel 20 hours ago 0 replies      
Sweet! We'll need more things like this if we want to see our share of societal progress. I have faith that more programs like this will blossom in the coming years.

@sama -is this only for severely resource intense research projects? And do you envision this turning into an alternative to graduate school too?

eachro 16 hours ago 0 replies      
Will YC Research only focus on applied research ideas? Specifically, I'd like to know about whether or not researchers in pure math, theoretic cs, theoretical physics, etc will have a place at YC Research.
fasouto 20 hours ago 1 reply      
Awesome news. It's interesting to see new ways to fund research.

BTW, the "Read more" at https://ycr.org/ points to the Y Combinator blog, not to the post permalink. It will become confusing once you write a new post.

hugh4 19 hours ago 0 replies      
A better alternative is to focus on the quality of research outputs rather than just the number. "Grant committees can't read, but they can count".

Trouble with that is it doesn't scale. It's easy to do when you're funding a handful of researchers like YC, but impossible when you're funding thousands like NSF or NIH.

Dowwie 15 hours ago 0 replies      
Are there plans for peer review and highly rigorous verification of results prior to publication?

Hopefully YCR busts some of the myths perpetuated by industry.

mortdeus 19 hours ago 0 replies      
What is the patent policy?
pitchups 14 hours ago 0 replies      
Amazing! This may have far greater impact than all of YC's billion dollar unicorns in terms of making the world better!
jrmo 15 hours ago 0 replies      
My academic friends and I are constantly talking about how we need startups to counter/disintermediate the constantly growing administrative overhead at universities. Without viable alternatives there is just no pressure to keep administrative costs from ballooning. This is a great start and I hope to see more like it.
vtlynch 16 hours ago 0 replies      
Love the epic lack of perspective in this thread
Ind007 8 hours ago 0 replies      
Thanks for making me feel good in the morning.
tefo-mohapi 8 hours ago 0 replies      
Sounds good. Would love to see what comes out.
mizzao 19 hours ago 0 replies      
Great news! Probably the only other non-university lab at the moment with this type of long-term basic research goal is Microsoft Research.
djabatt 10 hours ago 0 replies      
YC is being brave and smart ... again. Keep it up.
bra-ket 17 hours ago 0 replies      
this is great, I'd love to see something similar to the Kavli Foundation growing from this: https://en.wikipedia.org/wiki/The_Kavli_Foundation
pboutros 20 hours ago 0 replies      
This is awesome. I'm really excited to see what groups are announced - this is going to gather some great minds!
pow_pp_-1_v 17 hours ago 0 replies      
This is really cool! I wonder, though, if YCR will accept donations? It would be nice if I could personally contribute to a research project. [Long time lurker here. Couldn't help myself from commenting on this one.]
pjf 4 hours ago 0 replies      
How to apply?
it_learnses 14 hours ago 0 replies      
this is awesome. I'm just finishing my masters, and would love to do a PhD in the same area, but the TA grant is measly. No way I can do that right now.
dtournemille 20 hours ago 0 replies      
Can't view the ycr.org site. Flagged as malware by OpenDNS. https://malware.opendns.com/controller.php?url=ycr.org&serve...
ratsimihah 18 hours ago 0 replies      
Hey there! Can I apply to YC Research for my AGI research project? :)
hoorayimhelping 18 hours ago 0 replies      
This is amazing. Where do I sign up to help?
amelius 17 hours ago 0 replies      
Can we vote for projects somewhere?
ramon 20 hours ago 0 replies      
Nice, good luck! Good funding :)
softwarerocks 19 hours ago 0 replies      
As long as the people hired in are more hands-on and not the Sheldon Cooper types then it's a wonderful idea, especially that the good stuff will be made free for everyone. It's also quite endearing that you are putting so much of your own money into the idea.
graycat 11 hours ago 0 replies      
Hope YCR works. Suggestion: Get a board of advisors who have had a lot of experience managing research, and let them occasionally advise you on opportunities and pitfalls.

I got an applied math Ph.D., just to improve the good career I already had going in applied math and computing, had no intention of being a professor, but for a while, having to do with my wife in her long illness, I was a professor.

The biggest problem I had in research was just getting the mathematical word processing done. The typing was much more difficult than the research! I published some papers in some good journals and never had a paper rejected or needing significant revision. I could have done a lot more in research, but I was really interested just in making money.

For research in academics, that is, the STEM fields, I have long had a suggestion: Borrow from research in medicine!

Why, how, in what respects? Sure: Research in medicine also has a clinical side, and a lot of the research, applied research intended to connect with applications, very much needs the clinical side.

E.g., instead of all the seminars being professors and students presenting solutions still looking for applications, have about half the seminars with people from the real world outside academics present problems looking for solutions. Have professors with their students attack such problems. That way you could get some problems to work on, simple, medium, and really difficult and important, e.g., P versus NP just from applications of optimization to, say, vehicle scheduling or communications network design. E.g., I was working on something applied and rediscovered k-D trees -- a few years earlier and I would have been the inventor of k-D trees. And there are other examples of starting with a real problem from the real world and getting good results for that problem, getting a good research problem, and getting some good research progress.

Besides, such research already has one application and, thus, is much more likely to have two or many more than two.

Have an expectation that a professor pursuing applications is a professional and needs to practice their profession. And the students need to be there also as apprentices.

Also, have codes of ethics, standards on how revenue is to be distributed, and professional peer review of the clinical practice.

For more, it would help if the venture capital community were ready, willing, and able to evaluate original research intended as the crucial core of startups.

chm 16 hours ago 0 replies      
Wow, YC is realizing a dream of mine. Congratulations, it is a great idea!
tacos 18 hours ago 0 replies      
Interesting comments by pg 1322 days ago: https://news.ycombinator.com/item?id=3622545
rokhayakebe 20 hours ago 0 replies      
Ideas: Free Global Health Care, Free Global Education, Government Management Software (A to Z), or Private Secondary Government.
pw 19 hours ago 9 replies      
> I'm going to personally donate $10 million

Statements like this always turn me off. I guess it's because it breaks the illusion that Sam and everyone else at YC are normal people like most of us. They're not. They're INSANELY wealthy by any reasonable standard.

New Windows 10 Devices From Microsoft windows.com
873 points by yread  1 day ago   934 comments top 135
MatthiasP 1 day ago 18 replies      
That's what the OEMs get for not being able to put out a laptop that could compete with Apple in all those years, they always managed to introduce some fatal flaw in their premium laptops, from weird keyboard layouts to bad fan management software.

Let's hope the Surface Book will be successful and Apple finally gets serious competition in the premium laptop market.

aresant 1 day ago 17 replies      
Wow, this page is a mess - here's an element-based breakdown of the switching elements in their presentation:


- nav 1

- nav 2

- header with what sounds like a call to action, but no button to buy?

- hero image with text overlaid that has terrible contrast nobody will read

- inter-page menus with some insane zooming function that scared my browser

- another hero image

- 3 columns - maybe buttons? no not clickable.

- another hero image

- 3 more columns - maybe buttons? no not clickable.

- another hero image

- 3 more columns - maybe buttons? no not clickable.

- 2 columns marketing other products? maybe buttons? yes - those little ">" things mean they're clickable i guess.

- 3 more columns - maybe buttons? ok now they're buttons. but there's no ">"?

- another hero image with a price action, no button to click to follow the action! WTF!

vs. the iPad Pro http://www.apple.com/ipad-pro/ which is single column and nav-consistent throughout.

As the old quote goes "If I'd had more time I would have written a shorter letter."

Feels rushed.

pcunite 1 day ago 1 reply      
What Microsoft has done today proves they're very focused on providing a top-of-the-line personal computing experience. You can argue about servers, but when it comes to applications (which are floating windows), Microsoft has proven it can keep that title for its operating system.

I'm glad to see them take ownership over the hardware. That has always been the black mark. I build my own PCs and always bought ThinkPad to keep the good experience. Now, Microsoft can help others who don't or can't do that.

cdnsteve 1 day ago 19 replies      
Could be an MBP replacement for developers. The only thing is, for those of us running on OS X, how is Windows 10?

I love my command line and linux-like commands and tools.

- Homebrew
- Bash scripts
- Docker (Windows 10 currently not supported)
- Vagrant

I just feel the tooling for MS isn't going in the direction I am. I still have a Windows 7 desktop and it's just not the same.

dang 1 day ago 4 replies      
We merged https://news.ycombinator.com/item?id=10340117 and https://news.ycombinator.com/item?id=10340022 hither, since there shouldn't be three stories about this on the front page.

Since the live event looks done now, we've picked (what I think is?) the most significant product URL to change to from http://www.microsoft.com/october2015event/en-us/live-event. If anyone suggests a better URL we can change it again.

mark_l_watson 1 day ago 2 replies      
One advantage that Microsoft has is better cloud services and integrated apps. I am typing this on a MacBook, but I use Office 365, and all of the cloud services and apps run just fine also on my Android phone and iPad. Perversely, Microsoft supports Linux very well: I find the web versions of the Office 365 apps useful on my Linux laptops and I use Linux VPSs on Azure.

The Surface Book blows me away. It looks like it covers all use cases except for a phone.

Apple has their advantages, primarily most people love Apple devices. They just need to improve their cloud services.

Google's huge advantage is their AI based systems. Google Now has no real competition right now.

I am almost 65, and even though I enjoy running a machine learning/AI consultancy, I am transitioning to a more complete retirement. I am looking for a "winner" in the digital life space, adopt their products, and make my leisure years simpler. But, Microsoft, Apple, and Google blow me away with their products and choosing will be difficult.

nullrouted 1 day ago 14 replies      
I think the Surface Book is finally something that can give the MacBook Pro line a run for its money, this should be interesting.
nilkn 1 day ago 2 replies      
When I got my first MacBook back in 2008, it was a revolution for me. The industrial design, the multitouch trackpad that actually worked 100% of the time, the backlit keyboard, the battery life, the trackpad-friendly OS -- these all worked together to make me wonder how other laptop manufacturers had got it so astoundingly wrong.

I haven't had that feeling since then. Sure, the MacBook Air came out and it was amazingly thin. Now there's Force Touch, and that's quite nice as well. But this whole time I've just been waiting for somebody, Apple or not, to blow me away the same way I was blown away ~8 years ago, to do something that makes you wonder how everyone else got it so wrong.

Is this that moment? I don't know, but it might be.

codeulike 1 day ago 1 reply      
"The laptop that can replace the tablet that replaced your laptop is also a tablet" - someone on twitter (edit: @schaemelhout)
sz4kerto 1 day ago 4 replies      
I think they've pulled the rug from under other HW vendors, and that is well-deserved (for the vendors). MS essentially made the ultimate Windows PC (tablet, pen, long battery life, powerful GPU if needed), after waiting for the partners for years.
aleem 1 day ago 4 replies      
I have been skeptical after a lot of Microsoft misses, but the Surface Book might just put Microsoft on the high road. Splitting up the hardware breaks new ground. If I understood it correctly, the GPU is in the keyboard, which you can attach to get more power. In detached mode the screen itself has an i7 processor that's plenty powerful. So they managed to let you hot-plug the GPU while the OS is running?
bhauer 1 day ago 3 replies      
Surface Book promo video: https://www.youtube.com/watch?v=XVfOe5mFbAE

Worth watching.

DeusExMachina 1 day ago 2 replies      
I think this device really shows the different approaches of Apple and Microsoft to "tablets".

When the iPad Pro was announced, many joked about the fact that it was just like a Surface. With the release of this device from Microsoft, they look even more similar than before from the outside.

Still, the huge difference in the approach is the software. Microsoft is bending a computer operating system, with a full hardware keyboard and an interface made mainly to be used with a mouse, to adapt to touch and use as a tablet. Apple instead is slowly expanding the functionality of a pure touch operating system that rejects the idea of a mouse and a cursor entirely, to accommodate more computer uses, adding a keyboard and a pencil.

fumar 1 day ago 1 reply      
As a Surface Pro 3 user, who saves a Macbook Pro for heavy lifting (like video) at home, I can safely say the MB is getting replaced with the Surface Book. Depending on in-person use, it might also replace the Surface Pro 3. I typically watch Apple's and Google's product launches, and have to say this event was concise and unveiled products with great forward momentum. Solid work Msoft marketing team!
s3nnyy 1 day ago 8 replies      
In closed state, the surface book has a giant gap between screen and keyboard (https://goo.gl/n5B7Te). I think, it can easily happen that things in your backpack slip between screen and keyboard and damage the screen. That is why the old Thinkpads used to have a "click" mechanism.
SwellJoe 1 day ago 1 reply      
I can't believe I'm seriously considering buying a Microsoft product. But, this is a really nice looking laptop, and I suspect does not fall prey to all of the bullshit that is so common on Windows laptops, even high end ones.

If it were possible to dual boot to Linux, I'd be sold. I have my doubts that it is, however. I guess one could use a VM...I've always found that clumsy in the past, particularly in terms of getting accelerated graphics drivers working, but maybe times have changed.

voiceclonr 1 day ago 7 replies      
Page looks bad and it loads so slow! At some point, the msdn pages were awesomely fast and I would've imagined they spent more time on load testing for such a crucial day.

That aside, the product looks very interesting. I was a Windows user for a long time and switched to Mac in the last few years. This makes me want to give the newbie a try.

SNACKeR99 1 day ago 0 replies      
That was an epic launch. Docking the phone and using desktop apps, and then the removable Surface Book screen, wow. It eclipsed the Surface Pro 4 launch, which is what I expect most people were most hyped about. I have to go back and remind myself what changed there...
JustSomeNobody 1 day ago 4 replies      
I wonder if I can get it to run Linux. That would make a pretty nice machine, if so.

Edit: I'm not hating on Windows. I just don't prefer it.

Roritharr 1 day ago 0 replies      
The Surface Book feels like the second coming to me. This is everything i wanted them to produce and they delivered perfectly. Thanks Surface Team!
lewisl9029 1 day ago 1 reply      
Overall quite happy with these updates for the Surface and Surface Book, but personally I'm a bit disappointed by the lack of shiny new technologies like USB-C, Thunderbolt 3, Wireless Charging and WiGig.

After having used a laptop with a WiGig dock that can support a fully populated 10-port USB Hub, 3 DisplayPort displays, and Gigabit Ethernet transfers wirelessly across most of my room, I'm thoroughly convinced that a device with both WiGig and Wireless Charging would be absolutely amazing to use.

drewg123 1 day ago 3 replies      
How about the Band 2?

Unaware of this event, yesterday morning I ordered the first MS Band for a little more than 1/2 the list price of the Band 2. I'd been shopping for fitness trackers for a long time, and settled on the Band as the only tracker meeting my needs that can also act as an Android trusted bluetooth device & keep my phone unlocked.

From what I can tell, the Band 2 adds:

- softer, more flexible strap (soft shell vs hard shell)
- barometer for elevation
- gorilla glass
- better touch sensitivity

Is that it? If so, given the nearly 2x price difference, I think I'm just going to keep the old band and use it.

bndw 1 day ago 7 replies      
Looks terrible on Chrome: https://imgur.com/H75ejZN
gbraad 1 day ago 1 reply      
Silverlight or Flash needs to be installed? ... Eh no, thank you.

Googling for Surface Book reveals enough; a beefy (but Ultra-thin) Ultrabook with Skylake CPU + Nvidia GPU and detachable as tablet. Surface Pro is similarly specced with a Skylake CPU based tablet as successor to the previous version.

Dutch Tweakers.net:
http://tweakers.net/nieuws/105651/microsoft-brengt-zijn-eers...
http://tweakers.net/nieuws/105648/microsoft-onthult-surface-...

edvinbesic 1 day ago 0 replies      
This thing is awesome. Does this mean that we can now finally upgrade the GPU of our laptop by buying next years keyboard dock?

If they sold that separately in the future it would be a killer feature!

dogma1138 1 day ago 0 replies      
I'm kinda bummed now that i just bought the MBP 15" with dedicated graphics.

After finally swallowing my pride and getting an apple device MSFT announces this.

I've never been invested in the Apple eco-system and I've spent a week to find comparable software and even after that I'll still be needing to run a windows guest on VMWare Fusion 8.

If Amazon returns will accept it I might actually return it once the UK prices for the Surface Book are published; the funny thing is that Amazon sold the MBP 15 for 500 GBP less than the Apple store, it's almost like they knew this would happen.

thoman23 1 day ago 3 replies      
Why on earth is it so hard to find video of the keynote? Here we have a genuinely exciting product announcement with a brilliant "one more thing" hook, and I can't find the video. Microsoft should have it plastered over every news outlet in the world.

I found the video below (with 97 views!?), which makes me really want a Surface Book...and then it cuts off RIGHT BEFORE the big reveal! Microsoft seems to have caught up to Apple in terms of hardware, but it seems they still have a ways to go in marketing and PR.


noahbradley 1 day ago 0 replies      
Weighing in as an artist who works in Photoshop all day: I just preordered one because they look awesome. This is exactly the sort of machine that people in my profession want.

I've used the Surface Pro 3 exclusively (as in, no desktop) for about a year now. This will be a fantastic replacement.

Props to Microsoft for actually looking out for creatives.

bsharitt 1 day ago 0 replies      
The Surface Book looks nice. I like the idea of a tablet/laptop hybrid, but I'm not a fan of kickstands and keyboard covers. I liked the idea of Asus's Transformers but I don't recall seeing one with really good specs. They were either Android (don't need a laptop there) or the Windows ones I remember seeing were Atom powered.
aceperry 1 day ago 0 replies      
I'm amazed that Microsoft has introduced a laptop that is worthy of going head to head with the best. All of the specs look great and it looks like it could possibly replace my beloved Chromebook Pixel (the original). If linux can be installed on it easily, it would probably be my next computer.

Very interesting to see a "pen first" interface which I hope works much better than that shitty "touch first" interface in Windows 8. I don't care what the fanbois say, windows 8 sux! Windows 10 is a major improvement in usability compared to 8, so I hope MS is back on track to making productive systems. If the new surface book works as good as it looks, MS is back on track to being relevant in the computing landscape.

arthurfm 1 day ago 3 replies      
Does anyone know why the fingerprint reader on the Surface Pro 4 keyboard is only available in the US?


jpeg_hero 1 day ago 0 replies      
the PC makers and the Windows OEM ecosystem deserve this.

msft and intel tried for years to bring them along... remember the "Ultrabook initiative?"


things got so bad that msft realized they've got to "make the whole widget" despite the reluctance to cannibalize the ecosystem. (we ship gold masters, not pcs)

dionidium 1 day ago 0 replies      
As someone who was completely out of the loop on Microsoft's recent offerings, I found the comparisons at the bottom of this page helpful:


It's also loading a lot faster right now.

h43k3r 1 day ago 1 reply      
The detachable part is very important if you are buying a touchscreen laptop. I have a $1800 Lenovo X1 Carbon which has an awesome touchscreen that I rarely use because it doesn't support tablet/detachable mode.
bitL 1 day ago 1 reply      
Funny how both fruity and glassy company "inspire" each other - first fruit's version of surface, now glass' version of a "book" :-D
bitsoda 1 day ago 4 replies      
I'm not crazy about paying extra for the tablet capability and hinge as I'll only ever use it in laptop mode, also curious that there's no USB-C port. However, I'm glad Microsoft is showing its OEMs what a proper Windows laptop should be. The OEMs have been shitting the bed on this for years.
akhilcacharya 1 day ago 0 replies      
I was very impressed until I saw the pricing strategy. $1500 for a 128GB SSD/8GB RAM and no discrete graphics? I'm not as interested.
saosebastiao 1 day ago 1 reply      
I'll buy one if it can support Linux.
Artistry121 1 day ago 1 reply      
How much adoption of windows phones do you think it will take for google to start releasing dedicated apps like they do on iPhones?
Gustomaximus 1 day ago 0 replies      
I bought a Lumia 435 recently to test Windows 10 and see if it's worth going all in (they announced they would upgrade 435). Now MS are not going to upgrade this model. Behaviour like this is not how they will build credibility and new users.

Overall the current version of Windows Mobile seems 80% there. Some things it does better than Android/iPhone, many others it doesn't. I was really looking forward to trying 10, but now will likely try out alternatives (Sailfish/Firefox/Ubuntu Touch?) as I can flash an existing phone.

I'd love to see a market where there are four+ OS's each with reasonable share seriously competing to be the best. It would be a consumers dream!

brudgers 1 day ago 0 replies      
Because I use Emacs, I was pumped when I saw that the keyboard has symmetrical keys around the spacebar like a Thinkpad or Microsoft's own Natural Ergonomic 4000 keyboard.

Then I went to the computer with the big monitor and zoomed in on the keyboard photo. The assignments are committee meeting fucked. Fn and <- instead of two Cntl's on the third key outboard. Oh well.

codeulike 1 day ago 2 replies      
The Lumia 950 docked into a screen and keyboard was pretty interesting
listic 1 day ago 0 replies      
I wonder what the exact CPU models will be? Microsoft seems to be taking cues from Apple's playbook and doesn't bother stating the exact CPU model even in 'Tech specs'. Likely the Y-series of the latest 6th-gen (Skylake) mobile i5's [1] and i7's [2]. They are not released nor announced yet, but the Surface Book is not exactly shipping, either.

[1] https://en.wikipedia.org/wiki/List_of_Intel_Core_i5_micropro...

[2] https://en.wikipedia.org/wiki/List_of_Intel_Core_i7_micropro...

suprgeek 1 day ago 0 replies      
Welcome to the party Microsoft! We were missing you...
ctvo 1 day ago 0 replies      
One of the first things in years MS has put out that I'm seriously considering buying. It looks like they nailed the tablet / laptop experience from the previews so far.
BinaryIdiot 1 day ago 2 replies      
I'm very curious how well the trackpad on the Surface Book works. I've had MacBooks and the HP Spectre; the Spectre is awesome but the trackpad... I mean holy shit it's just absolutely awful, to the point where I've pretty much stopped using it.

So how good is this trackpad and does it work like a MacBook's where two fingers = right click versus this weird obsession PC vendors seem to have about dividing a single trackpad into invisible click zones?

wslh 1 day ago 0 replies      
Up to 16gb and lightweight? It competes directly with Lenovo X series and the Dell XPS 13.
pgrote 1 day ago 2 replies      
I am very interested in learning how they provide 12 hours of usage for the Surface Book. What do they disable to extend the life? The battery capacity?
arunitc 1 day ago 0 replies      
What i5/i7 processor are they using MQ, HQ, M or U?
togusa 1 day ago 0 replies      
That looks like it might actually hit the spot to replace my old ThinkPad X-series.
tdicola 1 day ago 3 replies      
Surface Book looks neat but is it an admission that the Surface Pro and its flexible keyboard, kickstand, etc. perhaps aren't the best design?
tempestn 1 day ago 0 replies      
Looks like a nice, inexpensive docking solution for the Surface Book too [1]. $200 for a full-featured dock with ethernet, 2x DisplayPort, 3.5mm, multiple USB ports, and charging, all with a single cable.

[1] http://www.microsoftstore.com/store/msusa/en_US/pdp/Microsof...

owenversteeg 1 day ago 0 replies      
Anyone here know how usable this would be with a tiling WM/on Linux? I've got a touchscreen Ideapad Yoga and I just bought the newer Yoga Pro, which looked really nice, but I've never gotten any use out of the touchscreen on Linux. If you assume that you can't use the touchscreen (which seems to be the current reality on Linux) this has very few advantages over any other laptop (beautiful construction aside.)

Specifically, I'd love any names of Linux applications that people use with the pressure-sensitive pen/touchscreen/multitouch.

-or- anyone out there that dualboots Windows and Linux on a touchscreen just for the Windows touch features? How usable is this?

I'd be very appreciative of anyone who has any comments/suggestions, as this is something that's bothered me for quite some time.

jerrysievert 1 day ago 0 replies      
Does anyone know what type of cable the SurfaceConnect cable is? Is it yet another standard, or thunderbolt re-labeled?
TurboHaskal 1 day ago 0 replies      
Some tips and common sense to combat the hype train:

- Never buy "iteration 0" products. If you do, at least wait a few months for long term user reviews.

- Never buy non upgradeable ultrabooks, unless you plan to sell it right before the warranty expires. If that's the case, then get a product that doesn't depreciate like crazy after a few months (stick to MacBooks and nothing else really).

passive 1 day ago 1 reply      
I'll be the odd one out and say that I will miss the kickstand on the Surface Book. For me, the kickstand is what has made the SP3 into the best computing device I've ever used.

While a proper laptop will cover many of the same cases as the kickstand, there are times when the kickstand can function as a hook, allowing the surface to be used in positions a laptop simply wouldn't be practical.

Probably my favorite example is to hook the SP3 over my steering wheel (not while driving!), and allow the keyboard to drape down mostly vertically. Certainly not the most productive position, but for videoconferencing on the go, it's come in handy half a dozen times.

Another similar one is lying down with my knees up. The stand wedges it nicely between my legs, with the keyboard over my thighs. It's fairly comfortable, and pretty good for typing.

Otherwise, Surface Book looks terrific. :)

6stringmerc 1 day ago 1 reply      
If this doesn't convince musicians and DJs to more seriously consider the Microsoft Surface line of products, I don't know if it's possible. Personally I love the look of this, and in time, will have it earmarked as a replacement for my Lenovo X class ultrabook. I'm impressed.
codeulike 1 day ago 1 reply      
Question is: What's the battery life of the detached screen?
pmelendez 1 day ago 0 replies      
I would love to read a new Penny Arcade review of this vs. Surface Pro 4. I hope it happens...
chx 1 day ago 0 replies      
As I posted in another thread: I was wondering and after some Googling it seems the dock is not using DisplayLink but some proprietary connector routing DisplayPort and USB3 which means it works with Linux well. This is awesome news.

The keyboard we discussed yesterday https://news.ycombinator.com/item?id=10334026 is less than an inch bigger in each direction -- I am wondering whether I could replace a laptop with that keyboard and the Surface Pro 4. Previously I wouldn't even consider because of the 8GB limitation but it's possible now with 16GB.

acaloiar 1 day ago 1 reply      
This looks like a great step in the right direction for Microsoft, but their marketing copy, "Use it like a clipboard" is profoundly jarring. Why is Microsoft afraid to use the word "tablet"? Clipboard is hardly a synonym.
deskpro 1 day ago 0 replies      
Great to see 16GB RAM in Surface Book. Disappointing they've not embraced USB C/3.1
lewisl9029 1 day ago 0 replies      
Personally, I actually prefer the 16:9 aspect ratio for any screen that has sufficient raw vertical resolution to not hinder productivity (i.e. a lot more than 1080px for me).

I feel that once you get past the point where the lack of vertical resolution limits productivity, the marginal utility offered by even more vertical resolution becomes rather insignificant, to the point where I'd benefit more from having more horizontal resolution for snapping windows side-by-side.

masklinn 1 day ago 1 reply      
1. how does it charge? I've looked at the promo video, I don't understand where the power plug is

2. on a completely related note, why the Type A ports? Why not put a bunch of nice Type C, maybe a single Type A for backwards compatibility?

anjc 1 day ago 0 replies      
What an amazing event. Every announcement was mindblowing. It's saying a lot when I'm hyped about the Surfacebook and Lumia, and then realise that I forgot about the Hololens devkit being released in 3 months. Amazing.
bluecalm 1 day ago 0 replies      
It's a perfect fit for what I imagined to be a dream laptop for programming: 3:2 high-res screen (more vertical space than 16:9 or 16:10), real quad-core CPU, good battery life, 16GB of RAM. It's light and great looking as well. The only thing I am missing, and which makes me unlikely to switch, is no trackpoint option. I find the trackpoint so much superior to a trackpad I don't ever want to go back. I even prefer a trackpoint to a mouse these days (and the fact that you don't need to carry it around is just an additional bonus)
hoverbear 1 day ago 0 replies      
Looks like a great Linux laptop... Let's hope it works with it.
Veedrac 1 day ago 0 replies      
Google Trends "surface pro" vs "macbook air" vs "macbook pro" is informative. Of course, none comes close to the iPhones or iPads, but it's impressive how much market share Microsoft have snatched up compared to the almighty Apple behemoth.


ChuckMcM 1 day ago 2 replies      
Interesting, I think the iPad Pro announcement did them a favor. Now I'm going to have to visit a store and actually play with one of these things, could be an expensive fall season for me this year.
astaroth360 1 day ago 0 replies      
That hinge is so cool! Finally a real tablet/laptop hybrid comes out!

Seriously, this makes the difference for me. I never wanted to buy a tablet device because I thought I wouldn't use it much, but if I can just pull off the keyboard and turn it into a tablet that runs the same OS occasionally, that's something I'm interested in.

morsch 1 day ago 1 reply      
Available with 16g RAM. Interesting. The upcoming XPS 13 refresh supposedly offers 16g, too, but the current version doesn't. I wonder how well a Microsoft laptop is going to support Linux.
jiantastic 1 day ago 2 replies      
Curious how Microsoft can enter the market this openly. Don't they have an agreement of some sort with hardware manufacturers ( HP, Dell etc) that they won't enter the market?
addicted 1 day ago 0 replies      
The design reminds me of my 2005 plastic iBook for some reason. And I totally mean this as a compliment. Probably the best device I've ever owned (adjusting for contemporary tech).
johnchristopher 1 day ago 0 replies      
I love the `thunderstruck' theme song but the `start me up' one from the stones was a better fit when MS communicated about Windows 95.

I remember some Office ad campaign that had more punch and coherence too.

Adding the weird Lumia branding I'd say there's a decline in MS marketing efforts. But I might very well be in an echo chamber of my own.

rayalez 1 day ago 0 replies      
I wonder if this will work well with Ubuntu Touch.

Ubuntu is on a similar path of creating universal OS for both laptops and tablets. On this hardware it can be something really epic.

sidcool 1 day ago 0 replies      
I cannot open this page somehow. It's accessible on my mobile data, but not from broadband. In fact, no Microsoft sites are opening. What could be the reason?
miahi 1 day ago 2 replies      
No different from most current laptop offerings: the keyboard still looks horrible for programming[1]. Fn keys shared with Home/End/PgUp/PgDn, Up/Down arrows sharing a key. I guess they call it "slick"; I call it "won't buy".

[1] ok, depending on what you are programming, I'm sure there will be people saying they only need the "0" and "1" keys working.

gcb0 1 day ago 1 reply      
tell me they didn't remove true multitasking from Windows (Pro, no less)

faq got me very worried:

"""Can I run multiple programs at the same time?

A: Your Surface Book allows you to run up to two apps side by side on a screen at a time. You can schedule meetings on your calendar while you respond to email, or edit your PowerPoint deck while you listen to music."""

up to two ...what?!?!

inerte 1 day ago 2 replies      
Does anyone have more details on the "NVIDIA GeForce"? I can't find the model, memory or any other info. What I really want to know is how powerful the GPU really is, and some benchmarks, but for now just some model-number or series-letter would be fine.
ljk 1 day ago 0 replies      
For people who use tablets like the Surface exclusively, does your neck get sore? It doesn't look comfortable working like that.
unabst 1 day ago 3 replies      
Microsoft has been chasing Apple for a while now, but chasers never win, because they depend on whom they are chasing for instructions. They even hired the Apple store firm to do their stores. Now they are making notebooks at Apple's price points.

What's ironic is Microsoft has often tried things first. They tried tablets and hybrid laptops and smartphones before Apple. They even tried flat design before Apple. They have ideas, but some of their products have been just awful.

Microsoft still leads in the living room with xbox, the office with Office, and in the market (or big parts of it) with affordable computers. They're actually killing it with their smart phone apps and the subscription model. They even seem to be embracing open source. These are core strengths, and are areas where they are actually ahead.

Yet, by doing as Apple does, only later and weaker, the conversation is always about Apple being better and ahead. And the kicker is, it's true. It's hard working around the truth in America when it's this blatantly obvious.

I'd love to see Microsoft open source Windows and make it a subscription. I'd love to see them put Windows 10 on every device and produce a phone that could run the real photoshop, just as a statement, even if it sucks. I'd love to see them define a new laptop that OEMs could build instead of selling one and competing with them. I'd love to hear a competing philosophy and not just a product.

If anything, what they lack is philosophy, and a face that speaks it. What I'd love to see is a philosopher sharing ideas and inspiring an audience... I would do anything to see Steve Jobs again.

elipsey 1 day ago 2 replies      
Are the bootloaders locked?
cryptoz 1 day ago 0 replies      
Does the Lumia 950 have a barometer? Or does anyone have a full specification sheet? I don't see the specs listed at http://www.microsoft.com/en-us/mobile/phone/lumia950/
SoapSeller 1 day ago 1 reply      
Also, the (pre)order page is here[0]

But without full specs ):


simonhughes22 1 day ago 0 replies      
Wow, could they make it look more like a MacBook? Apple will have to come up with a different look for theirs now.
kid0m4n 1 day ago 0 replies      
Excited to see what this competition will do to Apple. Expect to see something radical in the MBP line now.
vinceyuan 1 day ago 1 reply      
Tech specs of Surface Book are very nice. But it does not look good and is too thick when the lid is closed. https://imgur.com/IXUzASs
bane 1 day ago 0 replies      
In case somebody missed the presentation: https://www.youtube.com/watch?v=407Fykg8oz4
rayiner 1 day ago 1 reply      
Miffed that you gotta buy the $2,600 model with dGPU (which most people don't want or need) to get 16GB of RAM. You can get a 13" rMBP with 16GB for only $1,500.
bananaoomarang 1 day ago 0 replies      
Looks like a great machine, but almost certainly won't run a non-virtualised Linux distro very well (if the old Surface lineup is anything to go by). A shame for devs, but I suppose they want people to use the MS env/toolchains.
bduerst 1 day ago 0 replies      
Why no USB Type-C?
johnchristopher 1 day ago 0 replies      
Can it be said that MS is pivoting from a software company to a hardware one? With Office and Windows as (almost free) incentives?
51Cards 1 day ago 0 replies      
Not sure many would agree but I would have liked to see a kickstand on the back of the Surface Book screen. I can see a desire to stand it up without needing the full keyboard. Other than that I think that may be my next device purchase.
rw2 1 day ago 1 reply      
The site is screwed up in Chrome on Mac: http://imgur.com/1hHpSiy

How can you not check one of the most popular browsers on the internet?

20tibbygt06 1 day ago 0 replies      
The Surface Book has the potential to propel Microsoft past Apple. I was amazed at everything the Surface Book offers: dedicated graphics, PCIe storage, the design, and the convertibility. I believe 2-in-1 hybrids like the Surface Book will be the future. A device like this embodies everything we have come to love about technology in this age: the tablet form, touchscreens, portability, performance, sleekness, applications.

I currently use a Toshiba Click 2 Pro [1] as my laptop of choice. This device has the same spirit as the Surface Book, but fails on many ends where I see the Surface Book succeeding. So far I have not seen any other manufacturer crack the right formula for this type of device. The Asus Transformer Book, HP Spectre, and Toshiba Click all have their drawbacks or just weren't designed right.

Take my Toshiba Click 2 Pro for example. 1. The LCD is heavier than the keyboard dock, making the laptop top-heavy when you move it around. Toshiba worked around this by placing the docking point not where the dock and keyboard meet, but a bit further in, so the LCD stays upright without tipping over. 2. The keyboard dock has the potential to hold extra components, such as additional internal storage when docked, another battery cell, or dedicated graphics. 3. The docking hinge is awful; this is the main problem I currently see with these devices. All the 2-in-1s have terrible hinge technology holding the device together.

Now Microsoft has gone and put a lot of thought into this device, and I believe they have a winner. They have set the bar for this type of device, and other manufacturers will follow with their own: the hybrid 2-in-1, a true laptop and tablet device.

Why Microsoft will succeed is that they have not only looked at design, but have put a lot of thought into performance and functionality. Adding dedicated graphics to the keyboard dock transforms the tablet into a true powerhouse of a laptop. Also, from what I saw, they added extra batteries in the keyboard dock, distributing the weight of the components. I hope to see more analysis on the weight and feel of the device. The hinge looks to be largely designed in-house. There was a talk about how Microsoft built the Surface Pro 3 [2], and watching it you can see that Microsoft spent a lot of time designing the components to be functional. I believe the "dynamic fulcrum" hinge is a step well above what other manufacturers have done so far for a device of this type. I hope we get analysis on the hinge.

A device that blurs the lines between laptop and tablet, between entertainment and performance. For some time we were split between tablet devices running an OS designed for entertainment consumption, without the power for more intensive tasks even as manufacturers tried to sell them as if they had it (iPad Pro, the new Pixel C), and having to choose between OS X or Windows when we truly needed the power of a full operating system.

As other comments stated, we are seeing where Microsoft and Apple are heading. Microsoft is blurring the line between devices and, as they stated, making us the hub, allowing us to use our devices in different ways: from docking your phone to enjoy the desktop experience under Continuum, to using the Surface Book while lounging around and browsing as a tablet. I like where Microsoft is heading.

[1] http://www.pcmag.com/article2/0,2817,2468274,00.

[2] https://channel9.msdn.com/Events/Ignite/2015/BRK3302

shogun21 1 day ago 0 replies      
Great. I was all ready to get a Surface Pro 4 and now there's the new Surface Book to consider!

My biggest question is how does performance differ between the SP4 and Surface Book (detached from keyboard)?

PaulHoule 1 day ago 3 replies      
When is somebody going to come out with a convertible that doesn't have a trackpad? Does somebody have a patent that makes it illegal to make a PC without a trackpad or something?
mtw 1 day ago 0 replies      
Typos, poor loading times. I'm not impressed, Microsoft!
xez 1 day ago 0 replies      
So are there specs for the Book anywhere? If I can replace my notebook + Wacom Intuos setup with one of those, it'd be wonderful. A helluva lot more travel-friendly.
jhugg 1 day ago 2 replies      
How much does the book's base weigh? Why will no one tell me?
codeulike 1 day ago 0 replies      
PixelSense is apparently their name for what drives the pen now (rather than Wacom or N-trig). I think they bought n-trig and have developed it further maybe?
thezilch 1 day ago 1 reply      
It's weird that this forwarded me to "en-au" when I've always lived in the US. I had opened a separate tab directly to the Surface Book on the "en-us" site, and when I saw the device pricing, not knowing I was on AU, I was surprised by the HUGE difference in price for the Surface Book from what I'd seen on the other page/tab -- 2.3K vs 1.5K. Hopefully that's rare for them, or there are going to be a lot of people turned off by that price point -- 2.3K for the Surface Book is insane.
AdmiralAsshat 1 day ago 1 reply      
So..when your slogan is "The tablet that can replace your laptop," maybe it's not a great idea to announce a laptop next to it?
obilgic 1 day ago 3 replies      
Website layout is all messed up on Mac/Chrome
chang2301 1 day ago 0 replies      
This might be something to expect after all these years... seeing lots of dudes wishing Windows would come back just to put more pressure on Apple to make the next revolutionary product... this might be an alarm to Apple, which has been lacking a breakthrough all these years.
masterponomo 1 day ago 0 replies      
I want one, but after the New Yorker article on Reid Hoffman I'm kind of waiting to find out if he likes it.
Pimmel 1 day ago 0 replies      
Finally! Now, the only problem is the os.
jokoon 1 day ago 0 replies      
Wonder how hard they fought against hardware backdoors on this. Also wonder which companies provided the hardware.
bambang150 1 day ago 0 replies      
Apple vs. Microsoft seems interesting these days. The old rivals seem to be back after a long silence from both of them.
uvu 1 day ago 0 replies      
The page cannot be displayed because an internal server error has occurred.
dfar1 1 day ago 0 replies      
Hopefully this page (barely loading) does not represent the quality of their products.
rbnacharya 1 day ago 4 replies      
Anybody have any idea about memory? How many gigs of RAM does it come with?
tudorw 1 day ago 0 replies      
LTE? Or do they really want to sell phones that badly...
findjashua 1 day ago 0 replies      
that page does not inspire confidence

screenshot: http://imgur.com/uBQfoaX

BadassFractal 1 day ago 0 replies      
Their CDN was crapping out earlier, seems fine now.
manuu 1 day ago 0 replies      
who are the target users for this machine?
miguelrochefort 1 day ago 2 replies      
The headphone jack position is a deal-breaker.

Rookie mistake.

jblow 1 day ago 0 replies      
No trackpad buttons, no sale.
sliverstorm 1 day ago 1 reply      
By reconnecting it to the keyboard, you unlock its full creative power in a pen first mode.

Hmm... what does the keyboard contain? Are we looking at another dual-processor hybrid? Or perhaps only dual-graphics? Or even simpler, does the keyboard have most of the battery and so the screen aggressively throttles itself when undocked?

plg 1 day ago 0 replies      
can you install linux
techaddict009 1 day ago 0 replies      
I compared its price with the MacBook Pro's. They're almost the same!
samstave 1 day ago 1 reply      
I have a surface pro 3 and it's the first windows based machine I've touched in five years and I love the thing.

I really want the book now though...

The keyboard for the pro 3 is awesome. I really love how solidly the magnetic attachment feels for the keyboard, the backlit keys and the actual mechanical buttons...

It's fantastic.

I look forward to seeing how the book feels in person

vegabook 1 day ago 0 replies      
yes very nice hardware, both the phone and the surface book. But the real standout here for me is continuum. This takes the logical next step where all the heavy lifting compute / storage happens remotely, and your "PC" doesn't need to be anything more powerful than the device in your pocket. I for one am overjoyed at not having to cart around a notebook, as the vast majority of my sit-down work happens in locations where monitors are available (ie office, home, client location, in that order of frequency. I personally don't do desktop-class work in trains/planes/coffee shops, even if I understand that there is a use case for this, and that's what the surfaces are for).

I'll even reconsider Windows now. My only slight concern is it would be great to get a slightly less clunky USB/HDMI docking cube. Did it have to be a cube? Surely a cable of some kind would have worked?

Now the only thing left is for Canonical (finally) to ship Unity 8 convergence. Please?

untilHellbanned 1 day ago 4 replies      
how does their website not work on Chrome?


ebbv 1 day ago 1 reply      
I can't see giving up my 15" rMBP for this. Never have I ever wanted to take the screen off and use it as a tablet.

And when I've been forced by circumstance to do development on my Windows machine it's been at best awkward. (Basically trying to recreate a Unix like experience via Cygwin or what have you.)

eru_iluvatar 1 day ago 2 replies      
Did anyone notice that this site is rather sexist?

There's not a single image on that site of a woman using the laptop. The closest image is of a person who looks like they could be a woman under the "Key Features" > "Ultimate performance" text block, but since they don't show the person's face there's no way to know.

The only women on the page are at the bottom, being the support woman and the woman sitting next to the man using the laptop.

This is part of the problem that women in STEM have, and as a guy, it's really disappointing to see such a major player in the tech sector do this.

rebootthesystem 1 day ago 2 replies      
THAT is what I've been waiting for, on all fronts. On top of everything else, run Ubuntu VM's on the phone (if and when possible), Surface 4 and Surface Book and you now have the best of both worlds.

We run various Windows-native engineering tools (CAD, CAM, EDA, FEA, etc.) yet do a lot of software development under Linux (by running VM's on all of our Windows desktops and laptops).

I can imagine popping an SD card into a phone and plugging it into a projector at a client's office to review a SolidWorks design. I know that's not possible today due to the SnapDragon processor but it isn't too far fetched to suggest companies like DS might very well consider at the very least having viewers and other tools available for the phones in the future. It makes total sense.

Microsoft is taking this in exactly the right direction. From user-accessible non-proprietary I/O to file system access on the phone and multi-user capabilities on the Surface devices. Everything is just right.

Sorry Apple. If reviews are good we are dumping our iPhones for MS Phones by the end of the year. Not upgrading to any of your new closed hardware and OS's. We are done. Bye bye.

I really like the energy I am seeing coming out of Microsoft.

There's only one thing missing from that presentation: Microsoft TV. You know that's got to be in the works.

revelation 1 day ago 0 replies      
Are we all just stuck in the bubble? I mean I love this thing for the specs, and Windows is a perfectly fine OS for development nowadays.

But I fail to see how this is helping Microsoft move forward. Capturing a chunk of the developer notebook market doesn't exactly move the needle for them, and they don't have the brand that allows Apple to sell MBPr as glorified Facebook machines to people that would otherwise balk at the price.

curiousjorge 1 day ago 1 reply      
it looks like a copy of macbook, and I love microsoft for that, actually might buy this because of that fact.

the tablet laptop thing is a must if you spend a lot of time in front of a computer. Unfortunately, sometimes you need a keyboard to answer emails and get stuff done.

ilaksh 1 day ago 0 replies      
Wow, this page looks incredibly bad and loads amazingly slow on my Chromebook. Either google is trolling M$ or M$ is trolling themselves.
blondie9x 1 day ago 0 replies      
What a joke. Don't they vet out and optimize the page before mass releasing a new product? Just shows why MSFT lags behind Apple and Google and Amazon. ASP.NET FTL
richardboegli 1 day ago 1 reply      
Next year there will be a Surface Pro 5 (5"), Pro 8 (8") and Pro 12 (12") all with pens. 12 will be the successor to current Surface Pro 4.
squeakynick 1 day ago 1 reply      
At the end of the day it's balls in the back of the net that count. I'm pleased for everyone, so let's see if this translates into sales. It's a hybrid mix of things to get to success; and all about execution.


Larry Wall Unveils Perl 6.0.0 pigdog.org
791 points by MilnerRoute  1 day ago   313 comments top 52
krylon 1 day ago 11 replies      
I honestly did not expect to see the day Perl 6 gets finished. This must have been one of the most difficult - if not the most difficult - births in the history of programming languages.

At work, I have been using Perl 5 increasingly often over the past two years, mainly because handling unicode in Python 2 is not a lot of fun (and I still haven't come around to learning Python 3), and I have rediscovered why I used to like it so much.

So far I have not looked into Perl 6 seriously, because I did not see the point to do so before it was finished. Guess I know now what I'll be doing this Christmas. :)

Also, you gotta love Larry for quotes like this one: "This is why we say all languages are religious dialects of Perl 6..."

oldmanjay 1 day ago 3 replies      
I haven't looked, but I really hope Perl 6 held on to the 50 different ways to do any one thing. Pulling out my hair in frustration from trying to deal with Perl written by other people saved me all manner of haircut money in the 90s.
haberman 1 day ago 7 replies      
> One of the most impressive things Larry demonstrated was the sequence operator, and Perl 6's ability to intuit sequences.

 say 1, 2, 4 ... 2**32
> This correctly produced a nice tidy list of just 32 values -- rather than the 4,294,967,296 you might expect.

Someone with more time than me needs to find an IQ test that is based around sequence questions like this and plug them all into Perl 6. So we can find out what Perl 6's IQ is and whether it has achieved AI.
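For readers unfamiliar with the operator, here is a rough Python analogue of that snippet. This is only a sketch of the geometric case; Perl 6's `...` operator actually infers the rule from the sample terms, which this hypothetical helper does not attempt.

```python
def geometric_until(first, ratio, limit):
    """Yield first, first*ratio, first*ratio**2, ... while values stay <= limit."""
    value = first
    while value <= limit:
        yield value
        value *= ratio

# Analogue of Perl 6's: say 1, 2, 4 ... 2**32
seq = list(geometric_until(1, 2, 2**32))
print(seq[:4], '...', seq[-1])  # a few dozen values, not ~4.3 billion
```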

jamespitts 1 day ago 0 replies      
Congrats to the perl community. We should keep an open mind and learn from what they have accomplished.

If perl 6 turns out to be useful for the larger software engineering community (and is more widely adopted), this is wonderful. If not, then we still have another interesting language in the great river of interesting tech things.

Either way, the perl community will continue to do what it does best!

atemerev 1 day ago 3 replies      
Some languages are for earning your daily bread (like Java, or Objective-C). Some, like Haskell, are for intellectual pursuits.

Perl is for poetry. For seeing the impossible and going back.

thedz 1 day ago 3 replies      
> ++; # An anonymous state variable, for very fast loops

Perl, never stop being you

reuven 1 day ago 0 replies      
Given how long it took to replace Perl 5, and the fact that I (and most others I know) have moved on to other languages, I can't decide if I'm more (a) surprised, (b) impressed, or (c) indifferent.

Kudos to Larry for sticking with it, and for (I'm sure) introducing interesting ideas into the world of programming languages. That said, I have to wonder how many people will really use Perl 6 in their day-to-day work, and how many will look at it as a curiosity.

For me, I'm afraid that Python and Ruby have replaced Perl, despite more than 10 years in which Perl was my go-to language. Maybe I'll return to it, but I sorta doubt it.

vezzy-fnord 1 day ago 0 replies      
Perl 6 appears to be the SNOBOL of our times, with grammars being a first-class lex, plenty of functional idioms, APL influences and a reasonable object system.

Given how much of contemporary programming is munging, I can see it really picking up steam.

muraiki 1 day ago 2 replies      
I've found these resources helpful for learning Perl 6:

http://learnxinyminutes.com/docs/perl6/ is a pretty good intro to Perl 6 if you just want to jump into things.

http://www.jnthn.net/papers/2015-spw-perl6-course.pdf is more slowly paced but provides greater detail.

Also, the people in #perl6 on Freenode are all awesome and are usually quick to help beginners unless they are in the middle of fixing some crazy bug.

tcdent 1 day ago 6 replies      
I'm really surprised this site is the source y'all chose to upvote. Not only was the second-newest article published in 2012, they also have an assortment of other questionable content[0][1]. Advertising from rotten.com and a category named "Jonny Jihad" are also professional touches.

I mean, whatever floats your boat, but HN, is this really one of your go-to trusted news sources?

[0] http://www.pigdog.org/in_the_pink/html/interview_with_a_stri...

[1] http://www.pigdog.org/pooniedog/

ambrice 1 day ago 2 replies      
So is this a real thing? I've never seen this pigdog website before, it's second newest article was published in 2012, and there's no mention of this on the perl 6 home page or in the perl 6 announcements mailing list..
jedharris 1 day ago 1 reply      
My sense, playing with Perl 6, is that it could become a "pro tool" -- very deep and powerful. It needs to mature some (though it is very solid already) and to grow its ecosystem (though it can run much of CPAN via Inline::Perl5).

Those who want to play can install from http://rakudo.org/how-to-get-rakudo/

Larry's talk was a lot of fun, especially the way he zipped around between vi and terminal windows -- no need for presentation software.

meesterdude 1 day ago 2 replies      
Perl was the first programming language I ever used, and I think 6 was being worked on even then. I've moved on to Ruby as my go-to, but my (limited) memories of Perl are fond, and I'm happy to see it's still moving forward.
akkartik 1 day ago 1 reply      
Junctions and autothreading seem to be importing the core of APL/J: http://doc.perl6.org/type/Junction

(I'm randomly scanning http://faq.perl6.org)

raldi 1 day ago 1 reply      
> Finally, there's a way to stop non-identical strings from matching just because they have the same numerical value.

Um, this has been there for decades:

 $a = '123';
 $b = '123.0';
 print $a == $b ? 'y' : 'n'; # prints y
 print $a eq $b ? 'y' : 'n'; # prints n
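For contrast, a quick Python sketch of the same string-vs-numeric distinction. Python's `==` never coerces strings to numbers, so the two comparisons must be written explicitly:

```python
a = '123'
b = '123.0'

# String comparison, like Perl's `eq`: different text, so unequal.
print(a == b)                 # False

# Explicit numeric comparison, like Perl's `==`: same value.
print(float(a) == float(b))   # True
```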

a3n 1 day ago 0 replies      
Long ago, Perl was my coding Leatherman, along with C/C++/Java.

Python took Perl's place on my belt. I'm using Python2, because all the reasons everyone else gives, and because it's Python2 at work. I've been wondering when I'd finally decide/be able to move to Python3.

And now I'm rubbing that faint Leatherman imprint on my belt, and wondering if I'll just skip Python3 for Perl6.

huangc10 1 day ago 0 replies      
Perl was the first high-level language I learned and I've used it many times during interviews. I still think I can solve most non language-specific interview questions extremely efficiently with Perl. Go Perl!
AceJohnny2 1 day ago 2 replies      
Sounds like a v2.0 with feature bloat.

""Any infix operator can be replaced by itself in square brackets..." Later someone asked, "In a world of user-defined operators everywhere, how do you define precedence?" And Larry pulled out is tighter() and is looser(), noting that Perl 6 even has customizable precedence levels. "You can add an infinite number...""

It's been said that most of programming is about managing complexity. I'm unhappy with Perl in general because it makes it really easy to hide complexity. Perl 6 looks like it makes it even worse.

Yes, it looks like an amazingly powerful and sophisticated language, and an amazing and respectable accomplishment, but my gut feeling is I'll hate seeing some in production...

dasil003 1 day ago 1 reply      
I would never want to use Perl at my day job, but I also wouldn't want to live in a programming universe where Perl didn't exist. Perl is a place where the pure joy of coding thrives.
rjurney 1 day ago 1 reply      
You launch Perl 6 without updating the fucking home page? So Perl. So perl.
guelo 1 day ago 0 replies      

 $ brew install rakudo-star
 <.... snip ....>
 $ perl6
 > say "hello world, sorry for the wait"
 hello world, sorry for the wait
 >

atomicbeanie 1 day ago 0 replies      
I may not write a bunch of Perl 6, but the planet is better for having Perl 6. That says a lot about the person who guided Perl 6 to completion, he's a gem of a guy. Thank you Larry!
xtreaky 1 day ago 1 reply      
Here are the official words from Larry. There will be 2 beta releases and the major one in December.

Larry goes by the handle TimToady


eweise 1 day ago 0 replies      
I've been waiting on the sidelines for 20 years, but this release makes me think it's finally time to jump in.
ash_gti 1 day ago 0 replies      
I'm pretty excited about the release of perl6. perl6 has a number of interesting use cases and features. I'll be happy to watch it evolve as more people pick it up and find new/interesting uses.
iso8859-1 1 day ago 0 replies      
Things that won't make it in to 6.christmas (the non-beta release): https://gist.github.com/jnthn/040f4502899d39b2cbb4
brudgers 1 day ago 1 reply      
"If that's what it takes to make Ruby programmers happy..."

Completely brilliant.

z3phyr 1 day ago 1 reply      
When talking about Perl, never forget to mention CPAN. Heck, this library ecosystem is still great and running.
meshko 1 day ago 1 reply      
Is Duke Nukem Forever out yet?
cygx 1 day ago 0 replies      
Some of the code examples suffer from a failure to escape angle brackets...
kriro 1 day ago 0 replies      
Someone should implement a Duke Nukem Forever clone in Perl 6.

Pretty excited about Perl 6. Haven't used Perl in a long time, but there seems to be quite a bit of cool stuff, and I'm very excited to see what a language where Larry had freer rein will be like. Xmas reading/coding, here we go (how fitting)

tempodox 1 day ago 0 replies      
Oh my. I expect the next version of Perl to read my thoughts and to not require any source code from fallible mortals. The jury is still out whether this will produce more bugs than it abolishes. The answer is expected in time for the end of the universe.
amyjess 1 day ago 1 reply      
Congratulations Larry and the Perl 6 community!

From what little I've played around with the language in the past, I really like it, and I'm glad the community can put the "still in development" talking point behind it.

Also, one of my favorite things about Perl 6 is that the Rakudo interpreter prints some of the nicest error messages I've ever seen. It actually offers suggested solutions, and it points out Perl 5->6 migration gotchas.

protomyth 1 day ago 5 replies      
sidenote: why does a barracuda device consider pigdog.org "Porn"?
MrPatan 1 day ago 0 replies      
I don't think I'll ever use it again, but I'm glad that it exists.
InclinedPlane 1 day ago 10 replies      
Seems too little too late. Is Perl still relevant in 2015? What would you build in Perl today that you wouldn't build in another language instead?

I used to be a big fan of Perl, but it seems to have fallen behind the times; I doubt Perl 6 is enough to catch up.

iamreverie 1 day ago 0 replies      
oddly enough, a coworker and i were joking on monday about how perl 6 would come out in the next 100 years, and just a few days later larry wall made us look foolish.
xiaq 1 day ago 1 reply      
Congrats! But shouldn't some kind of official announcement be made on perl6.org?
zem 1 day ago 1 reply      
question for the early adopters: if i develop an end-user app in perl6 on linux, how easy is it for me to deliver it to a non-technical windows and/or mac osx user?
the-owlery 23 hours ago 0 replies      
Perl 6 needs a good tutorial.
chm 1 day ago 2 replies      
So, for someone who has never read or written Perl code, where should I start? Perl 5 or 6? I get that they're different languages, but I want to see what "Perl" is like.
bgilroy26 1 day ago 7 replies      
I love Perl [I have been to perl monger meetings and I had to look up how to 'unshift' in python today].

But Python3 is mostly just Python2 with a print function instead of a print keyword. It is occasionally non-trivial to port large existing code bases from Python2 to Python3, but it is practically always trivial for a Python2 programmer to write new code in Python3.
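A minimal sketch of that difference (illustration only; the `buf` buffer is just a way to capture the output):

```python
import io

# Python 3: print is an ordinary function, so it accepts keyword
# arguments and can be redirected like any other callable.
buf = io.StringIO()
print("hello", "world", sep=", ", file=buf)
greeting = buf.getvalue()
print(greeting, end="")  # hello, world

# The Python 2 statement form (`print "hello"`) is a SyntaxError in Python 3.
```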

I hope Perl 6 will get its own camel book!

EGreg 1 day ago 0 replies      
Perhaps it was better veiled :)
curiousjorge 1 day ago 0 replies      
Way too late
ryanobjc 1 day ago 6 replies      
Larry Wall Unveils Perl 6 and no comments on hacker news an hour later.

That's the new headline that sums up Perl: an anachronistic programming language that may never recover from the decade-long Perl 5->6 'freeze'. Also the language is crazy to write substantial programs in!

rjurney 1 day ago 0 replies      
And nobody cares.
bliti 1 day ago 0 replies      
meeper16 1 day ago 0 replies      
Give 10 guys the same programming task in Perl, watch 10 completely different syntactically built innovations come from them.
pbreit 1 day ago 0 replies      
What's Perl?
aceperry 1 day ago 0 replies      
Larry Wall is a great guy. Unfortunately, I don't really like Perl. Maybe the new version will change my mind.
Twirrim 1 day ago 0 replies      
> say [+] <3, 1, 4> x 2

It's like the perl developers saw the comments about perl being indistinguishable from line noise and decided to make it even more so. WTF?
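For what it's worth, here is a rough Python decomposition of what that one-liner appears to compute — assuming `[+]` is a "reduce with +" metaoperator and the `x 2` repeats the three-element list; this is a sketch, not authoritative Perl 6 semantics:

```python
from functools import reduce

# <3, 1, 4> -> a literal list; x 2 -> repeat it (list-repetition assumption);
# [+]       -> fold the result with +.
values = [3, 1, 4] * 2              # [3, 1, 4, 3, 1, 4]
total = reduce(lambda a, b: a + b, values)
print(total)  # 16
```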

jackdaniel 1 day ago 2 replies      
This rant made me laugh one day: http://www.schnada.de/grapt/eriknaggum-perlrant.html

Disclaimer: I don't know Perl and have no opinion on whether this rant is deserved or not, yet it's an enjoyable read.

Trans-Pacific Partnership Trade Deal Is Reached nytimes.com
477 points by shill  3 days ago   371 comments top 36
tptacek 3 days ago 12 replies      
Just a reminder: the TPP, like most trade deals, is negotiated in secret, but ratified in public. The final version of the deal will be published in 30 days, and then Congress gets 90 days to consider before an up-or-down vote.

The 90-day thing is a result of Trade Promotion Authority granted by Congress to the administration. This is the "fast track" Congress voted to allow the President. It means the bill can't be filibustered.

belorn 3 days ago 2 replies      
Will be interesting to see how much of the leaked chapters is still in there. The old one said to make ISPs more liable for data being transferred through them. Imports of copyrighted goods without the author's permission will be made illegal (and they said barriers to international trade were dead). Copyright terms will be extended in several countries. DRM protection is extended so that those who "enabl[e] or facilitat[e]" circumvention can be charged even if they do not violate a copyright (fun time for researchers). Last, it dictates that generic medicine is destroyed if such happens to be found in a country where a patent covers it (all those who complain about Russia burning smuggled food might find this interesting).
jussij 2 days ago 4 replies      
In other words, a handful of multinationals, prepared to pay millions in endorsements and handouts to corrupt politicians, have got exactly what they wanted.

So much for the democratic process and in fact stuff the democratic process.

This deal gives a select few even more power in controlling the world economy. It lets them screw not only the local worker, but every worker in the world, in the name of prosperity.

While I hate the fact that such an obvious power grab is happening, what I hate more is the youth of today seem to let this shit happen.

Use your voice and vote out that crap!!!

Sadly my prediction is, not unlike the last ten years, that as the minimum wage remains flat (or maybe even declines), the corresponding CEO wage will see ten-fold increases thanks to this amazing free trade deal :(

bko 3 days ago 6 replies      
I hesitantly applaud such trade deals. I know that they are rife with corporate subsidies and targeted protectionism of politically favored domestic industries but it is better than the alternative. Interdependence and trade have led to a much safer world and a rising global standard of living for all.
137 3 days ago 0 replies      
In general, agreements like this seem to be a threat to classical liberalism. Perhaps this is a simplified view but integration is a 7-stage process that ends with supranational organizations and political unions. Or to be specific, an eventual global government rooted more in EU-style bureaucracy rather than (in theory) American-style classical liberalism.

MEP Daniel Hannan elucidated this nicely in a speech regarding the Treaty of Lisbon-


Details on the 7 stages and lists of these agreements from the first 2 stages-




raur 3 days ago 0 replies      
"TPP raises significant concerns about citizens freedom of expression, due process, innovation, the future of the Internets global infrastructure, and the right of sovereign nations to develop policies and laws that best meet their domestic priorities. In sum, the TPP puts at risk some of the most fundamental rights that enable access to knowledge for the worlds citizens."


kome 3 days ago 1 reply      
To put those different "Trade Deal" in their context, wikileaks has made a short but informative video: https://www.youtube.com/watch?v=Rw7P0RGZQxQ

tl;dr - US is trying to rewrite the rules of world trade because they are scared by mounting BRICS influence over the World Trade Organization.

lighthawk 3 days ago 2 replies      
> For the first time in a trade agreement there are provisions to help small businesses without the resources of big corporations to deal with trade barriers and red tape. A committee would be created to assist smaller companies.

That's awesome. But, if you have that much of a problem, why form a committee to help smaller companies- why not just make it easier for everyone? And what good will a committee really do? Why not just say, "We promise to make trading with foreign entities just that- you won't have to deal with the U.S. government and foreign government at all."

teekert 3 days ago 2 replies      
Wow: The New York Times is very clear on its political preference: "Donald Trump has repeatedly castigated the Pacific trade accord as a bad deal, injecting conservative populism into the debate and emboldening some congressional Republicans who fear for local interests like sugar and rice, and many conservatives who oppose Mr. Obama at every turn."
raldi 3 days ago 1 reply      
To any NYT employees who may be reading this: It's 2015, and you're still using graphics (like this trade map) in a way that shows up tiny on mobile devices but can't be zoomed in on -- and you've even managed to thwart the usual "tap and hold, then Open Image in New Tab" trick.

This is the sort of thing that makes people demand ever-ridiculously-huger smartphones.

gerty 3 days ago 3 replies      
Does someone know if the treaty has to be ratified by all parties before becoming law? If it's rejected by the Canadian or NZ parliaments, would it still be implemented?
hellbanner 3 days ago 1 reply      
Most of this article is quoting what other people said about the TPP, applying labels to supposedly specific provisions eg. 'foo expert calling it "historic"' etc.

Smoke and mirrors until we can actually read the thing. Or change it ourselves.

mcv 3 days ago 4 replies      
As much as I am disgusted by the secrecy of these negotiations, the way they seem to be pushed down our throats, and indeed some of the stuff that was leaked (like the ISDS), there does seem to be some good stuff in it:

"The worker standards commit all parties to the International Labor Organizations principles for collective bargaining, a minimum wage and safe workplaces, and against child labor, forced labor and excessive hours."


"The changes, which also are expected to set a precedent for future trade pacts, respond to widespread criticisms that the Investor-State Dispute Settlement panels favor businesses and interfere with nations efforts to pass rules safeguarding public health and safety."

Who knows? This might actually have turned into a decent treaty. But only because of all the massive criticism on the bits that leaked through all the secrecy.

dougmany 2 days ago 4 replies      
My favorite part:

"Japans other barriers, like regulations and design criteria that effectively keep out American-made cars and light trucks, would come down"

Take our crappy cars Japan!

I also didn't realize the US had a large (25%) tariff on trucks.

mimo84 2 days ago 0 replies      
Just yesterday I saw here on HN a story about what the TPP actually means for intellectual property, which should be a well-known problem here in the community. Interestingly enough, that story only got 10 points and right now sits quite low on the list. The top story on HN today is about CPU caching. What do we need it for if we're losing our rights so quickly?
rce 2 days ago 1 reply      
Let's say a country wants to pass stronger environmental protections, shorter copyright terms, or some other legislation which would conflict with the TPP. How would they do that? Does the treaty need to be renewed every so often at which point those items can be re-negotiated? Or does this essentially lock in certain legislation such that it can't be changed in the future?
misterbishop 3 days ago 1 reply      
From the candidate who said he wanted to "re-negotiate NAFTA". This is a betrayal of American workers, and it's a disaster for the Pacific signatories.

The TPP should be treated as a treaty, requiring a 2/3rds vote in Congress. The majority of the agreement has nothing to do with trade.

ddp 3 days ago 1 reply      
Where's Ross Perot when you need him.
jmnicolas 3 days ago 4 replies      
"The Trans-Pacific Partnership still faces months of debate in Congress[...]"

So nothing is reached: my understanding of US politics (which is quite shallow, I'll admit) is that the Congressional majority will vote contrary to anything Obama wants.

jedharris 2 days ago 0 replies      
NONE of the comments defending the TPP reference ANY POSITIVE reasons to support it. They all reject negative claims or argue process ("It is too democratic!" "Amendments are often bad!" etc.)

Obviously the TPP has major costs, both directly and indirectly. IF, as I doubt, the TPP is worthwhile, then proponents should be able to give examples of its benefits.

Asbostos 3 days ago 0 replies      
How on earth is every country going to pass all these laws? Won't it end up broken to bits or with some countries quitting?
giardini 3 days ago 1 reply      
So is TPP a treaty or merely a "trade deal"? If a treaty then IIRC only the Senate is required to ratify it, not "Congress".
kevinalexbrown 3 days ago 1 reply      
The fact that the negotiations were done in secret probably means that most of the TPP content is being given to journalists by those officially authorized to speak about it. This isn't unexpected, but it does affect how the TPP is framed (even if you don't buy the Greenwald puff-piece-for-access argument).
datashovel 2 days ago 0 replies      
One positive IMO is that Ford Motor Co doesn't like it.

Ford, the company who famously exported so many manufacturing jobs out of the US in the past, suddenly grew a heart for the well-being of the "future competitiveness of American manufacturing"? Probably not.

binarray2000 3 days ago 2 replies      
R.I.P. democratic free society.
beedogs 2 days ago 0 replies      
Welp, democracy was nice while it lasted.
worik 2 days ago 0 replies      
arca_vorago 3 days ago 1 reply      
I knew when TPA passed it meant that TPP was nearing completion; they passed TPA (fast-track) because it curtails the power of Congress to stop what I consider to be the unconstitutional TPP. One of the best resources for both documents I have found is the podcast Congressional Dish by Jennifer Briney, who actually takes the time to read the docs and summarize the issues.

Personally, I think this is a giant leap towards world government, away from constitutional representation, and away from free-trade and towards the oligarchy-controlled globalism.

I plan on digging into it more and writing a summary of my own, because this is a major issue that we need to push back on hard due to the limitations of the house and senate to oppose it.

PythonicAlpha 3 days ago 0 replies      
Here is a discussion of and link to the consequences of this "great deal":


ck2 3 days ago 0 replies      
Just a reminder that this above everything else is Obama's legacy no matter how they try to re-write history.

So if you thought it was bad that the TSA can hold people without even a phone call to a lawyer, wait until they start putting people in prison over the TPP

mtgx 3 days ago 0 replies      
Unless I skipped it, the article doesn't mention anything about the new copyright clauses?
DeBraid 3 days ago 1 reply      
tldr: eventually end more than 18,000 tariffs that the participating countries have placed on United States exports

- Goods include: autos, machinery, information technology and consumer goods, chemicals and agricultural products ranging from avocados in California to wheat, pork and beef from the Plains states.

- establish uniform rules on corporations' intellectual property,

- open the Internet

- crack down on wildlife trafficking and environmental abuses

isaacdl 3 days ago 0 replies      
dbg31415 3 days ago 1 reply      
Don't worry, Reddit won't have this on the home page for 4-5 hours.
politician 2 days ago 0 replies      
I know of no other act that would so thoroughly demonstrate the subjugation of our democracy to corporations than to hold them to a 3-month review of a complete rewrite of the laws that bind corporations.

We are staring at a phase transition.

When this treaty passes, expect the remaining dominoes to fall hard and fast.


Europe's highest court has rejected the 'safe harbor' agreement businessinsider.com
563 points by noplay  2 days ago   297 comments top 37
A_Beer_Clinked 2 days ago 4 replies      
The full ruling is available here: http://www.politico.eu/wp-content/uploads/2015/10/schrems-ju...

These bits jumped out at me:

> Furthermore, national security, public interest and law enforcement requirements of the United States prevail over the safe harbour scheme, so that United States undertakings are bound to disregard, without limitation, the protective rules laid down by that scheme where they conflict with such requirements. The United States safe harbour scheme thus enables interference, by United States public authorities, with the fundamental rights of persons, and the Commission decision does not refer either to the existence, in the United States, of rules intended to limit any such interference or to the existence of effective legal protection against the interference.

> This judgment has the consequence that the Irish supervisory authority is required to examine Mr Schrems' complaint with all due diligence and, at the conclusion of its investigation, is to decide whether, pursuant to the directive, transfer of the data of Facebook's European subscribers to the United States should be suspended on the ground that that country does not afford an adequate level of protection of personal data.

My reading (not a legal expert) is that data residency is the important bit here. Which in my view is a small step but not sufficient.

walterbell 2 days ago 4 replies      
Meanwhile, TPP prohibits countries from having data sovereignty laws, http://www.zdnet.com/article/tpp-moves-toward-killing-off-go..., with similar prohibitions sought in TTIP and TISA, https://blog.ffii.org/a-license-to-spy-cross-border-data-flo...

"Governments in Australia, the United States, New Zealand, Canada, Singapore, Vietnam, Malaysia, Japan, Mexico, Peru, Brunei, and Chile will be unable to force companies from those countries to store government data in local datacentres ... governments will not only be prevented from mandating data sovereignty provision, they will also be unable to demand access to source code from companies incorporated in TPP territories."

a_bonobo 2 days ago 1 reply      
This is good, and a direct result of the Snowden revelations - without those, the US would still be considered a safe harbor for your data. I'm hopeful that this will create the kick that the US needed, now that actual income (and high income, at that) is being threatened by the NSA. Of course this isn't the end to their data theft. They're likely to get the data from their Five Eyes European friends instead, but still - a good victory.

Amazing to see what one determined person can do!

protomyth 2 days ago 6 replies      
It's been asked by multiple people in the thread, but I'm not clear on the answer.

If I host a website that has user accounts in the US, and do not stop people from the EU from registering, do I, with no offices outside the US, need to do something different because of this ruling?

weddpros 2 days ago 4 replies      
Edit: I'm reacting to "Facebook and Twitter [...] could be forced to host European user data in Europe"

Border control with data is the worst idea ever.

Think of it: my Facebook friends list has EU and US people in it. This list can't reside in the EU or the US. This webpage can't be served by either an EU or US web server. By law. LOL

Plus I'm a EU citizen, and I can choose to give my data to whoever I want... no more. That's sad.

This ruling only shows the dismal tech knowledge of lawyers and lawmakers. It's impossible to implement Facebook with data spread between the EU and US. Same for Twitter and others. Say goodbye to social networks. Because of model denormalization, because of network latency and intercontinental bandwidth.

Some mention cloud zones, but they're only useful with replication, which IS data transfer.

OR... social networks will cheat. And one day, they'll be sued for cheating the impossible regulations (think VW...)

axx 2 days ago 7 replies      
European citizen here, and as much as i welcome a step like this, it's also pretty interesting to see what this means for smaller (online) businesses outside of Europe.

Sure, you want to host customer data from Europe in Europe (latency-wise) anyway, but now that this will be more or less required it will be interesting to see how people will solve this. The good thing is, with "the cloud" you have a lot of options (locations) to choose from.

julianpye 2 days ago 1 reply      
Does this effectively render any Parse or Firebase application (they only have US servers) that stores user information (e.g. email account) illegal in the EU?
mhandley 2 days ago 0 replies      
I wonder if there are additional ramifications of this, even for European companies dealing with European customers. For example, what happens when personal data travelling from a European datacentre to a European customer transits a US network on the way (such routing diversions are fairly common)? In the light of Snowden's revelations, this would seem incompatible with EU privacy regulations unless the data were encrypted. Of course personal data should always be encrypted, but where are the CAs located? Is a European company negligent if they don't use a European CA and do certificate pinning? Interesting times.
rmc 2 days ago 1 reply      
Americans: This is time to get your government to change your laws if you still want to be the leader in the tech field.
MichaelGG 2 days ago 3 replies      
This sounds great! Though if the owning company is in the US, then the US views this as reason to be able to access customer data no matter where it's stored. More fun to come, mm?

Question: Why do companies HQ themselves in the US? Why not pick a friendlier country, then turn their US parts into a simple contractor that supplies software development and engineering resources? Then the US company would not have actual ownership of any data. Forcing them to reveal customer records would be the same as forcing an individual to steal data, right?

neppo 2 days ago 1 reply      
off topic, but why does the article use a picture of Mark Zuckerberg with lipstick photoshopped on?
jupp0r 2 days ago 1 reply      
So this is what I think will happen: a lot of companies (maybe even the likes of Facebook and Google) will move out of Europe and just serve everything from the US. There is not really an alternative to that; how could my EU-hosted Facebook profile not be transferred to the US so my friends can see my book favourites?
chaitanya 2 days ago 1 reply      
So we are building a messaging product for organizations. I am wondering how this can impact us if an org that uses our product has employees in both EU and US (assuming that national regulators in EU go ahead and bar personal data transfer to US).

* Will we need to partition user data based on location, even if they are in the same organization?

* What happens when a user in EU sends a message to one in US? So right now the chat history for one-on-one conversation pairs is stored in one place, does this ruling mean that now we have to duplicate this chat history for both the users?

* Even worse, what if multiple EU and US users are part of the same chat group? Is there any way we can store the group's chat history in one place?

JulianMorrison 2 days ago 0 replies      
Good. Hopefully this puts pressure on the USA to rein in its out of control security state.
srj 1 day ago 1 reply      
How is it possible that people don't discuss the GCHQ in the same breath as the NSA? From news reports it seems they may as well be the same agency. Keeping data out of the US isn't enough, and it's dangerous for Europeans to think that their own governments are looking out for their privacy. They should be looking instead to make encryption ubiquitous. This may be limiting corporate data storage, but I don't think this impacts intelligence gathering for the US at all.
nabla9 2 days ago 0 replies      
My reading of the judgment is that it just throws the decision back to the national courts to decide what constitutes safe harbor. The Safe Harbour agreement between the US and EU streamlined the process for getting access to EU data. Now it must be decided at the national level.


Aloisius 2 days ago 2 replies      
If I'm a US company that does business in the EU, is there any reason that personal information collection can't just happen through a US web server? That way it is the user who is transferring the data to the US, not the company.

Updating your name, birthday and other personal information would take an extra 100 ms in order to POST to the US, but it could then be replicated back out to the EU for reads if necessary for performance.

erikb 1 day ago 0 replies      
Great success! They should try it the other way around: look for the set of things that are permissible in the European countries and then apply it to the US as well. If the biggest argument is to simplify ruling and management, then this approach would be just as good as allowing US rules to override European rules, right?
cmurf 1 day ago 0 replies      
I wonder if some companies have sufficiently complex operations globally that they end up with mutually incompatible laws, and would have to either stop doing business in a country or split themselves in two to continue to operate?
IBM 2 days ago 1 reply      
It's interesting that certain bloggers such as Dustin Curtis and Ben Thompson have claimed that Apple's privacy stance will ultimately hurt them because they'll be at a disadvantage to competitors, but it seems like they've shown some real foresight when you take this ruling into consideration.
AndrewKemendo 1 day ago 0 replies      
While this is a win for individual privacy, it does truly make scaling web services significantly harder and more costly.

Being in compliance is fairly easy for large companies, but it's going to be a challenge for startups.

copsarebastards 2 days ago 0 replies      
This is good for everyone's privacy. By making it difficult for businesses to centralize the storage of US and European data, the European court has incentivized businesses to pressure the US government toward laws that respect our privacy better.
kornakiewicz 2 days ago 1 reply      
Prohibition of storing behavioral data would be a great next step.
makeitsuckless 2 days ago 1 reply      
It's interesting how this is described as a potential "bureaucratic nightmare". Having to follow the law of the country you're doing business in has been standard operating procedure for, well, basically all of human history.

Somehow the tech industry seems to think it should be exempt from that, even if it means being allowed to piss all over the basic civil rights of citizens of modern Western democracies.

Yes, this is a problem that needs to be solved given the reality of modern cross-border online services. But it can't be solved by the corrupt political elite simply selling their citizens' hard-fought rights to corporations operating from countries that lack respect for such rights.

mtgx 2 days ago 1 reply      
Couldn't we get a better source than Business Insider?
codedokode 1 day ago 1 reply      
Hosting data locally in the EU doesn't solve the privacy problem because the servers are still operated by US companies that can (and obviously will) share the data with the NSA. The solution is to create more local services so the data never leaves the country. It is also better economy-wise, so the money stays in the country too.
SimplyUseless 2 days ago 2 replies      
While this is a massive ruling, there are valid exceptions that allow companies who have agreed with their clients to transmit their data from EU to US while keeping data separation and with respect to the data protection law.

This is not a blanket-panic for all US/EU companies as the media are projecting.

karavelov 2 days ago 1 reply      
This is just a small victory. AFAIK, the US government can still ask, without a court order, Facebook or MS or any other US company to hand them the data of European citizens that is hosted in Europe.
finnjohnsen2 2 days ago 0 replies      
Perhaps it's time to p2p everything.
mtgx 2 days ago 0 replies      
> The average consumer will not see any restrictions in daily use, but will hopefully soon be able to use online services without potentially being subject to mass surveillance

> However, US companies that obviously aided US mass surveillance (e.g. Apple, Google, Facebook, Microsoft and Yahoo) may face serious legal consequences from this ruling when data protection authorities of 28 member states review their cooperation with US spy agencies.

Can't wait. This is going to be good.


_of 1 day ago 0 replies      
I wish the title was "EU's highest court". Europe != EU.
pinaceae 2 days ago 0 replies      
well, I guess LinkedIn is hosed. and AWS which does global replication. and and and.

this ruling ignores the decentralized nature of the internet.

worst case is Europe being shut off from any tech advances, while the Pacific region from Cali to China takes off.

peter303 2 days ago 1 reply      
The Euros are jelly they did not invent profitable Big Data. So they will be putting every roadblock possible against those who did.
unfamiliar 2 days ago 2 replies      
>That could be a bureaucratic nightmare: In theory, American companies with European customers could now end up trying to follow 20 or more different sets of national data privacy regulations.

Good. If you want to be a multinational company, then you should have to obey the laws of each country.

jsudhams 2 days ago 1 reply      
This is good, and I see no reason why this can't be done easily for most corps (except the ones who mine personal data). Why would you not keep critical personal data in a country-specific table/database that lives in that country? If you don't provide the service in a country and someone signs up, inform them that their data is not protected and give a visible warning. Is that really difficult? I used to have a DB library layer that, based on employee location, directed their data to that location.
VikingCoder 2 days ago 6 replies      
No, this is terrible.

These countries are demanding we run our services in their countries. This is a money grab.

Note that these same countries expect the United States to act as World Police, and do not contribute as much money as they should. They want the US to know about attacks ahead of time. I wonder how the US could possibly know about attacks ahead of time?

I deplore mass surveillance. I really do. But I think wiretapping with a warrant is a necessary tool for fighting crime, and terror, and bad state actors.

There's a part of me that desperately hopes all major internet services just shut off Europe entirely. Welcome back to the Stone Age.

UserRights 2 days ago 0 replies      
The "good" companies should relocate their business central away from USA and come to Europe!

Some big companies should finally stop talking and start acting; this is the only chance for a real change.

Cut the NSA-Brotherhood ties! These little Hitlers from all the affiliated "Clubs of Dystopians" and the war industry completely destroyed the most important association of "USA == Freedom" in the world. Face it. Deal with it. Act accordingly.

For people interested in history: it might be interesting to look at the post-WW2 de-Nazification process in Germany to understand how hard it is to remove established circles of anti-democratic bureaucrats from power structures. This will take a very long time (if it happens at all).

The better immediate reaction would be to support progressive and freedom-oriented societies with your technical powers until "good old USA" is restored. Europe is not perfect, but what happens in the USA nowadays is pure dystopia, a very unhealthy development that will lead to a negative outcome for all of us.

Once people came to The USA because of suppression and lack of freedom in their home countries. Just a few generations later if you have the same sense and longing for freedom like these ancestors of you, it is now time to leave that continent as the suppressors followed your trails - come home to Europe and together we can build a better future!

A Guide to Fast Page Loads nateberkopec.com
551 points by nateberkopec  1 day ago   158 comments top 39
bzbarsky 23 hours ago 7 replies      
There's a lot of good advice here, but also some misinformation.

First of all, a script tag will block DOM construction, but in any sane modern browser it will not block loading of subresources like stylesheets, because browsers speculatively parse the HTML and kick off those loads even while they're waiting for the script. So the advice to put CSS before JS is not necessarily good advice. In fact, if your script is not async it's actively _bad_ advice, because while the CSS load will start even before the JS has loaded, the JS will NOT run until the CSS has loaded. So if you put your CSS link before your JS link and the JS is not async, the running of the JS will be blocked on the load of the CSS. If you reverse the order, then the JS will run as soon as it's loaded, and the CSS will be loading in parallel with all of that anyway.

Second, making your script async will help with some things (like DOMContentLoaded firing earlier and perhaps getting something up in front of the user), but can hurt with other things (time to load event firing and getting the things the user will actually see up), because it can cause the browser to lay out the page once, and then lay it out _again_ when the script runs and messes with the DOM. So whether it makes sense to make a script async really depends on what the script does. If it's just loading a bunch of not-really-used library code, that's one thing, but if it modifies the page content that's a very different situation.

Third, the last bullet point about using DOMContentLoaded instead of $(document).ready() makes no sense, at least for jQuery: jQuery fires the $(document).ready() stuff off the DOMContentLoaded event.
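For anyone wondering what dropping jQuery here would even buy you: a plain-JS equivalent of $(document).ready() is just a readyState check plus a DOMContentLoaded listener, which is roughly what jQuery does internally. A sketch, with the document object injected so it can run outside a browser:

```javascript
// Run `fn` once the DOM is parsed. `doc` is any object with the
// readyState/addEventListener shape of `document`.
function onReady(doc, fn) {
  if (doc.readyState !== 'loading') {
    fn(); // DOM already parsed: run immediately
  } else {
    doc.addEventListener('DOMContentLoaded', fn, { once: true });
  }
}
```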

The key thing for making pages faster from my point of view is somewhat hidden in the article, but it's this:

> This isn't even that much JavaScript in web terms - 37kb gzipped.

Just send less script. A lot less. The less script you're sending, the less likely it is that your script is doing something dumb to make things slow.

[Disclaimer: I'm a browser engine developer, not a frontend or full-stack web developer.]

some1else 1 day ago 4 replies      
This guide is a comprehensive explanation of Chrome's Network timeline, but the optimisation recommendations are quite skewed towards the front-end. There's a missing piece on server configuration, no mention of CDNs or asset/domain sharding for connection concurrency, server-side or client-side caching. It also doesn't take into account HTTP/1 vs. SPDY & HTTP/2. For example, loading JavaScript modules as individual files can improve performance for SPDY & HTTP/2, because changes in individual files don't expire the entire concatenated bundle. Here's a slide deck called "Yesterday's best practices are HTTP/2 anti-patterns", that re-examines some of Nate's advice: https://docs.google.com/presentation/d/1r7QXGYOLCh4fcUq0jDdD...
ohitsdom 1 day ago 4 replies      
Great technical details in this post. When speeding up page loads, I usually struggle with:

> You should have only one remote JS file and one remote CSS file.

I get this in theory, but it's difficult in practice. For example, this post has 7 CSS files and 13 JavaScript files. Also, combining all resources pulls in assets that aren't needed (CSS rules used only on other pages) and reduces the utility of public CDNs and browser caching.

zeveb 1 day ago 2 replies      
Pure HTML loads ludicrously fast these days, as in well-nigh instantaneously. With a single CSS file, you can make it quite attractive. Eschew JavaScript unless you really, truly need it.
daleharvey 1 day ago 5 replies      
This is a useful guide; however, there is one thing missing that will have an order-of-magnitude improvement over anything mentioned here.

Use appcache (or service workers in newer browsers). Yes, appcache is a douchebag, but it's far simpler than going through all of these and will have a far bigger impact.
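For the service-worker half of this advice, the core is a cache-first fetch strategy. A sketch of that strategy with the dependencies injected (`cache` stands in for the Cache Storage API, `fetchFn` for `fetch`); a real worker would call something like this from its `fetch` event handler:

```javascript
// Cache-first: answer from cache when possible, otherwise hit the
// network once and remember the response for next time.
async function cacheFirst(request, cache, fetchFn) {
  const hit = await cache.match(request);
  if (hit) return hit;                // repeat visit: no network at all
  const response = await fetchFn(request);
  await cache.put(request, response); // first visit: populate the cache
  return response;
}
```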

FLGMwt 23 hours ago 0 replies      
Udacity has some really awesome free courses from Google devs about this: Website Performance Optimization[1] and Browser Rendering Optimization[2]

[1]: https://www.udacity.com/course/website-performance-optimizat...

[2]: https://www.udacity.com/course/browser-rendering-optimizatio...

fauria 23 hours ago 0 replies      
I really recommend "High Performance Browser Networking" book by Ilya Grigorik, it digs deep into the topic of browser performance: http://chimera.labs.oreilly.com/books/1230000000545/index.ht...
cheriot 21 hours ago 1 reply      
Spending the last 6 weeks in East Africa has completely changed my perspective on web performance. And it's not just performance, it's reliability. Every request a page can't work without is another chance to fail.

React/Flux and the node ecosystem are more verbose than I'd like, but they might be onto something by rendering the initial content server-side.
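At its smallest, the server-side-first idea reduces to producing complete HTML on the server and letting scripts enhance it later. A deliberately tiny sketch (the markup shape is illustrative, not any particular framework's output):

```javascript
// Render ready-to-read HTML on the server; client JS can hydrate later.
function renderArticle({ title, body }) {
  // Escape the handful of characters that matter in HTML text content.
  const esc = (s) =>
    s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
  return `<article><h1>${esc(title)}</h1><p>${esc(body)}</p></article>`;
}
```

The page stays readable even if every script request after it fails, which is exactly the reliability point above.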

resca79 22 hours ago 1 reply      
Another small piece of advice, though less generic: don't include all the CSS and JS libs of Bootstrap, which is modular. I'm mentioning Bootstrap because it is the de facto standard of many web apps.

Just spend 5 minutes selecting only the packages you really use inside your webpage, and you can drastically reduce CSS and JS file size.

radicalbyte 20 hours ago 0 replies      
I've had great success in the past doing one very simple thing: on first load send the client the exact html/css that must be loaded on their screen.

Once the page is loaded, use javascript to take over updates (using framework of choice).

It worked great in 2008, hopefully the modern javascript developers can now reinvent the wheel. It'll be a lot easier nowadays what with Node/V8 meaning you can use the same code...

tedunangst 1 day ago 1 reply      
On a static HTML site with no scripts or external resources, I see 100ms of loading/painting in the beginning, then 3000ms of "idle" time at the end, which turns the flame graph into a pixel wide column. What is the point of that?
krat0sprakhar 1 day ago 1 reply      
This is damn helpful! Thanks for sharing. If you're interested in learning more about how other sites perform and how to use Chrome devtools to address frontend performance issues, Google developer Paul Lewis recently started a series on YouTube called Supercharged. Here's the first episode - https://www.youtube.com/watch?v=obtCN3Goaw4
jakub_g 21 hours ago 1 reply      
One thing that might be non-obvious is that async script, while not blocking `DOMContentLoaded`, blocks `load` event.

It means that if you have listeners on the `load` event that are doing stuff, you want the `load` event to fire as early as possible.

Also, until the load event is raised, browsers display a spinner instead of the page's favicon.

Hence for non-critical third-party scripts, you may prefer to actually inject them in JS in onload handler instead of putting them directly in HTML.

A semi-related issue is handling failures of external non-critical scripts (analytics blocked by adblock, etc.).

I wrote a draft of a blog article on the topic last week:


Context: We've faced an insane page load time (70s+) due to external analytics script being slow to load (yeah, we should have been loading the app on DOMContentLoaded instead of onload).
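The injection pattern described above is small enough to sketch; `win` and `doc` are injected stand-ins for `window` and `document`, and the script URL is illustrative:

```javascript
// Defer a non-critical third-party script until after the `load` event,
// so it can never delay `load` itself (or keep the tab spinner going).
function injectOnLoad(win, doc, src) {
  win.addEventListener('load', () => {
    const s = doc.createElement('script');
    s.async = true;
    s.src = src;
    s.onerror = () => { /* analytics blocked or down: ignore */ };
    doc.head.appendChild(s);
  });
}
```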

dasil003 21 hours ago 0 replies      
This is a really nicely put together article, and I'll even admit the animated gifs are funny, but damn if they don't make it impossible to focus on reading the text.
philbo 20 hours ago 1 reply      
Surely the first bit of advice in any post about analysing website performance should be: USE WEBPAGETEST

It gives you access to everything Chrome dev tools do, plus so much more:

 * speed index as a metric for the visual perception of performance
 * easy comparison of cached vs uncached results
 * operating on median metrics to ignore outliers
 * side-by-side video playback of compared results
 * different user agents
 * traffic-shaping
 * SPOF analysis
 * a scriptable API
 * custom metrics
I could go on. There's a lot of excellent performance tooling out there but WebPageTest is easily the most useful from my experience.

arohner 23 hours ago 0 replies      
Gratuitous plug: my startup, https://rasterize.io, will give you most of the information in the Chrome timeline, for every single visitor to your site. It also analyzes the page, detects a lot of these warnings, and alerts when you introduce things that slow the page down.

It's in beta, but contact me if you're interested.

thekonqueror 1 day ago 1 reply      
I have been using the AppTelemetry plugin for rough numbers on each phase of a request. This is much better for performance tuning.

Do you have any tips for optimizing PHP, where server response times are poor to begin with? I've been trying to optimize a blog as a proof of concept[1], but it has plateaued at a 1.5s load time.

[1] http://wpdemo.nestifyapp.com/

Kluny 20 hours ago 1 reply      
Could anyone explain like I'm 5 what "layout thrashing" is? As far as I understand, it's when the size of an element is set in the CSS, like

 div { width:100px; }
Then later in the CSS it's changed to

 div .biggerdiv { 200px; }
Or maybe it's javascript that changes it:

but either way it's when an element has some size near the beginning of rendering, then as more information becomes available, it has to change size a few times.

Am I getting it?
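Close, but the usual meaning is JavaScript alternating layout reads and writes, which forces the browser to recompute layout synchronously on every read that follows a write. A sketch of both the bad pattern and the batched fix (the `boxes` elements are hypothetical):

```javascript
// Thrashing: each style write invalidates layout, and the next
// offsetHeight read forces a synchronous recompute.
function thrash(boxes) {
  for (const box of boxes) {
    const h = box.offsetHeight;          // read: may force layout
    box.style.height = (h + 10) + 'px';  // write: invalidates layout
  }
}

// Fix: batch all reads before all writes, so at most one reflow happens.
function batched(boxes) {
  const heights = boxes.map((box) => box.offsetHeight); // all reads first
  boxes.forEach((box, i) => {
    box.style.height = (heights[i] + 10) + 'px';        // then all writes
  });
}
```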

RegW 22 hours ago 1 reply      
Potentially dumb question from a backend dev:

Is there a way to get stats about page loading from Timeline, that could be used to automatically ensure that the load times are not creeping up, and breaking NFRs?

timbowhite 1 day ago 2 replies      
Any tips on how to handle site layouts that depend on the js that is asynchronously loaded via the async attribute? Seems like this can cause a flash of unstyled/unmodified html while that js is loaded and executed.
eatonphil 23 hours ago 1 reply      
Could someone explain this paragraph? I feel like it is making a lot of assumptions or generalizations about the use of $(document).ready();. I do not follow what he is trying to say:

> Web developers (especially non-JavaScripters, like Rails devs) have an awful habit of placing tons of code into $(document).ready(); or otherwise tying Javascript to page load. This ends up causing heaps of unnecessary Javascript to be executed on every page, further delaying page loads.

gcb0 21 hours ago 0 replies      
"JS and CSS assets must be concatenated"

That is fine if you have a tiny little site. If you are a big company, each microsite will use a piece of the larger set of files. If one uses a.js, b.js, and c.js, then when you concatenate you lose 100% of the cache as soon as the user clicks a link to a part of the site that only uses b.js and c.js.

Likewise, try to load common libs un-concatenated from widely used free CDNs.

snomad 1 day ago 3 replies      
Question about the 1 CSS/JS file rule.

If you have a 'My Account' section with several unique rules (say 10k), which is best?

A) One site-wide CSS file (main.css), where your users download the My Account rules even though they may never use them
B) Two CSS files used for My Account (main.css and myaccount.css)
C) One file under My Account that incorporates both the main and section rules (main-plus-myaccount.css)?

pranaya_gh 23 hours ago 0 replies      
Google just came out with the AMP project. It looks like Google is encouraging publishers to join its initiative the same way it herded everyone to go "responsive". At the end of the day, good news for the mobile web - https://techbullets.com/
amelius 20 hours ago 0 replies      
Regarding layout-thrashing: if only there were a way to hint the dimensions of each element.
outworlder 1 day ago 0 replies      
I like this guide, with the caveat that, if you are doing a single page web application, some of it gets turned upside down.

For instance, the "javascript will be loaded on every page load" part no longer applies. It will be loaded only once, and will fetch whatever it needs from then on.

CosmicBagel 22 hours ago 1 reply      
Gifs in the sidebar, 10/10
chain18 18 hours ago 0 replies      
Can someone explain his point about $(document).ready()? I don't understand how it is different from DOMContentLoaded.

jQuery source for the ready function: https://github.com/jquery/jquery/blob/c9cf250daafe806818da1d...

dfar1 22 hours ago 0 replies      
The best explanation of the timeline tool I've ever found. Thank you!
camperman 1 day ago 1 reply      
Very helpful stuff, but I did go to The Verge and look under the timeline. Scripting was a fraction of 1% of the load time. Have they disabled it because of your article, or am I missing something?
humbleMouse 1 day ago 0 replies      
LOLZ @the layout thrashing gif! Nice post though, very informative.
blowski 1 day ago 0 replies      
Thanks - a well-written article with some really helpful pointers.
cbsmith 1 day ago 1 reply      
...and if you do HTTP/2, you'll get even faster loads if you pretty much break all those rules (some notable exceptions).
jmartens 20 hours ago 0 replies      
>While I use New Relic's real user monitoring (RUM) to get a general idea of how my end-users are experiencing page load times, Chrome Timeline gives you a millisecond-by-millisecond breakdown of exactly what happens during any given web interaction.

New Relic does, too! It's a Pro feature called Session Traces.

yAnonymous 23 hours ago 0 replies      
>47 requests, 7.474,76 KB, 4,68 s

That's on a 100Mbit connection.

_ZeD_ 1 day ago 0 replies      
The page is too slow to load... I waited a solid 5 minutes... and... still... nothing... to... see...
dates 1 day ago 0 replies      
That was awesome- thanks!
binthere 23 hours ago 1 reply      
Had to disable the website's font; it's terrible for reading.
mahouse 23 hours ago 0 replies      
"bare-metal Node.js"
Image diffing using CSS franklinta.com
488 points by ck2  5 days ago   22 comments top 9
bennettfeely 5 days ago 0 replies      
If it's two images you're comparing, you can also use `background-blend-mode: difference` to spot the differences.

E.g. http://codepen.io/bennettfeely/pen/NGdzjr
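For two same-sized images, `difference` blending is just per-channel absolute difference. A plain-JS version over RGBA pixel arrays (the kind you get from a canvas 2D context's `getImageData().data`), in case you want the diff as data rather than as a rendered effect:

```javascript
// |a - b| per colour channel; identical pixels come out black.
function diffPixels(a, b) {
  const out = new Uint8ClampedArray(a.length);
  for (let i = 0; i < a.length; i += 4) {
    out[i]     = Math.abs(a[i]     - b[i]);     // R
    out[i + 1] = Math.abs(a[i + 1] - b[i + 1]); // G
    out[i + 2] = Math.abs(a[i + 2] - b[i + 2]); // B
    out[i + 3] = 255;                           // keep the result opaque
  }
  return out;
}
```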

Bognar 5 days ago 4 replies      
This is the same general method as I used to quickly solve "spot-the-difference" games as a child. Just cross your eyes until the images match and let your brain pick out the disparities.
BinRoo 5 days ago 0 replies      
What if the two images were taken from a camera instead, and are subject to some noise/transformation? I worked on an image algorithm for this related problem a couple of years ago: http://shuklan.com/photo/difference-analyzer/
jamesfisher 4 days ago 0 replies      
I would like to see an image diff which could reason about insertions and deletions as well as just changed pixels. For example if I were to insert a new section in a webpage, and take a screenshot before and after, the image diff would show the new section of the screenshot in green.
eponeponepon 5 days ago 3 replies      
Neat... I can see a use case for this in comparing pre- and post-final HTML documents (though I don't relish the headaches involved in actually acquiring both relevant sets of images at the same time out of our CMS).

Presumably it gets deeply murky when dealing with compressed jpegs?

byron_fast 5 days ago 0 replies      
Not too far from it being a useful business. See: diff.io or diffbot.com.
bananaboy 5 days ago 1 reply      
Cool idea. Btw, if you want a tool that does this, the free Perforce diff/merge tool can diff images. It also lets you specify a tolerance for how different the images can be.
suyash 5 days ago 0 replies      
Very clever idea. Automated UI tests could be a lot simpler.
linkydinkandyou 5 days ago 0 replies      
There's a 3.5" floppy disk on the tray of cookies on the right!
Losing Sight tink.uk
563 points by robin_reala  4 days ago   93 comments top 22
StavrosK 4 days ago 10 replies      
Two of my best coworkers are blind. They're ops people, so their job is computers. I don't think anyone can tell any difference in productivity between them and sighted people. I was surprised, initially, because I thought that it would be hard to work without the high-bandwidth data transfer medium that is vision, but apparently they make it work beautifully.

Also, it has shown me how hard I make their lives when I don't design with accessibility in mind. I didn't use to think that anyone with vision problems used the internet much, but it turns out that not only do they, they do the exact same things I do. This is a good tool for accommodating visually impaired people:


I guess I'm saying don't be a jerk, be mindful of sight-impaired people when designing your products.

kamens 4 days ago 2 replies      
As a Type 1 diabetic for, what is it now, 21 years, stories like this make my heart race.

I simultaneously admire the author's fortitude and am hit by powerful worry and frustration that facing the non-stop challenges of T1D for so many years can result in such outcomes, often as a result of one's own imperfections in facing those challenges.

The guilt the author describes when realizing she'd go blind had to be unbearable. Man I hope to work hard enough and be lucky enough to not face that.

chasing 4 days ago 0 replies      
Down-vote me if this is inappropriate, but I'm actually working with a start-up at the moment that's attempting to make managing diabetes a bit less of a headache and possibly even an enjoyable and enlightening task:


It's a tough challenge and we hear all sorts of stories about problems that arise when people stop properly self-managing for whatever reason.

And I've worked on other healthcare projects and it really does seem like simple compliance -- getting people to do what they've been medically advised to do -- is a huge problem. Like, there are diseases and medical issues that are mostly solved. Except for patient compliance.

kolanos 4 days ago 2 replies      
As a mostly blind father, not due to diabetes, with a son that has type 1 -- this really hit close to home. I was diagnosed at 16 with retinitis pigmentosa, or RP for short. I am now 33 and have very little sight left, will probably be totally blind in the next ten years. I won't lie, it's been a struggle. The author of this piece describes the experience very well. But like her, I've been able to be a productive software engineer for the past 15 years and hope to continue.

For those who can't imagine anything worse than losing your sight, I can assure you that you're wrong. It's no picnic, but I can imagine much worse predicaments. Not having to look at yourself in the mirror anymore can actually be liberating.

Frankly, the toughest part about being blind is the stigma around it. I've lost jobs because my blindness made people uncomfortable. I've been turned away from job interviews because the interviewer didn't think I could do the job without giving me a chance to prove otherwise. Most people are great about it and are very empathetic. But there's still the occasional person who avoids me like the plague.

As for my type 1 son, I hope I provide enough of a dose of reality for him to stay on top of his blood sugar.

eponeponepon 4 days ago 0 replies      
It seems a long time since - if ever - I read anything touching on accessibility on the web with anything genuinely positive to say.

I know it's a hard, complex set of problems, but it really does seem that everyone essentially concludes "well, we're not there yet", and that's a real shame.

arnold_palmur 4 days ago 0 replies      
"... but given that I no longer really remember what I look like ..."Now that statement really made me think.
WhitneyLand 4 days ago 0 replies      
Frustrating that companies like Pandora refuse to support basic accessibility features like dynamic type on iOS. It's a feature that probably costs 1 hour of development time. They know about it, and won't even put it on the backlog.
ckdarby 4 days ago 2 replies      
As a software developer, this article scared the shit out of me. One of my biggest fears in life is losing my sight because of the challenges that would be presented being blind.
kabdib 4 days ago 1 reply      
One of the best programmers I know is nearly blind. He can see, a little, with the aid of a telescope-like affair that is mounted to a pair of glasses.

But the breadth and depth of his understanding of computers and software is amazing, and the code that he writes is some of the best I've seen: Very clear, very direct and unsullied by useless abstraction, and easy to maintain.

csirac2 3 days ago 0 replies      
> I wanted to know why I couldn't work out how much food (carbohydrate) I was about to eat, measure my blood glucose, and then calculate my insulin dose based on those and other factors.

Ouch. I suppose carb ratios weren't so big back then (admittedly, it wasn't for me either, being diagnosed in the late '90s). But I still measured myself so many times a day that the pharmacist thought I was scamming the PBS for test strips somehow (our subsidized medicines scheme here in .au). As a curious teenager I was able to develop a mental picture of what my BSL did when I ate certain foods after a few months of 10+ measurements per day. So, even if I didn't have an actual carb ratio figured out, I "knew" by trial and error how much insulin different foods needed.

Even so, I've fallen off the wagon a few times. I got so used to having specialists and doctors tell me what a great job I was doing on my own, I had an embarrassingly long period between specialists. To the extent that I stayed on humulin for quite a few years longer than I otherwise would have if I'd seen a specialist (newer insulins are way faster-acting and easier to live with).

This story has certainly prompted me to re-evaluate where I am now; complacency is a silent killer.

j_s 4 days ago 0 replies      
Scott Hanselman manages to include quite a bit of useful information for tech geeks managing their diabetes (alongside his Microsoft evangelism):


carlob 4 days ago 0 replies      
I suffered (probably still do) from myopic retinopathy, and I underwent dynamic laser treatment about 10 years ago; it was very effective and I haven't progressed since. Basically they inject you with something that has to be activated by a certain wavelength of light, and by timing the injections with the laser pulses they can burn just the new blood vessels and not the retina in front of them.

My mom suffered from the same ailment a few years later, and it appears that the treatment of choice is now an eye injection. It seems to me that the field is moving very fast, and the author of the article was very unlucky not to have had this just a few years later. Here's hoping we find a stable, non-invasive cure in the very near future.

Asbostos 4 days ago 1 reply      
It sounds like the risks of not using the medication properly weren't clearly communicated to her. "You'd better take it every day because it's important" doesn't count - people need to be able to make their own decisions about these things. The fact that she tried skipping doses as long as it didn't cause short term problems suggests nobody told her it was quietly doing irreversible damage.

I can imagine doing the same thing myself - and have often skipped prescription medicines. But if I'd been told the risks (not exaggerated unquantified risks) then I surely never would.

ninjin 4 days ago 3 replies      
Thank you for sharing this, my most sincere condolences and my very best wishes to you in continuing to cope with your situation.

I am a recently diagnosed type 1 diabetic, hospitalised on my 29th birthday earlier this year. Not having received it as a child it is difficult for me to add to the post, but I do think that I can add some value by elaborating on exactly how tricky glucose monitoring and insulin dosages are.

Type 1 diabetes is uncommon in comparison to type 2 diabetes; if I remember the numbers correctly, about 10% of diabetics have type 1. It is an autoimmune disease, meaning that your immune system, for some as-yet-unknown reason, attacks your pancreas and destroys your ability to produce insulin. There is no medication or treatment other than injections of insulin into your bloodstream for the rest of your life. Personally, I had deteriorating eyesight over the course of several months and finally a very sudden urge to drink large amounts of sweet drinks and water.

As a type 1 diabetic you need to monitor your glucose level several times every day. The most common way to do so is to prick your finger and put a drop of blood on a single-use testing strip that goes into a digital monitor [1]. This is remarkably easy and you get instant results. It is, however, a relatively new invention; prior to this you had methods like urine test strips that gave far less immediate and accurate results. The most modern monitoring available would be continuous glucose monitors: essentially a needle with a sensor, attached to a patch that you stick to your skin. The sensor sends a signal to a device that you carry with you, which can warn you if your values are too high or too low. It is, however, not widely available in all countries, partially due to the cost.

[1]: https://en.wikipedia.org/wiki/Blood_glucose_monitoring

So, about dosages: how do you do it? Well, every person is slightly different, but there are tricks. You can use the same portion size of carbohydrates every day and keep your dosages fixed, or you can estimate the amount of carbohydrates and adjust the dosages accordingly. What happens if you overdose? Nothing immediate, but over the next few hours or so you will start to feel lethargic, sweat, act "drunk", and in the end lose consciousness unless you compensate by taking in additional carbohydrates. As you grow older you may lose these warning signals, and falling too low becomes increasingly risky. What happens if you underdose? Again nothing immediate, and far less immediate than with an overdose: you will have high levels of glucose in your system, which will gradually wreak havoc on your eyes, feet, kidneys, etc. There will be consequences further down the line. This is one reason why getting insurance coverage as a type 1 diabetic is almost futile, as pretty much anything could arguably be caused by your diabetes.

To add to all of this, your dosages will vary with the kind of food you are eating; carbohydrates from chocolate act more slowly than pure sugar because they are coated in fat that slows down digestion. If you are sick, your body is likely to be much more difficult to read and your dosages may change radically. Add to this that your immune system is significantly weaker as a type 1 diabetic. Dosages also vary depending on whether or not you do physical exercise, your stress levels, and more.
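For readers curious what the arithmetic looks like, here is a purely illustrative sketch of the common "carb ratio plus correction factor" bolus estimate. Every number and ratio here is made up for the example (mmol/L units assumed); real dosing is individual and is worked out with a clinician, not a code snippet.

```javascript
// Illustrative only: meal dose from a carb ratio, plus a correction
// toward a target blood glucose. Not medical guidance.
function estimateBolus(carbsGrams, gramsPerUnit, bg, targetBg, mmolPerUnit) {
  const mealDose = carbsGrams / gramsPerUnit;                    // cover the food
  const correction = Math.max(0, (bg - targetBg) / mmolPerUnit); // never negative here
  return mealDose + correction;
}
```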

I really want a book describing all of this to me from the bottom up, molecular chemistry and all, so that I can better understand my own disease. For now I am gradually building a mental model from experience and picking up bits and pieces by reading.

There is hope in stem cell research and computerised systems that automatically read your glucose levels and inject an appropriate amount of insulin. But we are not there yet.

anderspitman 4 days ago 0 replies      
There's a lot of awesome work being done on T1 diabetes, particularly in the open source community. I highly recommend checking out the Tidepool Project (http://tidepool.org/) and The Nightscout Project (http://www.nightscout.info/).
mirimir 4 days ago 1 reply      
Amazing article!

Everyone does some version of that in their youth, I think. I did, for sure. But fortunately, my gotchas have all been relatively minor.

My vision is deteriorating, though. However, it's mostly cataracts and lens hardening. So I'll get artificial lenses, and probably won't need glasses anymore. Or I might end up blind, if something/someone screws up.

So it goes, I guess.

noir_lord 4 days ago 0 replies      
I think this was posted on here a while back http://www.techinsider.io/hacked-raspberry-pi-artificial-pan...

Seems like the kind of wearable that could have a profound impact on people's lives if done properly.

known 4 days ago 0 replies      
https://en.wikipedia.org/wiki/Latanoprost eye drops can prevent/delay diabetic retinopathy
shna 4 days ago 0 replies      
I always wonder how people cope financially during their medical challenges. Life is already difficult even in normal times.
brezelben 4 days ago 0 replies      
Thanks for sharing your insights, very interesting read!
ninjin 4 days ago 2 replies      
As a type 1 diabetic, I am mostly frustrated by the lack of common knowledge about the differences between the two types. Now and then I may rant that maybe it would have been better to call them disease A and disease B, so that searching for advice would be easier for those of us in the minority. But not even once have I felt that "we" deserve more of the research money; both diseases are terrible and cause significant human suffering. The savings in healthcare that a type 2 cure could enable should also not be understated.
imaginenore 4 days ago 1 reply      
He might get his eyes back. Stem cell treatments are coming.


The Most Important Thing: Decline in poverty, illiteracy and disease nytimes.com
425 points by apsec112  2 days ago   207 comments top 24
manachar 2 days ago 7 replies      
It's really weird to me how commonplace the attitudes of "It is what it is" and "Government can't do anything" are.

The air and water quality in the US is vastly better since the passage of the laws that created and empowered the EPA (created by the Republican President Nixon). The Endangered Species Act has had many similar successes.

I live on Maui. The waters are now filled with Green Sea Turtles and Humpback whales that were nearly non-existent just 30 years ago. Yet when development and global climate change are threatening our reefs, people look at you like you're crazy for thinking that laws can be effective in safeguarding the environment for the future.

dredmorbius 2 days ago 6 replies      
Yes, a vast number of the world's poorest population have been raised out of nominal abject poverty, this is true.

What Kristof doesn't state, however, are these facts:

1. Those people live almost entirely in China and India.

2. The rise in nominal wealth has come from a tremendous increase in energy consumption.

3. Both nations have extreme environmental problems, including the worst air pollution in the world (a result of extreme amounts of coal consumption), water shortages, water pollution, and extensive land contamination from mining, industrial, municipal, and other waste.

4. Though overall efficiency in GDP output per unit energy has increased modestly, overall, increases in economic wealth require vast amounts of additional resource consumption.

The combination of factors 3 and 4 above means that the gains in economic output are being accomplished largely by both strip-mining resources and exhausting effluent dumps in China and India.

5. Much of the apparent "de-materialisation" of economies in the developed world outside China and India is actually, on closer examination, based on exporting raw material (and effluent) demands to those countries. See research out of CSIRO (Australia) and UCSB.

6. Outside of China and India (though yes, they're both huge countries), progress in the "developing" world has been far more modest; in some cases, it has been backsliding. Add to this economic regression through much of the Middle East and North Africa (much of the disturbance of the Arab Spring is attributable to economic circumstances), and even in OECD/European nations, particularly Greece, Italy, Spain, and Portugal.

And finally: the foundation of modern economic wealth, vast quantities of nonrenewable resources tapped at utterly unsustainable rates, is quite simply not sustainable. Without looking to this basis, effectively the fuel in the tank, observations of altitude or velocity tell us little.

mninm 2 days ago 5 replies      
"We cover planes that crash, not planes that take off."

It always fascinates me how much my world view and the world view of those around me is based on the exceptional and not the mundane. The news outlets report the news assuming that the audience knows what's normal. When the news is used to become informed about the world, many people come away with a skewed impression of reality. I am of the opinion that people would be better served by news reports that provide context explaining why the news is in fact news.

Here's some context I find interesting:

According to the CDC:

Americans murdered per year: 16,121

Americans killed in car accidents per year: 33,804

Americans killed by smoking related diseases per year: 480,000


underwires 2 days ago 2 replies      
The only reason that I already know this is Hans Rosling. His work is excellent; check it out if you haven't already. http://www.gapminder.org/ https://www.google.com/webhp?q=hans+rosling&tbm=vid
coffeemug 2 days ago 2 replies      
> We cover planes that crash, not planes that take off.

This is why everyone should read The Economist instead of The New York Times. The Economist does a phenomenal job avoiding the selection bias and finding a way to cover mundane trends. It makes the mundane interesting, and gives a view of the world that's much closer to reality than any other publication I've ever seen. I'd strongly encourage everyone to give it a shot.

Max_Horstmann 2 days ago 9 replies      
Since this is HN, here's a question: what role could technology play in eliminating extreme poverty by 2030?
mazerackham 2 days ago 6 replies      
Another big secret is that the source of a lot of poverty being lifted is China. That's right: that terribly communist government suppressing its people and free speech is actually responsible for lifting 500 million people out of poverty over the last 30 years. That is definitely a big secret of American media, and something it neglects to report on in favor of free speech violations, naval exercises, IP infringement, and "job stealing".
quan 2 days ago 7 replies      
Is there any inflationary effect to the $1.25/day figure that has been used since the 1980s? It's not clear from the article whether it's adjusted for inflation.
mc32 2 days ago 1 reply      
Implied but not stated directly is that, because of the vast reduction in disease and proportionate increase in life expectancy, people are having fewer children, so the small resources people have can be divided into larger slices rather than smaller ones.

Globalization, the boogeyperson of lots of people, has allowed some income redistribution to the developing world. Had the developed world not globalized and instead hoarded its industry and jobs, it's not a given that the poor countries would have done as relatively well as they have.

It's also surprising to see little comment on the interesting phenomenon of countries which had relatively good economies in the early 20th century (Mexico, Argentina, etc.) going way downhill after WWII, partly due to severe corruption and simple non-investment, owing to antiquated policies and an emphasis on natural resources rather than technology.

oldmanjay 2 days ago 3 replies      
This journalistic impulse to sell the world as worse than it is will not end because a single journalist had a quick (and likely fleeting) attack of guilty conscience. It will continue until the profession recognizes that its addiction to narrative harms the entire population.
dkbrk 2 days ago 0 replies      
A minor note: the author refers to "Volkswagen corruption". I have not seen the company's actions called corruption elsewhere, nor was it justified in the provided link.

Volkswagen deliberately subverted regulations on emissions control. It would have been corruption if they also bribed officials to get away with it.

I'm not attempting to defend Volkswagen in any way, but the phrase is sensationalist and strictly incorrect, even if only very slightly so. This is interesting given the overall point of the article.

MisterMashable 2 days ago 0 replies      
Ending illegal wars, the death penalty and torture are even more important. Why would our government which has no problem killing, maiming and even "torturing some folks" even begin to care about poverty, illiteracy and disease? You may disagree but as a whole our government has a strong proclivity to violence which implies a lack of regard for life. Dealing effectively with poverty, illiteracy and disease presupposes our government is sufficiently motivated to direct its attention to those issues. Pick any metric you like and it clearly shows our government has other priorities. Poverty is up, illiteracy is up, education is overall declining, university tuition skyrocketing, cost of healthcare skyrocketing, planned parenthood under attack etc. Our government just bombed a hospital in Afghanistan, executed a woman in Georgia who should have served a life sentence instead, is about to execute a man whose guilt is unclear and still tortures people in dark forgotten corners of the globe. I don't think there can be any meaningful change until bad people with no respect for life are removed from office and replaced with good people who do respect life. It's no coincidence that since the warmongers seized control of this country that education and healthcare took a dive. Dr. Strangelove doesn't care one whit about poverty, illiteracy and disease.
astazangasta 2 days ago 1 reply      
This article is deceptive because it speaks of raw numbers rather than percentages. While the number of people living on less than a dollar a day went up throughout the 20th century, this was because populations were exploding over the same period. At the same time, the fraction of people living on less than a dollar a day plunged dramatically. In fact, this trend slowed after 1990; since then the rate has declined less quickly than it did during the seventies, when it fell from almost 30% to less than 10% by 1985. See e.g. https://orderorder.files.wordpress.com/2015/04/capitalism.jp...
ekianjo 2 days ago 1 reply      
So this is where some folks on HN come to complain about rising inequality, despite the fact that literally everyone is getting richer, which is what ultimately matters.
Tossrock 2 days ago 0 replies      
Reminds me of this similar article on the global decline in violence: http://www.slate.com/articles/news_and_politics/foreigners/2...
tragomaskhalos 2 days ago 0 replies      
Disasters and tragedies sell, and we now get to hear about them from all over the world, so selection bias is sadly inevitable. This is why millenarian religions like the Jehovah's Witnesses (and, perhaps, radical Islam) are able to so easily convince the gullible that the world is on its last legs.
crdoconnor 2 days ago 1 reply      
>One survey found that two-thirds of Americans believed that the proportion of the world population living in extreme poverty has almost doubled over the last 20 years.

Curious that the author attempts to attribute this to media sensationalism rather than, say, an increase in American poverty.

Or perhaps not, since Kristof is a fierce critic of the anti-sweatshop movement.

_gopz 2 days ago 0 replies      
> The number of extremely poor people (defined as those earning less than $1 or $1.25 a day, depending on who's counting) rose inexorably until the middle of the 20th century, then roughly stabilized for a few decades. Since the 1990s, the number of poor has plummeted.

I assume this is adjusted for inflation, but I didn't see anything in the link. Does anyone know?

tchibon 2 days ago 0 replies      
As a side note, Mr. Kristof is the one feeding the media with "war, scandal and disaster". It's enough to take a look at his wiki: https://en.wikipedia.org/wiki/Nicholas_Kristof

or the topics he covers: http://www.nytimes.com/column/nicholas-kristof?action=click&...

xacaxulu 2 days ago 1 reply      
> That's 95 percent of Americans who are utterly wrong.

Sad, funny and completely unsurprising. At least this writer understands his profession's complicity in the stultifying of America.

narrator 2 days ago 2 replies      
I think the average American is going to be shocked in the next couple of years when the rest of the world passes us by. The technological capability of developing nations has increased rapidly. Our rate of technological progress is slowing down. For example, Intel is having difficulty scaling down process technology past 14nm while the fabs in China are catching up at 28nm.
waltherp 2 days ago 0 replies      
Not to overreach here, but thank you, Internet! The single greatest equalizer in human history.
AnimalMuppet 2 days ago 1 reply      
It's just insane how good this is. Yeah, the middle class is hurting (worldwide, or pretty close), but the bottom is escaping poverty at an amazing rate.
Amazon Snowball amazon.com
371 points by polmolea  22 hours ago   184 comments top 32
lisper 7 hours ago 0 replies      
But can you trust it?

When I returned to JPL after working at Google for a year I was tasked with evaluating a Google Search Appliance. We ultimately decided not to keep it, and so we had to erase the disks, which now contained sensitive data. The appliance had a "self-destruct" feature that supposedly erased all the data, but there was no way to verify it. After lengthy negotiations with Google (some people just have a hard time grasping the idea that just because a file has been deleted doesn't mean the data is actually gone) we eventually got them to agree to let us open the enclosure and take out the disks. Forensic analysis revealed that they had not in fact been erased.

Caveat emptor.

hughes 21 hours ago 1 reply      
"Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway."

- Andrew S. Tanenbaum

devit 22 hours ago 6 replies      
As usual, the pricing is not very friendly, and apparently designed to lock your data into AWS or exploit your weak negotiating position once you buy in.

While you can send in 50TB for $200, taking the same 50TB out costs an additional $1,500 (50,000 GB × $0.03/GB).

[assuming they are not transferring the data over the Internet, the cost to AWS should be the same or cheaper for reading]
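Spelling out the asymmetry the parent describes, here is a rough sketch of the in-vs-out cost using only the figures quoted in this thread (not official AWS pricing, which may differ):

```python
# Rough in-vs-out cost comparison for 50 TB, using the figures
# quoted in this thread (not official AWS pricing).
data_gb = 50 * 1000    # 50 TB in GB (decimal units, as cloud pricing uses)
import_cost = 200.00   # flat fee quoted for sending data in via Snowball
egress_rate = 0.03     # $/GB quoted for transferring data out
export_cost = data_gb * egress_rate

print(f"Import: ${import_cost:,.2f}")  # Import: $200.00
print(f"Export: ${export_cost:,.2f}")  # Export: $1,500.00
```

At these rates, getting the data back out costs about 7.5× what it cost to put it in, which is the lock-in the parent is pointing at.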

vvanders 22 hours ago 6 replies      
The e-ink display showing a shipping label is brilliant.
jonkiddy 22 hours ago 4 replies      
An AWS version of a sneakernet.


mpeg 22 hours ago 7 replies      
I'm curious about how they're going to approach this from the fraud perspective. This is a $200 charge for a device that has 50TB storage, which would probably cost you around $2000 to buy.

There are people out there who will sign a contract under a fake name / address with a phone provider and sell the phones, and the way providers fight against it is usually by running credit checks and verifying the address against them. Ultimately, this is very hard to detect when it involves identity theft.

msandford 22 hours ago 1 reply      
The XKCD about FedEx's bandwidth seems particularly appropriate: https://what-if.xkcd.com/31/
sengstrom 21 hours ago 2 replies      
The math for the time to transfer comparison is interesting:

"Even with high-speed Internet connections, it can take months to transfer large amounts of data. For example, 100 terabytes of data will take more than 100 days to transfer over a dedicated 100 Mbps connection. That same transfer can be accomplished in less than one day, plus shipping time, using two Snowball appliances."

With a 100 Mbps connection it takes over 100 days [1] but with a 100 times faster connection (10 Gbps) it takes less than a day :)

[1] Assuming no network overhead it is 92.6 days
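The footnote's 92.6-day figure checks out; here is a quick sketch of the arithmetic (decimal units, no protocol overhead assumed):

```python
# Transfer time for 100 TB over a dedicated link,
# ignoring protocol overhead (decimal TB and Mbps, as in the quote).
bits = 100 * 10**12 * 8  # 100 TB expressed in bits

def transfer_days(link_bps):
    seconds = bits / link_bps
    return seconds / 86400  # seconds per day

print(f"100 Mbps: {transfer_days(100e6):.1f} days")  # 100 Mbps: 92.6 days
print(f"10 Gbps: {transfer_days(10e9):.2f} days")    # 10 Gbps: 0.93 days
```

So the "less than one day" claim only holds against a 10 Gbps link, a 100× faster connection than the 100 Mbps example, which is the joke the parent is making.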

itsjustjoe 22 hours ago 0 replies      
In my last role we would often need to upload large amounts of data for our clients to AWS. When this data got into the terabytes we would ship a NAS box to our customer and then send that to Amazon. On more than one occasion Amazon fubar'ed the upload on their end (why would you move drives around in a RAID 5/6 array?). Maybe since this is AWS branded solution it will be more reliable.
stonogo 9 hours ago 0 replies      
Forgive me, but what exactly is 'petabyte-scale' about a 50TB NAS with a dog-slow link?
driverdan 20 hours ago 2 replies      
This has interesting security implications for both sides. Is the device 100% offline or does it phone home when you connect it to your network or transmit any other data? What if someone gets the device and hacks it to scan Amazon's networks when sent back?
mkobit 21 hours ago 0 replies      
> Snowball currently supports importing data to AWS. Exporting data out of AWS will be supported in a future release.

I'm interested in hearing about how they are going to do this.

softwarerocks 19 hours ago 2 replies      
All the innovation at AWS is amazing. If they ever stop charging by the gigabyte for bandwidth and move to a flat model, then I would be tempted to switch to them for all my sites.
imajes 22 hours ago 1 reply      
Say I wanted to do something similar, but move data around locally between two NAS appliances without incurring a double disk charge. (I've got a pack of ~20 disks in NAS brand A, but I want to move to NAS brand B. The disks work in both, but need reformatting.)

Does anyone know of a service where I could rent a 20TB device like Snowball but not push it to S3?

fensterblick 22 hours ago 1 reply      
Getting terabytes of data into AWS/Hotel California is great, but I wish there was a way to get the data out just as quickly!
mkobit 22 hours ago 1 reply      
The picture on the page doesn't give me an accurate estimate of the size. They are actually 50 pounds (says on the blog)!
devy 21 hours ago 0 replies      
It's amazing how many of its own and external products/services Amazon has vertically integrated into this appliance:

- Kindle's E-Ink


- Amazon Carrier? (perhaps in the future?)

- GPS-powered chain-of-custody tracking (AWS working on it, perhaps Amazon Drone delivery in the future?)

meritt 20 hours ago 1 reply      
When I export data from S3, what do I get for a given bucket? Just basically a file system? How is the metadata stored? What about object versions?

I'm curious what the end result looks like in doing this.

chx 19 hours ago 1 reply      
I am sure most companies which are the target of this will welcome plugging in an outside device into their internal networks with open arms.
marcosscriven 22 hours ago 1 reply      
I thought they already allowed you to mail in hard drives?
pc2g4d 15 hours ago 1 reply      
Named after the horse in Animal Farm?
letstryagain 11 hours ago 0 replies      
They should have called it 'speedball'
iancarroll 22 hours ago 5 replies      
Isn't this a bit risky? What happens if someone keeps it? 50TB is a lot more than $200.
gcb0 21 hours ago 1 reply      
Safer than Google's option of asking you to ship the unencrypted drive.
jonknee 21 hours ago 0 replies      
Sneakernet as a Service!
spot 16 hours ago 0 replies      
Can this work with Arq?
ck2 19 hours ago 2 replies      
> the E Ink shipping label will automatically update

Are you kidding me? Instead of a 25 cent shipping label they use a $100 e-ink display?

The display alone might get the device stolen.

rajeshmoov 10 hours ago 0 replies      
Interesting how much we need to pay per GB.
beachstartup 9 hours ago 0 replies      
The biggest message this sends is something nobody is talking about: Amazon is not afraid of sending hardware on-premises.
chinathrow 22 hours ago 0 replies      
Another easy method to move your customers' data to AWS - where I'm sure some three-letter agencies feast over each newly arrived platter of data.

I'm still waiting for the big leak on how AWS cooperates with NSA at large.

buro9 21 hours ago 2 replies      
Well, that's an unfortunate name.


Reminiscent of when Microsoft called an overlay dialog a "floater" and all the South Africans and Brits in the room started laughing.

http://www.urbandictionary.com/define.php?term=floater (the 2nd definition)

Users Have Been Betrayed in the Final TPP Deal eff.org
335 points by DiabloD3  2 days ago   98 comments top 15
walterbell 2 days ago 2 replies      
While we await public release of the final TPP text, here is a review of a draft IP chapter, http://www.freezenet.ca/an-analysis-of-the-latest-tpp-leak/. We need specialists and journalists to present credible analysis for the hundreds of millions of people who would be bound by the proposed laws.

"... breaking a copyright protection system (i.e. DRM, TPM, etc.) would land you in hot water ... if you need to circumvent a DRM for personal use, you are now liable for criminal penalties. Traditionally, for many jurisdictions, circumventing DRM is typically reserved for civil penalties. Criminal penalties implies that the government would foot the bill for enforcement. In civil cases, it is typically rights holders that go after individuals.

... civil damages do apply. How much are civil damages if an alleged infringer is found guilty? Well, that is up to rights holders, not a judge. Prosecution is able to determine the damages and a judge would have to accept whatever number comes out regardless of evidence to the contrary. A good example might be that a song may be sold for 99 cents, but the damages sought could be in the millions for all the prosecutors are concerned.

... an act of copyright infringement as set forth in this trade agreement doesn't actually have to occur before the authorities are sent after you. There just has to be evidence that infringement is taking place or is about to take place.

... if you have a cell phone, and you have something on it that could be considered infringing, authorities have the right to seize it because you are suspected of trafficking. Of course, you could also be liable for civil and criminal penalties as earlier outlined and could be fined any amount copyright holders feel like.

... you could be fined on the spot because of your cell phone on top of it all. No need for a judge here. Also, no, you may not get your cell phone back. It could be destroyed ... Who gets to pay for all of this? According to page 77, you do."

codingdave 2 days ago 4 replies      
While I'm certain there will be areas of concern, I'm unclear how we are supposed to intelligently send our concerns along if the text of the agreement will not be available for 30 days. We'll look like idiots if we start quoting language from a 2 month old leak if that language was not in the final agreement.
hackuser 2 days ago 0 replies      
Paul Krugman's take from March:

I'd argue that it's implausible to claim that TPP could add more than a fraction of one percent to the incomes of the nations involved; even the 0.5 percent suggested by Petri et al looks high to me.

These gains aren't nothing, but we're not talking about a world-shaking deal here.

So why do some parties want this deal so much? Because as with many trade deals in recent years, the intellectual property aspects are more important than the trade aspects. Leaked documents suggest that the US is trying to get radically enhanced protection for patents and copyrights; this is largely about Hollywood and pharma rather than conventional exporters.


hodwik 2 days ago 3 replies      
They're really hurting themselves with this legislation; I, for one, welcome it.

If pirating Beyoncé will put you in jail, more people will be willing to listen to all of the great free music on sites like Bandcamp and SoundCloud.

That's the major benefit to this legislation as I see it. Overpriced pop media garbage will become decreasingly popular in exchange for better but less-popular works. As recording and film companies see large swathes of the population leaving their works for independent producers, they'll lower their prices to try to stay relevant.

I say this even as a frequent media pirate -- this is not the end of the world, far from it. If this legislation has teeth, it will birth a new world of free creativity, of the people and for the people.

Edit: The same, I propose, will be true of opensource hardware.

thedangler 2 days ago 3 replies      
This makes me think that it would be nice to have two passwords for your device: one to access it, and one to erase the device and go back to default settings. Unless that is illegal too.
hijjjdbind 2 days ago 0 replies      
The only effective protest would be mass coordinated breaking and flouting of these laws if the TPP gets passed.
nerdcity 2 days ago 1 reply      
Who fights for the User?
pc2g4d 2 days ago 0 replies      
A little early to freak out, don't you think, EFF? Sure, there was some nasty crap in the leaked text, but what's going to actually be debated and potentially passed as law is the text that was agreed in the wee hours of the morning and isn't available yet. Some things in the negotiations seem to have actually gone against the U.S. IP approach, e.g. biologics protection term. To me it seems wiser to wait until the final text is out before deciding whether/in what ways to oppose it.
ck2 2 days ago 0 replies      
If you think they would never hassle people over this stuff, let me remind you that every day the TSA holds people without the ability to contact a lawyer.
PSeitz 2 days ago 0 replies      
Not everyone has a twitter account, and not everyone wants one.
danharaj 2 days ago 2 replies      
The incumbent powers will say that this promotes a lot of freedom and a lot of rights. Free trade, free markets, freedom, freedom, freedom. Property rights are individual rights! You don't want to be a dirty socialist, do you? Property rights are the underpinning of all freedom. If you refuse to strengthen them, you refuse to strengthen liberty, you monster!

Here we see property rights being strengthened to the point of undermining popular sovereignty! How can they be called rights when the stronger they are and the more aggressively they are enforced the more authoritarian the world feels?

tosseraccount 2 days ago 0 replies      
Any free market policies being proposed for lawyers and doctors?
ranprieur 2 days ago 0 replies      
You can't be betrayed if you weren't promised anything. Users simply have no power inside the system.
spacemanmatt 2 days ago 1 reply      
We are all Chinese, now.
narrator 2 days ago 1 reply      
Music, entertainment, simple generic drugs and many other things are being made too inexpensive thanks to technology and globalization. I think these trade deals are attempts to put that genie back in the bottle and increase the price through extension of artificial rights and enforcement of criminal penalties against violators.
Show HN: Cancel your Comcast in 5 minutes airpaperinc.com
385 points by estsauver  5 days ago   257 comments top 62
dayjah 5 days ago 12 replies      
The second-worst experience I had with cancelling a service was with Comcast. Briefly: called up, told operator I wanted to cancel, they put me through to a special sales team attempting to keep me, eventually I pushed it through.

The real magic happens during the conversation with the sales rep. He asks why I want to cancel, and I explain that the cost is high relative to the quality. He offers me a more cost-effective package; I explain that I had that package in the past and that the slow incremental increases in cost had turned me off them as a company. He goes on to say, "well, that is your fault, I'd never set up auto pay for a service," and continues to make the case that the convenience of auto pay is actually an agreement with Comcast to increase the cost of my service without my explicit agreement to paying more.

Wow. Just wow.

Still, that's nothing compared to the calls I had with BT after someone hijacked my landline and placed hundreds of GBP worth of calls to Nigeria... but that is another tale for another HN story.

tombrossman 5 days ago 4 replies      
Nearly all of these contracts with Comcast or similar companies will have a postal address you can write to and cancel. Even if they provide no cancellation postal address, look up their 'Agent for service of process', this is always listed somewhere. I never call to cancel, this is a waste of time and not necessary.

I've done this repeatedly (use certified mail or recorded delivery; the term varies by country) and it's dead easy. I have cancelled service with no early termination fees at both Sprint and Verizon, and kept my subsidized phone while continuing on month-to-month service when they raised fees - a 'materially adverse' change - always read the fine print. Same thing with AT&T dial-up back in the day.

This service looks interesting but really there is no easier way than sending a proper business letter and keeping proof of delivery. Only once did a company dispute receiving my letter and I emailed a scanned copy of the proof of delivery and that was it.

It sometimes helps to remind the company that falsely reporting a bad debt onto your credit record carries huge fines and you will definitely be checking to make sure they don't 'accidentally' forget or lose your paperwork, and that you have complete documentation and are prepared to defend. Be polite but do be firm, too.

estsauver 5 days ago 12 replies      
Hey HN,

I was moving to Thailand ~6 months ago and I had to cancel my Comcast. Cancelling took way too long and was surprisingly frustrating. From talking with other people it seemed like the cancellation process was really frustrating for other people too, so we made this.

My partner Eli (HN username: EliPollak) and I really want your feedback more than anything. We're planning to expand into fixing other processes that are really more painful than they need to be. We're around to answer questions/chat and we're also available by email at founders@airpaperinc.com.


Moral_ 5 days ago 7 replies      
I didn't even have to call to cancel my service.

I ran my connection 24/7 for a month, pulled 33TB of data down, they dropped me as a customer. Most pain free process, and a nice fuck you to Comcast as well.

frankthedog 5 days ago 4 replies      
I actually just cancelled my Comcast account yesterday. The process was pain-free and easy, just told them "I'm moving in X days and do not need service at the new address."

Granted, it was true, but if you are having trouble cancelling for low-quality service or any other issue, you may have better luck using that reasoning.

engi_nerd 5 days ago 1 reply      
Cancelling my Comcast service for cable television and internet involved these steps.

1) Gathering up all of my Comcast owned equipment (cable boxes and remotes, but importantly, NOT my cable modem, which I owned outright) and going to the local Comcast office. There I waited in a long line in a crowded, un-airconditioned office (in the heat and humidity of a hot August day in Maryland) for almost 2 hours. Then handing over the equipment and marking my account as cancelled took 25 minutes.

2) Calling the Comcast support line after they tried to bill me at the end of the following month for not actually supplying service to a home I no longer lived in. I was placed on hold for almost an hour before finally speaking to a representative, who told me that she didn't understand what the problem was because Comcast had no record of actually sending me a bill for the month after I moved.

3) Receiving a call a week later demanding I return my "Comcast owned equipment" before I could receive a refund on my deposit. I countered that I owned the modem outright and had merely given Comcast the MAC address of the modem so that they could authorize it on their network. The representative demanded that I prove that I actually owned the modem. The receipt I emailed from when I purchased the modem was not enough for them, they demanded "more proof" but could not actually offer an example of what proof I could offer. I hung up on that representative and immediately filed a Better Business Bureau complaint (I know, the BBB is a bit of a racket, but sometimes it gets some results).

4) The following day I received a call from a senior customer service manager apologizing for the "mix-up" with marking my modem as being a Comcast-owned piece of equipment. I received a refund of my deposit a few days later.

asd 5 days ago 1 reply      
The way I see it, they are rude to you by making you go through hoops, so I am rude to them. I repeatedly say "Please cancel" to every new rep I am transferred to and/or every time they ask me a question or go into a spiel.

The only time I will not say "please cancel" is when they ask me any information that needs to be verified in order to cancel. This has worked with Comcast as well as a handful of other service providers that I wished to cancel. I have never been on the phone more than 6 minutes. Try it out.

aerovistae 5 days ago 3 replies      
You know you've pissed off your customers when people start building business models around cancelling your services so they don't have to deal with you.
walterbell 5 days ago 1 reply      
How does this work legally, are you acting as an agent, proxy or other authorized representative of the customer, when interacting with the vendor?

On the topic of security: is the customer's identity information, e.g. authorization passcodes required by the vendor, deleted from your database after the transaction is complete? You may also want to add some details on encryption.

treffer 5 days ago 1 reply      
There is an online cancellation service in Germany that will help you cancel just about every contract: https://www.aboalarm.de/

They have ready-made templates for many companies (esp. insurance and telco ones) where you just fill in your details, sign it via your touchpad, and let them send it in a conforming way.

I /think/ the business model is focused on post-cancellation: they'll show you other options for the canceled contract.

I'm stunned that this does not exist for the US. It seems to work in Germany.

koenigdavidmj 5 days ago 3 replies      
Am I the only one who never had trouble canceling? The process was just this:

1. Call, specify your account info, and say you're canceling.

2. They confirm that it's canceled and tell you to return the cable box (and modem, if you're renting one) to the local office.

3. You either put them in a big box out front, or if you're paranoid like me, you wait in line so that you get a receipt.

4. In a couple weeks, you get a check in the mail refunding you for the rest of the current payment period.

rochers 5 days ago 1 reply      
Great idea and nice execution. Suggestion: add SSL / HTTPS to your website if you're collecting all this sensitive stuff about me.
kayhi 5 days ago 0 replies      
Here's a recorded example of one person trying to cancel their service:


xmly 5 days ago 2 replies      
I've canceled Comcast service at least twice so far. Not much trouble. But the key problem comes after canceling: which other service are you going to use...
nathancahill 5 days ago 1 reply      
Cool idea! Your CSS/layout needs some love though, it's broken in FF (works better in Chrome).
pmalynin 5 days ago 1 reply      
I think you might have accidentally cancelled your site's internet too.
nobleach 4 days ago 0 replies      
I moved across the country to Comcast land. I had a decent job, so I signed up with all services (HD, phone, and internet). After a few months, I realized I didn't need a phone and didn't care at all about HDTV. So I called and told them I'd be backing down my service to Internet and normal cable. I got a guy who informed me that I was under a 2-year contract and had no choice. He was absolutely terrible... like so mean and insulting. I took it for a bit, then I realized: I don't have to be yelled at by a CS rep! I told him that I felt very uncomfortable with the way he was talking to me. He got even more belligerent. It was surreal; I have never had someone speak to me like this. He told me I was stuck, and that his hands were tied. I hung up. I called back because I realized he might just have been full of "poo-poo". I got another CS rep. She was very pleasant. She said, "Mr Nobleach, it appears you're under contract." I said, "that's fair, can you send me a copy of the signed contract?" She replied, "well, your acceptance of equipment equates to a signed contract". I said, "ok, the contract I didn't sign will suffice. I do need my attorney to look over it". They agreed to knock me down to no contract and whatever service would work best. I can't say it's the right course of action. I really did have an attorney wanting to look at my "contract". I'm sure they'd stand no chance against Comcast. But it did get me out of some contract I never knew I was in.
lazyant 5 days ago 2 replies      
There should be a law (I know, too many laws!) that the cancellation process must be the same as sign-up: for example, you can sign up online for many services but need to call to cancel.
ergothus 5 days ago 1 reply      
While I'm a big fan of the idea of paying a small fee to let others deal with such hassles, this is one of the tasks I vastly enjoy. "You suck, you've always sucked, you overcharge, underdeliver, and have terrible customer service". Repeat with more emphasis the more they argue.

Mostly I don't enjoy being a jerk, but most of the time they demand it. (Occasionally, when moving, it's been painless, and I have no need to be rude, but that's been the exception)

j_s 5 days ago 0 replies      
Toll-free customer service numbers are free calls in Skype, but Verizon Wireless doesn't accept Skype calls anymore. For them I've switched to using Google Hangouts, which came with the bonus of being able to include my parents in the conversation (adding the phone call into a normal audio group chat).

Sitting on hold isn't so bad when it's free and you can focus on reading Hacker News or whatever easily while you wait.

BlakeCam 5 days ago 0 replies      
Organizing customers against companies that try to exploit their customers is an awesome concept. I suspect it will be a battle, but I wish you success, and hope you earn enough to make the effort worthwhile.

What about expanding this?! Some companies frustrate me with their customer abuses, and you could organize customers against them.

- Cell phone companies send bills that are confusing gibberish so we won't read them. So, provide a service that scans my bill and sends me a readable summary with red-lines on anything surprising or changed.

- Banks and credit card companies send 'privacy statements' that are rambling non-sense. Provide me a service that scans my statements and gives me a nice summary of anything surprising.

- Companies send endless offers for loans or useless services (paper 'spam') mixed in with bills. Filter this out for me.

- Alert me to hidden fees.

- Read the legalese and summarize it.

Basically, provide a service that accepts my bills and statements from monopolistic companies and gives me a short, readable, reasonable summary. When something is unreasonable, help me protest or fight it, or find an alternative.

Something like a consumers union, I suppose. I'd gladly pay well for this.

andhess 5 days ago 0 replies      
I had to open a Comcast account after a roommate moved out. His checkout/my opening was very straightforward until the modem that worked for his account didn't work for mine. The "new system" that our account was directed through didn't accept the exact model for our modem, forcing us to get another one. And there is no way I would use their modem/router setup.
x1798DE 5 days ago 0 replies      
Maybe I just got a good person, but when I cancelled my Comcast I just called up, they asked why, I said, "That's not important, just cancel it," and they did.

That said, I think this might have been a few weeks after some high-profile example of someone recording them dicking someone around for a while, so maybe they had been told to dial it back.

lerxst 5 days ago 1 reply      
Their website seems to have some trouble loading right now. Cached version: http://webcache.googleusercontent.com/search?q=cache:JQPoiC4...
monopolemagnet 5 days ago 0 replies      
A visa to China sounds like a winner: I never had to do that myself, but it seems like arcane, shady magic that some friends from Hong Kong knew how to perform whenever we needed to go.

There are plenty of these infrequent, but non-zero, official, lifestyle and corporate customer processes that are PITAs, for a variety of reasons. People would gladly pay to avoid the hassle themselves, where cost-efficient, eventually-expert, standardized "schlepping" can make a good business model or 10. Relieve pain for $; enough people will gladly hand you $ to make it happen.

chrisBob 5 days ago 0 replies      
I like the idea a lot. This is one of the cases where most people try to fix something by calling even though they would be much better off sending in a form letter. Check your contract. It probably says something along the lines of "service will be terminated within 5 business days after written notification" but they sure don't guarantee anything good will come from calling customer service on the phone.

I would be a little worried about basing a business on this because someone else could start a site with a free collection of the letters to download. Until then this would be well worth the money.

tracker1 5 days ago 0 replies      
I had XM radio pretty early on... at one point I briefly had three radios on the account. When I called to cancel the third (as I no longer had that vehicle), I was placed in a special queue for a different "cancellation department". Not once but twice (after a 45-minute wait), the call was mysteriously (yeah, right) dropped when they couldn't talk me out of it. After the third call and hold time, I cancelled the service outright...

That is the most horrible cancellation experience I've ever had... I imagine people have seen equally bad from Comcast, and back in the day, AOL.

madgoat 5 days ago 0 replies      
I've cancelled Rogers up here in Canada in a painless, lie-free way, without being an asshole or resorting to swearing.

It's easy, just say you want to cancel, act like an imbecile, refuse to be transferred to their retentions department. When they offer you a better deal, or ask why, just repeat "I want to cancel" over and over, and if they still give you guff, ask to speak to their supervisor (all after getting their name and agent number).

I was able to cancel everything in under 5 minutes.

Remember, you don't have to be nice or polite to them when they're trying to upsell.

csears 5 days ago 1 reply      
Really like the conversational approach. Nicely done!

It would be good to explain upfront how you actually do the cancelation, in this case sending a physical letter on my behalf. I had to fill out the form to get to that part.

rascul 3 days ago 1 reply      
Took me about 5 minutes and no hassle to cancel Comcast. I simply took the equipment into the local Comcast office and turned it in, no dealing with phone representatives or anything. When asked why, I simply refused to answer and that was that.
maaja13 2 days ago 0 replies      
Someone is already offering to do it for $4 lol


United857 5 days ago 0 replies      
This won't work for everyone, but if you live near a Comcast brick and mortar store (e.g. Sunnyvale, CA), just go there.

If you tell them you want to cancel in person, and bring all your equipment with you, the reps aren't going to argue with you much.

There is of course still the time cost to wait in line (15-20 minutes on a weekend IME) but at least it saves in frustration in them trying to get you to change your mind.

kisna72 5 days ago 1 reply      
The only problem I see is that you are asking for personal info (credit card, etc.) without encryption. You should fix that ASAP. Other than that, great job.
orthoganol 5 days ago 0 replies      
Getting a Chinese visa was painless for me. I think it's gotten a lot easier, maybe with the advent of the 10-year visa for Americans; I don't know when you last tried.

How do you handle Comcast's security questions and all that? Why is there no HTTPS for a site with sensitive info? Have you guys actually done this for others, or is this a see-what-happens, figure-it-out-along-the-way kind of side project?

markbnj 5 days ago 0 replies      
Comcast's customer service can be spectacularly aggravating, but are that many people really struggling with canceling their primary ISP? I would think something like "Move your Comcast service in 5 minutes" would be more useful. Perhaps even "Get a Comcast service call in 5 minutes." Now that would get some downloads in the app store.
dheera 5 days ago 0 replies      
Is there anything that prevents me from just sending an e-mail/snail-mail to Comcast with the statement "Please terminate my contract effective MM/DD/YYYY. You do not need to respond to this message. Thank you for your compliance." and just stop payment at the appropriate date as defined by the contract (likely the end of month MM)?
wnevets 5 days ago 1 reply      
I canceled my comcast last week and I was on the phone for maybe 5 minutes. Maybe I'm one of the lucky ones?
basseq 5 days ago 0 replies      
My recent Verizon cancellation went very smoothly. No "retention" specialists. Just said, "I'm switching to a competitor."

I'm saving something like $50/mo with the change. Verizon sent me an email offering me a $40 statement credit to re-up. I laughed in their virtual faces.

kw71 5 days ago 0 replies      
Even after allowing third-party JavaScript I still can't read the site (I get a San Francisco park... what?) and the "Cancel my Comcast" button doesn't work. Blah. I'll call Comcast myself, and for $5 I'll show you how an "href" works.
homulilly 5 days ago 2 replies      
Cool idea but I don't think I'd want to send personal information on a website without HTTPS.
fgtx 5 days ago 1 reply      
Hi! Since I'm not from the US I have no idea whether this idea is good or not, but here are a few points on your fix form:

- No email validation, and no input masks for the phone number, zip code, or Comcast client number: your user could easily mistype something without noticing.
vskarine 5 days ago 1 reply      
Error 522, Ray ID: 22f336647a6e1e65, 2015-10-02 20:38:57 UTC. Connection timed out.

Comcast is trying to DDoS? :P

kylehotchkiss 5 days ago 0 replies      
Once upon a time I lived at an apartment and comcast did not bill me for an entire year of internet. It was awesome. And more reliable than the comcast my parents paid for.

Now I have Lumos, which is old Ntelos, which is even better than my free comcast.

sabrinaleblanc 5 days ago 1 reply      
Really interesting idea! But I'm not sure how I feel about a company impersonating its customers, because that's what you would do, right? I think people will be hesitant to provide personal information...
smoreilly 5 days ago 1 reply      
This is perhaps the most glorious thing I've seen on here in a long time. I didn't find getting a Chinese visa all that difficult, though, so I'm surprised to see it on the list of next issues.
chrissnell 5 days ago 0 replies      
I've never had a problem cancelling Comcast. I just tell them that I'm moving out of the country. There's no point in bothering to try to retain me.
estsauver 5 days ago 1 reply      
Hey everyone! We were a little surprised by the response, but our servers should be back. Please let me know at earl@airpaperinc.com if you see any more outages.


blazespin 5 days ago 0 replies      
All those legal services (wills, divorce, immigration, etc.), of course, though I imagine they're already well picked over. Maybe you can partner?
kelseyfrancis 5 days ago 0 replies      
If you just call and tell them you're moving to an area that Comcast doesn't service, they don't give you any hassle at all.
jason_slack 5 days ago 1 reply      
The concept here reminds me of GovWorks.com (in the movie Startup.com). Their aim was to fix government red-tape.

This reminds me of that documentary.

munificent 5 days ago 0 replies      
Ha. I just got fiber installed yesterday and I've been meaning to cancel my Comcast since then. Timely!
dreaminvm 5 days ago 0 replies      
Love the idea, although I'm wondering how user data will be protected across all of these different workflows.
trevyn 5 days ago 1 reply      
Tried to sign up for the SF parking permit thing, got a CloudFlare error 520.
m3andros 5 days ago 1 reply      
Getting this error: Error 522, Ray ID: 22f2db80ca4a181c. Connection timed out.

Overwhelmed by traffic?

ck2 5 days ago 0 replies      
Member retention at many companies is often closed before 8 am; just call early and you end up talking to a regular person in billing who will do an immediate cancel.

Alternately just pick a place the company doesn't have service and say you are moving there.

enahs-sf 5 days ago 1 reply      
This is a great example of using Typeform to bootstrap your way into providing a service. Really cool idea, and I'm glad these guys are doing this. I can't wait to cancel Comcast!
pavornyoh 5 days ago 1 reply      
I like this idea. Good job!! When will other services be added?
abritishguy 5 days ago 0 replies      
Just say you are leaving the country: no more retentions.
zobzu 5 days ago 0 replies      
Why you'd have to lie to leave a service is inconceivable to me. Such companies should be paying good money until this is rectified, or simply be banned.
outside1234 5 days ago 0 replies      
Great idea!
lfender6445 5 days ago 1 reply      
I'd like to try this without fearing it will actually cancel my account.
alokedesai 5 days ago 1 reply      
Damn, this is genius
I met you in the rain on the last day of 1972 craigslist.org
436 points by brandonmenc  5 days ago   95 comments top 28
Cidan 5 days ago 6 replies      
I think a lot of people miss the point of this posting -- it doesn't matter if it's real or if it's fake, but rather it's an evolution of artistic expression. Someone is using Craigslist as an entirely new medium for writing a bit of poetry or short story. The potential for this type of work to reach people is huge, and that to me is the most exciting part of it all.
Camillo 5 days ago 14 replies      
I'm not sure what people find "beautiful" or "touching" in this story. A man is wracked by guilt over what he has done, to the point of deciding to end his life, but then he forgets all about it as soon as a pretty girl gives him attention. Inasmuch as it is believable, this is a story about how all our lofty notions of justice, honor, purpose, etc. are just bullshit, and it's really all about fucking, consuming, and keeping those genes alive. Matter over mind, humanity revealed as a mere dusting of thought over the throbbing mass of limbic functions.

It is depressing. Its only redeeming quality is the fact that it's obviously fake.

mjs 5 days ago 1 reply      
I once read a story (I think in the Washington Post) about two people meeting up in similar circumstances sometime in the 60s or 70s, and then, when parting, promising to meet again at a specific location and time far into future.

And so after decades, one party went to the arranged meeting point with great anticipation having told their family about the story, etc. But the other party didn't turn up. Eventually the reporter tracked them down, and they couldn't remember anything about the original meeting.

Unfortunately, I've never been able to find this story again, despite spending quite some time looking! But, going by this story, I think it's quite possible that often when something like this happens it's far more significant to one person than the other, partly because some people have more drama in their lives than others.

matryoshka4811 5 days ago 0 replies      
How stunningly beautiful. It doesn't matter to me if it's real or not. The writer created a world and two people and I got to live that as the reader for a little while. If the writer sees this, thank you.
JadeNB 5 days ago 0 replies      
I didn't know what to expect from the title, but it wasn't that. That was amazing and heartbreaking, and reads like an elegant work of fiction. (The cynical part of me wonders if it is, but I don't like the idea and so relegate it to parentheses.)
runamok 5 days ago 1 reply      
This reminds me of an anecdote where someone who jumped off the Golden Gate Bridge wrote that if a single person smiled at them while they walked to the bridge they would not jump off. That sadly didn't happen... I think caring for one another in even minor ways can have a huge impact. http://www.sentinelandenterprise.com/news/ci_25438684/just-s...
blacksqr 5 days ago 1 reply      
How hard could it be to check Boston society page engagement and wedding announcements for 1972 and 1973?
rkho 5 days ago 2 replies      
Fascinating. Using Missed Connections as a platform to set a context. It could be fictitious, but it's more than achieved its goal for everyone but the intended recipient.
upwords 2 days ago 0 replies      
I read your story in the rain on the last day of my failed vacation. I do not know if you are reclining in your La-Z-Boy, bemused and self-congratulatory, or, in a more dramatic parallel irony, you haven't a clue what Y Combinator is and don't know that you are an old man who did achieve his 15 minutes. Either way: delightfully crafted.
_nato_ 5 days ago 1 reply      
Quite Murakami-esque. Touching.
rilut 5 days ago 0 replies      
Beautiful. I also like this NYC Craigslist story http://www.craigslist.org/about/best/nyc/4301059953.html
j2kun 5 days ago 0 replies      
FWIW, I have seen some fiction I've enjoyed in the format of "craigslist ad." Here's one that has stuck with me as a good example:


cousin_it 5 days ago 0 replies      
Sad story. I guess the girl was interested in the guy at first, but then lost interest when he started talking about his emotional problems.
malusmage 5 days ago 0 replies      
Thanks for sharing! I didn't expect this to be a real post given the title, but it was a great read.
learning_still 5 days ago 0 replies      
I don't believe this post, and I think the author should have done a better job of making it realistic since that was clearly the goal of using craigslist. There are countless authentic (and believable though false) anecdotes available today because of the internet. I would have liked to see the author do more research, to have read some of the aforementioned writings, ditch the traditional rulebook, and create something I could believe. They clearly have a knack for writing, but their adherence to so many traditional rules destroys what they were trying to accomplish. Every day people with much less talent make up stories that are readily believed. (Reposted links with claims of being the original poster are the easiest to find examples of this.) It's a shame. This was a marvelous idea. And I hope the next time I read this author's work, I'll have no choice but to believe.
ww520 5 days ago 0 replies      
That's very touching. Wonder if it would lead to anything at all.
Mz 4 days ago 0 replies      
Ah, the glory of unrequited love, always so much better than the real thing where we have to deal with their literal and metaphorical dirty laundry. In the absence of actual reality, we get to imagine them a perfect being. We get to fill in the blanks with such fierce emotion and lofty assumptions about someone we never really knew.

The beginning of the piece has such wonderful and evocative detail, it is either a genuine memory or well researched fiction or some mix of the two.

Unlike other cynics here, I don't think it can be easily dismissed as merely "Wow, our gonads sure drive us." But then I believe we are spirits in the material world and I have been sustained through many hard things because of strange and kind encounters myself. Edit: Also, my father and ex were both soldiers and I have known many military personnel. So, there's that.

DonHopkins 5 days ago 0 replies      
I met you on the London Underground in 2006, and I was such a horse's ass for not holding the doors open...


malkia 5 days ago 0 replies      
Thank you!
InclinedPlane 5 days ago 0 replies      
People often denigrate new media because they have a habit of comparing the average of the new with the cream of the traditional. "Look what I ate for lunch today" on instagram vs "Hamlet by Wm Shakespeare".

Here we have an excellent example of something that needs to be slid into the folder marked "In defense of the internet".

mlamat 5 days ago 0 replies      
Is that you, John McCain?
kimura 5 days ago 1 reply      
Good read. Thank you for sharing. I hope they reconnect.
Uptrenda 5 days ago 0 replies      
Oh god, it's raining inside ;-------; That was such an elegant short story.
Gobiel 5 days ago 0 replies      
It lost all credibility at "thusly".. https://en.wiktionary.org/wiki/thusly
njharman 5 days ago 0 replies      
Nice story. But I'm gonna be a pooper and state that people who really want to kill themselves just do it. People who take multi-hour walks are looking for hope, for anything to use as an excuse not to kill themselves. Good thing this guy found his.
Animats 5 days ago 0 replies      
Is there a search engine for that?

You should be able to search all Government surveillance cams for your own history.

Asbostos 5 days ago 1 reply      
Could be summarized as "I murdered some people and felt guilty, then I flirted with a beautiful woman which saved me from suicide, then I forgave myself for my crimes, now I'm lonely so I want to meet that woman again."

I don't have any sympathy for this man who's had just about the best possible life he could have, given what he did.

I'll bet there are plenty of prisoners serving life for murder who would also like to meet some old flame but don't have the freedom to post on Craigslist.

Amazon QuickSight Business Intelligence by AWS amazon.com
332 points by polmolea  23 hours ago   137 comments top 23
ececconi 22 hours ago 6 replies      
I've been working exclusively on implementing BI solutions for the past five years. The thing that depresses me the most is not that BI solutions take forever to implement and cost a lot, but that clients often just don't understand the data they are trying to report on. Many times it leads to over-engineered BI solutions built for one report that a client says is mission-critical but that is never used.

I'm sure more technology focused companies don't have any issues using these self-service models, but you wouldn't believe the innumeracy that some people have in industry.

vkb 22 hours ago 6 replies      
Having worked in some form of BI for almost the entirety of my career now, I can say there is not a single, consistent form of BI dashboard that is prevalent across companies. Every solution ends up being unique, because every company has a unique data setup, stakeholders, definition of metrics, and access needs.

I've worked with Tableau, Domo, Oracle products, you name it. What's the solution that is passed around the most? Excel sheets, because they travel easily and have all-around permissions.

I've been waiting for an out-of-the-box solution that's at least relatively easy to leverage across different organizations, but I haven't seen a painless one yet.

I'm hopeful that Quicksight, while not the be-all end-all solution, provides an example for others to follow, if it does end up being easy to set up and use.

jsmeaton 14 hours ago 1 reply      
There are so many BI tools around that it's hard to figure out which solution is going to be best for a particular use case. Creating further confusion, it seems like most "enterprise" BI products aren't explained properly on their websites and are hidden behind "request a demo". It's nearly impossible to evaluate all the possibilities without going crazy.

I want to be able to generate my domain models in some way. Point and click data descriptions are awful. Letting certain people work with raw data is fine, but a lot of users are going to want to work with names that make sense to them. Let me define models with text, just like ORM models.

I want row based security. Let me assign groups to values on certain models. This essentially boils down to hidden filters and required tables.

It should all be web based. I'm not exposing my database directly to customers.

It should definitely not cost 100k a year.

I like the idea of QuickSight, but I can already see that it's not going to work for my needs. But at least they give an upfront description and price. Here's hoping the pricing model drives down the crazy license fees the other vendors are extracting.
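The "row based security" wished for above (hidden filters tied to user groups) can be sketched roughly like this. Everything here, including the table, group names, and the string-appending approach, is a hypothetical illustration, not how any particular BI product implements it:

```python
import sqlite3

# Each user group maps to a hidden predicate that is always appended to
# queries against a given table -- a toy version of row-based security.
GROUP_FILTERS = {
    "emea_sales": ("orders", "region = 'EMEA'"),
    "admins": ("orders", "1 = 1"),  # admins see every row
}

def secured_query(conn, group, base_sql):
    """Run a SELECT with the group's hidden filter appended.

    Naive on purpose: string-appending WHERE breaks as soon as the base
    query already has a WHERE clause; a real tool rewrites the query AST.
    """
    table, predicate = GROUP_FILTERS[group]
    assert table in base_sql  # crude check that the filter is relevant
    return conn.execute(f"{base_sql} WHERE {predicate}").fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "EMEA", 100.0), (2, "APAC", 250.0), (3, "EMEA", 75.0)])

emea_rows = secured_query(conn, "emea_sales", "SELECT id, amount FROM orders")
all_rows = secured_query(conn, "admins", "SELECT id, amount FROM orders")
print(len(emea_rows), len(all_rows))  # prints: 2 3
```

The same idea is what the comment's "hidden filters and required tables" boils down to: the end user never sees the predicate, but every query they issue is silently constrained by it.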

m0th87 22 hours ago 2 replies      
We use Chartio at my company. If you haven't tried it yet, this product is stellar - almost all of our BI needs are handled directly by stakeholders rather than having to go through an engineer, and their support is very helpful and responsive.

I would consider moving off to something like Quicksight if it supported redis. Some of our BI-related data is stored there, and currently to get at it we have an app that proxies data from there to postgres for Chartio's sake.

mirkoadari 21 hours ago 0 replies      
We're trying out Apache Spark with Apache Zeppelin and it's been a pleasure so far. We faced the same problems that everyone else mentioned here -- data is not accessible to people who need it and every datasource requires different tools.

What we like about Apache Spark is that it can take any source and provide the same very fast and programmatic (code reuse!) interface for analysis. Think JSON data dumps from MixPanel, SQL databases, some Excel spreadsheet someone threw together etc.

Apache Zeppelin is a little bit limited in the visualization that comes out of the box, but the benefits of having a shared data language across the company is just such a huge plus. Also, super easy to add data visualization options and hopefully companies will start to contribute these back to the project.

brador 22 hours ago 9 replies      
I say: Bezos is the #1 CEO in Tech right now.

Amazon takes risks, and ships innovative products and services every month. Their risk taking is relentless and they take failure in their stride. They "get" how to do push.

No other tech company comes close to their pace. And the key to that is the juicy, under-the-radar micro-manager that is Jeff Bezos. If there were an award for best tech CEO of 2015, I'd nominate him in a flash.

solutionyogi 22 hours ago 2 replies      
For me, the most interesting part is SPICE. To implement a data warehouse, one creates a star-schema (OLAP) database from a regular OLTP database, which involves a massive amount of work. It looks like SPICE aims to replace the need for an OLAP database and produce similar data directly from OLTP systems. I would love to know more about this engine. I hope Amazon open-sources it (though I highly doubt they will).
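For readers unfamiliar with the OLTP-to-star-schema step described above, here is a toy sketch. The table and column names are made up, and real warehouse ETL is vastly more involved; the point is only the shape of the transformation, from one normalized transactional table to a fact table plus dimension tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# OLTP side: one normalized transactions table.
cur.execute("CREATE TABLE sales (id INTEGER, sold_at TEXT, product TEXT, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", [
    (1, "2015-10-01", "widget", 9.99),
    (2, "2015-10-01", "gadget", 24.50),
    (3, "2015-10-02", "widget", 9.99),
])

# OLAP side: a tiny star schema -- a fact table referencing a dimension table.
cur.execute("CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE fact_sales (product_key INTEGER, sold_at TEXT, amount REAL)")
cur.execute("INSERT INTO dim_product (name) SELECT DISTINCT product FROM sales")
cur.execute("""
    INSERT INTO fact_sales
    SELECT p.product_key, s.sold_at, s.amount
    FROM sales s JOIN dim_product p ON p.name = s.product
""")

# Analytical queries now hit the star schema instead of the OLTP table.
rows = cur.execute("""
    SELECT p.name, SUM(f.amount) FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)
```

At real scale this split pays off because the fact table is append-only and the dimension tables stay small, which is exactly the workload a columnar, in-memory engine like SPICE claims to accelerate.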
andrewstuart2 22 hours ago 2 replies      
I'm glad to see they put a little more effort into the product page for this release than seems typical for AWS products. It's much easier on the eyes and, at least for me, much more readable.
ticktocktick 22 hours ago 0 replies      
A lot of what you are paying for in BI solutions is implementation costs, and then to a lesser extent yearly maintenance and support costs. I wonder how they intend to lower implementation costs, since that is a lengthy and inherently difficult process.
realcul 21 hours ago 2 replies      
nostromo 22 hours ago 5 replies      
At first glance, this seems like a Tableau competitor. So I'm surprised to see Tableau listed as a partner.
buremba 22 hours ago 1 reply      
SPICE is definitely the most interesting part of this announcement but there isn't enough information about it.
dsjoerg 22 hours ago 0 replies      
AWS is the new IBM
tidon12 22 hours ago 0 replies      
Seems like a cool way to further increase the "stickiness" of AWS
erik-n 21 hours ago 1 reply      
>Super-fast, Parallel, In-memory Calculation Engine (SPICE)

Now that's a nice acronym! I think Amazon have skilled people in charge of marketing. They make their announcements feel exciting but not too "markety".

pjc50 22 hours ago 4 replies      
Presumably this is for data that you want graphs of but doesn't fit in Excel?
sjg007 21 hours ago 0 replies      
Amazon is basically eating away at all the big data startups.
gshx 22 hours ago 1 reply      
For one, this: "makes it easy for all employees to..." work with business insights. Data is usually inaccessible to everyone for a reason.
simo7 16 hours ago 0 replies      
So is SPICE an actual data warehouse or is it more a query engine such as Presto, Hive or Spark?
curiousjorge 21 hours ago 0 replies      
I'd really hate to be in the same space as amazon.
tlunter 21 hours ago 2 replies      
I don't think you understand re:invent. This happens every year.
blahyawnblah 22 hours ago 0 replies      
I really like http://www.manifestinsights.com/ for visualizing data.
smileysteve 22 hours ago 0 replies      
They mention integration with RDS, which is kind of scary considering the data a lot of databases hold.
Nylas N1 extensible open-source mail client nylas.com
409 points by ropiku  2 days ago   121 comments top 32
drdaeman 2 days ago 4 replies      
Beware: from what I understood, this isn't a self-sufficient MUA in the traditional sense; it's a thin client that offloads interaction with the actual mail server to a remote system. That is, the credentials to access the mailbox are entrusted to a third party.

> See the README for full instructions on setting up the sync engine, API, and N1 on your local machine.

Sadly, there are no instructions in the linked document yet.

I wonder, can the sync engine and whatever else be bundled with a client, so it'd be a self-sufficient standalone piece of software?

ar7hur 2 days ago 2 replies      
I've used the Nylas API for 9 months for my side project iOS email client. It works very well and the team is super responsive. Documentation is clear and complete. It makes it super easy to build any email client.

Eventually though, I decided to stop using it because I'm not confortable with having all my messages stored by "one more" entity in the cloud. Back to good old IMAP with all the pain it brings.

I'm often wondering if such an API could work in a full end to end encrypted mode, i.e. messages would be encrypted as they arrive before they are stored on the server, and only the client would have the key to decrypt them.

grinich 2 days ago 5 replies      
Hi HN-- Michael from Nylas here. We're super excited about this launch, and happy to answer questions about N1, including plugin architecture, design, features, etc.

A few Nylanauts will likely hang out in this thread today. Would love to hear what folks think! :)

devit 2 days ago 2 replies      
I don't quite understand how the business model of the Nylas platform that this client depends on works.

Since it provides up to 10 accounts for free, it looks like end users (for whom 10 accounts are more than enough) are not supposed to be the paying customers, but rather other companies that want to integrate with Gmail, etc.

But... does this mean that these companies are supposed to encourage the user to connect their e-mail accounts to the company's Nylas platform account, thus giving Nylas all the user's e-mails and weakening e-mail privacy even more?

Or is the business model something else...?

sam_goody 2 days ago 1 reply      
First, we had SquirrelMail and its ilk.

Then, Roundcube became the standard.

Their dominance is being eroded by Mailpile, RainLoop, and Nylas.

Around the edges, we have Peps[1], Mailr[2] & Kite[3] which may all someday take off.

Zimbra and the well known groupware vendors (OnlyOffice, Horde, Citadel, Kolab) are competing in more-or-less the same space.

Probably dozens of other projects on the same level that I have never even heard of. I would love some sort of comparison, even a biased one written by the respective teams. Anyone have useful links?

[1] https://github.com/MLstate/PEPS
[2] http://pusto.org/en/mailr/
[3] http://khamidou.github.io/kite/

Jemaclus 2 days ago 2 replies      
Did anyone else get an email from Nylas that they never signed up for? I don't recall ever signing up for newsletters, so I'm curious as to how I heard about this in my inbox instead of on HN. Kind of annoying.
natrius 2 days ago 2 replies      
Google has all my data, and that allows them to build incredible features for me, like Google Now and Inbox's automatic parsing of travel and event emails.

I'd rather have all my data myself so I could pick great features from anyone who makes them. Nylas sounds like a part of the puzzle to make that happen. I wish you guys the best of luck.

ori_b 2 days ago 2 replies      
Is it too much to request a modern mail client that supports threads in the UI?

I find it hard to follow conversations without some form of visual indicator of who replied to what within a thread.

charlesdenault 2 days ago 0 replies      
I'm super excited about this. I'd love to see a serious collection of apps/extensions for the platform. I love the ecosystem around gmail, but I despise using their web-ui. I'd love to have the flexibility baked into a modern app. I've tried every email app and none of them cut it. They start off promising, but quickly degrade into feature-bloat (Airmail) or development is discontinued (Sparrow, Mailbox?).
polpo 2 days ago 1 reply      
Is JMAP [1] support on the roadmap? Once email providers (namely Fastmail, who wrote the spec) start supporting it, of course.

[1] http://jmap.io/

numbsafari 2 days ago 0 replies      
Plugins via JS ... another sad day for GNU Guile. ;-)
zx2c4 2 days ago 0 replies      
kstenerud 2 days ago 1 reply      
Damn. Was hoping I could just build it myself, but the bootstrap script is broken.

---> Cleaning apm via `apm clean`

dyld: lazy symbol binding failed: Symbol not found: _node_module_register Referenced from: /Users/karl/tmp/N1/apm/node_modules/atom-package-manager/node_modules/keytar/build/Release/keytar.node Expected in: dynamic lookup

dyld: Symbol not found: _node_module_register Referenced from: /Users/karl/tmp/N1/apm/node_modules/atom-package-manager/node_modules/keytar/build/Release/keytar.node Expected in: dynamic lookup

/Users/karl/tmp/N1/apm/node_modules/atom-package-manager/bin/apm: line 28: 27072 Trace/BPT trap: 5 "$binDir/$nodeBin" --harmony_collections "$binDir/../lib/cli.js" "$@"

kawera 2 days ago 1 reply      
Terrific job, congrats to all the team! Curious to know why you chose MySQL and not PostgreSQL.
arsalanb 2 days ago 1 reply      
Finally! Always wanted something like this! It looks beautiful! Sorry if this sounds too flimsy, but how an app looks is probably a huge factor for me when considering whether to use it. This always pushes me away from open source, because those apps aren't designed (visually) as well as their proprietary counterparts. Love this project!
jbb555 2 days ago 0 replies      
I thought this looked nice until I saw that it's not an email client at all; it's an interface to a backend service. Sure, you can get the backend and run it yourself, but who wants to do that? Pity, it looked nice.
mundanevoice 2 days ago 1 reply      
Mutt FTW :D
grinich 2 days ago 0 replies      
Hey folks-- sorry about the invite system. I know we sent out download links, but it turns out way more folks have signed up than we originally planned.

Relatedly, if any experienced devops/sre folks are looking for a new job, please ping me. ;) mg@nylas.com

pbreit 2 days ago 1 reply      
The only feature I really care about these days is automatically grouping "important" and "non-important" emails (and notifying on important).
cjbprime 2 days ago 1 reply      
Congrats, this is great! What's the Nylas story for mobile?
ultim8k 2 days ago 0 replies      
Now this is big bold news! Kudos guys! I'm extremely excited!
nicksergeant 2 days ago 1 reply      
I love this and want to try it, but I find it strange that they emailed me to tell me about it being available, when I can't actually use it without an invite code.
j0hnM1st 2 days ago 1 reply      
Tried the documentation for setting up the sync engine; I must say it's a bit confusing. Can we install it on, say, an AWS VM?
brianjking 2 days ago 0 replies      
Interested...hesitant though with the idea of giving up more access to my private files. Signed up, looks like I hit a waitlist.
ashemark 2 days ago 1 reply      
Hi, do you plan on putting together a pkgbuild for arch linux based systems?
humility 2 days ago 0 replies      
Congratulations and thanks, had been looking for one for eons now. Looks promising!
tapoxi 2 days ago 1 reply      
Any hope for an RPM or Yum repo for us Fedora users?
mdeebz22 2 days ago 0 replies      
this is INCREDIBLY cool - have been waiting for this for a long time. Way to go team! nicely done!
stuaxo 2 days ago 0 replies      
Kind of reminds me of Geary.
4tacos 2 days ago 0 replies      
I'd pay for this....
anonbanker 2 days ago 0 replies      
This looks like a fantastic roundcube or rainloop competitor. Thank you!
grandalf 2 days ago 0 replies      
I've been using Mailbox by Dropbox for a while and it's still very rough, not even beta quality by Google standards.

Most frustrating is that search doesn't work at all.

I have multiple accounts, but if I didn't I would still prefer the gmail web interface to any thick client app I've used (so far).

React v0.14 facebook.github.io
308 points by spicyj  21 hours ago   111 comments top 13
clessg 21 hours ago 8 replies      
A lot of great ideas in this release and it makes me excited for the future of React.

I've been on 0.14-rc1 and my favorite feature so far is stateless function components. So much cleaner than class-based components.

One downside is that hot reloading doesn't work with it yet (AFAIK). Hopefully that will come soon now that 0.14 is officially released. (And I believe Dan is/was on vacation.)

Andrew Clark (core contributor to Redux and creator of Flummox) just released Recompose which provides powerful capabilities for stateless components: https://github.com/acdlite/recompose.

Some of the benefits of stateless function components as outlined in Recompose's readme:

* They prevent abuse of the setState() API, favoring props instead.

* They're simpler, and therefore less error-prone.

* They encourage the smart vs. dumb component pattern.

* They encourage code that is more reusable and modular.

* They discourage giant, complicated components that do too many things.

* In the future, they will allow React to make performance optimizations by avoiding unnecessary checks and memory allocations.
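The difference is easy to see even without JSX. A minimal sketch (component and prop names are made up, and React elements are modeled as bare objects so this runs without React itself):

```javascript
// React 0.14 allows a component to be a plain function of props. This sketch
// models React elements as plain objects so it runs without React; the names
// are illustrative, not from the release.

// Class-style component: an instance with a render method (and possibly state).
class GreetingClass {
  constructor(props) { this.props = props; }
  render() {
    return { type: 'h1', children: 'Hello, ' + this.props.name };
  }
}

// Stateless function component: just props in, element out.
const Greeting = (props) => ({ type: 'h1', children: 'Hello, ' + props.name });

// A tiny "renderer" that treats both shapes uniformly.
function renderElement(Component, props) {
  if (Component.prototype && Component.prototype.render) {
    return new Component(props).render();
  }
  return Component(props);
}

console.log(renderElement(Greeting, { name: 'Ada' }).children);      // Hello, Ada
console.log(renderElement(GreetingClass, { name: 'Ada' }).children); // Hello, Ada
```

The function form has no instance and no `setState` to abuse, which is where most of the benefits listed above come from.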

Touche 20 hours ago 4 replies      
> Like always, we have a few breaking changes in this release. We know changes can be painful (the Facebook codebase has over 15,000 React components), so we always try to make changes gradually in order to minimize the pain.

Since React is a semver project, here's a relevant passage from the semver website:

> How do I know when to release 1.0.0?

> If your software is being used in production, it should probably already be 1.0.0. If you have a stable API on which users have come to depend, you should be 1.0.0. If you're worrying a lot about backwards compatibility, you should probably already be 1.0.0.

losvedir 20 hours ago 11 replies      
The enthusiasm around React is infectious and I'm thinking about integrating it into one of my projects but I can't quite tell what exactly it's supposed to be used for. Is it only for SPAs or is it reasonable to consider react when you just want to add some interactions and dynamism to a page that was rendered server side?

React seems kind of like an all-or-nothing approach. It seems like overkill if you just want to provide a little more structure to jQuery spaghetti code.

Can React be a competitor to, say, Knockout? React seems more in the realm of angular or ember to me.

drinchev 20 hours ago 3 replies      
React is cool. Switching to React helped me do :

1. Get rid of any jquery-like library.

2. Get rid of my HTML/Handlebars files :)

3. Generate dynamic CSS classNames for my components allowed me to get rid of stylesheets naming conventions / strategies ( BEM, etc. )

4. With the help of Flux ( yahoo's implementation ) I got rid of writing two code-bases ( front-end / back-end ). Now I'm putting everything in one place, this is awesome! ( I did it with server-side rendering, so I didn't lose anything at all )

omouse 19 hours ago 0 replies      
I'm glad they separated out react from the DOM. This makes it much more likely that people will export the ideas and the API of React to other languages. The API is sane for data-flow programming and the lifecycle is well-defined. The separation of properties from state can be useful in other contexts.
desuvader 20 hours ago 2 replies      
Wohoo! .getDOMNode() is finally gone!

Edit: Haha, I just noticed that someone else was also happy about this.

Also, now that classnames is standalone module, I recommend that people start using a standalone implementation of keyMirror (for flux) as well!

suchitpuri 3 hours ago 0 replies      
Well I think it will take some time for libraries and components to be updated. For me, currently at least, react-router should provide a build compatible with React 0.14.
gh0sts 21 hours ago 0 replies      
Great news. I just upgraded our app to 0.14-rc1 last week and particularly enjoy "refs" being direct references to the DOM nodes. Can't wait to move some of our stateless components to functional.
aaronkrolik 20 hours ago 1 reply      
I'm fairly new to react, and have wondered if I was using it correctly (as intended). Almost always I have a root component that manages state, passing it to children via props. All changes to state go through that root component, either by callback (also passed as prop) or some external event.

Is this update in a sense validation of that approach?
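The pattern described above can be sketched in plain JavaScript as a stand-in for React (all names here are hypothetical): a single root owns the state, children are pure functions of props, and changes flow back up through callbacks passed down as props.

```javascript
// A single root owns the state; children render props and request changes
// through callbacks passed down as props. Plain-JS stand-in for React;
// the names are made up for illustration.
function Child(props) {
  return {
    text: 'value: ' + props.value,
    // The child never mutates state itself; it calls back up to the root.
    increment: () => props.onChange(props.value + 1),
  };
}

function createRoot() {
  let state = { value: 0 };
  const render = () => Child({
    value: state.value,
    onChange: (next) => { state = { value: next }; },
  });
  return { render };
}

const root = createRoot();
root.render().increment();       // child asks the root to change state
console.log(root.render().text); // root re-renders: "value: 1"
```

All state transitions happen in one place (the root's `onChange`), which makes data flow easy to trace.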

oblio 16 hours ago 2 replies      
Is there some sort of React component repository or something?
crudbug 12 hours ago 0 replies      
Are there any good component libraries for React ?

JSF ecosystem is a good example of component based web UIs.

zkhalique 18 hours ago 5 replies      
I have a question, why is dirty-checking so great?

Whether it's Angular doing dirty-checking in the $digest cycle or React walking the virtual DOM and doing dirty-checking?

Why not just subscribe to events and update things when the model changes? That would seem to be far more efficient, and also make clear the mapping of dependencies of views on data.
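The subscription model the comment suggests can be sketched in a few lines of plain JavaScript. This is a hypothetical observable store, purely illustrative — not how React or Angular work internally:

```javascript
// Views subscribe to a model and are notified only when it actually
// changes -- no periodic dirty-checking pass over a tree.
function createStore(initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    subscribe: (fn) => listeners.push(fn),
    setState: (patch) => {
      state = Object.assign({}, state, patch);
      listeners.forEach((fn) => fn(state)); // push updates to subscribers
    },
  };
}

const store = createStore({ count: 0 });
const log = [];
store.subscribe((s) => log.push('count is ' + s.count));
store.setState({ count: 1 });
store.setState({ count: 2 });
console.log(log); // [ 'count is 1', 'count is 2' ]
```

The trade-off the frameworks make is that explicit subscriptions push the bookkeeping of "what depends on what" onto the application author, whereas dirty checking and virtual-DOM diffing let you write views as if everything re-renders from scratch.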

kraig911 19 hours ago 1 reply      
Anyone find a list that says what is broken by this update? I couldn't find an easy to parse list of functions/methods/etc that have changed ||&& are broken.
Global eradication of wild poliovirus type 2 declared polioeradication.org
233 points by panic  1 day ago   76 comments top 10
tomkinstinch 1 day ago 3 replies      
My med student girlfriend points out that while this is a huge victory for public health, only polio virus type 2 is being declared eliminated. Types 1 and 3 are still endemic in pockets of the world, and are next to eliminate. Each type has a different capsid protein.

It's pretty amazing that for my parents who were born in the mid-20th century, polio was still a concern. When they were growing up, FDR had recently died of complications from the disease and they knew people who were debilitated by it. Within their lifetimes the vaccine literally changed the world, to the point where polio is not present in developed countries. Vaccines are one of the greatest triumphs of human ingenuity.

gburt 23 hours ago 5 replies      
This is a clear win for "public health", leaving just one strain (WPV1) which is geographically constrained.

A question for those well-versed in the biomedical and ecological sciences: Could our attempts to eliminate viruses be a bad thing? I wonder about the ripple effects of ecosystems on each other. We do not brag about extinction of any other natural entity.

And, when eradication does not work, when we only hinder the growth of a virus, does the attempt to prevent infection (or any form of "mostly cleaned up" coverage) encourage the development of superviruses by selecting for the strongest in the set, in the same way bacteria do? If not, why?

fencepost 23 hours ago 0 replies      
For those not actually familiar with it, polio can spread into the spine and portions of the brain, killing neurons. This can cause paralysis and severe pain, and for some cases affect the ability to breathe. It was the original target of the March of Dimes, which helped provide support and (once available) vaccines.

Polio patients with breathing problems are where the "iron lung" was most widely used; it's basically a box or tube for the whole body except the head, with just enough seal and a timed low-level vacuum to allow the lungs to expand then contract when the low pressure is released.

tclark225 23 hours ago 1 reply      
Big win for public health. I'm sure the others won't be far behind.

In this same vein, go read the first sentence of the Wikipedia article on smallpox for a jolt of pride in human progress. "Smallpox WAS an infectious disease..."

jdnier 21 hours ago 0 replies      
It's interesting that they're already planning to remove the WPV2 component from vaccines, because it can (very rarely) lead to vaccine-associated paralytic polio. "With WPV2 transmission already having been successfully interrupted, the only type 2 poliovirus which still, on very rare occasions, causes paralysis is the type 2 serotype component in trivalent OPV. The continued use of this vaccine component is therefore inconsistent with the goal of eliminating all paralytic polio disease."
flurpitude 18 hours ago 0 replies      
jf 23 hours ago 3 replies      
"the only remaining endemic WPV1 strains are now restricted to Pakistan and Afghanistan."

Not mentioned is that this is likely due to the hunt for Osama bin Laden: http://www.scientificamerican.com/article/how-cia-fake-vacci...

martincmartin 22 hours ago 3 replies      
relativisticjet 18 hours ago 2 replies      
Hmmm, so we deliberately rendered a species of organism extinct?

Sounds like it could be a slippery slope. I'm thinking there's a window of moral relativism to explore here.

handedness 22 hours ago 4 replies      
> superpowers like the USA

Just how many superpowers are there?

Google Cloud Shell cloud.google.com
325 points by mikecb  4 days ago   159 comments top 23
kaa2102 3 days ago 2 replies      
My company is launching a new web design and development service using Google Cloud services. Initially, this was an experiment because we received some Google Cloud hosting credits. We run LAMP (Linux-Apache-MySQL-PHP) stacks on virtual machine instances on servers worldwide of our choosing.

After Google I/O, some webinars and testing I learned about how quickly new websites and apps could be deployed, Google DNS for managing domains, App Engine and other load balancing features that can help manage cost.

The ability to SSH into a virtual machine has proven quite useful to enable my team to coordinate and manage projects. I am still open to using other services like AWS but Google Cloud has been great - especially because they also offer customer service for tech and billing issues.

rexignis 4 days ago 3 replies      
I like this, and it feels "Google-y". Worried that a handful of us will use it, lean on it heavily, and then Google will do what Google does and pull the plug on it.
Wonnk13 3 days ago 5 replies      
Coming from an AWS shop, I guess I don't understand the use case(s) for this. What does this enable me to do that I couldn't accomplish by ssh'ing to any of the boxes in my environment?
mark_l_watson 4 days ago 2 replies      
Nice idea. I do a lot of my development SSHing to a resource-rich VPS. For things that take a long time to run (Haskell stack builds, a long-running machine learning calculation, etc.) it is great to not run that stuff on a laptop.

Cloud Shell is a bit different in the sense you would not want to try anything resource intensive on a micro instance. But for coordinating other services it makes sense.

I think Google needs to add one more thing to their cloud development toolkit: a public version of something close to their Cider web-based IDE that gives you access to work with any code you have in your private Google-hosted git repos, App Engine, and VPS services. nitrous.io has something like this, and I think that Google would do well to offer something similar for using the public version of their infrastructure.

p0ppe 4 days ago 3 replies      
> Use of the Cloud Shell is free through the end of 2015: you will not be charged for any resource utilization.

Would be nice if they could have stated how much it's going to cost later on. We're less than three months from 2016.

iamleppert 3 days ago 2 replies      
What's the point of this? I can already ssh into any of my instances on AWS, Linode, whatever platform.

Is it just because it's in the browser? Who cares...

trident523 4 days ago 4 replies      
So, what if you don't have that button? Is there a minimum amount of product you need to have before this service becomes active?
colinbartlett 4 days ago 3 replies      
Wait, so this runs in a browser? Is the idea that you can then use a machine like a Chromebook to handle your development?
stephendicato 4 days ago 3 replies      
This doesn't actually appear to be new; at least not that I can tell.

Google Cloud has supported being able to open an SSH session to any of your instances right from the browser for awhile. I've found it to be a killer feature and am really surprised Amazon Web Services does not offer the same thing.

andmarios 4 days ago 0 replies      
Albeit much different, yet with some common ideas, I've created a Docker devops environment image that, if you provide it with a GCE service account, will auto-activate Google Cloud and even set up Ansible to work with your GCE project. Emacs and vim are provided, pre-configured to a certain degree. The best usage scenario would be emacs+golang but there is basic support for other languages. There are some other conveniences too, like bash-completion being enabled (no more need for remembering all git flags).

If anyone is interested or wants to borrow some ideas:



rmac 3 days ago 1 reply      
As a long-time user of Google's web-based terminal (accessed normally through the Google Cloud Platform console by listing your VMs and clicking the SSH button):

I love it because I don't need to worry about key management and can access my machines anywhere that has a browser.

However, my few gripes are:

1) when copying text out of the web terminal window that spans multiple lines (on Mac OS X Chrome), newlines are inserted.

2) Ctrl-V doesn't work (nano/pico)

3) Ctrl-C sometimes doesn't work

paradite 3 days ago 1 reply      
I don't see how this is better than c9.io, nitrous.io or ssh into your own VPS. I guess it is developed just for the people who use Google Cloud?
readstoomuch 4 days ago 0 replies      
> We are also working on a solution that would persist custom packages across sessions on different VM instances.


rsync 3 days ago 0 replies      
I'm going to spin this up today and see how it integrates with rsync.net.

We have 'gsutil' in our environment, so you can do things like:

 ssh user@rsync.net gsutil ... blah ... blah ...
but if there is a google shell that you can use to manipulate those same items, then presumably you could do data transfer to/from rsync.net from within that shell.

Not everyone has a use-case like this, but some folks do ... so we'll see how it works ...

i336_ 3 days ago 0 replies      
Congratulations for getting this far down the page.

Now open the shell and traceroute (install it) to google.com. :)

(Other fun things: traceroute can't find your external IP; you have ~250Mbps download; you're on a Xeon with 32GB RAM of which you have 512MB; I can't remember anything else.)

electic 3 days ago 0 replies      
I really wish GC had some sort of client facing VPN service so that I do not have to create VM instances that have SSH open to the public. This is a good first step, but from what I can see it does not give me access to my LAN just the gcloud cli.
ne01 3 days ago 0 replies      
At SunSed.com we moved from linode to GC a year ago! It has been great! The only thing that I am currently looking for is GC HTTP(S) load balancing support for HTTP/1.1 100 Continue response.
obulpathi 3 days ago 0 replies      
This is perfect for services like Bastion, Gateways, CRON jobs.
therealmarv 3 days ago 0 replies      
Reminds me that I miss GoogleCL a lot. Info: the project is discontinued and no longer working because it was using OAuth1... they could not upgrade to OAuth2 oO
brohoolio 3 days ago 0 replies      
I've been pushing my current company towards Google cloud because of the billing. Sure AWS is great with reserved instances etc but in our environment I don't know what we will look like 6 months from now in terms of instances. With Google we'd be burning way less cash with our 100 or so instances.
swiley 3 days ago 0 replies      
Still not good enough to compile most of their "open source" projects.
betaby 3 days ago 1 reply      
Does it have IPv6 address?
mahouse 4 days ago 0 replies      
This is cool, I just want an IRC bouncer.
Global coalition to Facebook: Authentic names are dangerous for your users eff.org
256 points by DiabloD3  2 days ago   253 comments top 25
morgante 2 days ago 20 replies      
I wish the EFF and others who are against using real names would at least acknowledge the drawbacks of pseudonyms and the proliferation of fake accounts and fake names which comes with them.

The real name policy has huge benefits. I'm not constantly bombarded by fake friend requests like I am on Twitter, Skype, or any other platform which allows random usernames. Even more importantly, I can easily find the people in my life without having to go through an awkward song and dance. If I'm working on a project with someone, I can immediately find them and message them without worrying that it's some random imposter or some such.

Facebook is a platform designed for digitizing your real world relationships, not for accumulating thousands of "followers." There's a reason they limit you to 5,000 friends. People who are trying to disconnect their Facebook identity from their real world identity are not using Facebook the way it's meant to be used and should probably just leave it.

That being said, Facebook could definitely do a better job of making it easier for people to prove that a name is their everyday name even when it's not their legal name.

I fully expect to lose a lot of karma over this comment, but I'd love to hear an argument over why access to Facebook is a right together with your down vote.

dredmorbius 2 days ago 2 replies      
One of the best commentaries I've seen on the matter of "authentic" or "real" names policies comes from Yonatan Zunger, chief architect of Google's Google+ social network.

I've had plenty of disagreement with Google and Yonatan over many aspects of G+, and have given both the company and him much grief on multiple counts, including Real Names (Google's varient of Facebook's policy), though I'll also note that Yonatan's generally heard me out quite patiently. But his comments strike me as wise and hard-won, painful knowledge:


In practice, the forced revelation of information makes individual privilege and power more important. When everyone has to play with their cards on the table, so to speak, then people who feel like they can be themselves without consequence do so freely -- these generally being people with support groups of like-minded people, and who are neither economically nor physically vulnerable. People who are more vulnerable to consequences use concealment as a method of protection: it makes it possible to speak freely about controversial subjects, or even about any subjects, without fear of harassment.

That's quite an evolution of opinion. I respect Yonatan deeply for both conceiving of, and publicly stating it.

junto 2 days ago 2 replies      
I use Facebook as a developer using a completely fake but real looking identity. I have to have a developer account because of my job. Otherwise I wouldn't touch the site with someone else's bargepole.

Thank you http://www.fakenamegenerator.com

If that account gets blocked then I'll just have to create another fake one.

otto 2 days ago 0 replies      
I was recently kicked off of Facebook for lack of an "Authentic Name." Humorously my name was my legal name.

Getting kicked off of Facebook was the best thing to happen to my productivity and mood. My only frustration is that there are several people on Facebook I have no easy method of giving alternative contact info to.

pdonis 2 days ago 0 replies      
While I sympathize with the underlying sentiments of this, I have to disagree with what it actually says. If the EFF were really trying to help users, it wouldn't be trying to get Facebook to change; it would be trying to get users to stop using Facebook. The problem is not that FB needs to improve its name policy; the problem is that we have a single centralized social network for everybody. What we should have is a way for people to build their own independent social networks, so that someone who wants to be able to connect online with friends while avoiding their creepy ex can do so. Why isn't the EFF pushing for that?
1ris 2 days ago 1 reply      
And in fact illegal in Germany and probably a hell lot of other places. I wonder if that gets enforced anytime soon.
kristopolous 2 days ago 0 replies      
It's to decrease the perception of click fraud.

Facebook's real name policies have to do with how people perceive the plausibility of the numbers reported for their à-la-carte paid-advertisement customers.

This policy doesn't actually decrease fake shell accounts. It's an intentionally ineffective ceremonious anti-fraud campaign.

Facebook knows that third parties conducting click and follow fraud for paying advertising customers brings them a lot of money.

They came up with a policy that gives the perception of combating it ... actually closing real human accounts. However, all the fake bot accounts are and have always been named like "Jane Doe" and "Bob Smith" ... they remain untouched.

What's the effect? The paying customer thinks that Facebook is making an effort to combat fraud but in truth they have every interest in keeping the fraud around and creating a false perception that things are changing - like the oil and tobacco companies; like nike and nestle; like fast food - like every other billion dollar company ever.

You don't amass $30 billion by being an honest Joe.

Spooky23 2 days ago 6 replies      
I hate Facebook, but I think they are right about this policy.

If you're a person at risk due to many of the issues described here, you don't belong on Facebook. If you're a domestic violence victim avoiding a dangerous person "liking" the wrong thing or somebody's innocent repost can put you in danger, pseudonym or not.

The transgender situation is similarly tragic, but it's an issue that trans people run into when presenting ID to buy beer as well. We should get states to provide some sort of transitional ID or something.

personjerry 2 days ago 4 replies      
I've only heard of the issue from the LGBTQ stance, and I'm not very clear here. But I'm wondering, for these people, why not just legally change your name?
xacaxulu 2 days ago 0 replies      
TheLastPsychiatrist expertly explains why the whole form of this argument is wrong in a post questioning Randi Zuckerberg. It's well worth reading, especially if you are considering the benefits of opting out of Facebook (and I strongly recommend you jump in, the water's fine :) http://thelastpsychiatrist.com/2014/01/randi_zuckerberg.html
cwyers 2 days ago 0 replies      
Pseudonymity is a double-edged sword; it enables people to hide from abusers online, but it also enabled a lot of online abuse (Twitter is rampant with the stuff, not that Facebook is immune). I don't know which side is right, and I wish that there was some way to combine the best of both but so far nobody's struck that balance or even really come close. But if that balance is struck, I don't think the sort of stridency the EFF is engaging in here is going to be part of the solution.
Kiro 2 days ago 0 replies      
I've supported everything EFF has done so far but this is a really bad idea. Have people forgotten the chaos of MySpace and other social networks before Facebook?
rwhitman 2 days ago 0 replies      
There has to be some sort of middle ground, where people who need to mask their identity for various reasons can be allowed to without tipping the scales towards platform abuse. This doesn't have to be a black or white issue.

FB simply needs to figure out a fair way to validate identity but allow a user to use a sanctioned alternate identity. I think this is what the EFF action is really about

steve_taylor 2 days ago 0 replies      
Facebook beat all other social networks because it is the best at connecting people who already know each other. And its most effective tool to connect these people is its search functionality whose effectiveness requires that all its users use their real name. It is unrealistic to expect Facebook to risk its very existence just because the EFF demands that it do so.
warewolf 2 days ago 0 replies      
People just need to stop using Facebook. Join Twitter and be Happy #JackIsBack

"I don't make the rules, I just pick which rules I follow"

smoreilly 2 days ago 1 reply      
You guys should really understand the policy before ripping on it. There are tons of other ways to have your name verified. https://www.facebook.com/help/159096464162185
Slushpuppisan 2 days ago 0 replies      
How do you get blocked? I changed my account to a ridiculous name and I know a couple of people that have changed their equally ridiculous pseudonyms multiple times. Maybe they only block you if you have a real-sounding name?
strommen 2 days ago 0 replies      
I don't understand why people feel entitled to use Facebook in this way. Facebook is a place to communicate with your friends and family, not a publishing platform.

If you want to write anonymously, then get a free account on any number of other platforms that are designed for anonymous publishing.

alansmitheebk 2 days ago 0 replies      
This petition strikes me as painfully naive. It should be obvious to anyone that Facebook's user data is what makes it a multi-billion dollar company. That data is worth fuck all if it correlates to superhero names and cat pictures as opposed to real people.
curiousjorge 2 days ago 1 reply      
How does the EFF get its funding? I feel like they should be given more credit for what they are doing.
seiji 2 days ago 3 replies      
Isn't it kinda obvious the long game of facebook is to become the single globally mandated identity service? Governments will contract with them to maintain your official citizen ID records. Game over.
yuhong 2 days ago 0 replies      
I dislike real name policies, but I do want the problems with using real names to be fixed if possible.
PhasmaFelis 2 days ago 1 reply      
I wish to God that Google+ had done the right thing on names during their five minutes of relevance. They might have actually lasted as a viable Facebook competitor.
cableshaft 2 days ago 1 reply      
Every teacher I know either doesn't use Facebook at all, or if they do, they don't use their real name. Why? Because they know that their students are going to look for them on Facebook out of curiosity and gossip amongst themselves about any detail that's on there (or if it's locked down, try to sneak onto their friends list and then gossip). They often have to be very careful or else they could easily get in trouble.

One of those people will actually leave a movie theater if they spot one of her students in the same movie, because she knows from experience they'll take pictures of her on their cell phones and try to catch her saying or doing something even slightly off.

Teaching can't be the only profession where people want to be careful about using their real names so they can easily be found by anyone who wants to look. And it certainly isn't just a tiny number of drag queens either.

vectorjohn 2 days ago 4 replies      
I don't understand why people make anything of "real names" policies. They just want the users to have real looking names. They don't actually have any way to force real names. So it's a non-issue.

There's no oppressed person in a police state signing up for Facebook that reads a EULA and concludes "well, Facebook said I have to reveal my true identity, I guess that's the way it is."

Why the Internet Needs IPFS techcrunch.com
340 points by confiscate  3 days ago   146 comments top 27
nicklaf 3 days ago 1 reply      
I view the favorable performance characteristics of IPFS at scale (over the centralized client-server architecture of the present web) more as a general symptom of the pathology resulting from the web's incomplete, ill-planned architecture.

The greatest advantage of an architecture like IPFS instead lies in its friendliness to a more democratic, semantic web, in which users and programs may make use of URIs at a fine-grained and peer-to-peer level. If we can decouple resources from a central server, and then build programs around these static resources, the web will not only become more permanent, but also more robust against walled gardens robbing us of programmatic control of those resources.

To paraphrase the Plan 9 authors[1], it might make sense to say that a p2p, content addressable web should try to build a web out of a lot of little systems, not a system out of a lot of little webs. (Well, that doesn't completely make sense, and I'm starting to mix my metaphors, but my point is that what we have now is significantly hindered by lack of fine-grained semantics allowing interop. Hypermedia has the potential to be so much more than simply manipulating the DOM with some javascript!)

[1] http://www.plan9.bell-labs.com/sys/doc/9.html

friendzis 3 days ago 3 replies      
I see two core problems with content addressable web:

 * Dynamic content
 * Private/sensitive information
Dynamic content on a distributed system poses rather hard challenges for data integrity - how would it be possible to ensure that different pieces of data you have are actually from the same [temporal] set? Take HN itself - how would it be possible to see at least all the children of the parent you want to reply to, in a distributed manner? Anything I can come up with involves some sort of central dependency repository. Two child "commits" create branches in the content tree. If these children are duplicates, we pick the better one and cast the other into the abyss; the bigger challenge is how to merge them together. This is a wide topic, so please excuse my brevity.

Private information is yet another rake to step on. I would really love my bank account information to remain accessible to me and the bank only. I don't even trust myself not to lose decryption keys, and I don't trust the long-term security of encryption algorithms (flaws could be found, brute forcing might become viable), so the information had better stay in as few places as possible. The other end of this stick is authentication/authorization. Encryption does not work, because access rights change: what I'm authorized to view or do today might not be the same tomorrow. The only solution is not to serve the content in the first place. As for authentication, I don't see a solution at all.

That said, a content-addressable web is an awesome solution for [more or less] static content - wikipedia, blogs, etc.
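The static/dynamic distinction above is easiest to see with a toy content-addressed store, where the key is simply the hash of the bytes. This is a minimal sketch of the general idea, not IPFS's actual API: a stored blob can never change under its key, so any edit mints a new key.

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: the key is the SHA-256 of the bytes."""
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        data = self._blobs[key]
        # Integrity comes for free: re-hashing must reproduce the key.
        assert hashlib.sha256(data).hexdigest() == key
        return data

store = ContentStore()
k1 = store.put(b"Hello, world")
k2 = store.put(b"Hello, world!")  # a one-character "edit"
# The edit produced an entirely new key; holders of k1 still see the old bytes.
```

This is exactly why dynamic content is hard here: there is no key you can hand out that will keep pointing at "the latest version".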

kwijibob 3 days ago 1 reply      
I think the IPFS vision is fantastic.

The article misses the main advantage when it tries to say that IPFS would help corporations like Google/Youtube/Netflix.

Big players will always be able to expertly run powerful distributed CDNs, but newer, smaller websites will always start with one server under the current model.

IPFS would help to level the playing field for distributed data services.

AstroJetson 3 days ago 2 replies      
"As I explain in my upcoming book...." now the article makes sense, TC is helping her pimp her book.

What the internet needs is a new financial model, since the one we have now isn't working in the long term.

smegger001 3 days ago 2 replies      
IPFS is essentially the web (here meaning HTML, CSS & JavaScript) over torrent, plus a distributed hash table. Correct me if I am wrong, but isn't this just Freenet without the anonymity, or kind of like Tribler but for websites instead of pirated files?
Renaud 3 days ago 3 replies      
Criticism of the article aside, having a decentralised, heavily redundant, safer and harder to disrupt web is an excellent idea worth pursuing.

I can see how that could work for static resources, but I don't get how you can decentralise the dynamic portion of a website without single points of failure in the backend.

wpietri 3 days ago 3 replies      
Sigh. There's talk about how the Internet's "own internal contradictions [will] unravel it from within", but I'm not seeing it. The first time I can recall an "the Internet is doomed" prediction is 1995:


At this point, there are circa 3 billion Internet users, nearly half the planet, so I think we're well past the point where "ZOMG GROWTH N STUFF" is a reasonable justification for this sort of hype. The Internet's growth rate peaked in the 90s:


Now it's under 10% user growth, which seems entirely manageable.

karissa 3 days ago 2 replies      
This kinda makes me sick to my stomach -- essentially, she's saying that a peer-to-peer internet is great because companies will be able to have better uptime.

She doesn't even cover the clearly obvious economic aspects of this -- why would I run an IPFS node if it just benefits the company and not me?

jerguismi 3 days ago 1 reply      
"We use content-addressing so content can be decoupled from origin servers, and instead, can be stored permanently."

Why should I believe that claim? What are the incentives for storing my data in the network? As far as I understand, no incentivization is done for running the network. That's why I wouldn't trust it to store anything but a trivial amount of data.

euske 3 days ago 1 reply      
This might mitigate some bandwidth problem, for sure, but how can this improve our privacy and the right to be forgotten? I think I'm missing something. Isn't this just a modern reimplementation of distributed hash tables that were researched circa 2005?
sklivvz1971 3 days ago 2 replies      
Meh, it's a distributed static website thing.

Of course if one doesn't need transactions, everything is easier even today -- I mean, who doesn't use a CDN? It's a very similar concept.

The problem is much tougher when the content is personalized or when there needs to be a transaction.

This makes distribution a huge huge problem no one has solved yet (even a blockchain, IIRC can't really scale up in transactions...)

zobzu 3 days ago 3 replies      
Such articles should start with: IPFS = InterPlanetary File System.

Just because it makes stuff clearer. Heck the project page could use some of that too ;)

firebones 3 days ago 1 reply      
One milestone for IPFS: get past the point of the primary informational site about it (ipfs.io) being categorically blocked by corporate content filters as "blocked: peer-to-peer". It's hard to convey the vision within your corporation when you can't even share the primary informational site.
skybrian 3 days ago 4 replies      
For publishers, it's probably best thought of as a combination of a free CDN and the Internet Archive.

But I don't think it's going to take off until there's a way to take down files you don't want published anymore. Even with the Internet Archive, if you add them to robots.txt, they will take it down. [1]

Removing things from the Internet is always going to be imperfect since there will always be people who archive old copies of files (and that's a good thing). But the official software should honor retractions or mainstream publishers won't be interested.

[1] https://archive.org/about/exclude.php

tptacek 3 days ago 2 replies      
I'm a believer in this idea generally --- that we should replace applications built directly on IP/TCP with applications built on a content-addressed overlay network running on top of IP/TCP --- but I think the logic used here is faulty.

For instance: I'm not clear on how IPFS protects applications from DDOS. Systems like IPFS spread the load of delivering content, but applications themselves are intrinsically centralized.

Animats 3 days ago 2 replies      
It's a lot like BitTorrent - everything is identified by its hash.

Many of the claimed advantages of IPFS can be achieved with subresource integrity. If you use subresource integrity, files are validated by their hash. We just need some convention by which the hash is encoded into the URL. Then any caching server in the path can safely fulfill the request.
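Subresource integrity is already a W3C spec: the page author embeds a base64-encoded digest in the tag, and the browser rejects content that doesn't hash to it. A sketch of computing such a value (the script contents and CDN URL below are made up for illustration):

```python
import base64
import hashlib

def sri_hash(data: bytes, alg: str = "sha384") -> str:
    """Compute a Subresource Integrity value: '<alg>-<base64 digest>'."""
    digest = hashlib.new(alg, data).digest()
    return f"{alg}-{base64.b64encode(digest).decode('ascii')}"

# Hypothetical script contents; in practice you'd read the real file.
js = b"console.log('hi');"
integrity = sri_hash(js)
# The value goes into the tag, e.g.:
# <script src="https://cdn.example.com/app.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
```

Animats' point is that if the same hash were also carried in the URL, any cache along the path could serve the bytes safely.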

crablar 3 days ago 0 replies      
guessmyname 3 days ago 0 replies      
"[...] Before it's too late"... But when will it be too late?

EDIT: OP changed the title, it was originally written as "Why the Internet Needs IPFS Before It's Too Late"

grayfox 3 days ago 0 replies      
Hey, _prometheus!

What's an email I can reach you at? I'd like to pick your brain about some of this stuff and how it relates to databases.


jsprogrammer 3 days ago 1 reply      
>our rapidly dwindling connectivity


Sorry to be extremely obtuse, but since when has our connectivity been rapidly dwindling? It seems to have only been growing since the network launched.

mirimir 3 days ago 1 reply      
I wonder if IPFS can access resources via Tor onion services.
MCRed 3 days ago 1 reply      
I recently learned about MaidSafe - http://maidsafe.net - but I haven't researched it much. I would be interested if someone could compare and contrast the two.
ilaksh 3 days ago 0 replies      
You can tell this is a great idea by how much some people hate it.
vezzy-fnord 3 days ago 1 reply      
It seems like a pastime of TC is to write sensational filler articles out of the more questionable bile emanating from HN. See also "Death to C": https://news.ycombinator.com/item?id=9477878
chmike 3 days ago 2 replies      
This is a naive idea that is wrong in so many ways. It keeps popping up recurrently, and I don't understand why people haven't yet seen that it won't be the next Internet revolution. Oh well!

Let me try to explain it again. The idea of content-based keys strongly narrows the application domain of the system. Modifying your information (i.e. a typo correction) invalidates all the keys. More precisely, people holding the old key won't be able to access the corrected information. Of course, this model fails completely with dynamic data.

The other problem that is often overlooked with such ideas is the function that translates your key into the location of the information: that is, determining the IP address of the server(s) hosting the information from its hash key. One needs something like a distributed index for that. Don't use an algorithm, because you don't want to add the constraint that your information can't be moved or replicated.

Another problem is staying in control of your information. The owner of information wants to be able to fix or modify it, or delete it when he wants.

Finally, another naive idea is sharing distributed storage. This is a nice idea, but it simply doesn't work: some people will abuse it. To avoid this you need a system to monitor and control usage. Accounting? Good luck. By the way, this is the problem of the Internet today. It is a shared resource, but a few are consuming a hell of a lot of it and earning a lot of money without sharing back the profit. I'm looking at you, Google, with YouTube.

I'm thinking and working on this problem for a long time. My conclusions are :

 1. Decouple keys from content
 2. Make the distributed locating index your infrastructure
 3. Optimize keys for index traversal
 4. The owner of information must stay in control of it
 5. The owner of information must assume the cost of hosting
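Decoupling keys from content is roughly what a mutable naming layer (IPFS's IPNS is one real example) does: a stable name points at whatever content hash is current, so edits don't invalidate the links people hold. A toy sketch of the indirection, not any system's actual API:

```python
import hashlib

def content_key(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Immutable blobs, addressed by hash.
blobs = {}
# Mutable name table: stable name -> current content key.
names = {}

def publish(name: str, data: bytes) -> None:
    key = content_key(data)
    blobs[key] = data
    names[name] = key  # only this small pointer changes on edit

def resolve(name: str) -> bytes:
    return blobs[names[name]]

publish("my-blog/post-1", b"first draft")
publish("my-blog/post-1", b"fixed a typo")
# Readers following the stable name get the latest version, while the
# old hash stays valid for anyone who pinned it.
```

The hard part, as the comment notes, is making that name table itself distributed and trustworthy rather than a central dependency.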

huslage 3 days ago 1 reply      
There are better things than this hack on top of IP. It won't work. It cannot work, because with the basic underlying protocols, the overhead of routing and other meta-traffic will rapidly overwhelm it at the scale required. Can we please stop talking about IPFS as if it could actually work???

Check out Content Centric Networking (ccnx.org) from PARC for something that actually has a chance at being a real solution.

Staffpad: a new class of notation app staffpad.net
321 points by kayfloriven  1 day ago   96 comments top 21
owenversteeg 1 day ago 4 replies      
Holy shit, this is amazing. For those that haven't written music before, previous programs are some of the clunkiest, slowest, most infuriating pieces of crap that I've ever had to deal with. This makes the process literally ten times easier, and so much more fun to use - and the MP3 export? Amazing.

I'm actually considering using Windows because of this and the new hardware. If you had told me that two days ago I and my beautiful Arch setup would have laughed in your face. Kudos to Microsoft/Staffpad, this is really great.

codeulike 1 day ago 1 reply      
Interestingly, it interprets each pen stroke in the context of the wider piece of music, and makes use of temporal information such as the order the strokes are made in. Whereas classic OCR would just try and interpret the static 'picture':

"Key to the way StaffPad works is its method of recognizing your scribbles. It looks at every individual stroke you make and then interprets what you wrote based on the relationship of each stroke to all of the others. David says that it's more efficient and accurate to take the position and temporal information from the pen, and then use musical context to decide what the music is trying to be. That way, you can do things that would totally confuse OCR. Because we know the order of the strokes and where they are in relation to the notes, we can say, 'OK, that's a natural, that's a sharp.'"
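The idea in that quote - the same mark means different things depending on what was drawn before it and where - can be shown with an entirely hypothetical toy (this is not StaffPad's actual logic, just an illustration of position-plus-order classification):

```python
# Toy illustration: interpret each new stroke using its position relative
# to, and the drawing order of, earlier strokes.
from dataclasses import dataclass

@dataclass
class Stroke:
    x: float
    y: float
    t: int          # order in which the stroke was drawn
    kind: str       # crude shape class: "dot", "vertical", "slash"

def interpret(stroke: Stroke, earlier: list) -> str:
    # A vertical mark right next to a just-drawn notehead is a stem;
    # the same mark in empty space is more likely a barline.
    noteheads = [s for s in earlier if s.kind == "dot"]
    near = [s for s in noteheads
            if abs(s.x - stroke.x) < 1.0 and s.t < stroke.t]
    if stroke.kind == "vertical":
        return "stem" if near else "barline"
    if stroke.kind == "slash" and near:
        return "accidental"
    return "unknown"

head = Stroke(x=10.0, y=5.0, t=1, kind="dot")
stem = Stroke(x=10.2, y=5.0, t=2, kind="vertical")
bar  = Stroke(x=50.0, y=5.0, t=3, kind="vertical")
```

A static OCR pass would see two identical vertical marks; the temporal and positional context is what disambiguates them.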


restlake 1 day ago 1 reply      
Staffpad is incredible but it's not brand new or developed by Microsoft. Hanselminutes (Scott Hanselman) did an excellent interview with the engineer who created it a few months back, definitely worth listening to if this piqued your interest. Thoroughly covers how it was conceived and coded in C#/C++, with comments about the relationship to Microsoft: http://www.hanselminutes.com/473/developing-staffpad-a-new-c...
danielhunt 1 day ago 3 replies      
As far as I'm concerned, this is world-changing technology - a (seemingly?) easy to use, natural-looking, efficient data-entry mechanism that looks like something from the future.

That is an absolutely fantastic demo - I was hooked on the background-video within 3 seconds, and found myself wishing the MP3 would play when he was exporting it at the end.


archagon 21 hours ago 1 reply      
I've been working on an iPad app in a similar direction, though with a different end goal. As I'm sure the founders of this company can relate, I've found Sibelius and Finale incredibly frustrating to use, especially for anything resembling experimentation. However, I also feel that sheet music is generally incompatible with modern popular music. (See the Aphex Twin joke below!) Much of my favorite rock and electronic music has syncopation, changing meters, pitch bending, and a general fluidity about it that's incompatible with the rigidity of staff notation. Plus, staff notation is very difficult to read.

With my project, I'm hoping to bring some of that flexibility back into the compositional process. In short, it's an infinite canvas with time on the x-axis and pitch on the y-axis that you can simply paint with your fingers. You're not constrained by key signature, time signature, or note length, though there are snapping tools to help with that if you need it. It's meant more as a tool to take musical "notes" than as something that can produce a clean final product. As someone who doesn't play the guitar very well (yet), I'm looking forward to using it to experiment with guitar solos.

Here's an early demo video: https://www.youtube.com/watch?v=Ra8OvnoxKQw

I hope to release it in the next few months, though we all know how that goes! iPad Pencil support is definitely something I want to do as well.

ThePhysicist 1 day ago 7 replies      
When I saw this my first thought was: We need something like that for Math / technical drawing.

I can understand that music is a good first application of this technology, but entering mathematical and engineering-related content through such an interface could be a huge deal as well.

KrisJordan 23 hours ago 1 reply      
FWIW, Brown University was doing research in this *Pad field (stylus-based input apps for domain specific notations) in the late 90s/2000s. I was a member of this group for a brief period of time. It's really exciting to see a resurgence in high quality pen computing with affordable, entry-level consumer hardware like the Surface 3.

Music Notepad (original): http://graphics.cs.brown.edu/research/music/home.html

Music Notepad (tabletpc):http://graphics.cs.brown.edu/research/music/tpc.html

Mathpad (commercialized): http://www.fluiditysoftware.com/
https://www.youtube.com/watch?v=BAFGONn4KoQ


jerf 1 day ago 4 replies      
Curiosity: for all those enthusing about this, do you have keyboard skills? Piano was my primary instrument, and my take on this may be skewed by the fact that my primary instrument also turned out to be pretty good for composing music with, when you can literally bash out full chords in real time if the mood strikes you. If your instruments were all single-note instruments that don't hook up to computers worth a darn, I could see this being much more exciting.
gjm11 1 day ago 2 replies      
Context: This was a prominent part of Microsoft's "Windows 10 devices" event yesterday. MS used it to show off the capabilities of the new Surfaces.
tatx 1 day ago 6 replies      
Just wondering - wouldn't it be easier, not only for music notation but also for math and other scientific notation, if there were an on-screen keyboard custom-built for this very purpose, with the required symbols and movement keys? After all, why require a stylus and why sketch the characters when you can just as easily type (actually, typing may turn out to be a much faster input method)? Is there a real benefit to sketching, or is it something else?

Why wasn't handwriting recognition a big success?

Jedd 1 day ago 1 reply      
So, the MyScript team (behind the Stylus beta handwriting tool on Android) released the MyScript calculator[1] a while back that lets you start doing (albeit relatively basic) math with the stylus on a touchpad. (I use it on my Samsung Note Pro.) It has a similar feel / intent to this thing, but somewhat less sophisticated (mind, it's evidently a proof of concept demo more than anything else).

The StaffPad app was reviewed at least 5 months ago [2] and truly does look very impressive, even if it's of particular interest for a relatively small demographic. I noted there was no mention in the SP4 announcement of the USB / sound device latency for the SP4 with W10 - so it's either so good we don't need to mention it, or a bit uncomfortable so best not to mention it.

[1] https://play.google.com/store/apps/details?id=com.visionobje...
[2] http://www.sibeliusblog.com/news/staffpad-is-a-music-handwri...

atwrkrmrm 1 day ago 3 replies      
HN front-end devs: If you were to make a web version of this, would it be better to use canvas or svg? And why?
haberman 23 hours ago 2 replies      
Now that Apple Pencil exists, I hope they consider a port. This looks amazing but I just don't think any app could convince me to buy a Windows machine.
hellofunk 1 day ago 0 replies      
This is great. A good week to see new MSFT products and demos. Great for competition. Apple has some catchup to do on a few things.
bkolobara 1 day ago 0 replies      
I would love to see a programming/math prototyping environment based on this idea. Transforming data by sketching. Quickly writing down a formula and pulling data through it with an arrow. Putting filters on the arrow. Like Wolfram Mathematica on steroids.
6stringmerc 23 hours ago 0 replies      
This looks great!

...but will it do Guitar Tab?

Thousands of passionate staff-illiterate-to-barely-competent songwriting guitarists would probably like to know.

Edit: Went ahead and emailed through their contact form, I'm eager to find out what they say.

6stringmerc 23 hours ago 2 replies      
Relevant capability testing joke:

Aphex Twin should give it a whirl!


truebosko 1 day ago 0 replies      
The idea of writing via a stylus and having it transformed not into a massive PNG, but perhaps a simple Unicode file is really growing on me.

I love pen and paper for jotting down ideas or using it to solve problems, but it seems this medium is slowly going to take over due to the benefits of having a software tool analyzing your writing, and thus providing tools/suggestions/etc. along the way.

dedene 1 day ago 1 reply      
Wow! Amazing. Does something like this exist for iPad or Android tablets?
Bjartr 1 day ago 1 reply      
I really want to try it out of curiosity, but as someone who doesn't actually know how to read, write, or play music I can't justify actually paying for it.
tvon 1 day ago 0 replies      
FYI, the preview that shows up when shared to Facebook is broken (just shows some CSS).
I'm Google - chains of visually similar images and videos in Google (2011) dinakelberman.tumblr.com
331 points by dhotson  2 days ago   46 comments top 25
captn3m0 2 days ago 2 replies      
Here's a statement from the creator: http://dinakelberman.com/imgoogle/imgoogle.html

The relevant bit:

>Firstly, lots of people ask if it's an algorithm or something. It's not! Just me searchin google.

>Secondly, a lot of people assume this blog is therefore made predominantly by using the "visually similar" function on Google Image Search, which is a totally reasonable thing to assume. While I definitely employ that function in my attempts to search thoroughly (and love it for its own beautiful results), surprisingly little of the piece is actually constructed using it! Visually Similar appears to employ an algorithm based mainly on color percentages in an image, and as I'm Google is based more often in conceptual similarities than color-wash similarities, my searching is almost entirely reliant on keywords rather than searching by image. Of course, there are times when Visually Similar has helped with a transition or section here or there, but overall, it's not the way I work.

anc84 2 days ago 1 reply      
Aww man, I thought this really was pure algorithms, not manual curation. It is still a wonderful project but my awe is gone. Would be great to see it as a real bot.
yazaddaruvala 1 day ago 1 reply      
When I read the title "I'm Google - ..." a very small part of me was really, really excited that maybe, just maybe, some system within Google became self aware, learnt english, and wrote a blog declaring its identity.
shubhamjain 1 day ago 1 reply      
I am oddly reminded of Pollard's rho algorithm[1]: you start with a base image and always choose the first image that comes up in the results; eventually you return to an image you have already traversed. I wish someone could find the base images that result in the shortest and longest cycles.

[1]: https://upload.wikimedia.org/wikipedia/commons/4/47/Pollard_...
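Repeatedly taking "the first similar result" is exactly the iterated-function setting Pollard's rho exploits, and Floyd's tortoise-and-hare finds the cycle without storing the whole history. A sketch, with a fixed successor table standing in for the image-similarity function:

```python
def find_cycle(f, x0):
    """Floyd's tortoise-and-hare: return (tail length mu, cycle length lam)
    for the eventually-periodic sequence x0, f(x0), f(f(x0)), ..."""
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))

    # Find where the cycle starts (tail length mu).
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1

    # Measure the cycle length lam.
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam

# Stand-in for "first visually similar image": tail 0->1, cycle 2->3->4->2.
succ = {0: 1, 1: 2, 2: 3, 3: 4, 4: 2}
mu, lam = find_cycle(lambda x: succ[x], 0)  # -> (2, 3)
```

Finding the shortest and longest cycles over all seed images would just mean running this from every starting point.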

Dav3xor 2 days ago 0 replies      
Some of the break rooms where I work have a computer you can do searches on. There are some Google image searches that just shine like diamonds. Most of them are somehow PBS related for some reason. "Bob Ross" "Fred Rogers" "Ernie and Bert", etc. I started an odd little game of people leaving google image searches on the break room machines.

You'd walk in and.. "Hedgehogs!"

abrichr 2 days ago 2 replies      
I predict that someone will make an algorithmic version of this and post it to HN within a month.
codewithcheese 2 days ago 1 reply      
If Google deep dream was the visualizations you might see on LSD, Google chains is like following a conversation of people on LSD =D
MrBra 1 day ago 2 replies      
I'm assuming porn is filtered out. Given that porn represents the vast majority of Internet content, it would be curious to see how quickly a sexually explicit image would pop up, and how funny the connection with the previous non sexual image would be.
jpalomaki 2 days ago 1 reply      
"Back in the days" image morphing software was somewhat popular. It could be fun project to run these images through some automated tool to create a continuous video of one image morphing to another then to third and so on.
hellbanner 1 day ago 0 replies      
Reminds me of http://translationparty.com/ which converts between Japanese & English using google translations
digitus 1 day ago 0 replies      
We're working on something really similar to help people explore visual content.


gloves 1 day ago 0 replies      
This is cool!

I love how on Google when I do an image search on myself I can go from Brad Pitt one minute, to an old man the next. That's not a complaint though. The technology is always improving and I live in constant wonder of the things people can create from nothing :)

luluganeta 1 day ago 0 replies      
This work is from the same year, also really interesting: http://sebastianschmieg.com/searchbyimage/
hellbanner 1 day ago 1 reply      
A few scrolls down there is "This video was removed by Youtube for privacy reasons".. does the related image algorithm still use what image was there, for calculating next image?
Cerium 1 day ago 0 replies      
Very cool project. As an interesting side effect I'm sure Dina learned a lot of obscure trade specific words while working on this.
juliann 2 days ago 0 replies      
I can't find where the about link is, but here's the about text I found in the source code:

I'm Google is an ongoing tumblr blog in which batches of images and videos that I cull from the internet are compiled into a long stream-of-consciousness. Both the searching and arranging processes are done manually. The batches move seamlessly from one subject to the next based on similarities in form, composition, color, and theme. This results visually in a colorful grid that slowly changes as the viewer scrolls through it. Images of houses being demolished transition into images of buildings on fire, to forest fires, to billowing smoke, to geysers, to bursting fire hydrants, to fire hoses, to spools of thread. The site is constantly updated week after week, batch by batch, sometimes in bursts, sometimes very slowly.

The blog came out of my natural tendency to spend long hours obsessing over Google Image searches, collecting photos I found beautiful and storing them by theme. Often the images that interest me are of industrial or municipal materials or everyday photo snapshots. I do not select images or videos that appear to be intentionally artistic. Happily, the process of researching various themes in this way has led to unintentionally learning about topics I might never have otherwise, including structural drying, bale feeders, B2P, VAWTs, screw turbines, the cleveland pack, and powder coating.

I feel that my experience wandering through Google Image Search and YouTube hunting for obscure information and encountering unexpected results is a very common one. My blog serves as a visual representation of this phenomenon. This ability to endlessly drift from one topic to the next is the inherently fascinating quality that makes the internet so amazing.


Just wanted to add a note on how I make this blog, as I have seen people wonder the same couple things frequently.

Firstly, lots of people ask if it's an algorithm or something. It's not! Just me searchin google.

Secondly, a lot of people assume this blog is therefore made predominantly by using the "visually similar" function on Google Image Search, which is a totally reasonable thing to assume. While I definitely use that function in my attempts to search thoroughly (and love it for its own beautiful results), surprisingly little of the piece is actually constructed using it. Visually Similar appears to employ an algorithm based mainly on color percentages in an image, and as I'm Google is based more often in conceptual similarities than color-wash similarities, my searching is almost entirely reliant on keywords rather than searching by image. Of course, there are times when Visually Similar has helped with a transition or section here or there, but overall, it's not the way I work.

I hope you enjoyed my first FAQ

– Dina Kelberman

cognivore 1 day ago 0 replies      
I love the one that goes from cookie dough to car plowing through the sand dune!
accounthere 1 day ago 0 replies      
It would be nice if you could change the seed image. Someone needs to automate this.
kraig911 2 days ago 0 replies      
I wonder what would happen if they took every nth image (say, every 20th) as a start for another similarity pass.
dfar1 2 days ago 0 replies      
The transitions are brilliant!
jstanley 2 days ago 0 replies      
How to automate this:

- pick a seed image and then:

- run one of those "caption generating" algorithms on it (are any open source? what are the best ones?)

- feed the caption into google images

- pick the first result

- repeat

Probably also wants something simple to prevent cycles and fixpoints.
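The steps above, in code. The caption model and image-search call are passed in as plain functions, since neither is specified (real implementations would plug in here); a seen-set handles both cycles and fixpoints:

```python
def build_chain(seed, caption, first_result, max_len=100):
    """Follow seed -> caption -> first search result -> ... until a repeat.

    caption(image) -> str and first_result(query) -> image are supplied by
    the caller; they are stand-ins for a captioning model and an
    image-search API.
    """
    chain, seen = [seed], {seed}
    while len(chain) < max_len:
        nxt = first_result(caption(chain[-1]))
        if nxt in seen:           # cycle or fixpoint: stop
            break
        chain.append(nxt)
        seen.add(nxt)
    return chain

# Toy stand-ins just to show the shape of the loop:
toy_captions = {"cat.jpg": "cat", "dog.jpg": "dog", "wolf.jpg": "wolf"}
toy_results = {"cat": "dog.jpg", "dog": "wolf.jpg", "wolf": "cat.jpg"}
chain = build_chain("cat.jpg", toy_captions.get, toy_results.get)
# chain is ["cat.jpg", "dog.jpg", "wolf.jpg"]; the next hop would repeat.
```

Stopping on the first repeat avoids both fixpoints (a result that captions back to itself) and longer cycles.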

davidhariri 1 day ago 0 replies      
This is so cool!
danschumann 1 day ago 0 replies      
Kevin Bacon
arxpoetica 2 days ago 1 reply      
Somebody needs to automate this.
jahnu 2 days ago 2 replies      
Urgh... talk about giving tech people a bad reputation as philistines.

It's a wonderful, thoughtful project.

Twitter Names Jack Dorsey Chief Executive nytimes.com
282 points by jvrossb  3 days ago   202 comments top 25
ar7hur 3 days ago 6 replies      
I still think it's impossible to run two companies that both require so much attention, given their stage.

Yes Jobs had Apple and Pixar, and Musk has Tesla and SpaceX. But both Pixar and SpaceX don't really require day-to-day CEO attention, they follow long term plans (movies, rockets). That's really different from Square and Twitter, which are both, in their own ways, in a kind of trouble.

I'd love to be proven wrong though -- so good luck, Jack!

inthewoods 2 days ago 3 replies      
I keep wondering if this is going to be like Marissa Mayer joining Yahoo - much fanfare, some movement, but at the end of the day she's been unable to significantly move the needle. I just wonder if Twitter fundamentally isn't as scalable an idea/product as Facebook et al. It's obviously Jack's job to make it that - but what if the basic form of Twitter just isn't as compelling, no matter how you change it or dress it up?
uptown 3 days ago 1 reply      
"Twitter Feels Compelled to Point Out That Twitter CEO Is a Full-Time Job"


codingdave 3 days ago 2 replies      
Twitter does not need to gain new users - it needs to reactivate old users. The statistic I cannot get past is that they have lost one billion users. That is a much different problem than most companies are dealing with.
nreece 3 days ago 5 replies      
Here's his announcement on Twitter, among other details: https://twitter.com/jack/status/651003190628872192
vit05 2 days ago 0 replies      
I hope Jack realizes that Twitter is not a great tool to chat with friends / neighborhood / family, but an incredible tool to reach people who are away from you, social and geographical.

They need to focus on how easy it is to approach a movie star, your favorite player and musician you like. How easy it is to show that you like a brand or you love a new TV show. And talk about some major events that are happening around you.

For people who have no idea what it is, they just see it as a tool to talk to someone. And most of the time, you do not have any feedback on what you wrote. In fact, you may not have any idea how many people have read what you have written.

So I think if they focused on showing how Twitter is great for expanding the boundaries of who you can talk to, and made it easy to see feedback from people about what you have said, they could attract more people.

ilamont 3 days ago 0 replies      
"It's exhilarating for him," one long-time confidante said. "He draws energy from how to think about both companies."

Whether by coincidence or design, Dorsey's comeback closely resembles the "Steve Jobs Narrative," a modern myth Silicon Valley entrepreneurs hold up as a map to absolution. (1)

I'm going with "by design." His ego risks the futures of both companies, unfortunately. Surprised the Twitter board caved on allowing a part-time CEO.

1. http://recode.net/2015/10/02/why-jack-dorsey-is-ready-to-sav...

Vecrios 3 days ago 1 reply      
Chris Sacca would sleep very soundly tonight.

Jokes aside, I think Jack is the man for the job. He has proven capable at Square. I hope he does the same with Twitter. The company needs to take advantage of the huge market share it has.

antiffan 3 days ago 1 reply      
Slightly off-topic, but I recently interviewed at Square, and I also have some long-time contacts there. I can honestly say they have some of the most brilliant engineering minds I have encountered working there. Whether it's a trickle-down effect from Dorsey or otherwise, they have succeeded in recruiting and retaining many extremely talented individuals.

I am curious if anyone has recent anecdotes in regards to the engineering talent at Twitter (aside from the talent that came in with Periscope).

harrygold 2 days ago 0 replies      
Twitter has incredible value as a tool to break realtime news and events. The problem is, it's buried under a deluge of fruitless and redundant tweets that nobody wants to dig through. If they can figure out how to surface the value: there's gold in them hills!
yuhong 2 days ago 0 replies      
From https://twitter.com/SJosephBurns/status/640698530038943748:

"Twitter needs a CEO who is an @elonmusk with the Street and a @pmarca in the tweets. - @zerobeta"

jackgavigan 3 days ago 2 replies      
I can't see this working out. Dorsey probably is the best candidate for CEO of Twitter, and I think there are some very low-hanging fruit to pick when it comes to solving Twitter's product issues (I disagree with Startup L Jackson - Twitter's product is not fucking fine). The market reaction has been positive - TWTR opened up 3.15% just now.

However, Square is a different story altogether. Square Wallet was a damp squib, and Square's facing competition both from established players like Intuit and more recent entrants to the market, like iZettle. Leading Square and bringing it to market seems like a full-time job to me, and I wouldn't be surprised if its IPO valuation takes a hit because it lacks a full-time CEO.

sjg007 2 days ago 0 replies      
Stay tuned for Twitter-Square integration. "Buy with Twitter" should go through Square.
fitzwatermellow 3 days ago 4 replies      
Congrats, @jack

Would love to see twtr become a platform for a myriad of third-party apps. Wouldn't it be great to place a market order by messaging @bats "buy $TWTR 10,000 30.00". Or order a limo with @uber "2 people in one hour to jfk airport"?

tscosj 2 days ago 0 replies      
What I don't understand is the "to scale" part of the article. If you cannot sell shit to 225M people, how are you gonna sell it to 1B?
samfisher83 3 days ago 1 reply      
I think using twitter as a platform for ordering things etc. would be something he could do to make money. They should open it up to developers and charge them for cool apps.
bru_ 3 days ago 6 replies      
People are worried that Jack will be too busy between Twitter and Square, but what they don't know is that the dude's been completely spaced out for the last 4 years, making the same motivational presentation about his Dad's pizza shop to anyone that will listen. Nobody at Twitter talks to Jack anymore, even the most Senior people.
rch 3 days ago 1 reply      
Who had the most significant role in getting Twitter results prominent placement in Google desktop results? That seems like a mutually beneficial development.
NH_2 3 days ago 0 replies      
I think this will be good for Twitter. Jack will be able to make big identity and design decisions over the next few years with less pushback from the employees and users than any non-founder CEO. He's already begun by declaring that tweets will extend beyond 140 chars, and the response has been apprehension instead of outright rejection. And for Twitter to remain competitive with Facebook, even as Facebook builds Notes and live-streaming video to cater to journalists, Twitter is going to need to make many of these decisions.
supergirl 2 days ago 0 replies      
when twitter finally pops everybody will see the bubble. or maybe uber pops first
piratebroadcast 2 days ago 0 replies      
Twitter should buy Slack and own the workplace communications market.
curiousjorge 2 days ago 1 reply      
What if you created a website that, no matter how hard you try, can never make enough money to justify its insane valuation? You hire the guy that created it. If that fails years down the road, hire a blonde.
kylebgorman 3 days ago 7 replies      
I'm short Twitter. The fact of the matter is that Twitter should never have become a multibillion-dollar company. There is no barrier to entry into this space---any competent web developer could make a non-scaling Twitter in an afternoon---except network effects, and those have proved weak due to poor user experience, particularly for new users.

Twitter should have treated itself like a utility, and focused less on the online advertising race-to-the-bottom that it is sure to lose due to the aforementioned poor user experience and negative sentiment about the platform's future; this announcement is only going to continue to contribute to poor impressions.

The other monetization directions they have played around with---namely selling access to researchers and advertisers, and certifying identities of accounts for celebrities and brands---are a much better fit for the platform, and would have sustained a fast-moving company of 50 hotshot engineers. But the constant pressure to get bigger and bigger has served Twitter poorly. I'm sad to say that I think it will be a ghost town in a few years.

kbenson 2 days ago 0 replies      
I first read this as "Twitter names Jack Donaghy Chief Executive" and thought I was in for a good laugh. Now I'm disappointed. :/

Edit: To be clear, I thought Twitter was just expressing a good sense of humor, and I was disappointed in that I was expecting something humorous but didn't find that.

CBCrypt: Encrypt from the client rather than send passwords to servers cbcrypt.org
259 points by dorfsmay  2 days ago   209 comments top 45
amluto 1 day ago 6 replies      
> It should be ok to reuse passwords at different sites - provided that the passwords are never exposed to those sites.

> To cryptographers, the phrase "prove you know something secret without exposing it" instantly suggests "use asymmetric cryptography."

No, wrong, stop right there!

The phrase "prove you know something secret without exposing it" instantly suggests that you should figure out what "exposing" means before you try to design a protocol. If I'm a user reusing a password between two sites, then it's unavoidable that either site can try to brute-force my password. However, it would be rather nice if an attacker who hasn't compromised either site can't brute-force my password without interacting with one of the servers for every single guess.

That property is achievable using well-understood techniques, and this blind use of asymmetric cryptography fails to achieve this goal and is unnecessarily complicated (what's the PRNG for?). The primitive you want is an "augmented PAKE". The IRTF CFRG is very slowly working on standardizing one, but in the meantime there are several high-quality augmented PAKEs out there.

Also, note that I said "primitive". A strong primitive does not automatically make a good protocol. CBCrypt's docs say "Keypair is ECDH 256, derived from SeededPRNG". This raises all kinds of warning bells: while there are perfectly good protocols based on ECDH, ECDH by itself just gives an /unauthenticated/ session key, and from a 30-second look at the code, CBCrypt appears to be trivially vulnerable to a replay attack in which a malicious server A convinces a standard client to give it a valid authenticator for a session to a different server B.

Someone1234 1 day ago 7 replies      
So essentially this is a really convoluted way of concatenating username, password, and domain then running scrypt() on it, and sending that to the server as the password instead of the raw password itself?

I actually read the Github page for this, and glanced over the C#. There's no realistic example of how this would work in practice, and I have unanswered questions. This would legitimately solve some phishing scenarios and MIGHT slightly mitigate password reuse scenarios (since they know username + domain, they now need to spend time cracking scrypt() on the password). The arguments that it helps user privacy (NSA something something) are tin-foil-hat-wearing gibberish.

Overall: Meh. If they're serious about this, they need more than a vague proposal.

PS - And a password manager solves all of the named issues with less complexity, in my opinion.
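For readers unfamiliar with the idea being criticized, here is a minimal sketch of "concatenate username, password, and domain, then scrypt it client-side." The salt construction and cost parameters are illustrative only, not CBCrypt's actual scheme:

```python
import hashlib

def derive_login_token(username: str, domain: str, password: str) -> str:
    """Client-side: derive a site-specific token instead of sending the raw password.

    Username + domain act as a deterministic salt, so the same password yields
    unrelated tokens on different sites. Cost parameters here are illustrative.
    """
    salt = (username + "@" + domain).encode("utf-8")
    token = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                           n=2**14, r=8, p=1, dklen=32)
    return token.hex()

# The same password produces different tokens per site:
a = derive_login_token("alice", "example.com", "hunter2")
b = derive_login_token("alice", "other.org", "hunter2")
assert a != b
```

The server then stores and compares this token like any other password, which is exactly why, as noted above, it is still brute-forceable offline if the server's database leaks.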

dchest 1 day ago 5 replies      
"CBCrypt deterministically generates a public/private keypair unique to each specific user, on a specific server, using a specific password. Any variation of any of these factors results in an entirely different and unrelated keypair."

If it deterministically generates a keypair from a password, then an attacker acquiring the public key has acquired the equivalent of a password hash: the public key now becomes a verifier.

"The worst attack a malicious server can mount is an expensive brute force attack, attempting to overpower the rate-limiting function and eventually guess a password that recreates the exposed public key."

Almost the same result as if the server just stored an scrypt hash, but more complicated, and without a random salt. The solution for the problem just makes it worse.
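dchest's point is easy to demonstrate. A toy sketch follows, using a plain hash as a stand-in for the deterministic key-derivation step (an assumption for brevity; the real scheme uses scrypt plus ECDH, which only changes the per-guess cost, not the attack):

```python
import hashlib

def derive_verifier(password: str, user: str, domain: str) -> bytes:
    # Stand-in for "deterministically generate a keypair": any deterministic
    # function of the password behaves the same way from the attacker's view.
    return hashlib.sha256(f"{user}:{domain}:{password}".encode()).digest()

# The server stores only the "public" verifier...
stolen_verifier = derive_verifier("letmein", "alice", "example.com")

# ...but anyone holding the verifier can test guesses offline,
# exactly as with a stolen password hash:
guesses = ["123456", "password", "letmein", "qwerty"]
cracked = next(g for g in guesses
               if derive_verifier(g, "alice", "example.com") == stolen_verifier)
assert cracked == "letmein"
```

The only real difference from a stored scrypt hash is how expensive each guess is, which is dchest's conclusion above.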

hinkley 1 day ago 1 reply      
I think the Stanford folks were on to a better solution for this problem. (I have no affiliation)

Use all of the authentication data as the seed for generating a key pair. When you create a new password you send the public key to the server, where it is stored. The private key is regenerated on the client on every authentication challenge.

I submit for your perusal:


vixamincessidnt 1 day ago 0 replies      
Why not use the existing secure remote password protocol instead of inventing a new protocol? https://en.wikipedia.org/wiki/Secure_Remote_Password_protoco...
tptacek 1 day ago 2 replies      
TryValidateChallengeResponse at:


... appears to have a straightforward, helpfully inlined timeable comparison of challenge responses.

I don't understand what this package is accomplishing. The description leads off talking about the grave privacy implications of storing scrypt password hashes. But then it stores what is in effect an scrypt password hash on the server.

ajanuary 1 day ago 1 reply      
The website does a really poor job of emphasising that it's using asymmetric cryptography and some sort of challenge/response protocol, and how that is really where the improvement over the status quo is. It's _not_ just sending a hashed password.

I actually don't know very much about challenge/response protocols, and I'm struggling to find any good resources, so take this with a grain of salt. You could build a protocol out of digital signatures (probably there are massive things wrong with this; don't do this, here be dragons, don't build your own crypto, etc.):

1) When the user registers with the site, the client generates a public/private key pair and sends the public key to the server, which stores it.

2) When the user wants to log in, the server sends a challenge to the client. The client uses their private key to sign it and sends the signed message back to the server. The server uses the public key stored against the user to validate the signature.

Obviously this is more than just hashing the password on the client. If the server sends a different challenge each time (possibly time based like OTP?), you're protected against replay attacks.

The problem is you need a key pair. You don't want to generate a random one because then the user has to manage it and keep it with them and copy it from machine to machine etc. So the problem solved on the website is how to generate a good key pair from just a site identifier, username and password.

How effective it is at that, I have no idea.

[edit] other posts have helped me realise the public key is just an unconventionally derived hash. Even if it's used in an unconventional way for authentication, you can brute force it the way you brute force any general hashing algorithm: key == generate_key(password_guess)
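The register/login flow described in steps 1) and 2) above can be sketched end to end. Since the Python standard library has no signature primitives, an HMAC key stands in for the keypair below; this is a symmetric simplification (the server could forge responses, unlike with real signatures) and only illustrates the challenge/response shape:

```python
import hashlib
import hmac
import os

# Key derivation: deterministic from (password, domain), as in the scheme above.
# Assumption: a real design would derive an asymmetric keypair (e.g. Ed25519)
# and store only the public half server-side.
def derive_key(password: str, domain: str) -> bytes:
    return hashlib.scrypt(password.encode(), salt=domain.encode(),
                          n=2**14, r=8, p=1, dklen=32)

# 1) Registration: client sends the (stand-in) verification key to the server.
client_key = derive_key("hunter2", "example.com")
server_stored_key = client_key  # with real signatures: the public key only

# 2) Login: server issues a fresh random challenge; client proves possession
# of the key without ever sending the password or the key itself.
challenge = os.urandom(16)
response = hmac.new(client_key, challenge, hashlib.sha256).digest()

# Server-side verification. Because each login uses a new random challenge,
# replaying a captured response against a fresh challenge fails.
ok = hmac.compare_digest(
    response, hmac.new(server_stored_key, challenge, hashlib.sha256).digest())
assert ok
```

The fresh-challenge-per-login detail is what provides the replay protection mentioned above.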

hasenj 1 day ago 2 replies      
How does public-key authentication work? Over things like, ssh, I mean.

Does the client send the private key to the server for verification? I'm not sure but I doubt it. At least I hope that's not what it does.

My guess would be that the server sends an encrypted message of some sort that the client cannot decipher unless it possesses the private key. It deciphers the message then sends it back to the server in deciphered form to say: hey, see, I have the private key, here's the proof: this is the message you sent me earlier. I couldn't have deciphered it without having the private key.

Can this scheme be used with passwords as well? As in, treat the password as a private key, so the client never has to send it over the network.

  Server ---- [garbled-message-challenge] ---> Client
  Server <--- [ungarbled-message-response] --- Client
Does this scheme work?

nitrogen 1 day ago 1 reply      

Does this provide any advantages over SRP?

ceronman 1 day ago 3 replies      
The main problem this project is trying to mitigate is the password reuse. If user A uses the same password for a lot of sites, a malicious person B is able to convince A to join a new service, like a cat photo sharing app or something like that. If A uses his/her same usual password, B automatically gets access to all A's accounts.

The way this project tries to solve the problem is by hashing the password from the client side so that sites never get the real password, but a hash instead. As other commenters have pointed out, this is not entirely effective because the attacker still can crack the hash. It just makes the problem more difficult depending on the size of the original password.

The real solution to this problem is to stop sharing password at all. Use a password manager for that purpose.

devit 1 day ago 2 replies      
The hard part is getting browsers to all implement something like this (all with the same specification), which for some reason hasn't been happening, despite the fact that a mechanism like this is an obvious design that is obviously better than sending passwords.
zobzu 1 day ago 0 replies      
Passwords are one thing, but it's sending information to servers that is the issue.

Things like U2F have a better chance of solving the password issue.

Information though.. i'm happy with google apps processing my data but i'd rather they can't actually store the data on their servers.

Ex: let's say you're a Gmail user. Wouldn't it be nice if only your browser could read the contents and Gmail servers only metadata?

Obviously, Google wouldn't want that (remember, you're the product) - but that would be nice in the grand scheme of things, wouldn't it? Like, you know, instead of being slaves of corporations, just have them do stuff that's good for us and what not (dreaming out loud right now)

jakeva 1 day ago 3 replies      
"If a user attempts to login to a malicious site accidentally or because they were tricked by a similar but different name, the malicious server will only gain knowledge of a derived public key. The attacker will not be able to impersonate the user at any other server, or even on the compromised server, because the attacker has not discovered either the user's private key for connecting to the compromised server, or the user's password that could be used to derive the private key on this or other servers."

Doesn't this assume a lot about the state of the internet? I mean a phishing attack could easily harvest the user's password, then the attacker effectively has their private key. Unless every user were trained well on how to determine whether the connection can be trusted, but that's still a problem today in 2015 even with https.

faleidel 1 day ago 3 replies      
Why can't we use PGP to log in to websites?

That way the server only has your public key (and everyone can have it anyway).

dheera 1 day ago 0 replies      
I still would not reuse passwords on different sites, unless the code that is being used to communicate with their servers is open-source and has been vetted by either myself or others who have some sense of cryptography.

Unfortunately, neither mobile applications nor the JavaScript on most web pages are open source, so there's no guarantee that they have implemented this strategy correctly. You don't know what happens after you type a password into a text box.

It's also too idealistic to expect that enough people will implement this kind of system such that you can advise people that it's "okay" to reuse passwords.

huhtenberg 1 day ago 0 replies      
They are not the first to invent an "RSA-keypair-by-password" scheme and certainly not the last. The main issue with these is the fact that the estimated entropy of English is about 1.2 bits per symbol [1], so even if you are using pass-phrases, you are still generating an RSA keypair with a PRNG seeded with only a few tens of bytes of randomness.

[1] I think this was in one of Shannon's papers, but not sure. It should be very easy to find though. Basically once you start digging around a topic of using passwords as PRNG seeds all relevant research surfaces very quickly.

creshal 1 day ago 4 replies      
Whatever happened to challenge-response schemes in the vein of CRAM-MD5?
aggieben 1 day ago 3 replies      
My reaction is one of those "I'm not sure if I should feel stupid or not" moments, but: rightly-built applications already don't expose passwords to employees in that they aren't (or at least, shouldn't be) generally persisted. Usually the plaintext passwords live in volatile memory briefly while the server computes a hash of some kind for comparing with what is stored. I suppose an admin could run a web application under some sort of memory profiler, or take a dump at exactly the right moment, and capture a plaintext password, but that seems pretty far-fetched.

CBCrypt seems to me to introduce a lot more complexity into the entire application space with very little real gain. I'm certainly open to being convinced (especially because I'm waffling on whether or not this is a dumb reaction).

jtwebman 1 day ago 0 replies      
Servers are far more secure than a user's computer, so I would trust a server over even my own computer. Anything a user can do very badly if they don't know what they are doing, they will. I have been in software long enough to know that. Good luck, but I just don't think this is the solution, as it still relies on SSL, which I think is the weakest link, not the server or the make-believe bad employees.
smartera 1 day ago 1 reply      
This is all nice; and as hinkley pointed out, Stanford already proposed a better protocol. The primary issue in my opinion is how to trust that the JavaScript arriving from the server does what it's supposed to do.

In my humble opinion; we need to use the blockchain to save hashes for trusted and open-audited JavaScript files to be confirmed by the user. This, however, needs to be done at the browser level to avoid an endless trust loop with JavaScript/browser extensions. blockstrap.com has figured out how to put file hashes on the chain; so technically it shouldn't be a major challenge to do the first part. The browser part is where it gets tricky!

yaur 1 day ago 0 replies      
The server has all the inputs except the password and enough information to validate if it has guessed the right password. So password guessing/dictionary attacks are still viable, especially with state level adversary. BUT users are given a false sense of security because "the password is never sent to the server". So we can expect users to (on average) use worse passwords and reuse passwords inappropriately.

IMO, if you are going to use client certificates you should be just be using standard client certificates with whatever your OS provides to generate them.

nfirvine 1 day ago 0 replies      
"due to Third Party Doctrine, users forfeit their legal right to privacy by merely making it possible for sysadmins to access their information. This means the sysadmins can legally share users' information and passwords with additional third parties, and the NSA can legally spy on it all, without any need for a warrant or probable cause."

This sounds super fishy to me; feels like a misinterpretation made through tin-foil lenses. Citation please?

AstroChimpHam 1 day ago 0 replies      
He keeps talking about how this is going to prevent malicious employees from getting at your data or sharing it with the NSA. It's not. Facebook, Google, and every other website you use will still have unencrypted access to all of the information you give them on their servers. The only thing this would keep them from having is the password you use to protect that other stuff they already have access to anyway.
ricksplat 1 day ago 0 replies      
As far as my limited understanding goes, MS-CHAP v2 allows you to store a hash of your password on the server rather than storing the actual password. Then through a convoluted sequence of hashing algorithms it generates another hash and compares that with a hash you have provided. I think it's been around since at least 2003 and though it has itself been cracked (cf Moxie Marlinspike) I don't believe it's ever expected to be used outside of an encrypted tunnel (cf Matthew Gast).
phlay 1 day ago 0 replies      
This looks very similar to my password authentication method PSPKA or 'password seeded public key authentication'. See https://github.com/phlay/pspka, if you are interested.
amelius 1 day ago 0 replies      
This is only a tiny improvement. Any password cracker can figure out your password just from the hash anyway. What we really need is industry support for and general acceptance of two-factor passwords. I want to generate one-time passwords from my smartwatch, have them sent to my keyboard or computer directly, and be able to log in to gmail and through ssh alike.

Is it just me, or does the current way we handle passwords feel archaic, even while we still use them?

homakov 1 day ago 3 replies      
I'm working on the same problem, and there's a working prototype https://github.com/sakurity/truefactor

The problem is bigger than that - we should stop making users type password at all. There should be an authentication module any website could use to store and retrieve credentials from. Check out Truefactor.

rsy96 1 day ago 0 replies      
Sounds like a good idea. I'd like to see a more technical description of how it derives the public/private key pair from a password. In addition, the language choice of C# is probably going to limit its popularity, given how C# is still tied to the Microsoft world (I know they are open sourcing .NET, but that has not matured yet).
kilovoltaire 1 day ago 0 replies      
I don't see how this can claim to prevent phishing attacks. A fake site can still just steal your username and password as usual, and then go to the real site and log in with them.

Am I missing something?

Obviously there is still a benefit to not sending your password to the real site, but their anti-phishing claims seem phishy.

nfirvine 1 day ago 0 replies      
How is this not functionally equivalent to TLS client-side certificates? Yes, I understand that there's a UX problem, but CBCrypt has to solve that in addition to the crypto problem.

Also, C#? What is this, a Silverlight applet? :P

mrbig4545 1 day ago 0 replies      
so the hash of the password becomes the password. i don't see the gains tbh
Mimick 1 day ago 1 reply      
I think LastPass (assuming it's secure) solved this problem on a more practical way.

You get a random generated password for every website, and it won't suggest to use it unless you are on that website.

kpcyrd 1 day ago 0 replies      
I think it's amazing that there are so many security-conscious people who are very sensitive about passwords, and storing them as plaintext is worse than Hitler, but nobody questions how we're still stuck with plain shared secrets for authentication in 2015. If the concept of passwords wasn't so fundamentally broken, a password leak wouldn't be an issue, not even if it's plaintext.

"Oh no, my shared secret for this site got compromised and now all my accounts are compromised because I authenticate to 50+ other sites with the same secret."

This is like putting your only private key on every service and complaining if bad things happen.

bagosm 1 day ago 0 replies      
It does one thing better than a password manager: it prevents rainbow attacks via forced localized salting. That's about it... Next HTTP can solve other problems...
wnevets 1 day ago 1 reply      
What makes this different from SQRL?


z3t4 1 day ago 0 replies      
The only thing that will save you is your enemies incompetence :P

Make sure you encrypt ALL communication and not just the password or cookie.

lisper 1 day ago 1 reply      
Or people could start using something like this:


ouistiti 1 day ago 0 replies      
Or better still, just create a RESTful stateless API and implement JWT?
CCs 1 day ago 1 reply      
There's an issue with passwords/keys not sent to the server: what if the user forgets the password?

No "Forgot your password?" functionality available (reset token), since the server does not store password (hashed or otherwise).

aimatt 1 day ago 1 reply      
Still just as susceptible to rainbow tables.
meritt 1 day ago 0 replies      
How is this any different than every other "hash a bunch of shit and throw in a salt" half-baked encryption idea people continually invent?
peterwwillis 1 day ago 1 reply      
Throw a password on SSL/TLS client certificates and you have something that sounds about the same. Nobody used it because it was confusing to users.
zaroth 1 day ago 0 replies      
As a client-side hashing approach, I think it's a bit overly complicated but probably not fundamentally flawed. I would use a singular hash function to produce the byte stream, instead of introducing a PRNG and seeding it from the hash function just to get a byte stream. But the bigger conversation is, what role can client-side hashing play, either as a full-on replacement to server-side hashing, or perhaps just to augment the server-side hashing everyone is doing today?

Stepping back a minute, it's helpful to consider the fundamental reality of almost any password validation scheme. In this case we are storing a "public key" on the server, but it is more aptly described as the "password validator". Almost all designs will have such a thing (a notable exception is PolyPassHash which cleverly avoids a one-to-one mapping of users to validators, and uses this to try to defeat some offline attacks, with some interesting trade-offs).

If you combine public information, the password validator, and the correct password, your 'password validation function' will complete and return 1. Therefore, if you have the necessary public information (i.e. domain name, username, whatever else is stuffed in the hash) and you steal the password validator database, you now have everything you need in order to run the so-called "offline dictionary attack".

In SRP, an obvious inspiration to this CBCrypt design, the password validator database is basically computed entirely by the modexp() work required to generate the pub/priv keys. There is no additional/configurable key stretching which can be applied other than choosing the bitlength of the keys.

Here we have a configurable round of hashing (Scrypt or PBKDF2) which is applied before the key derivation step. This provides additional protection against offline attacks, depending on how hard the work factor is set. History has shown us, a key detail in ensuring secure password storage is actively managing the hashing function and work factors over time, or else watching Moore's law strip away any protection we thought we had.... So the next key question is, how do we actively manage and maintain a sufficient work factor?

When evaluating a hashing function, one key question to ask is can we add to the work-factor of existing passwords offline (without having the password in RAM) or do we need the user's password resident (somewhere, client or server-side) in order to recompute a new hash with the new work factor? In this case, by design it's impossible to offline harden the existing passwords. You would need to first login the user, verify their password, and then recompute a new hash (perhaps transparently, perhaps with some UI component if it is too slow otherwise) and re-derive and update the key server-side, in order to change the work factor.

But really, the primary consideration with any client-side hashing design, is how to support the extremely wide array of devices which will need to register/login to your site, and what's the user experience when they click the Login button?

From CBcrypt.cs, where the proof of concept is written in C#.NET using BouncyCastle's Scrypt library:

  /* cost 4096, blockSize 8, parallel 1, cause SCrypt to take ~175-350ms on Core i5-540M, 2.5GHz
   * This is in addition to the approx 100ms-200ms to generate ECDSA keypair. */
Now, in reality we are going to be running Scrypt in Javascript on a mobile device. I found what seems to be a highly optimized implementation [1] and plugged it on Dropbox [2] so I could point my browser at it to see how slow just the Scrypt ran. On my laptop it was taking about 800ms and on my iPhone 5 about 2500ms. I would say that means for about 25% of the mobile population this function runs about an order of magnitude too slow for comfort... it will impact user experience, it could impact bounce rate, etc. This doesn't even include the ECDH part of the calculation.

This is definitely the hardest question to answer with client-side hashing, or even simply "server-relief" hashing where a portion of the overall hashing may optionally be pushed to the client, particularly during DDoS type events. How do you identify which clients are sufficiently powered to reasonably perform the hashing, or what kind of UI needs to be presented if you could be potentially locking up a browser for 5+ seconds while the hashing is running? Users are not well trained to expect logins to be slow -- a stall in the browser is almost always a reason to simply resubmit the form. It would be great if users associated their browser tab locking up, their fans spinning to full speed, and their battery power plummeting with "Whoohoo, this site is ROCKING an awesome work factor!" but in the real world we see that companies want hashing run on powerful CPUs server-side, with the capability to offline-upgrade hashes over time.

The final nail in any client-side hashing approach is when the claims start to cross over to "server never sees the password". I don't see this as a valid argument any time the code to perform the hashing is being sent over the same trusted connection from the server itself. There are corner cases, for example if you are shipping compiled and signed code which is hashing client-side, or if you take special precautions with browser plugins to ensure you are running a specific build of the Javascript, but that simply doesn't apply to 99.99% of use cases, and so I personally dismiss those benefits as not likely to be achieved in the real world. What we're left with is technically a sub-optimal user experience with additional trade-offs. So in conclusion, I am currently quite bearish on client-side hashing on the web in general.

[1] - https://github.com/tonyg/js-scrypt
[2] - https://dl.dropboxusercontent.com/u/42286273/site/test_scryp...

caskance 1 day ago 0 replies      
I'd rather just use SAML, OAuth, SQRL, etc.
balaa 1 day ago 1 reply      
Isn't this basically what Kerberos does?
What Zillow doesn't want you to know about its listing gap buildzoom.com
250 points by justhw  1 day ago   176 comments top 26
bcg1 1 day ago 5 replies      
Sold my house this summer. My wife listed the house on Zillow and took nice looking pictures and put a good description. We also paid a real estate agent $300 to list our house in the MLS (and we would handle the rest of the sale). Once our listing was in the MLS Zillow brought in the crapified compressed JPEGs from the MLS feed and overwrote our description etc, and locked us out from further edits. Fail.

Zillow is going about this all wrong... their advantage is that they aren't the MLS system, they are the best positioned to be an alternative to the NAR cartel. If anything THEY should have agents in every town who will put your house into the MLS for you for $300, as well as allow enhanced listing on their site at the same time... they might actually make some money that way, and also it would solve their MLS problem. Instead they seem to want to beg the cartel for a seat at the table. Pretty sad.

meritt 1 day ago 2 replies      
Zillow has substantially mitigated the loss of Move.com/Realtor.com by forging direct relationships with numerous MLSs since the cutoff in April.

> Since January, more than 300 MLSs have signed agreements to send listings directly to Zillow and Trulia, providing their members access to the largest audience of home shoppers on mobile and Web 1.


The sampling methods used by the author are extremely poor. Miami, FL on Zillow.com and Realtor.com are substantially different areas. Just do a search yourself and look at the map. Realtor appears to be using the metro while Zillow's site defaults to "Miami" only. They need to extract 100% of inventory and use exactly the same geofenced boundaries if they want an actual 1:1 comparison. Their conclusion may very well be correct but the data sure as hell does not support it.

If you want to criticize Zillow, there are far easier methods than this one. How about that Trulia acquired MarketLeader for $355M (Apr 13), Zillow acquired Trulia for $2.5B (Feb 15), and Zillow just sold MarketLeader for $23M. How much of Trulia's $2.5B pricetag was in recognition of Market Leader's "value"? I'm guessing we'll see at least $200-250M or so drop from Zillow's goodwill ($1.8B as of 6/30/15) on their balance sheet for Q3-15.

gshx 1 day ago 2 replies      
If Zillow can solve the "problem" of putting all the docs (disclosures, inspections, offer docs, title docs later on, etc) up with the listing along with making a network of handymen (to help out with prettifying a house for sale), most buyers and sellers will be happy to pay them 0.5-1% instead of the seller having to pay the agents 4-5%. There's generally not a whole lot of work in buying or selling a house including agents doing events like open houses and helping with "discovery" and the buyer-seller matching problem. The offer process itself is also quite simple and can be done online. That said, the one benefit of an open house/tour hosted by a neutral party, is that it lets potential buyers easily assess a house without having a biased seller in attendance. This can also be managed and does not really require a real estate agent. Zillow and similar services like Trulia have a fantastic market opportunity in front of them.
Simulacra 1 day ago 3 replies      
When I was searching for a house I used 5 or so apps to do so. I found that Zillow had many, many listings that were very old, but were placed up top with the dates changed to make it look like they were just listed. However, checking the actual data for those listings shows they were not pulled from the market and relisted; Zillow was just constantly boosting really crappy houses that had been on the site for a long time.

In short, trust none of them, use multiple services, and keep looking constantly. I found my house through my agent's MLS system, which was about the time it showed up on the aggregators.

MikeKusold 1 day ago 2 replies      
When I was looking for a condo this past summer, I used Redfin. Since Redfin ties into MLSs (basically a list that all realtors use), I knew about listings before my agent could email me about them.

I highly recommend it, even if you have a non-Redfin agent.

imroot 1 day ago 5 replies      
Getting and keeping real estate listings up to date is a big pain in the ass. There's no standardized residential listing data, and while you might get RETS feeds or data dumps from the MLS nightly, it's still on you to ensure that all of the information is parsed correctly and displayed in accordance to the MLS'es standards (or risk being cut off).

Often, MLSes have contract terms saying that, regardless of the source, their information must be the canonical source of information in some cities -- which is a huge problem when dealing with REO (Real Estate Owned) properties and the banks that want to sell them.

I know of a startup that eventually just resorted to scraping realtor.com (and I'm sure that they're still doing it to this day) instead of dealing with the various headaches of managing the contracts with the MLSes and RETS providers.

debacle 1 day ago 0 replies      
The MLS system is a disaster. Think Alien 2 levels of mucousy, dark and dank ventilation shafts. Zillow has always worked really well for all of my uses (shopping, price checking, looking at neighborhoods, etc), and it's done so in a way that no other realty website ever has.

The reality is that realtors have a huge vested interest in making it more difficult to shop for a house and generally rely on information asymmetry in a massive way. Zillow is a massive blow to that barrier and I hope they succeed.

Disclaimer: I've worked for a handful of realtors and with several MLS systems in the past.

rgbrgb 1 day ago 3 replies      
One point that this analysis misses is that there's often a significant delay between the time a property hits the market (lists on MLS) and the time it hits aggregators like Zillow [0]. If you're looking at week old listings in hot markets like LA and SF, then you're looking at a batch of inventory that has already been picked through by savvier buyers.

For this reason, we always recommend that our buyers use a brokerage-quality data feed like ours [1] or Redfin's to monitor for new listings. If you have a reliable data feed and check what's new once a day, then you never miss out on the best properties -- much less stressful than clicking and re-clicking tiny icons on a map.

[0]: http://www.inman.com/2014/02/14/los-angeles-claw-is-first-ml...[1]: https://www.openlistings.com/setup

jasode 1 day ago 1 reply      
FYI: To add some color to the article, here's more information about the different relationships of MLS with Zillow and Redfin from a former Redfin intern (posted July 28, 2014):


ohitsdom 1 day ago 2 replies      
> So as not to violate Zillow.coms terms of service we have done so manually (hence the limited number of cities).

Can sites have a legally enforceable terms of service that bans automated access? I understand sites need to protect their servers when automated traffic is heavy and impacts performance and cost, but can they ban it even when it's limited? If the content is publicly accessible and the traffic is reasonable, I'm surprised a ban would be legally enforceable. And as this blog post shows, the data can still be collected manually, so the ban isn't very effective.

borkabrak 1 day ago 2 replies      
..aand it's gone.

I just got:

"This post has been temporarily removed at the request of Zillow. We are collaborating with them to write a more complete version of the story, and will have an updated version posted on October 7th."

Nothing creepy about that. It certainly leaves me feeling that whatever the article said about Zillow hit them pretty close to home.

cwilkes 1 day ago 3 replies      
What annoys me about Zillow is that their "number of days on Zillow" is totally bogus. A house down the street was on the market for 2 months (at a highly inflated value). Time on Zillow? 2 weeks.

Maybe the time on Zillow is based on repricing or taking it off the market and putting it back on? Either way if you were looking for homes unless you kept track of an individual house you wouldn't be any wiser.

Also amusingly the house was a tear down, so the Zestimate was based on the old crappy house. Which I'm sure the builders loved -- it was half the price of the newly built home.

elec3647 1 hour ago 0 replies      
Article was taken down. Does anyone have it archived/alternative link?
deraker 21 hours ago 0 replies      
Looks like this article has been taken down by the authors now due to a cease and desist:


Article is gone from original source too... Obviously nothing to see here folks ;)

patja 1 day ago 1 reply      
Looking at the C&D letter, it seems that Zillow is saying any use of their site for commercial purposes is prohibited. Wouldn't that cover traditional print journalists doing a similar story, or any story that made use of information gleaned from Zillow?

Why didn't Bloomberg receive a C&D for this story which says "Bloomberg used data from the U.S. Census Bureau, Zillow Group Inc. and Bankrate.com to quantify how much more money millennials would need to earn each year to afford a home in the largest U.S. cities." http://www.bloomberg.com/news/articles/2015-06-08/these-are-...

I guess nobody except the EFF really wants to take on this type of fight.

nickgrosvenor 1 day ago 1 reply      
Redfin is more accurate than Zillow. They list new listings much faster and the interface is better. I'm in LA and Redfin is superior.
alyx 1 day ago 5 replies      
Anybody know of any good sources of MLS data for programmatic consumption (even if not free)?
pbreit 1 day ago 1 reply      
I like how they speculate that where Zillow has more listings, it could be because Zillow has database or data-scrubbing problems, and not the other way around.
justinzollars 1 day ago 1 reply      
As a recent home buyer, I found Zillow's information to be at least one week out of date. In most cities that probably isn't a big deal; however, in the San Francisco Bay Area I found the site pretty useless because the homes I was interested in were usually off the market, pending, or so late in the process that I was unable to visit them.

I ended up using a Sotheby's owned site, with accurate data.

Additionally, their Zestimate information is at least 20% low in SF, even by their own analysis: http://www.zillow.com/zestimate/#acc

The benefit I should mention is that I met a great Realtor through their ads, which he admitted to spending thousands of dollars per month on.

xacaxulu 1 day ago 1 reply      
It doesn't seem surprising that Zillow is pump-and-dumping in an already extremely bubbly market.
kelukelugames 1 day ago 1 reply      
Redfin is not even listed. Snubbed.
staunch 1 day ago 2 replies      
I lost all faith in Zillow when I realized their estimate was just a number based on their guess at a price per square foot times the square footage...

A property's historical "Zestimate" changes if you update the current square footage. What a joke!
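If the commenter's characterization is right, the "model" reduces to a single multiplication. The numbers below are hypothetical, purely to illustrate why editing the square footage would retroactively move every historical estimate:

```python
def naive_estimate(price_per_sqft: float, sqft: float) -> float:
    # The whole "model", per the comment above: no comps, no history,
    # just guessed price-per-square-foot times listed square footage.
    return price_per_sqft * sqft

# Changing the listed square footage rescales every past estimate too:
assert naive_estimate(500.0, 2000.0) == 1_000_000.0
assert naive_estimate(500.0, 2200.0) == 1_100_000.0
```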

cyrillevincey 1 day ago 0 replies      
Post deleted upon Zillow's request?
ryanSrich 1 day ago 0 replies      
Semi-related: in Portland every single house on Zillow was actually already sold. I'm not exactly sure how this works, but it was very annoying. The only app that would show houses that were actually still on the market was Redfin. Has anyone else in a different area experienced this?
JustSomeNobody 1 day ago 0 replies      
The article's conclusion left me feeling kinda meh. The article made me feel like it was leading up to something and then "... is destined for an uphill battle.". Well, yeah? Anytime you base your business on someone else's data, you run the risk of the carpet getting yanked out from underneath you.
saidajigumi 1 day ago 5 replies      
Mods: the title of this HN post is not the title of the linked article, which at this writing is the somewhat less clickbaity "What Zillow Doesn't Want You To Know". In fact, the c&d part, while important, could almost be seen as a distraction from the larger issues of MLS listing access discussed in the article.
Closing a door thesharps.us
353 points by clessg  3 days ago   185 comments top 27
drewg123 3 days ago 4 replies      
It's not just women that are being put off. I'm a man, and when I worked for an IHV, I dreaded every interaction with the Linux kernel community due to the tone of the interactions. It was by far the most hostile community we engaged with. I often submitted patches through either junior developers in our company who were Linux enthusiasts, or through friends who were established in the Linux community, just to avoid dealing with the people on the subsystem list.

By contrast the *BSD communities were far more helpful, as were the closed-source MacOSX and Solaris driver/kernel mailing lists, as well as the private interactions with folks from Apple and Sun.

nappy-doo 2 days ago 1 reply      
Sigh. Story time.

A long time ago (2006, 7, 8?) before Sarah took over USB development, I tried to start getting fixes into the unmaintained USB stack. I submitted fixes for leaks, segfaults, and general cleanup and documentation. At the time, the "maintainer" was one of the most unhelpful and ugly people I ever dealt with over e-mail. After months of writing with the person over a simple leak he'd introduced, I gave up, deciding to fix it on a branch, and publish our code rather than deal with that developer. I vowed I'd never go back -- and I still haven't.

I'm not saying all kernel developers are jerks, but I'm not interested in working with those that are. As such, I'm just not willing to spend my time trying to help. (And maybe some developers want to keep it that way. So, I guess we're both happier for it.)

lolo_ 2 days ago 3 replies      
I have contributed some trivial commits (so far) and though I encountered some harsh comments it was nothing I felt was overly personal. But of course, my experience is pretty limited at this point.

I get the impression that the level of these issues varies wildly depending on the subsystem in question. For example, Greg Kroah-Hartman is friendly and helpful to a ridiculous degree, I literally don't understand how he gets so much done and maintains helpfulness (and he's working on the staging drivers with some of the roughest code in the entire kernel.)

I feel sorry that Sarah has had this happen, and it's sad that this could happen to anyone, but in particular it's sad that it's happened to a woman when we have such a massive under representation of women already in our industry and probably even more so in the kernel.

I don't know what the answer is, but for those areas of the kernel that are a problem a balance needs to be found between directness and talking to somebody like a human being.

As for Linus, I think he gets somewhat misrepresented in many places. His vitriol is reserved for senior kernel maintainers who should know better and who do things which (in many cases) could very negatively impact users. I've seen a number of threads where he's been, let's say, less than civil, all of which were about kernel code breaking user code and the subsystem maintainers saying 'well, they're doing it wrong, so let's just break their applications'. In those kinds of circumstances you're glad that Linus strongly objects.

AdmiralAsshat 2 days ago 0 replies      
I'm pretty sure Sarah has confronted Linus over this exact issue in the past, and he was pretty adamant that it would not change:


So, there you go. This is what happens, and what will continue to happen, when the leader defends that kind of behavior.

sssilver 3 days ago 4 replies      
Different teams have different cultures that work for them.

It's not impossible that the culture that alienates some people is the very culture that enables the phenomenal engineering of the Linux kernel.

I personally would feel privileged to be a part of a group that puts engineering excellence beyond anything, including my own hurt feelings. There would be beauty for me in that emotional austerity and sheer directness.

That being said, I empathize with the author. If a team culture that delivers doesn't work for someone, the best thing to do is to move on. Keep looking, don't settle.

EwanToo 3 days ago 2 replies      
A sad post, but not an unexpected one given the tone on the kernel mailing list - I'm sure many others have left without saying anything.
chasing 2 days ago 1 reply      
There's a weird myth floating around that being honest, direct, or "real" means being an asshole.

And there's another related myth that being an asshole is acceptable.

xixi77 2 days ago 1 reply      
She (and everyone here) talks about it being OK to criticize code but not OK to offend people personally, which seems like a rather reasonable concept -- but does anyone have actual examples of the latter, to put everything in context? I see http://marc.info/?l=linux-kernel&m=135628421403144&w=2 and http://marc.info/?l=linux-acpi&m=136157944603147&w=2 quoted, but to me it looks like both are about code?
matt_morgan 2 days ago 1 reply      
USB in Linux is really, really good. It didn't use to be. This is a big loss.
doomrobo 2 days ago 1 reply      
>What that means is they are privileging the emotional needs of other Linux kernel developers (to release their frustrations on others, to be blunt, rude, or curse to blow off steam) over my own emotional needs (the need to be respected as a person, to not receive verbal or emotional abuse).

This seems to imply that these people don't also have the need to be respected. It's a choice to privilege everybody's need to blow off steam over everybody's need to be respected, not just hers.

mst 2 days ago 1 reply      
I do wish there was some separation in the description between abrasiveness and sexism/homophobia - I've only ever really seen the former on lkml, and am far more tolerant of it than the latter.
reitanqild 2 days ago 0 replies      
I worked in a place where I'd be scolded once in a while when I did something stupid and I enjoyed it[0] for a while as long as it was fair.

It quickly became annoying when it turned out the same people who criticised others harshly swept their own mistakes under the rug ...

[0]: "As iron sharpens iron, so one person sharpens another."

tylerflint 2 days ago 0 replies      
> We are human. We make mistakes, and we correct them. We get frustrated with someone, we over-react, and then we apologize and try to work together towards a solution.


nanodano 2 days ago 1 reply      
The Linux project has been around for a couple decades now. Has this attitude been a trend from the very beginning, or is it something that formed over time?
droopybuns 2 days ago 3 replies      
This article helped me understand a frustration I have with millennials.

>>I feel powerless in a community that had a Code of Conflict without a specific list of behaviors to avoid and a community with no teeth to enforce it.

It never would have occurred to me that I was ever entitled to procedures for handling conflict in a community. I have operated under the following rule for community bullshit:

Endure it, fix it, or abandon it.

The fashionable "sad departure" missive just reeks of entitlement. Reading these notes make me feel embarrassed for the author.

Younger people seem to be celebrating a style that whines about community. It isn't leadership.

nommm-nommm 2 days ago 1 reply      
It's easy to say "This is the way it is, tough cookies, deal with it. Grow a thicker skin if you want to play."

It takes a bigger man (apologies for the sexism - can't think of a better term) to put their head down and become a better person.

adrianlmm 2 days ago 0 replies      
What I'd like to know is whether she was disrespected in any way, because despite Linus's behavior, he is only that way with people he knows and has confidence in.

so, was she disrespected or not?

xname2 1 day ago 0 replies      
I don't get it.

If you don't like it, just leave, work with another team. It IS that simple.

BUT, don't simply say "xx turns women away", because not every woman is the same.

JoelJacobson 2 days ago 1 reply      
https://lwn.net/Articles/105375/ (Linus on Kernel Management Style, 2004)

If everyone would read this before getting involved in the kernel project, hopefully those not compatible with the culture would not get involved in the first place.
bjornstjerne 2 days ago 1 reply      
I'm glad she was able to figure out that the team culture didn't fit her and leave for something more suitable to her preferences. Different people are different, and a single team cannot accommodate every style.
osilas 9 hours ago 0 replies      
rishabhsagar 2 days ago 4 replies      
Woah! What campaign?!

I don't think she is drumming up any support. Asking for polite and decent conversations without being called a 'cry baby' or being asked to 'go back to kitchen' is not political knife fight, it is basic courtesy in a public forum IMO.

Grue3 2 days ago 1 reply      
I was with her until

>I have the right to replace any comment I feel like with fart fart fart fart.

Right after

>I would prefer the communication style within the Linux kernel community to be more respectful.

Way to not practice what you preach.

acd 2 days ago 0 replies      
I think some developers need to go out and meet more people in person. There is no need for a toxic communication style. Usually things said to people on mailing lists are much harsher than what is said face to face.

Get out and talk to people. People working in isolation is part of the problem.

zeveb 2 days ago 10 replies      
> I need communication that is technically brutal but personally respectful.

And sometimes being personally disrespectful yields better results. Which is, after all, what one wants out of a software project: results, not happy feelings.

I think we'd all like to be personally respected (I know I would). But I also think that almost all of us have done things which aren't respectable (I know I have); and I believe that at least for some people, the shame of public disrespect is part of the learning experience involved in not doing it again.

If this atmosphere of harsh personal criticism does yield better results, then it's necessary. I'm reminded of the old adage, 'if you can't stand the heat, get out of the kitchen.' Heat is necessary to cook (a kitchen in summer is miserable, particularly without air conditioning); interpersonal heat may be necessary to produce better software.

I myself don't do so well in an atmosphere of intense personal criticism, and have great difficulty giving it, but I consider those my own personality flaws.

Edward Snowden interview: 'Smartphones can be taken over' bbc.co.uk
254 points by mhandley  2 days ago   133 comments top 21
junto 2 days ago 3 replies      
Everybody talks about the OS, but nearly everyone forgets about the baseband, the hidden OS on every phone that you have almost no control over.

Whilst the media is worrying about Apple iCloud and phone encryption, GCHQ are quietly delving into your baseband and enjoying the smoke and mirrors.

To use an analogy: we are worrying about the government looking under our clothes, whilst in fact they are peeling back our skin and skulls and peering into our humanity.

patrickaljord 2 days ago 5 replies      
Of course they can. Even on the iPhone, Apple can easily push an invisible update and install a bot on your phone if asked to by the government. As long as you don't control the backend, and even the frontend, you're at the mercy of whoever controls it (Apple in this case). That's why all the Apple talk on privacy lately sounds like not much more than good marketing to me.
rm_-rf_slash 2 days ago 4 replies      
None of this should be a surprise. We should expect that any device with Internet access can be hacked by someone, regardless of their intentions. If it isn't the NSA it's Chinese "patriot hackers" or Russian cyber-criminals operating with the consent of their governments. Or many others. Instead of seeing this security state as a binary, we should always consider two questions:

1: How much do we value our privacy and security versus the needs of society (in the case of backdoors and so on), and,

2: How much do we trust the people whose business is having the ability to break into our phones? I don't like how invasive our security agencies are but if they end up preventing major crimes or terrorist attacks I can't say what they do is wrong.

At the end of the day, I want the people defending me to be more powerful than the people attacking me, but I don't want my defenders to use their same tools against me.

cryoshon 2 days ago 1 reply      
"Describing the relationship between GCHQ and its US counterpart, he said: "GCHQ is to all intents and purposes a subsidiary of the NSA.

"They [the NSA] provide technology, they provide tasking and direction as to what they [GCHQ] should go after." "

This is the juiciest part. This is the confirmation we've been suspecting for a long time: GCHQ is the NSA, and all of their programs are shared. This means that we can pin the worst abuses of GCHQ onto the NSA, and also confirm that US citizens are directly targeted by even the most outrageously invasive surveillance efforts -- there is no exempt population, proving the NSA's PR lies once again.

verytrivial 2 days ago 1 reply      
My understanding may be dated, but I have often wondered if the battle for privacy is a lost cause in the mobile phone space. Even with a ground-up open platform for the phone and OS, current regulation requires a blob of 'certified' hardware and software between you and the antenna/network. Short of using my phone to acoustically couple a 2400-baud crypto-stream (the call metadata of which would still be snitched), I'm really not sure privacy is possible.
mixmastamyk 2 days ago 3 replies      
I would be surprised if Apple has let a vulnerability of "send text message, pwn phone" linger for very long. Article doesn't mention brands or versions, but it is quite important to fully understand.

Or does this work at a lower level? I've heard the radio chips themselves are untrustworthy, but how would they control the main OS on another chip?

shostack 2 days ago 0 replies      
What about 3rd party keyboards like those that have recently made their way to iPhones and have been on Android for a while?

All of them (even Samsung's swype style keyboard) seem to have some sort of cloud-based storage for your data so it can remain equally predictive across your devices. Is there any good security research out there on how safe these keyboards are and which ones are the worst offenders? Seems like it is essentially a user-installed cloud-based keystroke logger ripe for abuse.

I love the functionality of some of them, but man do they terrify me.

wicket 2 days ago 7 replies      
I'm surprised HN readers don't already know this. It still astonishes me how so many so-called "tech savvy" users are content with surrendering their privacy and freedoms to Google or Apple so that they can run the latest "apps".

This is why I'm backing the Neo900[1]. It might be a bit pricey and low-spec'ed by today's market standards (a consequence of catering to a niche market, meaning it won't be mass produced), but in my opinion that's a small price to pay to actually own your phone (it's actually more akin to a mobile computer than a phone).

[1] http://neo900.org/

meapix 2 days ago 1 reply      
What strikes me most is the number of people around me who don't care about this.
madez 2 days ago 5 replies      
I'm trying as best I can to protect myself against such attacks. My Android smartphone is permanently in airplane mode and I don't use a SIM card. Do you still see a security risk?
sigmar 2 days ago 3 replies      
It seems strange to me that Snowden is only now mentioning the "text message" attack vector, after everyone already knows about Stagefright. Is he out of things to leak? Or did he mention it before and it went unnoticed?
DanBlake 2 days ago 3 replies      
"Nosey Smurf is the 'hot mic' tool. For example if it's in your pocket, [GCHQ] can turn the microphone on and listen to everything that's going on around you - even if your phone is switched off because they've got the other tools for turning it on."

Are they implying that all/most smartphones still communicate with cell towers when turned off? (Obviously this isn't happening.) Or do they pwn the device beforehand so it fakes turning off while remaining on?

nick_name 2 days ago 0 replies      
Looks like Vysk's QS1 is aiming to mitigate the baseband hacks - http://www.theguardian.com/technology/2014/jul/25/startup-cl...
daenz 2 days ago 2 replies      
> Mr Snowden said GCHQ could gain access to a handset by sending it an encrypted text message and use it for such things as taking pictures and listening in.

Are there hardware GCHQ keys in the phone for verifying the encrypted text? I imagine there would have to be, otherwise anybody (with enough time and research) could construct one of these messages to gain control of the phone.

blazespin 2 days ago 1 reply      
The question I have concerns the issues around crashing a device via texts [1]. Was that part of this scheme? Was it put in there on purpose?

1. http://www.techtimes.com/articles/55893/20150527/one-text-me...

btbuildem 2 days ago 0 replies      
HN regulars may well be aware of all these things, but it's good to see this on the pages of the mass media.
venomsnake 2 days ago 0 replies      
If a device is known it can be hacked. Anonymity is the key. Use a roaming SIM card (it will require some cooperation from the remote operator, so that makes it kinda harder). What to do to mitigate: no SIM card. If you have to use a SIM card, use an IMEI randomizer and a WiFi MAC address randomizer.
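As a concrete illustration of the MAC-randomization piece (a sketch of the general technique, not any particular tool; modern mobile OSes now do a version of this natively), a randomized address just needs the locally-administered bit set and the multicast bit clear in the first octet:

```python
import random

def random_mac() -> str:
    """Generate a random locally-administered unicast MAC address."""
    # Bit 1 (0x02) set marks the address as locally administered;
    # bit 0 (0x01) clear keeps it a unicast address.
    first = (random.randrange(256) | 0x02) & 0xFE
    rest = [random.randrange(256) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

print(random_mac())  # e.g. "26:a1:0c:3f:9e:41"
```

Note this only changes the identifier broadcast in WiFi probe requests; it does nothing about the identifiers the cellular baseband reports.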
tdaltonc 2 days ago 2 replies      
Top Comment Paraphrase: "I knew about this before it was cool."

When someone posts a new python/lua/lisp feature intro, no one says "I knew that already!" or "No new info here!" But if it's about security or privacy, the HN zeitgeist wants to denigrate it as "old news."

coldcode 2 days ago 1 reply      
As much as I admire Mr. Snowden for what he did, he is not an expert outside of the documents he took with him. He isn't privy to anything happening now. He didn't build anything or code anything. All he did was steal from some idiots that should have known better how to secure information. This does not make him omniscient.
djyaz1200 2 days ago 3 replies      
Edward Snowden is not a hero IMO; anyone who cared to look knew for years that the government had vast surveillance powers. Is anyone else tired of seeing his headlines? The guy seems to really want to be a celebrity. Does he deserve that? I'm not trying to be rude, only suggesting we rethink our attention to him.
Reddit Presents: Upvoted upvoted.com
275 points by jsnathan  2 days ago   151 comments top 29
staunch 1 day ago 7 replies      
I sold upvoted.com to Alexis Ohanian earlier this year. I'm glad it's not just a redirect anymore!

I wanted to do something cool with it but never did. Then one day I got an email from an assistant of Alexis Ohanian's asking to sell it cheap for his little school project :P

I knew he was just trying to avoid being gouged, so I offered to sell it to Alexis for his initial asking price, if he'd give me a meeting and some feedback on my startup. He agreed and we had a good meeting. He gave my co-founder and me some genuinely usable advice, and technically funded our bootstrapped startup (https://portal.cloud) for a few weeks there.

minimaxir 1 day ago 7 replies      
There was a recent allegation that the Reddit administration had encouraged vote brigading with Tom Hanks' comments: http://i.imgur.com/Obafhpc.png

From the looks of the Upvoted front page, they're doing it as a content marketing strategy, which doesn't bode well. Also, it seems like a BuzzFeed clone, which Redditors despise.

(Disclosure: although I have done a LOT of work analyzing Reddit [http://minimaxir.com/2015/10/reddit-bigquery/], I have not been approached to write anything for Upvoted)

on_ 1 day ago 1 reply      
I have been pretty critical of reddit lately, but there was a time when I really enjoyed using it and it does provide a ton of value to many people. It is a great website in a very difficult situation due to the community culture (very anti-corporate/advertising mentality), diversity of users and content ownership issues. On a real level, the site is pretty awesome. I want to see them succeed because sub-communities and even the organization have stood for an open internet and positive things.

I think upvoted looks really sleek and I hope it is successful, but they really need a way to monetize and it is a really hard problem to solve. Obviously, using a widely supported mature CMS like Wordpress makes it easy to produce content with minimal effort and cost, but that has been reddit users' largest gripe. Upvoted is a curator/aggregator built on top of a curator/aggregator, which is weird. Reddit's success and problems stem from providing the long tail of content, allowing diverse topics and communities to be covered while allowing globally popular things to float to the top. This means that there is rarely community consensus, so while upvoted has little risk as it is cheap to make, I can't see it providing much financial support for the company.

In all honesty though, I wish them the best of luck and hope to see them do cool and interesting things in the future. Obviously, improving the search would be a great start because Google is an awesome search engine, but for content discovery and curation, Reddit is doing a great job. Best of luck guys, and sorry about the sarcastic comments about Wordpress and PHP; it really is a good way to quickly test out something like upvoted without significant dev resources and is a good content management system, edit: [if used correctly]

austenallred 1 day ago 1 reply      
I think this is a brilliant move.

To all those debating whether or not current redditors will use it: that's not the point. This is Reddit attempting to use the content users generate on the platform for the 99% of people who don't use Reddit. Call it buzzfeed if you want, an absurd amount of people use buzzfeed.

So long as Reddit and Upvoted are separate, I think it makes a ton of sense.

pvg 1 day ago 1 reply      
http://www.redditblog.com/2015/10/introducing-upvoted-reddit... actually explains what it is and might be a more informative link.
mikepurvis 1 day ago 2 replies      
Looks like Buzzfeed and every other online clickbait publication.
allsystemsgo 1 day ago 3 replies      
Is this Reddit trying to take itself more seriously? It's almost like a mask layer to obfuscate the hive mind. I'm curious how the content is created, how something is featured, etc.
roymurdock 1 day ago 1 reply      
For anyone confused about why this exists: http://upvoted.com/advertise/
CJKinni 1 day ago 1 reply      
So this feels like a way of obfuscating comments and some of the less appealing aspects of the reddit community, while turning it into a buzzfeed style community. It feels like a prettier version of http://thisisthe.link/
TomGullen 1 day ago 2 replies      
Looks nice, functions nicely, I can see what they are doing but it's just not for me. Content appears pretty shallow at a brief browse through it all, buzzfeed esque.
joesweeney 1 day ago 1 reply      
I feel that normal Reddit users are going to hate this for the most part because it does a lot of the same things that BuzzFeed does, in that it takes content from Reddit and presents it in a somewhat dumbed-down, clickbait way. However, I don't think this is a problem. Upvoted is not supposed to be for Redditors, it's supposed to be for a different audience who isn't yet ready for Reddit. It's going to capture at least some of the traffic that usually lands on other clickbait sites which take content from Reddit, and it's going to allow them to monetize.
natvod 1 day ago 2 replies      
Another Buzzfeed-y content site? Meh.

What I would actually like to see:

Have journalists contact Redditors who post interesting stories for interviews to write up more fully fleshed stories.

A lot of Redditors post really interesting stories about their experiences, businesses etc. It'd be super cool to read more about it.

tlrobinson 1 day ago 1 reply      
Publications like BuzzFeed have been making money off of Reddit's content, or at least content discovered on Reddit, for years. Makes sense Reddit would want to capture some of that value.
notacoward 1 day ago 0 replies      
Nice. The thing I don't like about Reddit itself is that it's hard to find what each article is actually about, and most of the time it's some in-jokey/meme-y stuff I wouldn't have bothered with if I'd been able to see even one image or quote. Finding quality content there is hard; if I wanted to spend that much time separating wheat from chaff I might as well try Google+. Upvoted looks like a much more accessible way to get some light reading/entertainment done. Good idea, and AFAICT so far a good implementation.
rndn 1 day ago 0 replies      
I'm not sure what this site is supposed to be for: Is it (a) to generate more ad revenue for reddit to finally be self-sustaining, or (b) an attempt to live up to the expectations of Reddit's VC shareholders (i.e. a separate startup to generate more revenue than necessary for self-sustenance)?
jokoon 1 day ago 0 replies      
Can't really like this, but at least it has the virtue of containing the kind of stuff that happened to digg.

Whatever happens, I'm really with reddit in light of all the controversies that surrounded it. I hold community-driven websites that can be user-oriented and still grow and attract more users in really high esteem. It's not an easy task. I'm sure there must be some kind of game theory around it if you want to keep it going. Making balanced rules for such a website might be no easy thing.

Some call it "plebeian" but I think it's still a very good website if you don't focus too much on the default subreddits. I will never be able to wrap my head around the 4chan UI, even if it has an attractive community.

tumes 1 day ago 0 replies      
It's interesting to me that between this and Apple's News app, we're steering toward a less social-engagement-centric model for news presentation. One new way to save myself from looking at the comments.
mcintyre1994 1 day ago 0 replies      
I can't quite put my finger on it but this feels like one of those websites someone will share on Facebook and I'll tell Facebook never to show me anything from that domain again. I'm sure those sites are making lots of money though and are getting shared because people like them, so they'll probably do way better with it than I expect.
hugh4 1 day ago 1 reply      
The most bizarre thing about this is that it's all over HN and voat, but I can't find any mention of it on reddit.
kawera 1 day ago 1 reply      
Sad that The Redditor didn't work out: http://www.theredditor.com/
thejew 1 day ago 2 replies      
Looks like it's a Wordpress blog: http://upvoted.com/wp-admin
z92 1 day ago 0 replies      
Looks like a "portal site", which were popular in the 90s.
ebbv 1 day ago 1 reply      
Here's a question I have: reddit is full of liars. What efforts are the Upvoted staff making to verify stuff that appears on reddit before bringing it over to Upvoted?
dom96 1 day ago 0 replies      
This looks awesome but I am scared to keep it open for too long as it looks like an even bigger time sink than Reddit.
tscosj 1 day ago 1 reply      
Is that an ordinary Wordpress site?
it_learnses 1 day ago 0 replies      
how did you get a .cloud domain?
samstave 1 day ago 0 replies      
Upvoted! -- The Fisher Price (R) of Reddit!!
chinathrow 1 day ago 0 replies      
why no https? Even reddit itself is on https.
The microservices cargo cult stavros.io
277 points by stelabouras  2 days ago   169 comments top 43
song 2 days ago 4 replies      
Yes, a thousand times yes! Microservices are yet another tool in the box but they shouldn't be used on everything. And it makes no frigging sense for a startup or any new project to start with microservices...

The main advantages of microservices are in scaling and in reducing the complexity of a big system, but those advantages only make sense when you have enough traffic that you have to scale, or when your system has become complex enough to warrant them.

When first starting development, the most important thing is speed of development, to get feedback from users as soon as possible. It's much faster to develop a clean, well-optimized monolith than to spend a lot of time developing a whole bunch of microservices. And while thinking in terms of microservices will help you better conceptualize your software architecture, at this stage you don't have all the information needed to have a clear idea of what the final architecture will be, and you'll often end up with microservices that are divided in suboptimal ways, causing a lot of pain.

togusa 2 days ago 3 replies      
I'm in the middle of a microservices mess that was forced upon us. I have nothing positive to say. If you're in the SaaS space already and it's not a greenfield project, it's orders of magnitude better to deploy lots of smaller identical monoliths than it is to try to build and deploy lots of services and manage the contracts and complexity between them.

Major problems I've seen are: per transaction performance sucks due to the network or IPC channels, development friction, logical complexity, infrastructure complexity, managing contracts between services, debugging failures, monitoring performance, bootstrapping new staff and the biggest of the lot: headspace.

If you want to succeed, at least in the short term, just keep your monolith tight and fast and without sprawling infrastructure requirements. Single machine, single process, single storage engine (or database), single messaging system. Then scale that to multiple instances. If your site deployment requires at least 20 machines due to sprawl, you're going to be even more screwed when you throw microservices at it, not less. If your application is incredibly complex, it's not going to work either. The problem domain needs to be small and easy to consider as it's difficult to cleanly extract a chunk of your average monolith into a standalone concern.

There are also people with technical authority in many companies who blindly follow the latest fad without real consideration of suitability, risk assessment or accountability. If someone starts waving microservices, AWS and everything else around, their position needs to be challenged, and everyone needs to assume that isn't the default end game.

aartur 2 days ago 7 replies      
Microservices are advertised as a means to modularization, but that's what programming language modules are for - they are defined at the source code level and can be freely used in different runtime components without network/ops/version-management headaches. When you have your module defined that way, you can think of exposing it as a microservice, because this may make sense for your use case.

Imagine that each Python module runs as a microservice. For many modules this would lead to huge performance degradation; for example, a regexp module can be called thousands of times per second, the running time of a call is usually short, and replacing an in-process call with a network call will give a 100-1000x slowdown.

But if you take a different use case of the same module - complex regexps running on large texts, potentially causing out-of-memory errors, then packing the module into a microservice can make sense - separate processes can have large caches, an out-of-memory error terminates an instance of a microservice only and not the calling process.

Generally I think the advice should be to always use source code modules in the first place, and create microservices using these modules for specific use cases only involving runtime needs like caching, fault tolerance, scalability.
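The "modules first, services for specific runtime needs" advice can be sketched concretely. Below is an illustrative example (the function and handler names are mine, not from the comment): the same plain Python function is used in-process, and a thin service wrapper around it is added only when runtime concerns justify it — the behavior is identical, only the call cost differs.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# --- the "module": plain code with no network awareness ---
def word_count(text: str) -> int:
    return len(text.split())

# --- an optional thin service wrapper reusing the same module ---
class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = word_count(body.decode("utf-8"))  # same function as in-process
        payload = json.dumps({"count": result}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# In-process call: a plain function call.
direct = word_count("the quick brown fox")

# The same logic over the network now pays serialization + HTTP overhead.
url = f"http://127.0.0.1:{server.server_port}/"
req = urllib.request.Request(url, data=b"the quick brown fox", method="POST")
with urllib.request.urlopen(req) as resp:
    remote = json.loads(resp.read())["count"]

server.shutdown()
print(direct, remote)  # both 4 -- identical results, very different cost
```

Because the service is only a wrapper, deleting it later (or adding it for a different use case, e.g. memory isolation) never touches the module itself.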

daxfohl 2 days ago 6 replies      
This leaves open the question of what microservices are. Are they of necessity completely isolated units deployed with Docker and Kubernetes on a CoreOS cluster and communicating only via HTTP, each with independent databases? Yes, this seems a bit much for most projects.

There are midway options though. Even the lowly batch job is a good way to get some of the decoupling without having to go "all-in". I find batch jobs and message queues give me 80% of the benefit of "microservices" with only 5% of the pain.

In fact someone needs to write an article on "levels" of "microserviceness", (which certainly has multiple dimensions and branches) and point out the benefits and drawbacks of each level.

Of course the end game being: "a Docker container for each line of code."
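The batch-job/message-queue midway option described above can be sketched with nothing but the standard library (names here are illustrative): the request path enqueues work and returns immediately, and a worker drains the queue, giving the decoupling without a separate deployed service.

```python
import queue
import threading

jobs: "queue.Queue" = queue.Queue()
results = []

def worker():
    """Drains the queue; the slow work lives here, off the request path."""
    while True:
        email = jobs.get()
        if email is None:  # sentinel: shut down
            break
        # Pretend this is the slow part (sending mail, resizing images, ...)
        results.append(f"sent-to:{email}")
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# A "request handler" just enqueues and moves on.
for addr in ["a@example.com", "b@example.com"]:
    jobs.put(addr)

jobs.join()     # wait for the backlog (done here for demonstration)
jobs.put(None)  # stop the worker
t.join()
print(results)  # ['sent-to:a@example.com', 'sent-to:b@example.com']
```

Swapping `queue.Queue` for a broker-backed queue later changes the transport, not the shape of the code.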

bru 2 days ago 6 replies      
Some of the weaknesses can be tempered by not using HTTP to communicate between the microservices:

- "slowdowns on the order of 1000%"

- " bunch of code necessary to marshal/unmarshal data [...] there are always dragons in there.

And also problems of versioning, data integrity, etc.

I've had those problems in a microservices architecture. Those are things solved by protobuf [0]. Your servers exchange small, efficient structured data and you get tons of other benefits ({un,}marshaling for free, integrity, versioning, ...).

Potential downside: a language you want to use having no protobuf API.

Finally, I see another downside to the microservices architecture: it may be decided that the smaller, decoupled code bases should be stored in multiple VCS repos. Which turns into a nightmare: a single bugfix may span multiple repos and there is no clean built-in way to link commits across them, you still have to sync the interfaces (e.g. with git submodules), etc. This is a thing I've witnessed firsthand, and proposals to merge the repos were dismissed since "We [were] using a microservices architecture". Yes, it's a mistaken implementation of the microservices paradigm, but it still happens.

edit: I recommend protobuf not by preference over other equivalent solutions, but because it's the only one I know and have used. Alternatives are evoked below.

0: https://developers.google.com/protocol-buffers/
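The protobuf approach above boils down to a small shared schema. A minimal sketch of one (the message and field names are illustrative, not from the thread):

```protobuf
// Hypothetical schema for an event exchanged between two services,
// replacing ad-hoc JSON over HTTP.
syntax = "proto3";

message UserEvent {
  string user_id = 1;     // field numbers, not names, go on the wire
  int64  timestamp = 2;
  string event_type = 3;
  // New fields can be added later under fresh numbers; old readers
  // skip unknown fields, which is how protobuf handles versioning.
}
```

Running `protoc` over this generates classes in each language whose `SerializeToString()`/`ParseFromString()` (in Python) replace the hand-written marshal/unmarshal code the article complains about.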

markbnj 2 days ago 0 replies      
I think this article isn't very useful. It's unfortunate that we have this human need to blow things up and then deflate them. I dislike the term "microservices" because for most organizations all it does is plant a flag all the way across the yard from "monolithic." Suddenly the guys at the top have a new buzzword and the engineers are all trying to make their services as small as possible so they're micro enough.

Most of the gotchas the article mentions aren't logical consequences of decomposing into smaller services at all. You don't have to have different data stores for each service. You don't need to "marshal data" between services. If a service needs to call a service it's just a client like any other client, so if we want to call standard http request/response handling "marshaling" I guess it will sound more complex and scary. Breaking a monolithic app into smaller pieces doesn't increase complexity, it reduces it. And to the extent you have more things to monitor that probably means you can now monitor and control things that were more or less invisible outside the log data in the monolithic architecture.

More importantly, decomposing a problem into logically related areas of functionality that can execute separately allows you to make the most efficient use of compute resources, and it is consistent with the idea of favoring multi-processing over multi-threading. In almost every way, groups of simpler things collaborating make much more sense than large complicated things that do it all. It's only when we create these knights in shining armor that people start feeling like they have to be knocked off their horses. Use the tools and techniques that make sense.

davedx 2 days ago 2 replies      
So I think there is a parallel to this with the whole "everything must be in tiny modules on npm" movement in the JavaScript community. If you do this, you end up with lots of repositories, a rigid separation of concerns but a network between you and your dependencies that will get hit a lot unless you wield some npm link sorcery, and a bunch of extra complexity.

A modular monolith application is what people have been writing since people thought up the notion of modules. Enforce proper discipline when building your app out and you won't need these physical walls between your functional areas.

I'm currently reading SICP, and the notion of using "block structure" in Lisp to compartmentalize and encapsulate functional areas of code is introduced in Chapter 1.

Get the basic stuff right before you start introducing complex systems to split up your software.

sz4kerto 2 days ago 3 replies      
One thing that is usually overlooked: do microservices fit your team structure? A team that spends all day together can manage a monolith very well; synchronous releases (everything-at-once) are not a problem. If you don't need 24/7, then it's even better.

However, if you're a distributed team (maybe across timezones), where quick discussions are difficult and 'costly', then microservices might be worth the effort. Managing the deployment and operations is more difficult, but sometimes much less coordination is needed when people communicate through APIs and not Skype and Slack.

dunkelheit 2 days ago 0 replies      
Having read one of the success stories linked in the beginning (http://philcalcado.com/2015/09/08/how_we_ended_up_with_micro...) I think I am starting to get it. That was a rather candid article describing a team which after having got burned by a sprawling "monorail" architecture decided to split it up into services (nothing micro- there) based on organizational viewpoint (if there is some knowledge about a part of application shared by 3-4 engineers it makes sense to split it into a separate module so it can be developed faster). And as I am inferring from the pictures their services don't call each other much so it is really close to a modular monolith. So if "migrating to microservices architecture" really means "splitting that damned monorail into something more manageable" then it is a really good idea after some point.
donpark 2 days ago 1 reply      
This phenomenon is just part of human nature. Same thing happened with OOP, design patterns, TDD, etc.

To apply microservices effectively, you should first build the monolith, modularizing at the source code level and adding choke points as needed. Over time, microservices will naturally roll off the monolith, not unlike boulders rolling off mountains after rain or an earthquake. Don't go dynamiting in anticipation.

angdis 2 days ago 2 replies      
I can't help but think that much of the intent behind "cargo-culting" is simply people building up their resumes for future career development.

If you want to work in a sexy new technology, but you need to develop experience in that new stuff to be marketable it is totally understandable to try to build up skills by forcing the implementation of over-sized solutions.

In other words, many employers aren't willing to take on folks if they don't have the requisite experience on some new stack and that compels folks to gain that experience anyway they can, including "cargo-culting" stuff that isn't necessary just for the experience gain.

jakozaur 2 days ago 1 reply      
Rule of thumb: Divide number of full-time backend engineers by 5 and you get ideal number of microservices :-).

Too many microservices is a complexity mess; too few means you have a monolith that is hard to iterate on.

calpaterson 2 days ago 2 replies      
I don't agree that scaling up is an unqualified advantage of microservices. In practice you have to co-ordinate communication between your services. Though some of this communication will be asynchronous publish-subscribe (ie: speed unimportant) some communication will necessarily be synchronous.

If one heavy part of your rails app takes up 90% of the processing time, there is nothing wrong with just getting a bigger machine for the whole app. The bigger CPU/memory/whatever will be spent on the heavy part and the rest will be normal.

For most businesses, scaling is not a problem - they can just get bigger machines. Having to re-implement transactions across your microservice architecture really is a problem. Very often transactions need to cross microservice boundaries, and that really requires a lot of thought.

krisdol 2 days ago 2 replies      
I see the strengths and weaknesses in the article, and the complaints from all the comments here, but I still find the trade off of microservices worth it. It requires sophisticated ops and well defined deployment tools and dev environments, but we have to handle ten billion requests a month on our stack. The ease at which we handle that scale, and the speed at which engineers get to iterate and deploy makes microservices all the more worth it.
amelius 2 days ago 2 replies      
> Data segregation: Since all your data now lives in different data stores, you're responsible for relationships between data. What would be a simple cascading delete in a monolith is now a complicated symphony of dependencies, calls and verifications.

IMHO, this is the biggest problem with microservices: "Transactions" are not available in a microservice environment. You'll have to work really hard to get anything that comes close.
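The usual workaround for the missing cross-service transaction (not named in the comment, but widely used under the label "saga" or compensating actions) is to record an undo for each completed step and run the undos in reverse on failure. A hedged sketch, with all step names illustrative:

```python
class SagaError(Exception):
    pass

def run_saga(steps):
    """steps: list of (do, undo) pairs. Run each do(); on failure,
    run the undo() of every completed step in reverse order."""
    done = []
    try:
        for do, undo in steps:
            do()
            done.append(undo)
    except Exception as exc:
        for undo in reversed(done):
            undo()
        raise SagaError(str(exc)) from exc

log = []

def fail():
    raise RuntimeError("inventory service down")

try:
    run_saga([
        (lambda: log.append("debit A"),  lambda: log.append("refund A")),
        (lambda: log.append("credit B"), lambda: log.append("reverse B")),
        (fail,                           lambda: None),
    ])
except SagaError:
    log.append("saga aborted")

print(log)  # ['debit A', 'credit B', 'reverse B', 'refund A', 'saga aborted']
```

Note what this does not give you: isolation. Other callers can observe the intermediate states before the compensations run, which is exactly the "lot of thought" a real database transaction would have spared you.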

ed_blackburn 2 days ago 0 replies      
Agree with the premise. An excellent example of premature optimisation, or YAGNI. An alternative is to factor your code by business capability / bounded context as microservices endorse. Factor the code as such, but don't deploy the logical partitions as physical ones.

Keep it all in one deployable artefact, in-process, for as long as you possibly can. Use an in-proc message bus first; don't dive into Rabbit until you know you need it. As soon as you take on the infrastructure cost for HTTP, MQ, monitoring, a ballooning number of boxes/VMs, and deployment complications, you'll notice the spike in operational expenditure.

Grow your architecture organically.
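An in-proc message bus of the kind the comment recommends can be tiny. The sketch below is illustrative (not any particular library's API): publishers and subscribers talk through topic names, so swapping the bus for an adapter around a real broker later leaves the call sites untouched.

```python
from collections import defaultdict

class Bus:
    """Minimal synchronous in-process pub/sub bus."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subs[topic]:
            handler(payload)

bus = Bus()
seen = []
# Two independent "modules" react to the same event without knowing
# about each other -- the decoupling, minus the infrastructure.
bus.subscribe("order.created", lambda o: seen.append(("email", o)))
bus.subscribe("order.created", lambda o: seen.append(("invoice", o)))
bus.publish("order.created", {"id": 42})
print(seen)  # [('email', {'id': 42}), ('invoice', {'id': 42})]
```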

mschuster91 2 days ago 1 reply      
I don't get the trend to split up everything as micro as possible.

Use a proper framework like Symfony (or if, like many people, all you want is a CMS, Drupal) supporting MySQL master-slave or multi-master replication and separation of web frontend and file hosting, host it on AWS (or plain old dedicated servers), put in Cloudflare if you're scared of DDoS kids, and be done. If you need SSO use either provided SSO plugins or an LDAP backend if the SSO is only required for various platforms provided by you.

Said architecture can be built and run on a single server and if you're dealing with spikes you just spin up a couple frontend servers and be done.

rndn 2 days ago 1 reply      
I think a cargo cult also has something to do with signaling, sort of like a status symbol ("They can't really be Y if they are not X!" -> "Look at us how X we are!"). It's a self-reinforcing meme that is used as a heuristic for value estimation, but usually fails catastrophically because of its heuristic and self-reinforcing nature.
k__ 2 days ago 1 reply      
If I learned one thing in Software Engineering it's "modularization matters most". And microservices seem to be the epitome of this concept.

If you have to work with different people, you need a way to minimize dependencies between them.

Also, the more encapsulated things are, the less the starting skill of a person matters. You just need people who get things done. Later you can switch out the bad modules easily. Which is a huge economic factor.

I can't count the hours I spent with fixing horrible monoliths and the years it took to replace them.

But if there is a horrible microservice, you can do this in a fraction of the time.

oldmantaiter 2 days ago 0 replies      
TL;DR Microservices have their place, and can be useful for certain environments, but they are not a fix-all.

They can be pretty nice for multi-tenanted development environments. Sure, you could use any of the other isolation techniques, but being able to provide an environment that can be started quickly (and somewhat easily, depending on the rest of the services required) is a real win. Not to mention that the popularity of container systems and their ease of understanding (Dockerfile vs RPM spec) means that other people can hack away at the dev environment without having to know the ins and outs of building proper packages (although they should learn).

Now, for a production environment, I would never move to a microservices architecture for the reasons listed in the article and my own dislike for adding overhead and complexity to solve "issues" that can be easily dealt with using tools that have existed for years (proper packaging with dependencies etc..).

copsarebastards 1 day ago 0 replies      
I agree with the YAGNI-ish approach, but talking about micro services as if they provide modularity is entirely off-base. The decision to use micro services should be driven by scalability, not modularity. If you're saying that it's going to be terrible to make changes to your codebase, simply bolting micro services on top of that is going to make things worse.

A well-designed micro service architecture is modular in that each micro service is basically a nice wrapper around either a query or an update. But you can organize your application into an API of queries and updates without micro services.

To be honest, if you don't at least intuitively understand this, you have no business architecting a production system large enough that this matters.

lectrick 2 days ago 0 replies      
Many of the advantages of microservices can be achieved by refactoring your monolith code to be less monolithic.

I would suggest using functional styles wherever possible, plenty of isolated unit testable code, and a hexagonal architecture http://alistair.cockburn.us/Hexagonal+architecture that pushes all the I/O, mutation, side effects, etc. to the very boundary of your code. Also see Gary Bernhardt's "Boundaries" talk for more interesting thought in that vein https://www.youtube.com/watch?v=yTkzNHF6rMs
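The "push I/O to the boundary" idea from hexagonal architecture can be shown in a few lines. This is an illustrative sketch (the domain and function names are mine): the core is a pure function that is trivially unit-testable, and the shell receives its I/O as injected callables, i.e. the "ports".

```python
def apply_discount(order_total: float, loyalty_years: int) -> float:
    """Pure core: no I/O, no mutation, same output for same input."""
    rate = min(0.05 * loyalty_years, 0.25)  # 5% per year, capped at 25%
    return round(order_total * (1 - rate), 2)

def handle_request(read_order, write_response):
    """Imperative shell: all side effects arrive as callables."""
    order = read_order()
    total = apply_discount(order["total"], order["loyalty_years"])
    write_response({"charged": total})

# In production the callables would wrap HTTP/DB; in tests, plain dicts.
out = {}
handle_request(
    read_order=lambda: {"total": 100.0, "loyalty_years": 3},
    write_response=out.update,
)
print(out)  # {'charged': 85.0}
```

The payoff is exactly the modularity argument of the article: the core can later be lifted into a separate service by writing a new shell, without touching the logic.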

agentultra 2 days ago 0 replies      
There is also a distinct lack of tools for debugging coordination and scheduling problems in a micro-service (or, as they used to call it in my day, Service Oriented Architecture) system.

In an asynchronous RPC scenario, does Microservice A listen for the appropriate response message from Microservice B before continuing work on Request X99? Does it respond to all messages in the appropriate order? What happens in a cascading failure scenario when the back-end system Microservice B relies on is taking too long due to bad hardware/burst traffic/DDOS/resource contention?

Do you have tools that can analyze your program for critical sections where you need explicit locking and ordering mechanisms? Do you have analysis tools that provide guarantees that your fancy distributed architecture is complete/correct?

These are just a sample of the things OpenStack has to think about -- a micro-service architecture for managing, orchestrating, and authenticating access to data-center resources. It's a hard, hard problem and an on-going effort by thousands of well-paid engineers across the globe to get right.

I have no doubt that a small team of talented developers could stand up a system of APIs around their core services to get a system running. However I can guarantee that they will be making huge trade-offs in terms of correctness and reliability.

At least with a monolith (is that a pejorative?) application you do have tools to analyze and debug your code that work well and have been battle-tested for a couple of decades. I suspect you would produce fewer bugs if you were constrained for developer talent and time.

jscruz 2 days ago 1 reply      
Microservice architecture is a good way to evolve a monolith project that needs to scale when dealing with a huge number of calls. It's great to be able to experiment with different implementations and technologies, and do A/B testing. It enforces single-responsibility modules at the architecture level, avoiding bad practices if you are dealing with different/remote dev teams.

You have challenges, though. One of them is that implementing microservices requires a cultural change in your business to be able to adapt. You need to deal with a more complex architecture, implement your own solutions for managing it, spend time defining a devops culture if there is none, ...

Businesses are usually pretty different between others so you can not expect to have the same solution to deal with your problems (For example, using Netflix approach as a silver-bullet solution).

I've heard so many times the concept "micro services" as the goal as same as "big data" as the solution. Again, we should analyze what is our problem and what we want to solve before selling the new shiny thing and making things over complicated.

cookiecat 2 days ago 0 replies      
Martin Fowler identified a lot of the same tradeoffs in this video: https://www.youtube.com/watch?v=2yko4TbC8cI

One benefit I haven't seen mentioned yet: microservices are effective at reducing the mental "page size" when working on any particular part of the system.

simonpantzare 2 days ago 0 replies      
The application I work on most of the time is largely monolithic and usually I have no problems with that. Some parts have been extracted to their own codebases and are deployed separately because of performance reasons.

We also separated the main backend/API codebase from the frontend, mostly because the frontend devs prefer to work within the Node ecosystem instead of Python/Django, and so that we don't have to think too much about synchronizing deployments. The tests for the backend code also take quite long to run compared to the frontend tests, so having this separation is nice for the frontend devs that way too.

What I sometimes would like better infrastructure support for, though, is throwaway prototypes/projects that can live in their own codebases and have access to all the regular databases, blob storage and so on, as well as databases that are private to the prototype that I can do whatever with, with no risk of doing something bad to the important databases/storage.

I would also like these prototypes to be able to register themselves with the load balancer to take care of everything under `/halloween-experiment/` for example and have the load balancer add headers like `X-UserEmail`, `X-UserID`, `X-IsEmployee`, and so on so that I don't have to implement authentication/authorization in every prototype.

Today these types of prototypes need to live next to the "important" code so that they can use the same CI pipeline and easily be made public or visible to employees and use real data.

I'm following projects like https://getkong.org/ with interest, and together with everything happening around Docker, such as EC2 Container Service or Kubernetes, as well as projects for service discovery/configuration like etcd or Consul, it feels like we're getting there. There are just so many projects to keep track of, and you need to figure out how to make them all part of your CI pipeline. :)
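The header-based auth handoff wished for above is simple on the prototype's side. A hedged sketch (the header names come from the comment; the function is illustrative): the throwaway app trusts identity headers injected by the load balancer instead of implementing authentication itself.

```python
def user_from_headers(headers):
    """Build a request user context from LB-injected identity headers.
    Returns None when the LB did not authenticate the request."""
    if "X-UserID" not in headers:
        return None
    return {
        "id": headers["X-UserID"],
        "email": headers.get("X-UserEmail", ""),
        "employee": headers.get("X-IsEmployee", "false") == "true",
    }

ctx = user_from_headers({
    "X-UserID": "1234",
    "X-UserEmail": "dev@example.com",
    "X-IsEmployee": "true",
})
print(ctx)  # {'id': '1234', 'email': 'dev@example.com', 'employee': True}
```

This only stays safe if the load balancer strips these headers from incoming external requests before injecting its own; otherwise anyone can forge an identity.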

evantahler 2 days ago 0 replies      
Develop as a monolith, deploy as services: engines.


pjmlp 2 days ago 0 replies      
Not only that: microservices are just Sun RPC and CORBA reborn, and we all know how those worked out.
junto 2 days ago 1 reply      
I love his little workflow diagram embedded in the article: http://media.korokithakis.net/images/microservices-cargo-cul...
jnet 2 days ago 0 replies      
"As with everything, there are advantages and disadvantages"

The author focuses on microservices; however, I think there is a larger point to be made. It is not that some particular architectural pattern is good or bad, it's that when you don't fully consider the requirements of your application and apply some pattern or technology just because it's the hot item this week, you are going to end up with problems. In my experience this has less to do with microservices and more to do with less technical managers making decisions for a project they don't fully understand.

tibbon 2 days ago 0 replies      
At my job we've avoided microservices thus far. 90% of our deployments are just to Heroku. Every now and then I lament in my mind that we aren't using the coolest new tools (Docker, microservices and all the things that come with), but what we have works really well, and we can easily scale up by 10x and things will still work.

Every time I think of the mess that it will cause to break up things to microservices, I'm glad we aren't doing it- yet. When the time comes, we'll roll out to services as-needed, but that day isn't today.

lolive 2 days ago 1 reply      
Damn! Now what is left as the next big thing?
lobster_johnson 1 day ago 0 replies      
Yet another article that misses a huge aspect of microservices: Reusability. (I'm going to borrow from an older comment [1] here.)

Almost all of the swathe of microservices we've developed internally are general-purpose. We've built a dozen or more user-facing apps on top of them. If I wanted to build a new app today, I would typically sit down and write a Node + React app, configure some backends, and I'd be done. I don't need to write a new back end because I can just call our existing services.

If you look at what a modern web app is, most apps these days are actually stupidly similar. They typically need things like:

* User accounts

* Authorization with existing OAuth providers (e.g. Facebook)

* Some kind of database to store and search structured content

* Notifications (email, text, push)

* Storing images or video

* Syncing data from external sources

* Analytics

We have generalized, reusable microservices that do all of this.

Let's say I want to build a HN-type link aggregator with comments. I will use our document store to store the links and the comments in a nice hierarchical structure. I will use our login microservice that mediates between an identity data model and an OAuth account registry. I can use our tiny microservice devoted to recording up-/downvotes. I can use our analytics backend to record high-level events on every UI interaction.

I can write this without a single new line of backend code.
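To make the "pick and mix" idea concrete, here is a toy in-process sketch. `DocumentStore` and `VoteService` are hypothetical in-memory stand-ins for the real HTTP service clients described above, not anyone's actual API:

```python
# Toy sketch: a "new app" assembled purely by composing existing service
# clients. Both backends here are hypothetical in-memory stand-ins.

class DocumentStore:
    def __init__(self):
        self._docs = {}

    def put(self, doc_id, doc):
        self._docs[doc_id] = doc

    def ids(self):
        return list(self._docs)

class VoteService:
    def __init__(self):
        self._votes = {}

    def upvote(self, doc_id):
        self._votes[doc_id] = self._votes.get(doc_id, 0) + 1

    def score(self, doc_id):
        return self._votes.get(doc_id, 0)

class LinkAggregator:
    """The new app: no new backend logic, only composition of services."""

    def __init__(self, store, votes):
        self.store, self.votes = store, votes

    def submit(self, link_id, url, title):
        self.store.put(link_id, {"url": url, "title": title})

    def front_page(self):
        # Links sorted by vote score, highest first.
        return sorted(self.store.ids(), key=self.votes.score, reverse=True)
```

In the real setup each class would be a thin client over a shared-nothing HTTP API, but the composition in `LinkAggregator` looks the same either way.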

This ability to "pick and mix" the functionality you need is the real, largely undiscovered beauty of microservices, in my opinion. It's the same idea that makes AWS attractive to many people; you're building on the foundation of thousands and thousands of hours of work and reusing it.

We just whipped up a new site recently where 95% of the work was purely on the UI, since all the backend parts already existed. The remaining 5% was just code to get data to the system from a third-party source, plus some configuration.

Reusability requires that you plan every microservice to be flexible and multitenant from day one. It's a challenge, but not actually a big one.

Is it possible to do this monolithically? Sure. I would be afraid of touching such a beast. We have very few issues with code devolving into "legacy", for example; the strict shared-nothing APIs ensure that half-baked, client-specific hacks don't sneak into the codebase. If anything messy happens, it happens in the client app, and that's where it should happen. Eventually you'll throw the app away, but the backends remain.

agentgt 2 days ago 0 replies      
The problem with microservices for us has been the composition of operations. Yes, we use the Rx* observable patterns and they help, but the code is still non-intuitive for new developers if the language is mostly procedural/imperative. Even with languages like Scala it still gets confusing. Even if you have a language where threads are cheap (Go) you still have to compose the operations.

I have been meaning to see if there are microservice like frameworks for Haskell similar to Hystrix (which is what we use).
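For readers unfamiliar with the pattern, the essence of what Hystrix wraps around each call can be sketched in a few lines. This is a simplification in Python, not the Hystrix API; a real circuit breaker also tracks failure rates and trips open:

```python
# Simplified sketch of the fallback half of the circuit-breaker pattern
# (what Hystrix provides, minus failure-rate tracking and open/closed
# state). Not the Hystrix API, just the shape of the idea.

def with_fallback(call, fallback):
    """Wrap a remote call so composition code never sees its failures."""
    def guarded(*args, **kwargs):
        try:
            return call(*args, **kwargs)
        except Exception:
            return fallback(*args, **kwargs)
    return guarded

# Composing service calls then stays plain imperative code:
def flaky_fetch_user(uid):
    raise ConnectionError("user service down")

fetch_user = with_fallback(flaky_fetch_user,
                           lambda uid: {"id": uid, "name": "unknown"})
```

Each wrapped call degrades independently, which is exactly what makes deep compositions of remote calls survivable.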

wiremine 2 days ago 1 reply      
Of course microservices are just another tool in the toolbox.

I think what's frustrating is the lack of support in moving from a monolith to a microservice architecture. I haven't built a lot of them myself, but it feels like you're rolling your own framework/architecture whenever you need to make the transition. Is that anyone else's experience, or is it just not possible to codify best practices?
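In my limited experience the transition usually starts with some variant of the "strangler" approach: a thin router in front of the monolith peels off one path prefix at a time. A hypothetical sketch of that routing shim (the prefixes and backend names are made up for illustration):

```python
# Hypothetical sketch of a "strangler" routing shim: extracted path
# prefixes go to the new service, everything else still hits the monolith.

EXTRACTED_PREFIXES = ("/search", "/notifications")

def route(path):
    """Return which backend should serve this request path."""
    if path.startswith(EXTRACTED_PREFIXES):
        return "microservice"
    return "monolith"
```

In practice this logic lives in the load balancer or API gateway config rather than application code, but the shape is the same, which may be why it never gets codified into a framework.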

Randgalt 2 days ago 1 reply      
The "micro" in microservices is the issue. It reminds me of the "No-SQL" movement. The truth is that EVERYONE has a multi-tiered architecture. The only question is how many tiers you need. It's always more than 1.
dorfsmay 2 days ago 0 replies      
The thing that nobody addresses, especially not the microservices gurus, is how you know where and what to split into microservices.

When does splitting a service add enough value that it is worth the cost of performance and added complexity?

aterreno 2 days ago 0 replies      
If you design your software with a bad architecture you will have problems, if the services and their data are 'cut' in the wrong way you will get performance (and other) problems.

That's valid for functions, state, APIs and service stores.

debacle 2 days ago 1 reply      
I think some of these points are gross exaggerations.

> You immediately increase the things your servers have to do tenfold.

Really? It's ten times as much work to implement microservices?

> Personally, I've seen slowdowns on the order of 1000% when moving to microservices (yes, ten times slower).

Then you implemented your microservices wrong.

I think that the author's understanding of the goals and purposes of microservices is maybe a bit misguided. Microservices are about front-loading scaling problems, not about having a clean architecture or smaller codebase. If you never need to scale, you don't need microservices (but you're probably wrong).

The flowchart at the end of the post really underscores for me that this author's argument is not genuine. He holds up this shibboleth of a "monolithic" architecture, something that doesn't really exist in 2015.

lkrubner 1 day ago 0 replies      
I think it is fascinating how an idea can emerge with a fuzzy meaning and, in the space of 2 years, become rigidly associated with a narrow set of technologies which will surely be much more temporary than the idea itself, thus forcing people, after 3 or 4 more years, to come up with a new word for roughly the same idea.

In the summer of 2013 I was working at Timeout.com and we were trying to reinvent the architecture of the site. Timeout.com had spent several years using the PHP framework Symfony to build a massive monolithic CMS, and the thing was a disaster. It was shockingly slow. If you ssh'ed into the datacenter and then tested the response time of the system, under ideal conditions, from one computer in the data center to another computer in the data center, the average response time was 10 seconds!

This led to a long internal debate. I advocated for what I called "an architecture of small apps", because at that time none of us had ever heard the word "microservices". I did not hear that word until March of 2014, when Martin Fowler wrote his essay:


But back in the summer of 2013, with permission, I published the whole internal debate that we had had at Timeout.com:


You will notice that you don't see the word "Docker" in my essay, nor do you see it in Martin Fowler's essay. And in my essay, I suggest we use ZeroMQ to bind our apps together.

But 2 years after we had our internal debate, I've noticed that more and more people now associate "microservices" with a very specific set of implementation details: Docker, Kubernetes, HTTP and service discovery.

I acknowledge that these 4 technologies can be combined in very powerful ways. I currently work at the startup incubator run by NYU, and I get to eavesdrop on what the folks at lsq.io are doing, since they sit next to me. And I get that Pelly is a frighteningly smart guy doing extremely cutting-edge stuff. I totally admire everything they are doing.

However, I personally feel that I'm following a microservices strategy, and yet what I'm building is still a lot like what I described in my essay of 2013.

July 30th, 2013: http://www.smashcompany.com/technology/an-architecture-of-sm...

DanielBMarkham 2 days ago 0 replies      
The article is a little weak, but well worth the read.

I love the microservices concept, but fair warning: as bad as OO has gotten over the past 20-30 years, microservices promise to be even uglier.

Why? Because not only are you mucking around in the code, you're also mucking around in how everything connects to everything else in your cloud.

Just like we saw vendors come out with click-and-drag ways to create new classes, now we're seeing vendors start to sell "pre-finished" microservices. Get the disk out of the box, boot it up, fill out a couple of forms, and voila! Now you have microservices.

That worries the living crap out of me because microservices are the architecture of the future. You just can't get from here to there using a magic bullet. Learn you some pure FP, make everything composable using the Unix Philosophy, and keep your LOC to a bare minimum. Toss off every damn thing you don't need.

As much as I know they are the way forward, I have a bad feeling that consultants will have plenty of billable time coming up straightening out a lot of messes.

datamiller 1 day ago 0 replies      

fight the future

Microsoft Surface Book microsoft.com
189 points by SoapSeller  1 day ago   1 comment top
dang 1 day ago 0 replies      
       cached 8 October 2015 15:11:03 GMT