hacker news with inline top comments    10 Oct 2016 Best
RethinkDB is shutting down rethinkdb.com
1663 points by neumino  4 days ago   436 comments top 90
pc 4 days ago 9 replies      
RethinkDB is one of the developer tools that we at Stripe most looked up to.[1] The team had so many good ideas and rigorous, creative thoughts around what a database could be and how it should work. I'm really bummed that it didn't work out for them and have enormous respect for the tenacity of their effort.

I'm also excited to have them join us here at Stripe. As we've gotten to know Mike, Slava, and the other Rethinks -- and shared some of our plans with them and gotten their feedback -- we've become pretty convinced that we can build some excellent products together in the future. I'm looking forward to learning from all of them.

[1] (And, for me, even before Stripe -- I started learning Haskell with Slava's Lisp-interpreter-in-Haskell tutorial back in 2006... http://www.defmacro.org/ramblings/lisp-in-haskell.html)

jnordwick 4 days ago 10 replies      
Why does nobody seem to have any introspection on why RethinkDB failed? Clearly there are some major problems that people are ignoring. If my favorite DB (I must mention Kx Systems once a month) folded, I could give you a laundry list of issues where things went sideways, but all I see is glowing praise and comments about the best tech not always winning (KDB knocks the socks off of everything, but I sure can give you a list of places it fails).

This isn't meant to be harsh, but these are times to learn, not simply pat each other on the back.

simon_acca 4 days ago 4 replies      
It seems that unfortunately RethinkDB the company was architected in such a way that the success of their product, in terms of performance and developer experience, led to a decrease in revenue.

This shutdown therefore speaks volumes about how talented and ethically correct the team was, something extremely evident in how they put correctness and reliability ahead of performance.

In short, RethinkDB is a very solid piece of software that does well where (many) other NoSQLs fall short, that is:

 * easy HA and automatic failover
 * easy auto sharding
 * rational query language
 * ease of administration and awareness (web UI)
 * realtime capabilities
 * performs well on the Jepsen test!
Now, what could they have done differently to stay afloat? What avenues do we have to fund such great projects, whose point is being OSS? (I mean, one of the selling points of RethinkDB is that one can trust it in this land of NoSQLs that promise but don't deliver, and this is in part thanks to their open development processes)

jgord 4 days ago 1 reply      
I just want to personally thank Slava for his exceptional work over the past few years - at every step he and his team have been classy and professional, and brought real creativity and intelligence to the world of databases and startups.

It's clear to me that Rethink is the model for future databases - it's just that DBs have a long gestation, as no-one wants to risk their data until the code is aged like a fine wine. It's an important long-term technology play - just the kind we need to improve things for all of us.

In two or three years I think they would have been making money; I think this is a failure of capitalism, or of imagination in our HN/SV community.

To those of us who have the power to write a check: please consider doing so. Rethink have been relentlessly building the future [and you will make money].

misterbowfinger 4 days ago 4 replies      
Wow. I was seriously rooting for them. I'm looking forward to Slava's posts about their challenges on the business side. There's a part of me that's angry that MongoDB - which some would say has made an objectively worse product - has succeeded.

My initial thought is that MongoDB has done a way, way better job at SEO. The number of blog posts about RethinkDB pales in comparison to Mongo. I wonder if they got beat on sales as well? Not sure.

tbrock 4 days ago 3 replies      
I hope this isn't true. Having worked at/on multiple competitors I have nothing but respect and admiration for the work the RethinkDB team has done to make a great database and development platform.

This was real technology! I'm truly sad that the environment is such that great work like this can't continue to be funded.

Thanks for showing everyone how to write amazing documentation, caring about the fundamentals, and for the incredibly snazzy admin panel.

aeharding 4 days ago 1 reply      
Very sad to hear, but hopefully the software will continue to be developed in an open source format.

Keep this in mind when you invest in a certain technology: some organizations, especially nonprofits (for example, the Apache Software Foundation, Python Software Foundation, the new Node Foundation) are probably going to support and develop their software for extended periods of time relative to, say, a startup or for-profit (Parse, MongoDB and RethinkDB immediately come to mind).

haneefmubarak 4 days ago 1 reply      
This is honestly quite depressing for me to hear. I've always liked the team and the fantastic product they created. A couple of years ago, when I was working on a DB product myself, I met a few of their team and I was just blown away by how nice and welcoming they were, even to someone developing a product that could potentially compete with theirs.

Later on, when I was working on an NLP startup earlier this year, I opted to use RethinkDB because I had seen how clean, smooth, and fast its internals were. When I had a hiccup running a cluster in the cloud and tweeted about it, Mike and others from the RethinkDB team instantly reached out to me and helped me resolve the issue.

joneholland 4 days ago 3 replies      
A shame that a company like mongo can exist while a company like rethink folds.
kbd 4 days ago 0 replies      
Really sad to hear this. I always had a ton of respect for you guys and your commitment to solid technology. (I mean y'all even passed Aphyr's Jepsen tests out of the gate [1]) You've also always paid a great deal of attention to interface, both in your query language and in administration.

One of your engineers even wrote once that maybe you took longer than you should have and over-engineered some things, but now that that was in place y'all would be better off for it. I'm sorry that didn't wind up being the case, at least as far as your company is concerned.

RethinkDB was at the top of my list of technologies I want to build something on. I even went to your (one?) meetup in Boulder. I guess my t-shirt is now a collectable :-/

But I'm happy for y'all that you wound up finding a great place you can all work together. Best of luck!

[1] https://aphyr.com/posts/329-jepsen-rethinkdb-2-1-5

jarrettch 4 days ago 1 reply      
Just started a new project at my company using RethinkDB, very early stages, but I was amazed at how well this was documented and implemented. Coming from different ORMs used with MongoDB, ReQL was a joy to work with and I didn't even want to use an ORM (for me at least, others may disagree).

Sad day for me, but as an open source tech, I hope and trust it will continue to live on.

It sounds like the company landed at Stripe. Good for them. Glad they weren't left out in the cold.

Thank you all very much for your hard work. In the short time I've had using RethinkDB, it's been a pleasure to work with.

GordyMD 4 days ago 0 replies      
I'm very upset to hear this. RethinkDB, and the team behind it, have been inspirational to me and it has transformed my day to day development workflow. Their dedication to developer experience (amazing docs, ReQL, admin tool,...) has been something I've particularly admired. I'm relieved that the team has a new home though and it's fantastic that a company that has similar traits is the destination - props to Stripe and the RethinkDB management team for making this happen.

I hope to see RethinkDB live on in the OSS community.

marknadal 4 days ago 0 replies      
My greatest condolences to you guys. I have met with Slava a couple of times, and with Mike several. They are incredibly down to earth, and I felt they contributed greatly to the democratization and humanization of databases (an otherwise, unfortunately, very vicious and elitist industry).

So what is next? I need to strongly remind people that both Mozilla Firefox and Blender were Open Source projects that survived their parent companies. This is not a death statement, and very easily could be a rebirth of Rethink.

Despite working on a competing database, I want to take a moment to explain why RethinkDB is the best in its chosen niche, by comparing it to other options out there:

- MongoDB. Compared to RethinkDB, you won't get realtime updates or relational documents with MongoDB.

- MySQL/Postgres. Rethink's awesome ReQL query language is a joy to work with compared to standard SQL. Without Rethink, you'll be missing document storage and live queries.

- gunDB. Our model is Master-Master, which, while we think it is great, fundamentally restricts you from building Strongly Consistent systems, like banking apps, etc. RethinkDB is the only realtime-updating Master-Slave system out there. They clearly win here.

- Redis. RethinkDB still has amazing speed and performance, but offers so much more than Redis by itself can.

- Hadoop. I know less here, but I've heard that RethinkDB has an amazing query optimization engine that automatically takes your query and shards it out across the distributed storage system.
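The scatter-gather idea in that last point can be sketched in a few lines. This is purely illustrative (a toy, not RethinkDB's actual planner; the shards and predicate here are made up): the coordinator pushes the filter down to each shard and merges the partial results.

```python
# Illustrative scatter-gather: evaluate the predicate locally on
# each shard, then merge the per-shard partial counts.
shards = [
    [{"score": 5}, {"score": 9}],   # shard 0
    [{"score": 7}],                 # shard 1
]

def shard_count(shard, pred):
    """Partial result computed locally on one shard."""
    return sum(1 for doc in shard if pred(doc))

total = sum(shard_count(s, lambda d: d["score"] > 6) for s in shards)
print(total)  # → 2
```

In a real distributed engine the per-shard calls run in parallel over the network; the merge step stays this trivial only for aggregates like count and sum that combine cleanly.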

RethinkDB is a winner in these technical aspects and fits a very well defined problem. I encourage people to still use it and contribute to it if their business matches that. Don't let news like this deter you, or else we would have lost gems like Firefox and Blender.

Best regards to you guys, you are awesome.

saikat 4 days ago 0 replies      
I used Slava's blog to learn about Lisp and went on to build a product with Weblocks, which was a continuations-based web framework that Slava created back in the day (I only recently finally deleted a forgotten 8-year old todo about merging in my refactored Weblocks formview). Random trivia (kind of) - the syntax and concepts in Weblocks actually bear some resemblance to Goat, which is a framework that Patrick had created and used to build the original version of the Stripe dashboard (likely just because both of them come from a Lisp-heavy background).

I had just started a new company using RethinkDB, and it was definitely poised to be my go to database for new projects. Anyway, all this is to say, for someone who I know only through his works, I have great respect for Slava and want to thank him for the work he's done. Here's to hoping that RethinkDB finds a way to continue.

Fenn 4 days ago 0 replies      
Slava is one of the smartest people I've met (and I'm sure the rest of the team follows suit) - it's indeed sad that the RethinkDB company is no more.

I feel we're at a stage where some of the key technologies/platforms are coming out of relatively small companies (Docker, Storm, RethinkDB, to name a few); however, it appears it can be tough to make a business out of this alone.

I'm consequently very happy to see that the RethinkDB team have found a home at Stripe and hope that works as a setup to allow the talent to flourish and keep producing great work/innovations.

wojcikstefan 4 days ago 2 replies      
I recognize there are hundreds of different factors that played a role in the story of RethinkDB and its relatively poor adoption. I'd like to touch on one specific point (probably far from the most important one) - the name.

I think you guys introduced friction very early on with some of the developers because of your name. Given the context of our times, where many devs joke about "yet another 'revolutionary' database that claims to be different and then does the same thing, but dropping some crucial functionality" (for which I primarily blame MongoDB), "RethinkDB" can be considered a bit of a sardonic name. It can also read as "we need to reject the notions other databases are built on and rethink it all", which to more conservative devs can feel like an attack on decades of thought and design around their beloved MySQL and PostgreSQL.

Basically, I think this name might actively discourage developers from adopting your db or at least playing around with it.

melvinmt 4 days ago 1 reply      
> We're excited that the members of our engineering team will be joining Stripe

So is this one of Stripe's BYOT hires?

koolba 4 days ago 1 reply      
Per Crunchbase:

 Dec 2013: $8M / Series A2
 Sep 2011: $3M / Series A
 Apr 2010: $1.2M / Seed
 Jun 2009: undisclosed amount / Seed
Did the company have any revenue or was that the only money the company had to run on?

Assuming that info is correct, that's pretty slim for building a "hard tech" type of company. Building a database (from scratch no less) is a lot of up front work. Even after it's built you need to convince tech people to want to use it, convince their bosses to let them use it, and even then, hire sales people to sell support contracts.

jondubois 4 days ago 0 replies      
This is very sad. RethinkDB is my favourite database engine of all time.

I have huge respect for the RethinkDB team - They've put the product above everything else and they've made it accessible to as many developers as possible by making it open source.

I think that in spite of this announcement, I will likely continue to use RethinkDB.

ronreiter 4 days ago 2 replies      
If RethinkDB is really better than Mongo then where is the blog post about it?

Where are all the scaffolding tools? The examples?

I have only heard of Rethink because I read HN every day. But no one knows about it. That is obviously the reason it failed - bad marketing.

patkai 4 days ago 0 replies      
RethinkDB was simply too good to be true. I've been reading Slava's defmacro.org probably since he started writing it, and was watching where he would end up. It was no disappointment that he created this with his cofounders, and while I'm a bit sad they couldn't find a viable business, I know they are young and ambitious; we will hear from them.

Go Slava!

hmexx 4 days ago 1 reply      
For people mourning the loss of RethinkDB, and who are interested in alternative document-based NoSQL DBs under development, please check out ArangoDB. It's my favorite NoSQL database. I have no affiliation with them.

Beautiful query language called AQL, and graph support (multi-model) baked in.

andrewl-hn 4 days ago 1 reply      
Loved that they took care of the engineers! Stripe is a great place to work where their expertise is surely welcomed.
overcast 4 days ago 2 replies      
Holy shit, my worst nightmare is realized. I LOVE RethinkDB, I use it for EVERYTHING. It has everything I've ever wanted in a database. Relational documents, realtime feeds. Sadness. Someone better be picking up from where they left off.
laxd 4 days ago 0 replies      
I'd fund Slava and Mike again in a second. - Paul Graham on twitter
mangeletti 4 days ago 3 replies      
Does anyone have experience with Postgres' pubsub[1] as a means for realtime apps?

1. https://www.postgresql.org/docs/9.6/static/sql-notify.html

antirez 4 days ago 0 replies      
RethinkDB is great software that was really designed with technological goals in mind, one after the other. I'm a bit surprised to see the company shutting down. I mean, in some ways they have been working on the project for years, but in its final usable form the project is not so old after all... Anyway, to make it viable for the project to have a good life as an OSS project, there needs to be a big community of core developers, and that's hard. So as a first step, to ensure viability, I would strip it down as much as possible in order to reduce the codebase of the fundamental part of RethinkDB to a more manageable size/complexity.
k__ 4 days ago 1 reply      
Interesting, I expected ArangoDB to close shop before RethinkDB.

I found both nice and even built a product on top of Rethink over the last twelve months. Guess I was blinded by the hype :D

skndr 4 days ago 0 replies      
Mike has been one of the most genuine and helpful people I've met - always offering to lend an ear or offer suggestions and feedback. It must be really difficult to stop doing something that's been such a core part of your effort and thoughts for so long. I hope you and the team find interesting and engaging things to work on.
sixdimensional 4 days ago 0 replies      
RethinkDB advanced database technology. Congratulations on a good run. Like many things, we may not realize the impact of their work today, but with the product going open source, it can serve both as a learning tool/inspiration for new platforms in the future, and as something to build on. I imagine the product will take its place in the halls of database history, even if it didn't succeed in the market financially. This is a legacy to be proud of, and I'm glad to see the team has somewhere awesome to go!
h1d 4 days ago 4 replies      
A while ago, I heard how good RethinkDB is and tried to check it out and found this page.


As the query became more complex, I just felt everything was getting less intuitive compared to the SQL version, and I ended up not actually using it.

For anyone who switched from SQL, I'd like to know how it felt to write RethinkDB queries. Did they start to feel fluent later on?

adharmad 4 days ago 0 replies      
Very sad to know that RethinkDB is shutting down. This was one company that I wanted to see being successful! A great tech team taking on a hard technical problem. Even if they are closing down, I hope they have changed the landscape.

Also, having followed Slava for some time: nothing but respect and admiration for him!

lifeisstillgood 4 days ago 0 replies      
I would be very interested hearing @coffeemug's views on the tension between building a business and building an open source product. (Those I know who have done this range from "never again" to "lost a fortune in sales")
rer 4 days ago 1 reply      
I wish RethinkDB had published a performance benchmark against Hadoop. It'd be enough to show the world how ahead of its time RethinkDB is.

I also wish I knew what RethinkDB's investors are thinking.

JulianMorrison 4 days ago 0 replies      
I think I'm gonna try and teach myself to work on this in its new open source incarnation. I was rooting for it hard while it was still a company, but it was perpetually in a state of "soon, not yet", and now "not yet" is never unless the people who want it step up.
OhSoHumble 4 days ago 0 replies      
This is heartbreaking. I have used it for a couple of different projects and it's just a dream to work with.
v3ss0n 1 day ago 0 replies      
Since our startup is built entirely on Tornado + RethinkDB + Ember.js + Python, this is a huge blow.

But I do not worry: since this was announced, many folks from the community are willing to support it, and many are stepping up to contribute to RethinkDB in terms of both profit sharing and code.

Also, Stripe and Compose.io have stepped up to stand with RethinkDB.

I, for one, a startup entrepreneur and founder of a software agency from Myanmar, promise to share at least 5% of our profit on projects developed using RethinkDB to keep it running.

programminggeek 4 days ago 1 reply      
What was RethinkDB's business model and why didn't it work?
KirinDave 4 days ago 0 replies      
This is really sad. RethinkDB is an amazing product.
jordanthoms 4 days ago 0 replies      
This is sad - was planning on moving from Postgres to Rethink once we start to hit scalability limits / need to shard. I guess we will see how the open source efforts go.
uptown 4 days ago 5 replies      
Bummer. Had Rethink in my list of things to eventually tinker-with. Is it stable where it should still be considered as a viable choice for your tech stack, or is this pretty much a deal-breaker?
bhouston 4 days ago 2 replies      
That is sad. We were considering switching to RethinkDB from MongoDB.
Kiro 4 days ago 1 reply      
While RethinkDB looks amazing, I'm glad I didn't listen to the people here urging me to switch from MongoDB. In the future I will just stick with what works.
solidr53 4 days ago 0 replies      
Their chance of surviving would have increased dramatically if users could install RethinkDB as an add-on on Heroku.

Availability as a service was an issue; the Compose deal did not work well for me, as my stuff was hosted on Heroku.

Anyway, rethinkdb is probably the most awesome database I've tried.


samdung 4 days ago 2 replies      
RethinkDB fits into the category of products I love but do not use because I do not want to manage the underlying servers. I've moved to Firebase and other 'services' on the Google and AWS clouds.

Given that this is the way many of us are operating (i.e. 'moving to the cloud'), I wonder what the state of open source software is going to be in the coming years ...

jeffy 3 days ago 0 replies      
Never used RethinkDB, but this is the first time I've seen an announcement like this written as "we are shutting down, the team is moving to Stripe" rather than "We've been acquired by Stripe! Nothing will change except our product will stop working in 30 days"

Much respect for the no-BS announcement.

djsumdog 4 days ago 0 replies      
Haven't played with it, but everything about RethinkDB looks positive. Hopefully it will continue to develop in the open source community.

There are so many projects that really need attention to keep open source growing. I started looking at what I could do for IRCv3 projects recently because I'd really like to support an open source alternative to Slack.

niahmiah 4 days ago 1 reply      
Quite a sad development. I always thought they would have found more success if they had adopted MongoDB's API (or similar).
hiphipjorge 4 days ago 0 replies      
As a former employee at Rethink, I'm super sad to hear about this, but know that they are all great people capable of even more than what they have accomplished with this amazing database! Will continue to recommend RethinkDB ferociously to everyone I know and will always fondly look back at my time there!
ginkgotree 4 days ago 0 replies      
This is really so sad. RethinkDB is top notch. Truly a great team.
kerkeslager 3 days ago 0 replies      
RethinkDB is an amazing product. I've had the pleasure of using it a few times in my projects and the experience has always been very smooth. I would recommend RethinkDB to anyone, and you guys should be very proud of your work.

I'm sad that all that excellent engineering hasn't translated into a profitable business model. Part of me wants to suggest you make a donation campaign, and I'd be happy to become a regular donor. But it's understandable and respectable that you guys want to move on to other ventures.

I've also used Stripe's API and their systems are also very good. I'm glad to hear you'll find a place with another solid engineering team and I wish you guys all the best.

sheeshkebab 4 days ago 1 reply      
Why is there no "Pricing" section on their website? How were they making money (if at all)?
cybrix 4 days ago 1 reply      
This is very sad to hear; this looked like a real product.

Quite off-topic but has anyone managed to implement something similar to Changefeeds in MongoDB? RethinkDB sends both old data and new data and watching the oplog sounds like a bad idea.
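As a toy model of the old/new pair behavior the comment describes (illustrative only - this is neither RethinkDB's nor MongoDB's actual API, and the table and documents are made up), a changefeed-style stream hands subscribers each write as an {old_val, new_val} event:

```python
import queue

# In-memory toy: a "table" plus a feed that emits {old_val, new_val}
# for every write, mimicking the shape of a changefeed event.
feed = queue.Queue()
table = {}

def upsert(key, doc):
    old = table.get(key)                  # None on first insert
    table[key] = doc
    feed.put({"old_val": old, "new_val": doc})

upsert("u1", {"name": "ada"})
upsert("u1", {"name": "ada", "score": 3})

first = feed.get()
second = feed.get()
print(first["old_val"])    # → None
print(second["old_val"])   # → {'name': 'ada'}
```

Tailing an oplog, by contrast, gives you only the operation that was applied, which is why reconstructing the old value on top of MongoDB is the awkward part.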

baristaGeek 4 days ago 0 replies      
I have tinkered with RethinkDB a lot recently. It's the NoSQL DB (and the DB of any kind, I believe) with the deepest focus on real-time features, which, combined with Go's concurrency model, can support really powerful and highly scalable real-time apps.

But my question is: what are your thoughts on building not just a product but a business on RethinkDB? Will the support quality decrease? Will Stripe offer architectural consulting? Will developers be interested in learning ReQL?

amelius 4 days ago 0 replies      
It is sad to see that making a living from development tools is becoming less and less of an option.

Kudos to the RethinkDB team for trying, and giving us a very useful DB.

breatheoften 3 days ago 0 replies      
I'm sad about this. I really liked RethinkDB and thought the potential for future development seemed really promising. They seemed to really know how to ship -- it's sad to imagine the momentum they had created fizzling out ... I would've liked to have seen what they would have come up with over the next couple of years ...
_jezell_ 4 days ago 0 replies      
Slava is awesome. Rethink is awesome. Hope Stripe realizes investing in Rethink dev could pay massive dividends in developer goodwill.
spullara 4 days ago 0 replies      
Really sorry to hear it Slava. You guys were doing great work. At the end of the day, I think that open source has really killed pure database companies aside from the incumbents. We'll see where MongoDB ends up but they have the advantage that a support contract business model works when your product needs support.
dylz 3 days ago 0 replies      
Just wanted to say good luck with everything to the team, and especially to neumino - you've personally helped me years ago on freenode writing some long ReQL queries when I first started experimenting with Rethink. :)
longsien 4 days ago 1 reply      
I love Rethink and Horizon. Are there great risks in continuing to use Rethink even if development has stopped?
mrcwinn 4 days ago 0 replies      
I never used it in production (lack of need, not a lack of interest), but I always enjoyed playing with RethinkDB. Really fantastic work - both under the hood and the presentation on top. Be proud of the work everyone did. Stripe is a great home to end up at!
RangerScience 4 days ago 2 replies      
To be clear: as part of shutting down, you guys found new jobs for your employees where you could? That's rad!
mrbogle 4 days ago 0 replies      
This is sad. Looking forward to the post-mortems. A question in the interim: Did rethink ever consider building a SaaS business around the product? What were the reasons against it? Were there (a lot of) customers asking for a hosted version?
ianwalter 4 days ago 0 replies      
This is the saddest I've ever been about a software project folding. Good luck to the team.
dantiberian 4 days ago 0 replies      
I've been working on a Clojure RethinkDB driver for about a year now, and I've always been incredibly impressed with the Rethink team. Their openness, friendliness, and helpfulness when asked questions have always been outstanding. You'll all be sorely missed.
kriro 3 days ago 0 replies      
Looking forward to the more detailed writeups since I'd love to read why it wasn't possible to build a sustainable company around such an excellent product.

Good move by Stripe to pick up these fantastic engineers.

chrischen 4 days ago 1 reply      
I find it hard to trust HN users: I was about to adopt and migrate from MongoDB to RethinkDB based on the consensus in HN comments that it was undeniably superior, yet apparently it was actually close to shutting down.
silasb 4 days ago 0 replies      
Wow, I've always used RethinkDB for toy DB stuff here and there, but nothing serious. I loved how well everything was put together, from the documentation to the homepage. Hopefully the project can live with OSS contributions.
crudbug 4 days ago 1 reply      
Core tech products are hard to sell. With so many passengers on the big data boat - MemSQL / VoltDB / Postgres / DataStax / Databricks / Cloudera / Hortonworks and many more.

Your sales force strength becomes critical.

Waiting for insider insights.

desireco42 4 days ago 0 replies      
I don't know what to think. I am angry and sad that my favorite db is going away?! On the other hand, I am trying to understand and be supportive of the team behind it, as I have huge respect for them and their work.
bryanh 4 days ago 0 replies      
Massive bummer - Slava and Mike and the team were super top-notch. RethinkDB will always be my go-to example for explaining the delta between a great and a poor first experience for any dev tool.
kevindeasis 4 days ago 0 replies      
I've been teaching my siblings programming. It took them literally a day to learn RethinkDB.

That should say a lot about how amazing RethinkDB is. It's unfortunate to see this get shut down as RethinkDB is pretty easy to work with

jmcgough 4 days ago 2 replies      
I'm not familiar with RethinkDB - what was their monetization strategy?
thomaseng 3 days ago 0 replies      
This is depressing :/ I'm hoping RethinkDB will have an afterlife.
cdaringe 3 days ago 0 replies      
Best of luck guys! RethinkDB is still #1 to me :). Hopefully this ship keeps sailing!
the_common_man 3 days ago 2 replies      
Looking at their website, they are being used by: NASA, Jive, wise.io, Mediafly, etc.

Just another confirmation that many of those companies list all these big well-known 'customers'... except they are just lying. Maybe someone at NASA downloaded RethinkDB once and they just listed them there?

Suvitruf 4 days ago 0 replies      
Sad news, but I Gotta Believe!
doczoidberg 4 days ago 0 replies      
Anyone have insights into Firebase? I hope Firebase will not run into the same troubles.
tehchromic 3 days ago 0 replies      
Ok I'm going to go with: Rethink is dead, long live Rethink!
vmware508 3 days ago 0 replies      
I loved RethinkDB. I'm sad.
onurozkan 4 days ago 0 replies      
I'm glad they are joining Stripe, both of them great and cool companies.
flamedoge 4 days ago 0 replies      
I await the postmortem on what went wrong.
ahmd 4 days ago 0 replies      
This seems to me like a kind of acqui-hire.
whoopdedo 4 days ago 1 reply      
How long have the Mets fans been doing that hammer cheer?

Because it looks eerily similar to another arm swinging chant that another NL East team does.

wutf 4 days ago 0 replies      
Time to rethink RethinkDB.
ilaksh 4 days ago 1 reply      
People saying they "failed" -- I disagree. It is VERY hard to sell products that are essentially free. It's largely down to network effects and luck, or having an _in_ with the good ol' boys network. It's not more popular than every other DB... so what? The best systems are rarely the most popular. Btw, popularity is not merit.

But they made a badass database that, relatively speaking, IS very popular. So that's not failing. That's winning.

Also maybe these people are being more honest and responsible than some startups that keep flushing millions knowing they are unlikely to ever have a positive cash flow.

Plus getting acquired by some company like Stripe, realistically, whether there was some big exit or not, is the dream of many people.

So given the reality of the startup scene to say they 'failed' is a joke.

Yahoo scanned customer emails for US intelligence trust.org
1341 points by tshtf  5 days ago   395 comments top 53
DubiousPusher 5 days ago 20 replies      
I think the attitude here that most tech companies are rolling over and just complying without a single ethical consideration is misplaced.

The government has been doing an excellent job of basically extorting these companies into compliance. They threaten the full weight of the US government's wrath and then tie every order up with classifications and gag orders.

You aren't legally allowed to talk to other companies in the same position. Most of your legal team probably doesn't get to know what's going on. You can't take your case to the public without being held in contempt.

I'm not giving these companies a complete pass for being complicit in the erosion of individuals' civil liberties, but treating this as if the decision is easy is vastly unfair.

cJ0th 5 days ago 4 replies      
Anyone remember this?

> Barack Obama: NSA is not rifling through ordinary people's emails. US president is confident intelligence services have 'struck appropriate balance', he tells journalists in Berlin

edit: link fixed https://www.theguardian.com/world/2013/jun/19/barack-obama-n...

rdl 5 days ago 3 replies      
I was honestly a bit unhappy when Stamos left Yahoo in the middle of a bunch of (what seemed like) cool projects for users -- seemed like he was just jumping ship from an objectively pretty crappy company to a continuing-to-accelerate rocketship, presumably for career reasons.

However, if it went down like this -- he did probably the least destructive thing possible. I probably would have gone public or done something stupider, but at the very least not being a party to ongoing abuse of users' trust is necessary.

I'd like to see what other senior execs at Yahoo! were aware of the program and supported or at least tolerated it, so I can avoid ever working with any of them.

kefka 5 days ago 4 replies      
Lets take it a different way:

You're knowingly sending your data to a 3rd party. You're not encrypting. It's not through the USPS (special protections).

It seems bloody evident that, of course, your email provider can read your emails! Unless you're encrypting with GPG, then they can (and they can still read the signing keys).

Yahoo, Google, and friends all scan, dedup, and all sorts of tricks to determine marketing and quality content (spamming). If you're worried, run your own mailserver. It's what I do, along with using gmail. But I know that, at any time, people/scripts/ai are reading everything sent and received.

edit: I'd much prefer to hear commentary/how wrong/how right/how crazy I am, rather than -1's. I'd like to hear a discussion about the "Secrecy of text written on postcards"....
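The point that a provider can read anything not encrypted client-side can be made concrete with a deliberately toy sketch (a one-time pad, not GPG; key distribution and authentication are entirely ignored): the server only ever stores ciphertext.

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy one-time-pad: XOR each byte with a random key byte.
    Not real crypto hygiene (no authentication, no key management);
    it only illustrates that the server need never see plaintext."""
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

msg = b"meet at noon"
key = secrets.token_bytes(len(msg))      # generated and kept on the client
stored_on_server = encrypt(msg, key)     # the only thing the provider sees
assert decrypt(stored_on_server, key) == msg
```

Anything short of this pattern -- encrypt before it leaves the device, keep the key off the server -- leaves the provider able to scan at will.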

smsm42 5 days ago 0 replies      
Most illustrative part: "Yahoo President Marissa Mayer and the company's legal team kept the order secret from the company's security team."

If you have to hide things from your own security team, it's pretty clear you're doing something very bad and you know it.

And my imaginary hat off to Stamos for resigning when he found his boss betrayed user privacy and undermined security. If everybody had such level of integrity, doing shady stuff would be much harder.

jonknee 5 days ago 1 reply      
It sounds like Yahoo will fit right in at Verizon... It also sounds like another leak designed to damage Marissa Mayer:

> According to the two former employees, Yahoo Chief Executive Marissa Mayer's decision to obey the directive roiled some senior executives and led to the June 2015 departure of Chief Information Security Officer Alex Stamos, who now holds the top security job at Facebook Inc.

yladiz 5 days ago 4 replies      
While it is damning that Mayer didn't go to Stamos about this and went straight to the email team, it's hard to say whether she felt it was necessary to tell him, or was even allowed to, since we don't see the court orders and what they entail. It's really easy to be against this and play armchair preacher but this is something she probably had no choice in, in many ways.

Also, I'm wondering if this story is bigger because people love to hate on Mayer. I am certain this kind of thing happened/happens at Facebook, Google, Twitter, WhatsApp, etc., so it's confusing why this is so newsworthy. It's not really newsworthy that data from an email provider is sent to NSA under secret court orders and NSA can search the full text of it. Is the newsworthy part that she asked the team to do it without consulting the security team? My question would be, why wouldn't a manager from the email team consult the security team if they had the power to?

boren_ave11 5 days ago 2 replies      
Friendly reminder: the FBI and NSA are part of the executive branch of government and report to the President of the United States. Make no mistake -- there absolutely is someone who could stop this. The fact that this clearly unconstitutional activity not only continued after being exposed, but actually appears to have expanded in its scope, leaves us with but one conclusion: the President supports this activity and wants it to continue.
suprgeek 5 days ago 0 replies      
The scariest part of the whole piece answers this question: Why are back doors with secret keys a BAD idea?

"... he had been left out of a decision that hurt users' security, the sources said. Due to a programming flaw, he told them hackers could have accessed the stored emails...."

The CEO of Yahoo must have known that this kind of scanning and storage puts their users at risk. She chose to do it anyway as the path of least resistance against a more powerful adversary (the US govt.). Bad judgement compounded by zero spine... Verizon looks like the perfect fit.

josh2600 5 days ago 0 replies      
I mean, think about the threats from .gov, right?

$250k per day doubling every week that can come with a gag order sounds like the sort of thing that could damage a business to the point of extinction, no?
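Taking that figure at face value, the compounding is easy to sketch (a back-of-the-envelope in Python; the exact schedule -- the daily rate doubling at the start of each new week -- is an assumption about how "doubling every week" was applied):

```python
def daily_fine(day, base=250_000):
    """Daily fine in dollars on a given 0-indexed day, assuming the
    daily rate doubles at the start of each new week."""
    return base * 2 ** (day // 7)

def total_fine(days, base=250_000):
    """Cumulative fine after `days` days."""
    return sum(daily_fine(d, base) for d in range(days))

# After four weeks the cumulative total is already $26.25M,
# and the daily rate alone is $4M/day entering week five.
```

Under that assumption the penalty reaches extinction-level numbers within a couple of months, which makes the "damage a business to the point of extinction" worry concrete.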


zmanian 5 days ago 2 replies      
Secret URL for deleting your Yahoo account.


taivare 5 days ago 0 replies      
This reminds me of what happened to my grandfather in the early 30's. He was employed by a small glassworks in PA, a factory town that owned his home, the town store, the post office, everything. They opened his mail and fired him for trying to start a union. Three kids under five and a wife thrown out on the street. Seems like the Oligarchs are still reading the serfs' mail all of these years later.
JustSomeNobody 5 days ago 0 replies      
Let's see a show of hands for those who think Yahoo was the only one?
lasermike026 5 days ago 0 replies      
Distribute, encrypt, and anonymize. The only way forward doesn't include them.

Congress is up for grabs. You can really change who is in Congress this round. If you don't like the guy you have, vote in another. Vote for people who want to cut surveillance programs and the agencies that request them. We could save or reallocate mountains of money.

pkaeding 5 days ago 0 replies      
Yahoo was attributing its recently announced data breach to state-sponsored attackers.... Maybe that wasn't so far off the mark after all.
Floegipoky 5 days ago 0 replies      
Ignoring fiduciary responsibility for a minute, what would happen if a publicly-traded company refused to comply with such a court order until they were required to release a financial statement? Wouldn't they be legally required to disclose that multi-million dollar fine?

How would a company under such a gag order announce bankruptcy? "Sorry, we lost all the money and we can't tell you why"?

zby 5 days ago 0 replies      
The interesting part of the news is this:

"""The sources said the program was discovered by Yahoo's security team in May 2015, within weeks of its installation. The security team initially thought hackers had broken in."""

this is from Reuters: http://www.reuters.com/article/us-yahoo-nsa-exclusive-idUSK

I can imagine being in that security team :) But there is also something more profound in this about secrecy in our times.

Esau 5 days ago 6 replies      
The lesson from this is to not trust corporations with our privacy. Sadly, it seems many of us are not learning it.
ChicagoDave 5 days ago 1 reply      
I find this hilarious since the only thing I use my yahoo address for is retailer sign-ups and things I know will land me a boat load of junk mail. It is my email landfill.
AnimalMuppet 5 days ago 0 replies      
From the article: "Some surveillance experts said this represents the first case to surface of a U.S. Internet company agreeing to a spy agency's demand by searching all arriving messages, as opposed to examining stored messages or scanning a small number of accounts in real time."

The first case to surface. Anybody else could have been doing it for just as long, but we don't know yet.

vermontdevil 5 days ago 4 replies      
Now gotta wonder if Google has succumbed to government pressure to do the same.

I'm really hoping and trusting they haven't.

markpapadakis 5 days ago 1 reply      
I imagine Yahoo! Mail engineers being royally pissed about this. Well, I suppose that includes all Yahoo! folks who are still putting real effort into improving Y!'s services. Every odd day something surfaces about Y!'s execs' questionable practices and decisions; every even day, problems, leaks, bad press. Morale must have hit rock bottom.

Maybe the Yahoo! Board should have surveyed the startup scene, looking for founders who bootstrapped successfully and proved their worth, and recruited the best they could get. I am not very familiar with management of people and aspects of running a business, but I believe there is a lot more to it than being a smart person with computers.

_audakel 5 days ago 1 reply      
If she had wanted this to get out, I wonder if she could have ordered the email team to go ahead and build out the sniffer so she was not in contempt of court, but let her security team openly blog about it, without informing her, when they found it -- which could lead to an inadvertent release of the info? If the sec team was not under the gag order, maybe they would not have gotten in trouble.

Or, to take it to super-boss level, she could have used Whisper to talk to Guccifer and let him know about some vuln that would allow access to the legal directory... which would expose the gag order. #wikileakitup

zmanian 5 days ago 0 replies      
This is substantially worse than PRISM which operates on individual targeted persons and the upstream Verizon, AT&T program which collects plaintext over the public Internet.

This involved bulk search of data past the decryption layer.

tkinom 5 days ago 1 reply      
Since all these companies (Yahoo, Google, FB, MSFT, etc.) operate in, and have users in, other countries, what happens when other countries/governments demand the same "search/access" to info?
En_gr_Student 5 days ago 0 replies      
It was part of Carnivore, and AT&T also supported that. I'm pretty sure all major vendors had hooks into their systems for Carnivore.
0xmohit 5 days ago 0 replies      

 Yahoo Inc last year secretly built a custom software program to search all of its customers' incoming emails for specific information provided by U.S. intelligence officials, according to people familiar with the matter.
Wonder how much of the 4.8 billion can be attributed to this custom software program?

turc1656 5 days ago 0 replies      
This shit needs to stop. Immediately.

Like most people, I have no problem with the government using probable cause to get warrants that are in search of something specific (none of these grab-all bullshit orders). If you have a legitimate reason to be looking at someone, then there should be no problem getting a warrant.

These secret FISA court orders are a serious violation to the rights of Americans in many cases. At minimum, if we really do need these secret courts to prevent people from finding out they are the subject of surveillance, then there needs to be an expiration on those gag orders. This crap about never being able to mention it FOREVER has to go. There should be a limit, say 5 years, which is well beyond the length of time most investigations take. At that time, those orders should expire so that these government actions can be brought to light if there is any question of wrong-doing on the part of our overzealous law enforcement.

"Former NSA General Counsel Stewart Baker said email providers 'have the power to encrypt it all, and with that comes added responsibility to do some of the work that had been done by the intelligence agencies.'" Sorry, but no. That's not how it works. There is no obligation to do the work of government unless it is actually written into law (i.e. record-keeping laws). And it currently is not. This is precisely why everyone should be encrypting all communications on the CLIENT side themselves. It should never leave your device (PC, phone, whatever) unencrypted. That way, if the government wants to go on a fishing expedition or has an actual legitimate reason to look at you, they will have to get a warrant for the device itself, which will at least give you a head's up that they are trying to put you in the clink with a bunkmate named Bubba.

The NSA, and the government in general, has completely blown any goodwill they once had with the public. Under no circumstance will I ever advocate for anything that makes their job easier. And it is for no other reason than simply because they have proven time and again they cannot be trusted.

Honestly, I'm still not even clear why every employee of project PRISM isn't rotting in a jail cell right now after Snowden shed some light on the program for the rest of us peasants. Every single employee of that program had to know the clear violations of the constitution they were helping to partake in. Keep in mind the constitution protects against unreasonable SEIZURE as well as search. Gobbling up communications in the manner they did clearly counts as seizure because they would not have had them otherwise - whether or not they actually search the records is immaterial.

I'm not an Apple fan, but when they told the government to go pound sand regarding that terrorist phone encryption case, that was the first time that I can recall I actually approved of Apple's political position on something.

Zigurd 5 days ago 1 reply      
Some people here laud some companies for being good about user privacy and security. This shows they have not yet reached table stakes for privacy and security.

This is why no provider can be trusted. Every routine communication should be e2e encrypted. Otherwise this WILL happen.

feefie 5 days ago 2 replies      
Is this the best solution? https://emailselfdefense.fsf.org/en/

Getting anyone else I know to do this seems like a long shot. Is there something simpler?

hackuser 5 days ago 0 replies      
Note the attitude toward encryption:

Former NSA General Counsel Stewart Baker said email providers "have the power to encrypt it all, and with that comes added responsibility to do some of the work that had been done by the intelligence agencies."

Taek 5 days ago 0 replies      
Another reason for users and enterprises alike to avoid US companies and services. And another reason for entrepreneurs to start companies outside the US - escape the stigma, escape the potential clash with secret courts.
ArkyBeagle 4 days ago 0 replies      
So you really think that a free email service will "protect your privacy?" Any of them?

Why would you think that?

FWIW, SIGINT is a major part of the present festivities in the Woah on Terruh. It's simply unrealistic to expect anything transmitted through ordinary means to be remotely private.

cornchips 5 days ago 0 replies      
Any large company should openly defy such an order.

What will they do??? Fine them, take them to court, shut down the company? If that happened, would the public not cry out?

johansch 5 days ago 2 replies      
So, is this correct, in this context?

Pass: Apple, Google

Fail: Microsoft, Yahoo

Unknown: Facebook, Twitter

jokoon 5 days ago 1 reply      
To be frank, the more I hear about those stories, the less I'm shocked.

There is nothing to be shocked about. As long as nobody other than intelligence officials is getting access to this, and the investigations are legit, then what?

News like this is trying to ride the whole Snowden train, but that's not what Snowden was whistleblowing about. Snowden was trying to warn about the abuse of those tools.

Now people moan and yell each time agencies try to do their job.

jameshart 5 days ago 0 replies      
Any chance that this, and the recently announced historical account breach, are coming out as artifacts of Verizon's due diligence?
honyock 5 days ago 0 replies      
This is not at all surprising! BTW, I don't know a single person who has an email account with Yahoo who is not older than 60!
awt 5 days ago 0 replies      
That the USG attempted this is a sign of deep-seated incompetence at a philosophical level.
gjolund 5 days ago 0 replies      
Good riddance. I don't understand what is worth scavenging from the carcass.
jmadsen 5 days ago 0 replies      
I'm sorry, but have you used Yahoo Mail?

I don't believe they are capable of writing the "siphon" they are accused of. To be honest, I don't think they actually have engineers. I think they just use summer interns.

pseingatl 5 days ago 0 replies      
They moved heaven and earth to try to find Snowden.
aszantu 5 days ago 0 replies      
Having my Yahoo as a spam-mail account for registrations, they probably scanned gigabytes of all sorts of stuff xD
VOYD 3 days ago 0 replies      
Took them long enough ;)
ChoHag 5 days ago 0 replies      
But continue to find themselves stumped?
lifeisstillgood 5 days ago 0 replies      
And it did not find any :-) !!!
ezoe 5 days ago 0 replies      
So, when do Americans exercise their Second Amendment rights and liberate themselves from this totalitarian government?
exabrial 5 days ago 0 replies      
Thanks Obama!
trendia 5 days ago 5 replies      
In China and Russia, it is well known that all oligarchs are corrupt.

However, not all of them will go to prison -- only those who cross the politicians will ever be tried and convicted.

cheeze 5 days ago 0 replies      
Can we merge https://news.ycombinator.com/item?id=12637302 into this? Same exact headline
floor__ 5 days ago 1 reply      
thwee 5 days ago 0 replies      
It should read "...Yahoo Chief Executive Marissa Mayer's decision to indulge the directive..." indulge, not obey.
singularity2001 5 days ago 2 replies      
Google overtly scans your emails for anything.
Google Noto Fonts google.com
1191 points by bpierre  3 days ago   302 comments top 51
ixtli 3 days ago 7 replies      
I don't think this communicates the _incredible_ amount of creative and technical work required not only to simply fill out over 100,000 characters in the UTF-8 space but to make them stylistically consistent.
eganist 3 days ago 3 replies      
So in a nutshell, typefaces which literally span every single defined character?

Are there special optimizations implemented for different use cases as well, e.g. screen v. print and sub-varieties of each? Ten years ago with Vista, Microsoft Typography (https://www.microsoft.com/en-us/Typography/default.aspx) put out a family of typefaces--Cambria, Calibri, Consolas, etc.--which were optimized specifically for sub-pixel rendering on LCD screens while maintaining on-paper legibility. I'd be cool with Noto not having any such optimization in mind given that the stated objective appears to be to include every defined character, but I do wonder if it should happen eventually.

...or maybe not, who knows. Pixel densities have now approached ludicrous territory. It might just no longer matter, at least when we're talking about optimizing for screens.

fazlerocks 3 days ago 9 replies      
We had to stop using this font as it didn't support the backtick (`). Try typing ` multiple times in the search box.

We had reported this to Google some months back but got no response.

tcfunk 3 days ago 5 replies      
I keep wanting to find a font to use on the portions of my website that contain Japanese, but they are all so big! No way I'm making my visitors download 115MB worth of font just so it looks a little nicer.

Edit: Thanks for the tips, I will look into those options :)

ppierald 3 days ago 1 reply      
I always thought the term Tofu was "mojibake": https://en.wikipedia.org/wiki/Mojibake

Wikipedia's disambiguation page for "Tofu" mentions: Slang for the empty boxes shown in place of undisplayable code points in computer character encoding, a form of mojibake


samfisher83 3 days ago 4 replies      
Is Google using AWS to host this?

If you look at: https://noto-website.storage.googleapis.com/

You will see the following:

  <?xml version='1.0' encoding='UTF-8'?>
  <ListBucketResult xmlns='http://doc.s3.amazonaws.com/2006-03-01'>
    <Name>noto-website</Name>
    <Prefix></Prefix>
    <Marker></Marker>
    <NextMarker>emoji/emoji_u1f468_200d_1f468_200d_1f466_200d_1f466.png</NextMarker>
    <IsTruncated>true</IsTruncated>
    <Contents>
      <Key>css/emoji-zsye-color.css</Key>
      <Generation>1464738619772000</Generation>
      <MetaGeneration>1</MetaGeneration>
      <LastModified>2016-05-31T23:50:19.729Z</LastModified>
      <ETag>"e3aaae52d88ced070044f59d1efe2009"</ETag>
      <Size>152</Size>
      <Owner/>
    </Contents>

Are they using Amazon S3?

They just changed it @1:00 pm, so it no longer mentions AWS.

dorianm 3 days ago 1 reply      
It was released originally in 2013: https://en.m.wikipedia.org/wiki/Noto_fonts

Such an amazing project

erichocean 3 days ago 0 replies      
> The name noto is to convey the idea that Google's goal is to see no more tofu.

Nofu would have been a better name if that's actually the goal.

ThatMightBePaul 3 days ago 2 replies      
I wanted to try the Noto mono out for programming, but it looks like the `O` is indistinguishable from the `0` :/

Otherwise, it's a nice looking font for the editor.

aluminussoma 3 days ago 2 replies      
Noto fonts are great. I especially love the OFL licensing. I wanted to include fonts in a mobile application and there was confusion on whether including a GPL version of the fonts would force me to GPL my app - something I could not do. I couldn't find a definite answer.

With Noto's OFL licensing, I no longer had that worry.

kccqzy 3 days ago 1 reply      
It doesn't seem to support the Tangut script, which was added in Unicode 9.0. It's the first thing I test if a font claims to support all languages in Unicode. To my knowledge, I haven't really seen any general-purpose fonts containing support for Tangut (probably because no one is going to use it). I thought Google had already completed this project, but apparently they haven't yet.
stesch 3 days ago 1 reply      
Fun fact: People behind a WatchGuard firewall (default settings) won't see Google fonts (at least in Firefox) because the firewall filters the CORS headers.
amake 3 days ago 1 reply      
The claim of full coverage of 110,000+ characters (they are targeting Unicode 6, apparently) appears to be false: Noto Sans CJK covers approximately 30K characters[1], while as of Unicode 6.0 there were 74,614 CJK Unified Ideographs (calculated from [2]).

Edit: Using a script I made to check codepoint coverage[3] I get 63,639 codepoints with glyphs defined for all Noto fonts included in their default download (Noto-unhinted.zip).

[1] https://github.com/googlei18n/noto-fonts/issues/717#issuecom...

[2] https://en.wikipedia.org/wiki/CJK_Unified_Ideographs

[3] https://gist.githubusercontent.com/amake/53b2331a2547b94f430...
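A codepoint-coverage check of the kind described above can be sketched in a few lines (a simplified, hypothetical version; a real checker would read the codepoints out of the font's cmap table, e.g. with fontTools, rather than take a prebuilt set):

```python
def coverage_report(covered, blocks):
    """Given `covered` (a set of codepoint ints, e.g. extracted from a
    font's cmap) and `blocks` (name -> (first, last), inclusive), report
    per-block (covered, total) counts."""
    report = {}
    for name, (first, last) in blocks.items():
        total = last - first + 1
        have = sum(1 for cp in range(first, last + 1) if cp in covered)
        report[name] = (have, total)
    return report

# Hypothetical toy data: a "font" covering 2 of a 5-codepoint block.
print(coverage_report({0x20, 0x21}, {"toy block": (0x20, 0x24)}))
# {'toy block': (2, 5)}
```

Run per Unicode block against the advertised repertoire, this is enough to surface the kind of CJK shortfall described above.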

tscs37 3 days ago 0 replies      
Noto & Roboto are probably the two fonts I love more than anything now, I just find them so pleasing, they don't do anything stupid and they look good.
mathiasrw 3 days ago 1 reply      
How to make a quick logo? Just write the name of the company and use the https://www.google.com/get/noto/#sans-bugi font...
capex 3 days ago 1 reply      
Refreshing to see the lorem ipsum replaced with something meaningful.
wst_ 3 days ago 1 reply      
Looking at fonts recently, I am starting to crave a 4K LCD panel. When people were switching to HD panels there was a big WOW around, but after buying a smartphone, I realized that HD is good for a smartphone, but not enough for anything big. I love to read on my smartphone and I love to code, too, but fonts look just crappy (especially on Windows). Sadly, good 4K panels out there are still quite pricey.
amyjess 3 days ago 0 replies      
The sans has a real italic and not just an oblique. Color me impressed: real italics aren't very common in sanses, and that goes double for FOSS fonts. Very nice.
xendo 3 days ago 0 replies      
I wonder why they didn't share the fact that it was created using AI.
jyrkesh 3 days ago 1 reply      
Has anyone managed to install the Color Emoji font on Windows? Getting something about it not being a valid font file
ythl 3 days ago 2 replies      
Wow, they implemented Deseret - the Mormon-invented phonetic alphabet
jug 3 days ago 0 replies      
Why follow up Roboto? An honest question, not a rhetorical one. I don't yet prefer any one over the other.

I understand it covers a large part of Unicode, but if that is what makes it unique, couldn't Roboto just be extended?

tobltobs 3 days ago 0 replies      
That is great. I am working on a project where I require a font which includes all (or at least as many as possible) Unicode symbols. Until now I was thankful that I could use GNU Unifont as a fallback, even if it was ugly. But this will make my app look so much better.
pat2man 3 days ago 1 reply      
Any idea why they don't just merge Roboto and Noto?
xbryanx 3 days ago 2 replies      
This is really fantastic. It sent me looking for source materials for the character choices made for each language.

I'm dealing with a media experience in Dakota and Ojibwa right now where we have source material that is spelled/character-ed quite differently than the alphabet provided by Noto in those languages. Given the scale of this project, I assume that some considerable thought went into each language's character set, but it's difficult to know for sure without any sourcing. The git commit logs don't offer up any hints. Anyone familiar with the project, know where I could find this sort of source information?

Should I be referencing something in the Unicode definitions for these languages?

dvcrn 3 days ago 2 replies      
Is it possible to replace the Mac system font for some languages with this?

I prefer the Noto Japanese typeface over the one that comes with Mac and would like to replace it.

ww520 3 days ago 0 replies      
The font support spans so many languages. This is incredible. Font work is hard and tedious, and thankless. Kudos to Google for open sourcing it.
Dowwie 3 days ago 2 replies      
What are hints, and why are the Noto fonts packaged as hinted and unhinted? I'm not sure which package to use for Linux.
sushid 3 days ago 1 reply      
Surprised to see that Google is supporting even archaic Korean [0], but it would have been nice to see a chunk of text in Korean, Japanese, and Chinese, as opposed to a bunch of gibberish in all three languages.

[0] https://www.google.com/get/noto/#sans-kore

chuckreynolds 3 days ago 0 replies      
How come I get "6 serious errors were found. Do not use these fonts." in macOS Font Book? v10.11.6
wkoszek 3 days ago 1 reply      
Wondering if Matthew Butterick will like those. Will keep looking at Practical Typography for updates.
MNukazawa 3 days ago 0 replies      
forkandgrok 3 days ago 0 replies      
Last time I checked, Noto Mono was not considered a monospace font by Windows' console (Command Prompt) and cannot be used for working in the console (including Bash on Ubuntu on Windows). Is that still the case?
quirkot 3 days ago 0 replies      
To see all of the characters that are on the internet, you will need to download a CD's worth of fonts: 473 MB. #MindBlown
imeron 3 days ago 0 replies      
I wish they would just support Latin Extended in Google Docs when Google Fonts are selected.
stevejohnson 2 days ago 0 replies      
This would make an excellent character set for a roguelike.
petecox 3 days ago 0 replies      
Hey google team,

Windows 10 reported an error:

"NotoColorEmoji.ttf is not a valid font file".

jxy 3 days ago 1 reply      
APL is finally gonna take the world. Thanks to google.

Wait, does it have APL symbols?

sebastianconcpt 3 days ago 0 replies      
Awesome. It took us only ~7 decades of Computing to invent this :)
threepipeproblm 3 days ago 0 replies      
This would be better if it were called Nofu...
soheil 3 days ago 1 reply      
There is Old Persian but not Persian?
memetomancer 3 days ago 1 reply      
I'm not installing 500 MB of fonts without a flipping contact sheet on the page :(
blunte 3 days ago 0 replies      
No slash through the monospace zero. What a huge shame.
bkhurjawal 3 days ago 0 replies      
Robots won't take over the world! Google will.
dschiptsov 3 days ago 1 reply      
Ubuntu package?
c-smile 3 days ago 4 replies      
[pedantic] "UTF-8 space"... The UTF-8 code unit space is quite limited: a single byte, 0...255. You probably meant the Unicode code point space which, at the moment, is a 21-bit range: from 0 to 0x10FFFF (1,114,112 code points). [/pedantic]
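The distinction is easy to verify directly (a quick Python sketch; `encode` produces UTF-8 code units, while `ord`/`chr` work in code points):

```python
s = "A\u00e9\u20ac\U0001f600"   # 4 code points: 1-, 2-, 3-, 4-byte in UTF-8
units = s.encode("utf-8")       # UTF-8 code units are bytes

assert len(s) == 4                          # code points
assert len(units) == 1 + 2 + 3 + 4          # code units
assert all(0 <= b <= 255 for b in units)    # each unit fits in one byte
assert ord(chr(0x10FFFF)) == 0x10FFFF       # highest valid code point
assert len(range(0x110000)) == 1_114_112    # total number of code points
```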
greggman 3 days ago 2 replies      
eveningcoffee 3 days ago 5 replies      
Please do not use these fonts directly from Google's servers. This is another avenue for Google to track the whole of your browsing traffic. This is too much power for one company.
binthere 3 days ago 0 replies      
So, did they actually handcraft each of the 100k+ characters? Probably not, but if yes, then I call BS.
niels_olson 3 days ago 0 replies      
reminds me of natto. Gross.

edit: why the downvotes? They bring up tofu and then name the cure something close to natto, cured soy beans. And by cured, I mean fermented. And by fermented, I mean stringy at the molecular level, smells and tastes awful. And when I say awful, I mean, most of the people from the originating culture think it's awful.

Apple Has Removed Dash from the App Store kapeli.com
1032 points by ingve  4 days ago   470 comments top 58
dandare 4 days ago 3 replies      
I am annoyed by something else: today Apple stepped on the wrong toe, the community will cry foul, and someone from Apple who reads HN will rush to salvage the situation. We have seen this pattern before (usually but not exclusively with Apple). But what about the thousands of small and nameless developers who were crushed by some script bug or killed by an operator misclick? Who will ever help them?
izacus 4 days ago 7 replies      
The thing that pisses me off about these cases is this:

"I called them again and they said they can't provide more information."

They terminate your account and then they even refuse to tell you why. A basic human thing, a chance to fix the issue -- but no. "Go f* yourself" from Apple, and that's it.

makecheck 4 days ago 1 reply      
When a serious action is going to be taken for any reason, that action should be PRECEDED by at least an E-mail to the owner and the path to reverse the action should be clear. The E-mail should not just be a terse message, it should contain a wide variety of resources; something like: "Your account and applications will be disabled in 2 days for <reason>. Please select from the following links to attempt to resolve the issue, or call <number> as soon as possible.".

It's not just apps, either. There is frequently a less disruptive option for any major action; for instance, you can delete files by starting with the instantly-reversible "chmod 000", and after some period of time you actually go ahead and "rm -Rf". If, in between, a panicked user E-mails you back and says they really needed those files, you undo your "chmod" and instantly fix the issue. Why should anything on the App Store take days?
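The two-phase delete described above can be sketched in a few lines (Python standing in for the shell commands; the grace-period scheduling is left out, and the helper names are made up for illustration):

```python
import os
import shutil
import tempfile

def soft_delete(path):
    """Step 1: make the tree inaccessible but keep it on disk (reversible)."""
    os.chmod(path, 0o000)

def undo_soft_delete(path, mode=0o755):
    """Panicked user E-mailed back: restore access instantly."""
    os.chmod(path, mode)

def hard_delete(path):
    """Step 2, after the grace period: actually remove the tree."""
    os.chmod(path, 0o700)   # restore permission so the tree can be traversed
    shutil.rmtree(path)

d = tempfile.mkdtemp()
soft_delete(d)
undo_soft_delete(d)   # reversible in one call, no data lost
hard_delete(d)
assert not os.path.exists(d)
```

The same shape applies to account suspensions: disable first, destroy later, with the undo path cheap at every step.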

davesque 4 days ago 4 replies      
Kinda puts it in perspective how weird it is that companies have so much control over how software is distributed and sold these days. This would never have happened a decade ago.
edoceo 4 days ago 3 replies      
I make a regulatory compliance software. Apple refused to list my App until I removed functionality at their request. Functionality that is required for compliance.

Apple's arrogance in running their store may eventually cause its decline.

funkysquid 4 days ago 2 replies      
This really sucks - just bought a copy of his MacOS app through his website to try and help compensate. At this point I'm starting to avoid buying software through the Mac App Store unless it's not available anywhere else. Even if it's slightly easier to make a purchase initially, you risk the headache of situations like this where you can't even migrate your license.
norswap 4 days ago 1 reply      
At this point, this is neither surprising nor unexpected. There is ample precedent; there are tens of stories like this every day (maybe even counting only those that hit the HN frontpage).

To avoid these problems, don't sell on the App Store; it's as simple as that (and very sad). Apple's processes suck and Apple doesn't care, as it has had years to fix things but hasn't. Complaining won't change it. People have complained, and it didn't change.

projectileboy 4 days ago 6 replies      
I wonder if Apple knows how many developers use Dash. They probably don't realize what a high-profile faux pas they're making.
vortico 4 days ago 4 replies      
The OS X platform already has a stupid easy download-and-install process with their .app folders. It's a shame that some users prefer to instead use an alternative distribution method controlled by a centralized party rather than buying software directly from the developer.
okket 3 days ago 3 replies      

 Apple contacted me and told me they found evidence of App Store review manipulation. This is something I've never done. Apple's decision is final and can't be appealed.

abrkn 4 days ago 1 reply      
Could this be related to using the same name as the cryptocurrency? [1]

[1] "Dash (DASH) is an open sourced, privacy-centric digital currency with instant transactions." https://www.dash.org/

koolba 4 days ago 3 replies      
Having seen the internals of many an application that deals with both "human" and "group" accounts, I'm not surprised that something like this would cause an issue. Unless things are designed from the ground up to support it (which they never are), those types of migrations always have a bunch of edge cases that aren't properly handled.

Sure it sucks, but the real test of whether Apple gives a rat's behind is whether they fix it in a reasonable amount of time. If this drags on for more than a day without a human response from their support line I'd say no, they don't care. I bet that happens.

Aaronn 3 days ago 0 replies      
Phil Schiller's response (from this 9to5Mac article https://9to5mac.com/2016/10/05/apple-inexplicably-terminates...):

"Hi Matthew,

Thanks for your email about this app.

I did look into this situation when I read about it today. I am told this app was removed due to repeated fraudulent activity.

We often terminate developer accounts for ratings and review fraud, including actions designed to hurt other developers. This is a responsibility that we take very seriously, on behalf of all of our customers and developers.

I hope that you understand the importance of protecting the App Store from repeated fraudulent activity.

Thank you,


whizzkid 4 days ago 1 reply      
Even though it might be a simple error on Apple's end, this is both morally and economically unacceptable. As long as it is not a highly critical issue with the account or the application, the developer should be given a considerable amount of time to resolve it.

They could even freeze the money that goes to the developer until the issue is resolved, but cutting the app from the market and failing both the app's users and the developer?

I don't think this is the best approach.

duncan_bayne 4 days ago 0 replies      
"... all who draw the sword will die by the sword."

This sort of thing will continue as long as people persist in developing for proprietary walled gardens like iOS.

Once again, and despite my annoyance with the man in general, Stallman is right. This sort of thing is unethical, and we as developers shouldn't be supporting it by developing for iOS.


brandon272 4 days ago 1 reply      
I don't know why anyone develops apps for Apple's walled prisons. I can't imagine the anxiety of having a widely used app in the app store, and knowing that any day, for any reason that you may or may not understand or be privy to, an invisible hand can simply reach in and shut down your account or remove your app with nothing more than a curt message about your non-compliance of their terms.

No thanks!

DiabloD3 4 days ago 11 replies      
I find it ironic that the EU goes after Google for allowing third party stores (Samsung, Amazon, Nvidia, etc) and sideloaded APKs and allows you to build your own APKs for free...

... but Apple just randomly removes apps that people have purchased from the Apple Store (thus stealing their money and their product, doing everything short of uninstalling it, but preventing reinstallation), and the EU stays silent?

This is some bullshit.

hellofunk 4 days ago 2 replies      
I hope we will see an update here or somewhere soon that provides more information on why it was removed.
LukasP 4 days ago 0 replies      
This better be an error that they're going to correct. It is not acceptable behavior.
amelius 4 days ago 0 replies      
In my opinion, if an internalized market has more than (say) 10K independent people making a profit from it, the government should step in and require it to be opened up, and follow the rules of the free market.

This should hold not only for Apple, but also Google, Uber, AirBnB, et cetera.

Sk1pp 4 days ago 0 replies      
Is there a way for us to tell Apple that this is something we want to see addressed? I would gladly email support.

This might be an effective way to handle these as I've seen a bunch of them.

EDIT: I would like to know the source. I have apps on the App Store and I wonder if it is as simple as someone putting in a fraudulent claim of fraudulence.

Aloha 4 days ago 0 replies      
It sounds like just simple human error on Apple's part.
dav- 4 days ago 0 replies      
On a positive note, this post introduced me to Dash and I have purchased the MacOS app directly through the developer.

And I'm sure I am not the only one :)

softawre 4 days ago 0 replies      
FYI - I went through the process described to migrate my license and it worked fine.
ThomPete 4 days ago 0 replies      
Man that sucks.

I ended up removing my app from the App Store after I realized that Apple would never actually allow me to do what I wanted to do without having me jump through hoops to get it approved every time I made an update.

Never been happier. I am sure Dash will do just fine outside the app store too.

zorrobyte 23 hours ago 0 replies      
Well, that was a complete waste of time. I requested a refund or credit for Dash as I paid $29 for it.

Chat and Phone T1 and T2 Advisors all said there is nothing they can do and kept suggesting I restore a backup. Too bad "Transfer Purchases" was removed in iOS 9 with App Slimming, meaning that there is no way to back up or extract an .ipa file from iOS using native tools. They also couldn't offer any sort of iTunes Store credit or refund.

Long story short, if Apple decides to remove an App from their platform, it's gone - period and they don't give a sh*t how much you paid for it.

tj4shee 1 day ago 0 replies      
I bought my version of Dash on the App Store... shame on me! And this is the exact reason I avoid the App Store and buy things directly from developers when I can.

So Apple, how do I get my $$$ back so I can rebuy it from a reputable source ???

tj4shee 1 day ago 0 replies      
Apple has not only screwed the developer, but also EVERYONE that paid for this through the app store... I will no longer be afforded upgrades that the developer makes...
superfluid 4 days ago 2 replies      
If I had to guess, I'd say there might have been a copyright violation with respect to downloaded docs.
makecheck 3 days ago 0 replies      
I have to say, the way he has handled this has been fantastic. Within a day, he has published a license migration tool; fully explained the situation; and basically made it as painless as possible for all his users (versions 2 and 3) to start using non-App-Store licensing. I immediately paid $10 to upgrade from 2 to 3.
smegel 4 days ago 0 replies      
"Hi, Fred from Apple here..."

Things I've never seen on HN.

fierarul 4 days ago 0 replies      
It's also quite possibly a legal mixup.

The company he most likely used for the iOS app is, I assume, KAPELI APPS SRL which seems to have been incorporated in September 2016.

Which means that for the OSX app he used another company or he sold it as an individual.

This conflict as well as the company having no history might have triggered something on the Apple side.

CodeWriter23 4 days ago 0 replies      
Seems to me if Apple doesn't resolve this in a few days, he should move to the ad hoc build-your-blob and side load it like f.lux did. Enter a license key to prove you've paid.

It's beyond Apple's intent for the free version of Xcode, but what does he have to lose? Fuck 'em.

alanh 4 days ago 0 replies      
What a convenient license migration! Excellent.
jerrycabbage 4 days ago 0 replies      
This is why anytime you are dealing with things that anyone could conceivably find questionable you either develop a website or skip Apple. Their fanboys tend to have money, but let this be a lesson..
MrSourz 4 days ago 0 replies      
Yikes, I hope there's more to the story here, but it doesn't sound like there is.

I use Dash. If I had done the wipe & reinstall I was planning for this coming weekend, I would have had to repurchase it.

butterfi 4 days ago 0 replies      
Confusing, but makes me glad I bought Dash outside of the app store.
givinguflac 4 days ago 0 replies      
I really hope it's not the case, and would be a dick move, but perhaps this is another sign that Apple is bringing development tools natively to iOS.
mrmondo 4 days ago 0 replies      
Does this mean I get a refund for my purchase?
aabbcc1241 2 days ago 0 replies      
Not meaning to be offensive, but I think of it this way: the App Store is a commercial service. When you post something there, you're soliciting a service from Apple. So if they refuse to serve you, it's like a store refusing to sell something to you (not because you cannot pay for it).

My first response is: why not distribute it on your own, like an HTTP link to an 'apk'... then I realised general users cannot install stuff without the App Store.

Why do open source projects bother to support people who aren't using open source systems? You cannot save the whole world (like someone who locks himself in intentionally during a fire and refuses to open the door; it might not be a good example, but I hope you get my thought).

deathtrader666 4 days ago 0 replies      
Why isn't there a class-action lawsuit against Apple on issues like these?
dec0dedab0de 4 days ago 1 reply      
It looks like Dash downloads documentation from various sources and displays it offline. I wonder if one of the documentation sources has a license that doesn't permit this use, and then they filed a DMCA notice.
jheriko 4 days ago 0 replies      
I'm a little skeptical this is the right approach...

Every time I've encountered a problem that someone else couldn't resolve with Apple over the phone, I've managed to resolve it by phoning them.

I wonder how much effort the developer really made, and how he talked to the people he dealt with. My experience of Apple, and in fact of most customer service, is that if you are nice and sympathetic and explain thoroughly the nature of the problem, people will do their best to resolve it.

gtsteve 4 days ago 0 replies      
If it makes you feel better, I'd never heard of this app before. I really like it, and I'll probably end up buying it (on macOS).
techwizrd 4 days ago 0 replies      
This is disturbing. I've been working on an open-source Dash alternative [1] for Linux (anything with GTK, actually) that is compatible with Dash docsets. Dash is a really cool piece of software, and it's really surprising to see something like this happen. I hope that this is resolved, and that we see Apple take steps to change their policy in the future.

1: https://github.com/techwizrd/tarpon

jrobichaud 4 days ago 0 replies      
Is Dash iOS affected as well?
emeraldd 4 days ago 1 reply      
I wonder if this is as simple as a typo in the DUNS number on someone's part?
madushan92 3 days ago 0 replies      
Is there a way to notify Apple about this like a petition or something? Lots of us would be happy to help
rezashirazian 4 days ago 0 replies      
Sounds like a simple human error. I expect to see an update stating the issue has been resolved fairly soon.
charlesdm 4 days ago 3 replies      
BoronCorps 4 days ago 0 replies      
Note that the version of Dash on the MAS is not sandboxed. Even after the Dash 3 (major update) release, App Sandbox was not adopted.
sdegutis 4 days ago 0 replies      
> "I cant update Dash for iOS anymore and I cant distribute it outside of the App Store."

Well, that's the problem with a walled garden in a nutshell, isn't it?

ezoe 4 days ago 0 replies      
He decided to waste his effort on the locked-in closed platform.He totally deserved it.
jbverschoor 4 days ago 0 replies      
Maybe you bought ratings?
anonymous_iam 4 days ago 0 replies      
Perhaps this is just another case of Apple/USG assuming the name Dash is associated with terrorism. After all, Dash is similar to Daesh...
lihebi 4 days ago 0 replies      
Just want to complain that I bought Dash 2, and there's no way to access it now that Dash 3 is out. I don't understand why the guy wants to make money this way. Good job, Apple.
credo 4 days ago 0 replies      
Interesting flagging behavior :)

The #3 item on the front page (Longest humans can live) had 44 points and was posted more than 1 hour ago. The #4 item (Typora) has 42 points and was posted more than 1 hour ago.

However, this post on the App Store is at #8 even though it has 172 points and was submitted 47 minutes ago.

diskrisknbisque 4 days ago 1 reply      
Apple is choosing to support only a few cryptocurrencies and hasn't given any criteria for their choices. This is all people want, Apple: reasoning! The sting from incidents like these, at least for me, comes mostly from the information scarcity that precedes them.

Unfortunately, it seems, the writing truly was on the wall once Dash had to be removed from Jaxx Wallet.

I'm not even a Dash user, but choice in such a new space is important.

Not OK, Google techcrunch.com
646 points by CapitalistCartr  4 days ago   574 comments top 83
ubercore 4 days ago 20 replies      
Google Now has done the same for me, told me how long it will take to get to a bar I frequent. My reaction was quite exactly "Oh that's neat, thanks!" and I went and had a great burger that night.

Totally OK for me Google. I respect that people have different privacy thresholds, but I think the fact that it's different for everyone is being lost in articles like this.

TulliusCicero 4 days ago 21 replies      
What I think is interesting is that many of us nerds have probably innocuously fantasized about having a Star Trek-like AI assistant with us, but now that they're taking the first steps towards that, we're starting to realize that in order for it to do everything for us, it has to know everything about us, too.
jacquesm 4 days ago 6 replies      
So, at the risk of making myself ridiculous and branded a Luddite:

I've totally passed on the 'mobile revolution', I do have a cell phone but I use it to make calls and to be reachable.

This already leaks more data about me and my activities than I'm strictly speaking comfortable with.

So far this has not hindered me much, I know how to use a map, have a 'regular' navigation device for my car, read my email when I'm behind my computer and in general get through life just fine without having access 24x7 to email and the web. Maybe I spend a few more seconds planning my evening or a trip but on the whole I don't feel like I'm missing out on anything.

To have the 'snitch in my pocket' blab to google (or any other provider) about my every move feels like it just isn't worth it to me. Oh and my 'crappy dumb phone' gets 5 days of battery life to boot. I'll definitely miss it when it finally dies, I should probably stock up on a couple for the long term.

ht85 4 days ago 1 reply      
I feel like an old fool fighting against his time, but to me all those new appliances are scary, not because of privacy (have my data, I couldn't care less), but because of how they shape our world.

Most of the coolest memories I have were the product of something spontaneous, or mistakes, that become close to impossible with a computer and internet in your pocket 24/7.

Assessing what's around you, talking to strangers, actively looking for something without it instantly popping in suggestions after you've typed 4 characters, all those things have been a great source of circumstance-based, little everyday life adventures.

This is the difference between risking buying a random book, or browsing reviews and picking a 5 star one to download.

This is the difference between discovering a place you'd never thought existed while waiting for someone and poking your nose around, instead of standing there, frantically watching their dot on the map get closer to you.

This is the difference between the mesmerizing feeling of playing the first expansions of World of Warcraft, versus the tiring experience of the super-streamlined versions that followed. Yes, they are less frustrating, but they don't bring a tear to your eye when you think about them; they just feel averagely satisfying.

A few minutes ago I got up to open the door for my cat, and in a few minutes she'll be back and I'll be interrupted again. I feel like those interruptions are precious. They keep you connected to reality. I could install an RFID cat door, hell I could make a voice activated one in a couple weekends, and I would not be annoyed anymore. I would also never have seen all the things I witness every time I get to that damn door.

jpalomaki 4 days ago 3 replies      
For consumers this will be a choice between keeping their data private and having intelligent systems that perform better.

So far I haven't seen much, but based on my limited experience I believe customers are going to continue handing over their data to Google and Facebook in exchange for personalised services.

The truth is, the only times my smartphone has actually felt smart is when Google has been mining my information from various services (mainly Gmail and Calendar) and presented it to me at correct time, enhanced with other information they have gathered from web.

I don't think there will be any major backlash from consumers. The old comparison about the boiling frog applies here.

aRationalMoose 4 days ago 9 replies      
I have an open-ended question, mostly born out of ignorance: why is this a bad thing? Isn't an artificial assistant that not only knows and understands us but anticipates our needs incredibly useful? In the process, sure, they'll collect your info for better advertising, but short of Totalitarian Surveillance or Data Breach Concerns (the former is a bit of a reach if you live in the West, and they can survey you anyway if they really want to; the latter also seems somewhat unlikely), what's the issue here? Genuinely asking because I'm trying to understand.
nojvek 4 days ago 1 reply      
Here's why I am afraid of Google. Google could have the best intentions, but the NSA that Google occasionally sleeps with doesn't. Everything you say to Google Home could possibly be recorded. Storage and computing power are cheap for Google. They can record everything you say in your home. Their algorithms can connect all sorts of information about you. If Trump wants to create the next Muslim holocaust, Google and FB have the perfect information.

This is what Elon means when he says AI is like inviting the devil. We have this algorithm in our mushy brain. It takes about 20 years to train and lives for about 80 years. Its communication bitrate is pretty low (mostly blabbering through the mouth) and it doesn't retain much information. Only patterns.

Now imagine this algorithm from the mushy brain is run on a silicon chip, with gigabit bitrate, retains almost everything indefinitely and can learn from entire history of humanity.

That algorithm would just need to deceive us until it was powerful enough to wipe us in one sweep.

Google already manipulates humans psychologically to click on their ads en-masse. Giving them more of your personal data is just feeding the devil.

iamleppert 4 days ago 1 reply      
"AI" is incredibly overhyped. Most of the features and applications I've seen can be relegated into the "that's neat" category, before they are turned off and never used again.

Google recently started telling me how heavy the traffic is on my commute because they've figured out I do it every day, and when I'm doing it. That's nice, but I don't care. I could already get that information from my car's GPS and seeing how red the roads were.

I wonder how much infrastructure, fancy-pants machine learning and effort went into just creating those useless alerts?

Google, as a company, has already solved the problem it was created to solve: searching the Internet. Now they need to find something for all those twiddling thumbs to do, so we get braindead features that tell me what I already know.

swalsh 4 days ago 2 replies      
Imagine you're something like a Muslim in the US, and someone like Trump is elected 5 years from now. You've been here all your life, you have a job, you pay your taxes; you're just a person who happens to be in the wrong place at the wrong time, much like a Jewish person in Poland in the 1940s. Even back then it was not easy to escape persecution... but it was possible. In a Google world, though, there's nothing preventing a corrupt government, or even a corrupt corporate governance, from taking over or leveraging this data to your disadvantage. Perhaps your car recognizes you and locks you in until the police come; perhaps you felt safe enough to go to a bar, and that data was forwarded.

Perhaps an exaggeration; the point is, even if you trust Google today, there's no guarantee that data will always be held by the people who are Google today. We know for a fact the NSA had access to all Google data up until at least the Snowden leaks. To me that's the concern about privacy: you have no idea how it can be used AGAINST you in the future.

throwaway98237 4 days ago 1 reply      
From the article-"In other words, your daily business is Googles business."

From Google-"Google's mission is to organize the world's information and make it universally accessible and useful."

One thing that drives me mad about Google is how they say "the world's information", then ignore 99.9% of the world's information, and then expect their consumers to give them a pass and not call them to account for how they privatize user information.

Looking at the information that Google organizes and makes accessible and useful I don't see things like "species extinction", "oceanic water temperature history", or say "dolphin linguistic data", equally represented when compared to "my browsing history", "my location history", "my search history", "an archive of my voice searches", "when I leave or return home via Nest", "who I associate with via Google's communication suite". Google is organizing exactly that data which Google can monetize, which is not the world's data. Not a lot of people want to buy data on deforestation so it's much more difficult to get Google to put resources into that. How many people chew pieces of gum until 100% of the flavor is gone? I'll never know, and Google isn't going to help me, because it isn't a profitable data set.

Simply stated, Google needs to stop acting benevolent and fess up that it is attempting to be omniscient about its users, not about "the world's data".

spac 4 days ago 4 replies      
I'm confused as to why in this thread there's very little contrastive commentary on the different stance taken by Apple and Google about privacy.

Apple has made preserving user privacy a paramount goal, investing in research and technology to achieve it with minimal loss (however much it is) of (intelligent) functionality.

I find that a very strong point for the Cupertino based company.

(edited for legibility)

thr0waway1239 4 days ago 0 replies      
The idea behind so called checks and balances in the political arena is now needed in the tech arena - more specifically the megacorp arena.

People say, competition will ultimately take care of it. Yet, there really isn't a serious competitor for Google's search engine. And don't even get me started about social networking with respect to your private lives, where the only player is FB as far as I can see.

People say they don't want the government involved, and often for good reason. But if there is no expectation that these tech giants will self-police when it comes to privacy, and people don't want these organizations to be policed by the government either, then how exactly does this play out? How far is too far before we start demanding more respect for our rights from these organizations?

Another thing to think about: when dealing with tangible goods, the creative destruction of capitalism is somewhat reasonable to justify because it is usually easy to see. How does it work with information? Suppose FB just completely blew it for a few quarters in a row, and starts tottering towards its demise, what happens to the "defensible barrier" called data? Does it belong to FB to do as it sees fit, like the assets of a company about to be liquidated? Or is FB going to "return" it to the people from whom it got it? If some other company now got possession of its assets, including data, what is the expectation around what are reasonable uses for such info? Or, is FB, with its trove of data about every single person who has held government office, now just too big to fail?

And all this can be asked just of the data that FB collects from you directly by asking you to fill it in. What about the stuff that it "infers" behind the scenes? What about the "connections" it adds to its social graph without your permission in order to provide a "local marketplace" which apparently gets rid of the "private information" challenge? [1] Not that Google is any better in this regards, of course.

I think the time has come for some serious thinking about checks and balances in the privacy arena.

[1] https://news.ycombinator.com/item?id=12628808

mrgreenfur 4 days ago 1 reply      
This is literally the perfect end-game for an advertising company: total awareness of need under the guise of 'optimization' or 'AI enhancement'. They can see what you're searching for, where you're going, when you run out of mayo in your GoogleFridgeAppAssistant. What better way to offer ads than EVERY time you have a want? It's an advertising utopia!
thsealienbstrds 4 days ago 0 replies      
Just a couple of thoughts.

Is the market really so bad that Google needs to invade people's privacy to this extent in order to grow?

I bet Google's CEO will not use the products himself. Google is almost behaving like a pusher, promising people comfort at the expense of their livelihood (the chilling effect).

Perhaps this should simply be illegal. If people want a personalized AI assistant, why not train the AI on the user's device? I seriously doubt that it has to know everything about everybody's behavior in order to know some things about the user's behavior.

pmyjavec 4 days ago 2 replies      
You really have to wonder who truly needs or wants AI in their lives. It's really just being pushed on us. Google should be careful not to make themselves irrelevant.

I've been experimenting with spending less time with my devices, and it's hard because I'm addicted, but life is more fun when it's being lived without having to even think about technology. Leaving devices of all kinds at home and just sitting in a park is a real luxury.

blhack 4 days ago 0 replies      
I guess, reading through these comments, that I'm the only one who wants this future? Yes, sign me up, google. I'll give you more of my information, if you can take it. Can I wear an implant that measures my heart rate, body fat content, blood pressure, glucose level, and brain activity as well? Because as soon as I can I will be the first in line for it.

What some of you don't seem to realize, (and this happens in EVERY SINGLE ONE of these threads) is that:

1) AI is not magic. Yes, we call it "AI", but you use words like "know" as if there is a conscious entity that "knows" something about you. The AI doesn't "know" anything. It's a computer.

2) Yes, actually you can opt out if you want to. Get a flip phone, don't use google services, use an adblocker, block javascripts that you don't like, don't send emails to gmail addresses, etc. Just don't use their services if you don't want them. Yeah, this might be harder. It might feel like you are living in the 1990s/1980s, but it sounds like that is what some of you want.

I, however, want a future where an AI can tell me things like "Flights to Shenzhen are really cheap right now, and you have the discretionary income to afford a trip there. Here is a possible itinerary for you based on the types of things I know you are interested in. You could leave this Saturday and there is nothing on your calendar that you need to be at for the week."


"I noticed that you have been bicycling a lot lately, and based on the patterns of where you go, I think that the following bike trail would be interesting to you. The route is loaded up on your phone already."

The other thing: google is an advertising company. Yes, because I know this, I am able to take this into account when listening to google's suggestions. But here's the thing: I like being [well] advertised to. I have discretionary income, that is WHY I HAVE A JOB. I am going to spend that money on things. If there is an AI that is helping me find the perfect nexus of things I want and things that I can afford, that is a GOOD thing. That is helping me more efficiently spend the money that I got.

Yes this stuff is subtle. Yes this stuff is pervasive. No we don't need yet another "2edgyforme" "if you aren't the customer you're the PRODUCT" articles about google.

lubujackson 4 days ago 0 replies      
I think there are two problems with this suite of crap from Google: the privacy issues and the fact that Google is putting corporate objectives ahead of creating useful things.

It's clear Google wants to "own the home" and all their products were built to further this goal (rather than be useful themselves). This is why Google bought Nest for 12 jillion dollars. And it's why the iWatch failed and Google Glass failed - right now, these are niche products that barely have purpose.

Now this stuff may become integral to our lives, as depicted in so many sci-fi stories, but if they become embedded in our lives and are wholly owned by one huge company, that should be terrifying to everyone.

Here are some real-world reasons why: a virus is installed on your Google box through your wifi; now house robbers know everything about your schedule and habits. Your parent goes through your every personal action to make sure you aren't getting in trouble. A spouse uses the system to track your every movement and make sure you aren't cheating. And of course, the gov't has access to all of this data by default. Imagine being a famous celebrity with every action in your house known and accessible to any gov't peon with access and a bit of curiosity. This isn't some conspiracy theory; this is exactly the access Snowden had (and he was a contractor).

It isn't what these products are, it's the direction they represent: complete surveillance of every personal action, stored and owned by one monolithic corporation and the government. And not only is this is sort of where we are heading, it's Google's clearly stated objective.

It reminds me of the 50s when plastics were going to revolutionize everything... which they did, but we melted off the ozone layer before realizing the consequences of slapping new technology across the world. Especially when the benefits are so minimal and the threats are so real - imagine McCarthy with the type of access and control these devices would provide if Google succeeds in pushing this across 80% of homes.

kbenson 4 days ago 1 reply      
Does anyone else find the tone of this article off-putting? I mean, I agree with the author, but the presentation feels like fear-mongering. Maybe this is what we need to get people to pay attention to the details, but I instinctively mistrust things I perceive as trying to appeal to fear at a base level, and this triggers that fairly heavily.

I have very conflicted feelings about this article.

yodon 4 days ago 2 replies      
When IR remotes hit college campuses, the game was to shut off someone else's TV through an open door. There's even a one button remote from that era that shuts off any TV in sight [0]. Voice control is like IR on steroids.

Guest at house party: "Ok google, show naked pictures of [host's ex-girlfriend]"

[0] https://www.tvbgone.com

davidcgl 4 days ago 0 replies      
Many readers are skeptical about the usefulness of personal AI assistants. This reminds me of what Jeff Bezos said about disruptive technologies [1], which I think resonates well among many tech company executives. You (they) need to be willing to be doubted for a very long time.

[1] http://www.geekwire.com/2011/amazons-bezos-innovation/

Any time you do something big, that's disruptive (Kindle, AWS), there will be critics. And there will be at least two kinds of critics. There will be well-meaning critics who genuinely misunderstand what you are doing or genuinely have a different opinion. And there will be the self-interested critics that have a vested interest in not liking what you are doing, and they will have reason to misunderstand. And you have to be willing to ignore both types of critics. You listen to them, because you want to see, always testing, is it possible they are right?

wisevehicle 4 days ago 2 replies      
The reaction to this seems awkwardly negative when contrasted with the praise that gushed for Amazon's Alexa products. I am having a hard time tracking why folks seem to feel so differently about google and Amazon having similar access to personal information.
jklinger410 4 days ago 1 reply      
I'm not worried what the engineers who built this will know about me. I'm not worried that Google will centralize this data so if someone hacked it and could drill through the data they could find me.

I'm worried about who Google wants to sell this information to and what they want to do with it. I'm worried about Google working with intelligence agencies to try and target me politically, feed me propaganda, or put me on some list of undesirables.

We can have an ultra-smart AI that does everything for me without worrying about these things. I don't want to pay with my personal information, I want to pay with money. I want Google to stay out of my life.

isaaaaah 4 days ago 0 replies      
If 100 people do A after doing B and you are the 101st who does not want to do A, is it your fault or the algorithm's? This is no "artificial intelligence"; it is a fitness, a mutation, and an evolution function, without any true randomness or "chaos" prediction (predicting the future...), but with advertising hidden between your recommendations. Like they cannot build flying cars, so they reinvent the hovercraft. Aimed at the Internet of Things, which will be the biggest tech bubble known to mankind. Sorry, I have to vent that somewhere, but all my communication platforms have already been shut down.
daemonk 4 days ago 2 replies      
I am not trying to inject my opinion. I am looking for genuine discussion. The idea of privacy has always been kind of vague to me. Where do we draw the line of acceptable information sharing and not? If a company is collecting data on our behavior and changing their practices accordingly to maximize their profits, is that immoral? Isn't that essentially A/B testing? The practice of using that data in a potentially manipulative way might be immoral, but is the best way to prevent that really to completely not share anything?
losteverything 4 days ago 0 replies      
It's ok Google.

Television was ok, too. I used to watch. All the time. TV was the glue that kept us together. Now it's the acid that tears us apart. I no longer use a television.

Google. I love your maps. Your directions. Your free storage. And my earning a living never requires to use you, Google. Just like my TV.

I do expect Google to become something that I no longer desire. Just like the TV. And I think Google won't be able to control or predict it either. Just like tv.

hackuser 4 days ago 0 replies      
It's not just a matter of personal preference. It's well known that mass surveillance is a powerful tool of oppression.

Oppression is not a theoretical idea and not only an historical problem: government mass-murder in the Philippines, the oppression of Muslims in Europe, of a large religious group in Turkey, of Tatars by Russia (in Ukraine's Crimean province), of so many people in Syria, of populations in all the oppressive countries in the world. The U.S. election could result in oppression of Muslims, Latinos, blacks and others; some U.S. cities already use 'predictive policing' to identify and harass private citizens - what will happen if Muslims become an open target? And don't forget anyone who has any interaction with Muslims. Such things have been going on since the dawn of humanity and unfortunately will continue.

The idea that Google and other commercial mass surveillance will not be used for these purposes is a dangerous, irresponsible fantasy; it's lazy, head-in-the-sand thinking, akin to climate change denial: "we haven't died yet" is the only argument. These systems are not and will not be kept out of government hands: government already has broad access, as is well known (National Security Letters, NSA spying, Yahoo's recent revelation, etc.). Laws can be made at any time giving government more access, and they will in a climate of oppression. Many obtain illicit access, as we know, from the NSA to foreign criminals to antagonistic nation-states. And it assumes that the companies want to deny access; inevitably, some CEO of AllYourDataCorp will support government surveillance and be prejudiced against Muslims or immigrants or blacks. Likely, at least one already is doing it.

IMHO, while it disrupts our plans for IT and wealth, it's absurd to think otherwise.

usgroup 4 days ago 3 replies      
Prerequisite to personal AI worth a damn is data about you worth a damn.

It'd seem Windows 10 is setting Microsoft up for this. Google is following suit with its own hardware.

The central task is to infer what you want and help you achieve it, but further, your AI can ask you questions too, to work out all sorts of things subtly.

I think eventually we'll think of personal information as a commodity or "raw material" and regulate its extraction and trade as such.

marwatk 4 days ago 0 replies      
We are in the mainframe/terminal era of AI right now. Just like with early computing we don't yet have the local resources to do AI locally, so we're using terminals accessing the cloud. The consequence is our data also lives there. It will inevitably change and a personal AI and relevant data will live in your pocket instead (for those that would prefer a limited but more private version).
quickben 4 days ago 0 replies      
Well, they came a long way for an advertising company. Most usually die from ad-blockers, but they dodged most of those bullets, it seems.

Moving to voice-to-text everything on Android just seems like a logical extension to advertise/sell data even more.

Whether it is ethical or legal doesn't seem to concern them at all when profits are in question.

raverbashing 4 days ago 0 replies      
And that's why I turned Google Now off (but still use some other Google services)
throw2016 4 days ago 0 replies      
Anyone with a passing acquaintance with history and the insidiousness of surveillance will not be blasé about privacy or casually trade it in for trivial conveniences that hardly merit the word AI.

You may not 'personally' need privacy or freedom at this point in your life, but to casually dismiss it out of hand and fail to consider its import for a functioning democratic society is beyond reckless. It's just one of those things you don't need until you do.

And thankfully individuals aren't in a position to trade that away unless they can write a new constitution and convince everyone to get on board.

All surveillance does is compromise your society in a fundamental way, and in this case just to add to Google bottom line and ramp up Google's creepiness factor even more. That's a bad deal.

taurath 4 days ago 0 replies      
Here's the thing - we're starting to get into territory where we can actually add real value for people again in terms of helping them plan their day, and actually have real AI assistants, computers from Star Trek, what have you. The value here is pretty easy to understand. The problem is that they're all being pushed by advertising companies who make all their money by learning and selling every single bit of information about you.

I want a startup that provides services like this but treats your personal location, correspondence, and behaviors like tax returns and credit card numbers. If we can achieve a good measure of safety and privacy in our messaging apps, we can do it for this sort of data.

isaaaaah 4 days ago 0 replies      
If it constantly needs data in order to calculate predictions, then it is not artificial intelligence. It is plain software, like a calculator. The only reason it is called artificial intelligence is because that would be the only reason to adopt this technology.

Again, it might be neat, having a computer like on star trek, but what if you oppose your government? What if you oppose anything, and suddenly your toaster burns down your house, locks you out, locks you in, reports your every move?

Look at Manning, look at Snowden, look at Assange. They opposed and now they get terrorized by the government and the software they once happily used. Look at how I will be treated right here by others.

Stop this mass psychosis.

boardwaalk 4 days ago 0 replies      
Are there any open source/self-hosted projects that do some of what Alexa, Google Assistant, Google Now, Siri do?

Even without the manpower/big data/processing power of a big co I'm sure we could create something that's somewhat useful.

rwbcxrz 4 days ago 0 replies      
I seem to remember having a lot of the same feelings during the Apple event last month. The Nike+ Apple Watch will "helpfully suggest gear" for your workouts.

Ads have become utterly pervasive, and avoiding using Google's AI isn't going to protect you from them. My Samsung "Smart" TV has ads for Hulu built right into the operating system (despite my being a Hulu subscriber at the time). Windows 10 is basically one big advertisement (at least the consumer edition).

If I have to have ads blasted in my face all the time, I'll take Google's AI-driven ones that at least stand a chance of being less annoying.

optimuspaul 4 days ago 0 replies      
Nobody is seeing the obvious benefits here. Wouldn't it be great if we didn't have to really think anymore? Google can just tell me what to eat, what to wear, when to pee, etc. I for one welcome our AI overlords.
pmoriarty 4 days ago 0 replies      
People will come to value their privacy more if/when they or their loved ones become victims of harassment, stalking, blackmail, or identity theft as a result of their data being abused, leaked, or stolen.
KirinDave 4 days ago 0 replies      
What I think is really interesting about this conversation is the total lack of conversation. Any and all data collection is either completely harmless (from the corporate narrative) or the end of all liberty and privacy (from the EFF narrative and their leech-like tech rag clickbait headlines, sup techcrunch, you're still the problem!).

There is no concept of even discussing that this might be a tradeoff or a shift in what is perceived as private. There is no consideration given to how we might still do these things that people want while protecting their data. There's no consideration for how people's lives are changed in different ways by this tech.

Nope. It's either a total gain or a total loss.

And that is the real problem here. People are applying their political bad habits to what should be a reasonable and sensitive discussion about the varying levels of tradeoffs we should be willing to give and what the net good we can extract from this technology.

A great example is street view. Street view ultimately has enabled extremely detailed and powerful navigation, complete with a ton of ways to do real time traffic detection. Most people using apps that benefit from this data would say that's a net good, and in general as the tech evolves and traffic distributes more efficiently then urban environments see a similar positive effect.

Of course, the tradeoff is that I can scan a snapshot of your street and if you were there playing football with your kid, walking your dog, or publicly exposing yourself then minus your face I'm going to be able to see all that.

What makes these kind of issues even less clear is that street view enables self-driving car technology (we need the detailed and constantly updated nav systems for them). Self-driving car technology has the potential to totally transform some neighborhoods, has massive potential for assisting disabled people, can completely change the way we ship goods and thusly preserve oil and energy resources for generations to come. But it also has the potential to be a new way for the upper and rich classes of the world to completely cut out service industries and further alienate the economic middle and lower classes.

Why is this meaningful? Because if we don't talk about them then we can't help shape them. If we understand the implications as a society and demand commensurate good from these private industries then it can be an incredible boon to our societies. If we don't, then one of these extremist sides will win and all options for a middle ground where we get benefits and have tradeoffs will be excluded.

That's a terrible outcome.

tempodox 4 days ago 0 replies      
> ... your daily business is Googles business.

I can see the day coming where this is their primary marketing slogan.

kafkaesq 4 days ago 1 reply      
The scope of Alphabets ambition for the Google brand is clear: it wants Googles information organizing brain to be embedded right at the domestic center i.e. where its all but impossible for consumers not to feed it with a steady stream of highly personal data.

Unless, that is, you never buy any of that junk in the first place -- because like, who needs most, if any of it, anyway? -- and keep going on with your life. Which was humming along just fine before the IoT came along, after all.

Nano2rad 4 days ago 0 replies      
It is wrong even to ask us for permission to let Google (and also Apple) monitor the microphone all the time. If that becomes the norm, there has to be a hardware switch to disable the microphone.
abandonliberty 4 days ago 0 replies      
This is the future.

To remain competitive people will adopt new technologies. Google assistant/cloud, self driving cars, CRISPR. Consider what people gain and lose with each new technology, such as the ability to drive, a bio-engineered kill switch, or control their own hardware (windows 10).

All new technologies can be compromised. The ability to process the extreme amounts of data we are generating is already at previously unimaginable levels. Political dissidents or those who interfere with corporate interests can be identified and silenced with false evidence (pedophilia!); media control; and personally targeted DoS of finances, cloud services, etc.

This is the ability to control the world. The corporate world is disincentivized from doing anything about it, and governments don't really get it as evidenced by their hoarding of zero-days [0].

There's a war going on right now. It's terrifying, and awesome. Throw in some global climate change and our next 50 years are going to get interesting.

When the end comes I'll be that crotchety old guy who knows how to DRIVE A CAR and use a general purpose computer.

Hack the planet!

Here is what was possible in 2011: https://news.ycombinator.com/item?id=12528544

[0] https://www.schneier.com/blog/archives/2016/08/the_nsa_is_ho...

MichaelMoser123 4 days ago 0 replies      
Imagine the storage implications - I mean, they have a hard time storing every click we make with the mouse; now they will have to store every noise and breath we make as well. I see a data store the size of Canada.

Also, how do they plan to make money besides the initial cost of the gadget? Can they push ads while driving? That would be too intrusive. Or is this supposed to be based on a monthly payment, or a tax? Google for government! Wall-E might be needed to clean up the mess after them.

I didn't know that Sting said in 1983 that his song is really a nasty song about surveillance; at least they have an anthem for promotion purposes.
http://www.songfacts.com/detail.php?id=548

Now I really don't think that personal assistants are going to be a success. They do descriptive modelling based on what you do; there is no way to evaluate if the suggestions are any good. Without such an evaluation they can't do reinforcement learning. Also, they might suck in too much data - that would make it harder to make meaningful suggestions.

mark_l_watson 4 days ago 0 replies      
With faster multi-core CPUs, much more RAM and SSD, smartphones are probably powerful enough to run a privacy respecting AI assistant locally: watch your web access, phone access, location, and do all processing locally with no data leakage. Ideally it would be open source, or at least from a trusted company that made money only from selling the app, and not off of our personal data.
Waterluvian 4 days ago 0 replies      
I was pissed when Google suggested I take pictures of the restaurant I was in and submit them.
AJ007 4 days ago 2 replies      
Something to think about -- making the assumption that in the future, a big chunk of things currently done with search engines and forms are done by voice command:

Ok Google, order new toilet paper -- order is routed to any ecommerce provider which outbids everyone else to fulfill the order.

Alexa, order new toilet paper -- order copies previous toilet paper order and goes to merchant with lowest advertised price that reports to have that specific product.

Hey Siri, order new toilet paper -- ?

leecarraher 4 days ago 1 reply      
I'm always confused by articles bemoaning the AI and tech revolutions, written in magazines that expound new tech revolutions.

I understand Apple and the EFF are staunchly against merging products databases involving the same user's data, but for me this is an essential feature of the google ecosystem. I can ask for traffic and have directions appear on my phone while driving, play movies on the tv that i am looking at instead of my phone, audibly alert me to meetings while at home or work, and turn the lights on and off in the place that I am.

I don't think they are misleading people; the mute button pretty strongly implies the duality that you can't hear and un-hear things post hoc. In addition, they don't hide the fact that you are talking to a computer at a company by obfuscating it with some quasi-futuristically named caricature. As is often the case with these articles, "always listening" is far more misleading: an embedded keyword processor is listening for keywords, and only if they match the phrase "ok google" is the audio sent to Google's servers; otherwise it just sits there, sharing nothing.
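
The gating described above can be sketched in a few lines. This is a hypothetical illustration of the architecture, not Google's actual code: "audio frames" are plain strings here, the buffer size is arbitrary, and the matcher is a naive substring check standing in for a real keyword-spotting model. The structural point is that nothing reaches the `transmit` callback until the local matcher fires.

```python
# Hypothetical sketch of an on-device wake-word gate (illustration only).
# Frames before the wake phrase stay in a short rolling buffer and are
# discarded; only frames heard *after* the wake phrase are transmitted.

WAKE_PHRASE = "ok google"

class WakeWordGate:
    def __init__(self, transmit):
        self.transmit = transmit  # upstream callback; never called before the wake phrase
        self.buffer = []          # rolling, device-local window
        self.armed = False

    def on_frame(self, frame):
        if self.armed:
            self.transmit(frame)  # the user's query goes to the server
            return
        self.buffer = (self.buffer + [frame])[-4:]  # keep a short window, drop older audio
        if WAKE_PHRASE in " ".join(self.buffer).lower():
            self.armed = True     # from now on, frames are the query

sent = []
gate = WakeWordGate(sent.append)
for frame in ["the weather is", "nice today", "OK Google", "set a timer"]:
    gate.on_frame(frame)
print(sent)  # ['set a timer'] -- everything before the wake phrase never left the device
```

A real device would also disarm after the query ends and run the matcher on raw audio in dedicated hardware, but the privacy-relevant property is the same: the always-on part never transmits.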

jimkri 3 days ago 0 replies      
I was really against getting updates about where I usually travel, or other notifications that constantly track me. But after a while I've come to really like some of the tracking and notifications. Now I love when Google reminds me to leave for an appointment and gives me directions; it really saves time.

I do hate being tracked, but I have slowly started to like the convenience of it. With all my information on Facebook, LinkedIn, Instagram, and everything else, privacy has really gone down. If my Nexus 5X can save me time, I will sacrifice some privacy.

Kevin_S 4 days ago 0 replies      
So I see both sides of this privacy/AI debate going on and am wondering, what is the solution? Is it technically feasible to create a useful future AI without compromising privacy? Because if not, I fear for our future. While I agree with what everyone on the privacy side is saying, I believe in the long run the consumer value AI will win, leading us down this path.
lima 4 days ago 0 replies      
The new assistant in Allo which automatically suggests message replies to images by detecting what's in the image... it scares me.
JustSomeNobody 4 days ago 0 replies      
Ok, so no personal assistants and high-tech homes for anyone. Done. There, you happy now? Because if Google isn't being fed all that data, it can't provide those things. You either have it or you don't.

Edit: I've made this point before, but your data is a currency. Spend it wisely (or even not at all). It's up to you.

grandalf 4 days ago 0 replies      
It's fascinating to think about how exciting this technology is -- passive monitoring by helpful AI that can drastically increase convenience and efficiency.

But we've seen that Google is happy to turn over massive amounts of customer data to government without a warrant and without alerting customers to the practice, which makes the technology seem ominous.

First the GPS, then the microphone, then the camera, accelerometers, 3D touch sensors, etc. Gait, affect, and all sorts of factors will be able to predict criminal behavior before it happens.

Let's hope the next generation of tech giants will take customer privacy and freedom seriously and avoid the dark patterns and privacy violations of the current era.

Only now, when it's likely too late, can we actually get a glimpse of the sort of Orwellian dystopia that so many have warned about in decades past.

anondon 4 days ago 0 replies      
The idea behind ZeroDB might be applicable here.

All data generated by a user is encrypted and stored in the cloud with the decryption keys on the user's device. This way, the service provider (eg Google) can't read your data. The major advantage of this approach is that the user is in complete control of the data. The drawback is that service providers and AI systems will be starved of data that enable targeted ads/recommendations.

A good middle ground might be to offer users an option: 1) give us your data, or 2) pay up and we will not collect any data.

Thinking deeply about the state of the internet, I think we have to move towards a model where users pay for services they use if privacy is a concern. As it stands, a lot of services offer free services in exchange for our data which is monetized through ads.
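
The encrypt-on-device model described above can be sketched as follows. This is a toy illustration of the architecture only: the SHA-256-derived XOR keystream stands in for real authenticated encryption (e.g. AES-GCM) and must not be used for actual security. The structural point is what matters: the key is generated and kept on the device, and the provider stores only an opaque blob.

```python
# Toy sketch of client-side encryption with device-held keys (illustration
# only; the cipher here is NOT secure -- use real authenticated encryption
# such as AES-GCM in practice).
import hashlib
import os

def _keystream(key, nonce, length):
    # Derive a pseudo-random byte stream from the key and a per-message nonce.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(plaintext))
    ciphertext = bytes(p ^ s for p, s in zip(plaintext, stream))
    return nonce + ciphertext          # this blob is all the cloud ever sees

def decrypt(key, blob):
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))

device_key = os.urandom(32)            # generated and kept on the device
blob = encrypt(device_key, b"my location history")
assert decrypt(device_key, blob) == b"my location history"
```

The trade-off mentioned above follows directly from this design: since the provider cannot read the blob, it also cannot mine it for recommendations or targeted ads.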

jamoes 4 days ago 1 reply      
What I'd like to see is some good open source software that can compete with Google Assistant, Siri, and Alexa. It looks like there are some promising projects, but nothing turn-key yet. I'd like to be able to simply apt-get a package, and have voice recognition on my box.
tobbe2064 4 days ago 0 replies      
Reading this brings to mind Kahlil Gibran's book The Prophet. The section "On Houses" reads:


Or have you only comfort, and the lust for comfort, that stealthy thing that enters the house a guest, and becomes a host, and then a master?
Ay, and it becomes a tamer, and with hook and scourge makes puppets of your larger desires.
Though its hands are silken, its heart is of iron.
It lulls you to sleep only to stand by your bed and jeer at the dignity of the flesh.
It makes mock of your sound senses, and lays them in thistledown like fragile vessels.
Verily the lust for comfort murders the passion of the soul, and then walks grinning in the funeral.


azinman2 4 days ago 0 replies      
I think it's right for people in the know to be concerned. The direction has all kinds of possible disastrous consequences. But it also has lots of amazing dual-use possibilities for making our lives more fluid, and technology more magical. You can't deny both.

Trying to take some big stand against it I don't believe will work. Look at all those who took a stand against the 2nd Iraq war -- they were drowned out. And now everyone thinks the opposite. Culture/society always pushes a particular direction until there's a very big disastrous reason to think otherwise. Until something clearly really really bad happens this is the track we're on, like it or not.

pmontra 4 days ago 0 replies      
> We are excited about building a personal Google for everyone, everywhere

Not a problem if it ran on an appliance in my home, disconnected from all the other appliances in other people's homes. That would be a truly personal Google.

hhsnopek 4 days ago 0 replies      
The trade-off here is technological advancement in AI for loss of privacy. Everyone can have their own opinion on this, but in reality, consumers are the best source of data. The great thing in the grand scheme of it all is that if you don't like the privacy you're giving up, don't use it. There's always going to be a lesser-known alternative that doesn't track any of your data. I don't see why articles like this arise, as it's clear as day that without tracking and analytics, AI won't improve.
piyush_soni 4 days ago 2 replies      
From the article: "So the actual price for building a personal Google for everyone, everywhere would in fact be zero privacy for everyone, everywhere."

That has no basis. It is completely possible to do what they are doing by keeping everyone's individual privacy intact. And if I go by Google's privacy policy, that's exactly what they are doing. And I think it's in their best interest to keep it that way, because the day it comes out in public that our privacy is not safe with them, everyone will stop feeding them more data.

ianai 4 days ago 1 reply      
How much hardware would it take to keep 80% or 99% of the data local?
sidcool 4 days ago 0 replies      
This article is right to raise concerns about privacy. But it's just repeating and rehashing the same things. Google has been devouring data for a long time.
Beltiras 4 days ago 0 replies      
I've long since given up hope that I can achieve any sort of privacy. Even were I to do my utmost I will still be on the periphery of others using tech, visible through their actions. The only thing I really want is for the content of my messages to be private. This can be achieved through end-to-end encryption. I push everyone around me to install and use Signal instead of Messenger or Google Talk.
thght 4 days ago 0 replies      
I would love to have personal devices that collect my private data and apply AI for my personal benefit, but only if all data stays inside the device under my exclusive supervision. As soon as this data is sent to some cloud service belonging to a company in the business of generating wealth and power, I totally lose my appetite.
danso 4 days ago 0 replies      
I'm considering purchasing a Home. I already have an Alexa, which I really like, but based on what I know of Google's data and AI (and their service APIs, which I assume/hope will be extended to Home), I can't imagine it not being significantly better than Alexa.

That said, I agree with the OP's takeaway: people should be asking questions. I mean, people should have been asking these questions long ago, even as just search users questioning how Google manages to return such geospatially relevant results. But most people don't even stop to think about it, as that kind of thing is just taken for granted as the thing computers just do.

Maybe Google's data and AI in the form of a physical, listening bot (I don't know many people who use OK-Google on their phones) will be the thing that clues people in. I'm mostly comfortable with Google's role in my life (though not comfortable enough to switch to Android just yet), but I'm aware of what it knows about me. If AI is to have a trusted role in our lives and society, people in general need to at least reach the awareness that the OP evinces, if not her skepticism.

contingencies 4 days ago 0 replies      
Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we dont really have any rights left. - Marshall McLuhan
mellis 4 days ago 0 replies      
One thing that I think is lost in a lot of the comments here is that, to a large extent, privacy is experienced, not factual. That is, in many cases, the breach of privacy is the act of mentioning something that should be private, not whether or not the system (or the person) knows that thing. This is something we tend to intuitively understand in our human relationships, but one that somehow seems to be forgotten in the design of these systems (or, at least, the conversations about them). We need good ways to tell the Google Assistant that something is private (or for it to figure it out for itself) -- even if it still possesses the underlying data.

(There are, of course, situations in which the actual existence or not of specific data is what matters, but I think those are less relevant to the success of something like Google Assistant than the perception of privacy -- and that perception is important, regardless of the underlying data.)

mkhpalm 4 days ago 0 replies      
I seriously wonder if this article would have a different tone if all the same products and vision were presented at an Apple Keynote with Apple branded devices.

Would it be different? My gut says this article would have had a different title.

mankash666 4 days ago 0 replies      
How's this different from Apple or any other company that wants to stay relevant? AI is the future, like it or not.
b34r 4 days ago 1 reply      
Many software companies have been trending toward on-device AI, though, which largely avoids these privacy issues.
xuomo 4 days ago 4 replies      
Why is everyone so incredibly paranoid?
hans 4 days ago 0 replies      
In Waze they went ahead and dropped the "while using the app" option for GPS privacy. It has become rude, actually, so now I turn off location services instead of micro-managing the unmanageable.
api 4 days ago 2 replies      
I will never put an always on Internet connected microphone run by an advertising company in my house. I don't care if it literally spits out cash and cures cancer.
phycodurus 4 days ago 0 replies      
cat <that article> | sed -e 's/data/your data/g'
carapace 4 days ago 0 replies      
Star Trek or North Korea? In the limit that's the choice.
Shivetya 4 days ago 0 replies      
Well, my iPhone and my navigation systems will both tell me how far I am from home/work/parents, depending on my location, all without my asking.
fiatjaf 4 days ago 0 replies      
I'm waiting for the barbarians.
Thaxll 4 days ago 0 replies      
That's the big difference between Apple and Google: Apple just wants $$; Google wants to know everything at a deeper level.
hota_mazi 4 days ago 0 replies      
tl;dr: Google wants your data on its servers.

As opposed to all the other companies out there I guess?

Yhippa 4 days ago 3 replies      
I'm willing to sacrifice an amount of privacy now for future potential improvements to my life.
Hydraulix989 4 days ago 0 replies      
So because Google makes phones and hardware (like it has been already for quite some time), there's somehow an even greater threat now on our privacy? I don't buy it.

N.B.: I don't work for Google.

intrasight 4 days ago 1 reply      
I've come to believe that there is nothing that humans can do to resist the privacy encroachments of our machine overlords. Don't blame Google - it's a universal inevitability.
NickVst 4 days ago 0 replies      
I'd be completely fine with sharing most of my data with Google, really. Personal and Private interlock a lot with each other, and if Google wants to give me a personal experience they're going to have to use some of my private experience.
The Dutch Reach: Clever Workaround to Keep Cyclists from Getting Doored 99percentinvisible.org
794 points by misnamed  13 hours ago   444 comments top 58
smartbit 11 hours ago 10 replies      
It is true that getting doored is not part of the Dutch vocabulary as it is not something that happens often. But there are more reasons than grabbing the handle with the opposite hand.

A non-exhaustive list: 1) Dutch car drivers have all been bicyclists before they get their driver's license; cycling to school every day, even for more than an hour, is nothing to be frowned upon. 2) Major transit bike routes have separate bike lanes; the tiny narrow ones in the gif in the article barely exist. 3) Bike lanes in cities are usually placed between the footpath and the parked cars, most of the time with a 50cm-wide band to the left of the bike path, allowing car doors to be opened without swinging over the bike path; this band is usually used for planting trees too. 4) All politicians ride bikes; the Dutch Prime Minister comes to work on his. 5) There are local associations, part of the national http://fietsersbond.nl, in every town, and they passionately lobby every time they see an opportunity. 6) These volunteers are highly respected, and their input is valued by the municipalities. 7) One of the prime goals of the Dutch national Ministry of Transport is lowering the number of people injured and killed in traffic; good recording of causes by the police is step one, and good statistics then determine how roads are laid out. 8) On smaller roads without a separate bicycle path, as a bicyclist you are always watching whether someone might step out of a car, and you keep your distance by riding towards the middle of the road, which isn't an issue since these are low-traffic streets; major bike transit always has separated bike paths set back from the parked cars. 9) During driving lessons, watching for bicyclists is a prime part of the training; a good driver keeps an eye on the mirrors for bicyclists coming from behind and will warn passengers in the back seats before they get out.

And there are probably more reasons why the Dutch have few dooring accidents.

juiced 6 hours ago 9 replies      
Bullshit. I'm Dutch, never heard of or practised this "Dutch Reach" behaviour, and I was not tested for it when getting my driver's license. Instead, you just look around and in your mirrors to see if other traffic is coming before blindly opening your door like an asshole; it doesn't matter how you open the door once you are aware of what is going on around you. I call this behaviour the "Dutch Common Sense".
scraft 59 minutes ago 0 replies      
I am curious how many incidents are caused by people who are consistently a bit lazy, and how many by people who make a one-off mistake. It is a genuine question. The reason it comes to mind is that recently I was doing a three-point turn (actually a 20-point turn) to get out of my driveway, as other cars had left me almost no space to get out. Part of this manoeuvre involved bumping up onto the kerb while reversing, and just after doing it I saw a little girl appear from behind my car on her scooter; she stopped and looked at me. I never found out from her (she disappeared shortly after) whether I had almost hit her, but either way it shook me up a little, as I appreciate I could so easily have caused an accident, and I would say I am a very careful, cautious driver. In this particular scenario other road users had parked in a way that made bumping up the kerb my only option, but I simply didn't see this small girl; I think she came from behind a parked car, so I couldn't see her. Luckily no harm was done. Hopefully it'll reduce my chances of a similar incident in future, as it'll make me more vigilant about my surroundings.
stephenr 7 hours ago 5 replies      
I'm not quite sure I understand. Do Americans not look at what's coming from behind them before opening a slab of metal and glass a meter out the side of their cars?

Kind of reminds me of when I first moved to Thailand. My (now) mother-in-law asked her daughter why I kept looking over my shoulder when driving her car, and whether I couldn't just use the side mirrors. She had literally never heard of, nor understood, the concept of a blind spot.

diamondo25 12 hours ago 1 reply      
As a Dutch person, this is probably why our parents always told us to look first and open the door carefully, and, if you are not sure, to open it a bit to see more. Also, for our license test you have to learn the following rule: getting out of the car is, just like turning the car, a special manoeuvre, and you need to give everyone else priority so that your action causes no problems. So take your time, be patient and be cautious.
nathancahill 12 hours ago 5 replies      
Getting doored is honestly terrifying (I don't say that lightly). I bike and longboard frequently, and I'll avoid bike lanes if they are alongside parked cars. And this is in Boulder, one of the most bike friendly cities in the US:


Zanni 11 hours ago 11 replies      
The real answer is to get rid of street parking when you have a bike lane. They're incompatible. In fact, just get rid of street parking altogether. As a bicyclist, I've been doored. As a pedestrian, I find it difficult to see around parked cars to know if it's safe to cross the street in some locations. As a driver, I find it difficult to see around parked cars to know if it's safe to enter the street, AND I have a massively increased cognitive load of paying attention, not just to the traffic on the street, but to everything that may be emerging from a car or from between cars.
nightcracker 27 minutes ago 0 replies      
As a Dutchman, this 'dutch reach' technique doesn't exist here. People just look, they don't use some trick to remind them not to be assholes.
CorbenDallas 8 hours ago 3 replies      
Amsterdam resident here. Such good infrastructure and laws for bicyclists mean they don't give a single shit about the way they actually ride their bicycles, and their behaviour and riding style is often very aggressive towards both pedestrians and car users. So it's not that straightforward and black and white.
DavideNL 2 hours ago 1 reply      
So am I the only one who uses a different technique? I always look in the side mirror (the one on the driver's side, i.e. the left) before I open the door; in it I can perfectly well see incoming bikes etc. passing my car. When I got my driver's license I was taught to do this and it became a habit, so I've been doing it ever since. Works very well.

FYI, I'm Dutch and I've never heard of the "Dutch reach" before.

saulrh 12 hours ago 3 replies      
Getting it added to the driver's license test is probably the key here.
awjr 6 hours ago 0 replies      
One thing to consider is that the Netherlands is also considered the best place to drive in the world. Being pro-bicycle removes a significant number of short car journeys from the road. http://dailyhive.com/vancouver/best-place-in-the-world-to-be...
carterehsmith 10 hours ago 3 replies      
Perhaps we could add radars that block the door if there is a risk of dooring, either by a bike, or another car.

The car manufacturers have just agreed to put collision mitigation systems in all cars (by 2020 or something), and that requires radars and cameras. Even better, some cars have radars in the back too, to identify cross traffic etc. So identifying a potential dooring event might be quite doable using those same radars.

Edit: I forgot that my car also has this 'Blind Spot Indicator', that is, some radars that look to the side and back, to figure out if the other car is in your blind spot. I guess that could be used to prevent dooring, too.

mooneater 8 hours ago 1 reply      
Great, a solution that boils down to "bikers can be safe from just one of many mortal dangers, if and only if every single driver learns this new habit and does it right every time".

I want solutions in which safety doesn't depend on perfect behaviour from people at minimal risk (drivers), like physical separation of lanes.

bbarn 11 hours ago 1 reply      
The problem is, dooring only affects the doorer after it happens. Most other traffic violations, like speeding, blowing stop signs, failure to yield, etc, can get you a ticket regardless of their outcome. Opening a car door into traffic, in Chicago at least, puts you at fault but there's no getting a ticket for not looking. There's only a ticket after your action causes an injury.

Obviously, we can't have police ticketing everyone parallel parking for not looking in their mirror, so we think we need to attack this from an education perspective alone.

I propose instead that we stop trying to squeeze bikes into lanes they don't fit into! Good bike lanes have a buffer between the bike lane and the parked cars that makes dooring almost impossible:

  C C | / / | B
  C C | / / | B

Where B = Bike and C = car and slashes and pipes = paint on the roads.

Or, if you prefer an image: http://nacto.org/wp-content/uploads/gallery/bufferedlane_3d/...

Or even better: http://farm4.staticflickr.com/3746/9711441935_3df2f28926.jpg

jawbone3 12 hours ago 7 replies      
Right, large-scale change of driver habits is a simple solution... A more enforceable suggestion is to make the car owner liable by default for any damages that result from dooring, and to make dooring itself a finable offence. Give car owners a real motivation to check for bikes.
tribby 7 hours ago 0 replies      
1) This is not a clever workaround; this is common sense. If you don't do this already, you're a terrible motorist and shouldn't be allowed to drive.

2) Do high-end cars have proximity sensors for this sort of thing yet? Comparable to the way large vehicles alert you when you're about to back into something.

adamc 1 hour ago 0 replies      
I know this would be a lot harder, but I think the best solution would be some city redesign where we separated bike and automobile routes, or at least minimized the places where they are together.

The Dutch reach _is_ clever, but it requires a lot of retraining of existing motorists, so it will take concerted awareness-raising and probably a fairly long time frame to take effect here.

virtualritz 3 hours ago 0 replies      
In Germany this is taught as 'Schulterblick' ('over-the-shoulder look') in driving school (which is mandatory in this country).

In general, you are taught to always use Schulterblick when:

1. Changing lanes.

2. Entering/exiting from/to a ramp.

3. Turning.

4. Getting out of a parking spot.

5. Opening a door to exit the car.

Modern side mirrors are curved at the outer vertical edge to avoid any blind spot. This means that just checking the mirror before you do any of the above should, in theory, be enough (except for some cases of 3).

But better safe than sorry.

An exception is trucks: they have a blind spot on the right which the driver can't check. Mercedes introduced an electronic solution for this in 2014.

Qantourisc 8 hours ago 1 reply      
In Belgium we don't have a word for "doored" either. But we don't use our right hand; we are simply taught to look and make sure we are not going to smack someone off their bike. Also, often the lane next to you is a car lane, so you also learn not to swing your door open wildly, as you might lose your door ;)
ereyes01 10 hours ago 0 replies      
Here in Austin, some streets have done away with parallel parking in favor of back-in angle parking. This approach has some benefits, one of which is eliminating the possibility of dooring bicyclists.

Music in the video is lame, but this video illustrates the concept: https://www.youtube.com/watch?v=HddkCbsWHlk

IgorPartola 10 hours ago 0 replies      
Clearly the solution is to have doors that open like this: https://i.ytimg.com/vi/0oV4IVy8tvE/maxresdefault.jpg
jbverschoor 5 hours ago 0 replies      
Dutchy from Amsterdam here. Never heard of the term nor the technique. We're taught to look in all mirrors and over the shoulder before acting.
thrownblown 12 hours ago 0 replies      
Just reach over the handlebars and close the door first.


M_Grey 13 hours ago 0 replies      
Honestly, given how many broken driver's side mirrors you see around, likely received the same way, it seems like a smart move in general. Good for peds, good for bikes, good for cars, and good for the people getting out of their cars.
tschellenbach 12 hours ago 1 reply      
Cycling here in Boulder is definitely much more dangerous compared to The Netherlands. A lot of cars turn right without checking for bikes. What's up with that?

On the other hand the bike sharing program here in Boulder is the best I've ever seen.

(I'm Dutch and live in Boulder, Colorado)

Anechoic 12 hours ago 1 reply      
I was doing something similar to this over the summer, when I sprained some ligaments in my left hand and it was painful to open the door using the "usual" left-hand motion. The video seems to imply that reaching over your body to open the door with your right hand will naturally force your body to turn to see behind you (or to the side). That never happened in my experience: I can reach over and open the door with my right hand while still looking forward. Actually turning your torso has to be an additional learned behavior. I suspect folks might make the turn at first if taught to do so, then eventually do the lazier thing and stop turning.

As a cyclist who has been nearly doored on multiple occasions, I empathize, but the cultural change might be a bit too much to expect. TBH, I prefer jawbone3's suggestion of making the driver/car owner liable by default for dooring and applying a fine and/or surcharge.

megablast 12 hours ago 0 replies      
Good luck changing drivers behaviour. It is hard enough getting everyone to use an indicator.
andmarios 5 hours ago 0 replies      
This is very old; for as long as I can remember (decades), you would fail your driving test in Greece if you tried to open the car's door with the hand closest to it, or without checking the side mirror first.
eecc 3 hours ago 0 replies      
It's been standard practice in Italy since forever: I witnessed someone failing their exam while I was waiting for my turn to get my driving license.

The idea is that looking behind also protects you from being smashed to a pulp by an incoming car.

emodendroket 2 hours ago 1 reply      
I don't know why anyone would fling open the car door on a busy road without checking their mirrors anyway.
jorgenhorstink 4 hours ago 0 replies      
A couple of months ago I watched a short video about how we got our cycling lanes. I think it provides some interesting context on why we built our cycling infrastructure. If you like this topic, you'll like the video...


lukaslalinsky 8 hours ago 1 reply      
I'm honestly surprised this is such a common problem in the US. Most of the parallel parking spaces here are right next to the road, so if you open your door without looking, another car will crash into it. We have been trained since we were children to always look when exiting a car parked on a street; we were usually not even allowed to exit from the road side of the car. As a driver, you have to be super careful, because when you are parked there is usually steady traffic less than 50cm to your left. You basically always have to look and double-check before opening your door on a street. I live in Slovakia, but I think my experience matches most countries in this part of Europe.
INTPenis 7 hours ago 3 replies      
It's weird to see this described as a workaround, or a deliberate method, when to me, as a Swede living in Malmö, it's the most natural thing ever.

It's up to drivers, and cab drivers, to warn their passengers to be careful when opening the door. Of course this is an imperfect system but from my perspective it has worked for as long as I've lived.

Swedish traffic law dictates that the person opening the door is responsible for any damage caused, so to protect themselves, their insurance premiums and their cars, drivers obviously become careful when opening doors.

It messes up your whole daily routine if your door is broken because of a bicycle.

cateye 8 hours ago 0 replies      
Bullshit article: the rule everyone learns is to check your mirror and double-check by looking over your shoulder. I have never heard of this right-hand rule.

It becomes a habit because the chances of an accident are pretty high, given the narrow roads and all the other traffic behind you.

anexprogrammer 3 hours ago 0 replies      
Interesting. I'd just mandate drivers had to spend 10,000m on a motorbike, or time on bicycle and motorbike, before being let near a car. Car driving standards would improve dramatically.
failrate 13 hours ago 0 replies      
I'm going to try this and see how it goes.
paulsutter 12 hours ago 1 reply      
How do existing self-driving cars handle this case? Perhaps regulation could cover it if the manufacturers don't. The cars should have sensors that know when it's not safe to open the door.

Getting rid of parked cars will be a good start.

EDIT: obviously the car could sound a warning or even briefly prevent the door from opening. Similar case: what if a car or truck is passing by very closely at that moment?

dirkdk 5 hours ago 0 replies      
I don't wear a helmet in my home town, Amsterdam. I do wear a helmet here in San Francisco. Why? Car drivers in SF are just not used to bicyclists: they open their doors without looking, turn without using their signals, and in general don't pay attention and are on their phones all the time.
matt4077 12 hours ago 0 replies      
I wonder if the car couldn't be more helpful.

For example, a lit rear indicator light (not blinking) would be a signal that I could check for without much cognitive effort, and it wouldn't be too confusing for the rest of the traffic. Just having it turn on once the car has been turned off but the doors haven't yet been opened should cover most situations.

ScottBurson 12 hours ago 2 replies      
I guess this is not a bad idea, but just looking in the side mirror before opening the door is easier to do, and probably easier to teach people to do.

As a cyclist, I give parked cars as much room as possible. I've never been doored... though I've never ridden in NYC either.

_ph_ 7 hours ago 1 reply      
Interestingly, the bike lane shown in the first picture of the article marks exactly the region that cyclists are required to avoid by German law: cyclists must keep about a meter's distance from parked cars to prevent dooring. That of course does not stop local authorities from creating bike lanes exactly as shown in the article :(.
caf 12 hours ago 0 replies      
This is great, because it's simple, easy to explain and highly actionable. It's incentive-compatible because implementing this yourself greatly reduces the chance you'll door someone, and few people actually want to cause an accident.
sixQuarks 9 hours ago 0 replies      
I simply don't understand people that swing doors wide open without even a thought of looking. It's just plain stupidity
london888 8 hours ago 1 reply      
Great idea, but I think more of us cyclists should be riding while anticipating "if that car door opens, can I stop in time?". We can't rely on people in cars doing the right thing, and many of us cycle too fast to be able to react in time.
happyslobro 8 hours ago 0 replies      
This guy has the right idea. The problem is that your doors are not the doors of a billionaire.


chanandler_bong 1 hour ago 0 replies      
...all this and no mention of Dutch Rudders?
flycaliguy 12 hours ago 0 replies      
Slightly off-topic, but I found it difficult to read an article with a looping clip of a traffic accident embedded. I skipped the second paragraph.
thr0waway1239 11 hours ago 0 replies      
An honest question for people who say getting rid of street parking is an economically viable idea: would it still be viable if there were an underground road system purely for bikes, with no way for cars and larger vehicles to share the ride? You also wouldn't need to worry about the weather.
mercora 9 hours ago 0 replies      
I am surprised that watching out for cyclists, or anything really, is not common sense. I do that, and I don't even have a driver's license... Oo
jasonkostempski 12 hours ago 0 replies      
How about side-view mirrors for rear-seat passengers? As a driver with that technology, I've managed to never open my driver's door into an oncoming car.
Shivetya 5 hours ago 0 replies      
When riding my bicycle I treat it as if I were on my motorcycle: all cars are threats, and if I can see someone in a car I am passing, I assume they will open the door, so I plan for it.
Jugurtha 3 hours ago 0 replies      
I've always opened the door this way. I want to know what's happening, and it makes sense to open it that way. It's kind of like looking both ways before crossing; it'd be strange to give that a name like "The British Method", since you can't imagine anyone mentally undiminished not doing it.
gambiting 6 hours ago 1 reply      
Is it just me, or is the cyclist in this gif (from the article):


Cycling way too fast on a narrow lane like this?

I'm not saying that he's at fault; obviously the fault is 100% on the side of whoever opened the door. But if I were driving 60mph next to a lane of standing traffic, that would be completely irresponsible, because if someone pulled out into my lane the accident would be severe. Driving in cities requires caution, yet I notice cyclists riding as if they were alone on the road, zooming past standing traffic at 30mph.

I'm not guilt-free; I used to cycle ~2000 miles a year in a large city and I did my share of stupid, irresponsible stuff on a bike. But I think both sides need a bit of education on how to be safe on the road.

Asooka 6 hours ago 0 replies      
We don't let people do drugs, so why do we let them ride bikes on roads? It's the exact same kind of self-harm, brought on by poor life choices. All the problems in the article could be solved simply by having everyone sit inside a large, sturdy steel frame, i.e. a car. Cars are cheap, safe, convenient and easy to use, so why is everyone getting on these two-wheeled death traps?
fuckbicyclists 8 hours ago 3 replies      
RandyRanderson 10 hours ago 1 reply      
Did anyone else click thinking the "Dutch Reach" was a sexual maneuver? 99% sadness.
FullMtlAlcoholc 12 hours ago 7 replies      
Or, as a bike rider, one can simply look into a car's cabin and see if a person is inside. I assume everyone is a terrible driver and has no idea I'm on the road.

I am a regular bike commuter in traffic-heavy Los Angeles. I'll never understand the mentality of bicyclists who are more concerned with asserting their rights than with exercising caution and restraint. Graveyards are full of people who had the right of way.

A decentralized web would give power back to the people online techcrunch.com
614 points by endswapper  1 day ago   317 comments top 45
zer0gravity 23 hours ago 7 replies      
So nobody can "give the power back" to anybody, but people can stop giving their power away to others. We all have power, and we manifest it through our actions and choices.

Some say knowledge is power, but in order to gain knowledge you have to make that choice. One can try to educate people about the benefits of hosting their own data, but unless they make the choice to listen and understand it's all for nothing.

I tend to agree with others who posted here that, in general, people are just not interested. It may be too hard for them to grasp the real implications of giving all that information about themselves to third parties. They may also consider it too hard to handle all those problems themselves, so they're willing to pay the price...

I don't think the change can happen unless the people who do understand do something about it, but usually those people are more interested in cashing in on the ignorance of those who don't...

chestnut-tree 1 day ago 12 replies      
A real barrier to a decentralised web is the difficulty of installing software on a server. I know that sounds really mundane and inconsequential in the broader debate about a decentralised web, but consider the following...

Imagine if installing a server-side chat app, message board, project management app, or CMS were as easy as installing a desktop app. In a desktop app, it's usually one click to start the install and then, if necessary, you're guided through a few screens to complete the install. Want to uninstall? The OS (operating system) will provide a feature to manage that.

Now consider, in contrast, how complicated installing on a server is. Upload your files to a folder or directory, enable permissions, set configurations not just for your server but also for the language the program is written in - the list goes on. No wonder SaaS (Software as a Service) is thriving like never before. Who, other than technical folks, could possibly have the time, interest or inclination to set up a self-hosted solution when the barrier is so high? Perhaps some in the tech field would like to keep it that way? Would SaaS be less attractive if installing a self-hosted solution were simple, easy, quick and secure?

Surely an essential part of a decentralised web is that companies, organisations and individuals choose to run their own software using open protocols and data formats. But until the ease, security and simplicity of installation improves for web software, it simply won't happen on a large scale.

Gys 1 day ago 6 replies      
To most people (99% ?) Facebook, Instagram, Google etc offer everything they want. So they do not feel a need for getting the 'power back'.

It all comes down to perceived (!) value. Once people use one thing, they will only switch if something else offers clearly higher value; at that point, offering the same value is not enough anymore. This higher value has to be something that makes everyday life a little better in a very obvious way. I am afraid something abstract like 'more power' will not do the trick.

userbinator 1 day ago 4 replies      
Does anyone remember the "decentralized web" of the early 2000s? Various P2P protocols existed that allowed everyone to share content freely, and they did. Content that was not even originally in digital form was digitised and nothing but a search away. It was amazing. The copyright/media industry didn't like that. Security paranoia (possibly assisted by the industry) also hastened its demise.

I really do hope we see another "rise of P2P", but there seems to be strong commercial interests against it. (Bitcoin itself is rather commercial in nature, as it deals specifically with currency.)

mark_l_watson 1 day ago 1 reply      
I went to the Decentralised Web conference in June. Fantastic venue, good talks, and lots of interesting people to talk with during the breaks. I blogged about the experience http://blog.markwatson.com/2016/06/action-items-after-attend...

Until recently, I was trying to go 'all in' by favoring GNU Social over centralized social media, almost always running Linux on the laptop, etc.

I have backed off somewhat, realizing that my workflow for writing books and consulting is more efficient using OSX, and sometimes Facebook and G+ are much better at connecting with friends and better for publicizing book updates, etc. than GNU Social.

I am trying to live in a practical 'middle ground' where I can get my work done and still participate in keeping the web open and decentralized.

Two new developments are promising: a Ruby version of GNU Social that uses the same protocols and looks much more hackable, and TBL's W3C Solid project.

infodroid 1 day ago 2 replies      
Decentralized services have not been able to compete with their walled garden counterparts due to lack of resources and access to capital, as well as the coordination costs of federation. It's a structural and not a technical problem. I can't see how this will ever change.

Most decentralized services are open source projects maintained by volunteer developers. They are competing with centralized commercial projects with deep pockets and the ability to hire not only smart developers but also artists, testers, sysadmins, designers, marketers, researchers, and project managers - some of the things you need to deliver a best-in-class service to users. The odds are stacked against the decentralized service from the beginning.

Even when these projects attract commercial interest, such as Dat or Ethereum named in the article, it is not clear how their funding will be sustainable given that decentralized platforms are more difficult to monetize than centralized ones. And it's really hard to see Github as a "posterchild" for the decentralized web, since it is really a centralized service.

dgudkov 1 day ago 1 reply      
I believe the internet is naturally drifting towards "governed countries" (e.g. Google or Facebook), and one of the biggest drivers is not just convenience but the decreasing security of the web in general. Just as the Wild West eventually became a number of states with borders, police and governments. This has become especially relevant in recent years, as harvesting exploits en masse has become a huge industry. Yes, centralized "web governments" are exponentially more attractive targets for attackers, but they can also benefit from scale, applying good security practices more consistently than a multitude of decentralized, self-governed "nodes". Criminals tend to self-organize and centralize. Why? Because it's more effective. As long as there are centralized "bad guys", "good citizens" have no choice but to unite and develop a centralized, policed "country". And the "bad guys" won't go away anytime soon. The decentralized web is like youth: a wonderful, idealistic time, but it's gone and will never come back.
melvster 1 day ago 3 replies      
The most advanced decentralized system I've seen in web 3.0 is also the one Tim Berners-Lee talks about: Solid. While still very new, I think it has more than enough power to take your data back from the large monopolies. One nice feature, as you'd expect from the inventor of the web, is that it's 100% backwards-compatible with existing web technology.


Link to github repo above

1propionyl 22 hours ago 1 reply      
It would be really interesting to see something like a "personal Solid server" that runs on your phone and is reachable over IPv6 only (or through tunneling via "DNS for people", replacing phonebooks conceptually and handled as a separate distributed service).

The biggest obstacle to running a personal server is that it must always be on and always be connected. Using cell phones (which certainly have the capacity for most individuals) would make that easy, and it also refocuses "my data" into a physical concept. When you unplugged your landline phone, your phonebook entry would stop reaching you; now, when you turn off your phone, you "go dark" in the truest sense. It would be very easy to have a secondary battery powering a low-power coprocessor optimized for this task, so that even when your phone dies the server lasts a while longer (say 48 hours?).

I think ultimately if we're going to live in a world of personal data ownership in the truest sense, our data must be something tangible we can carry with us (and that isn't an extra thing to carry).

Of course, you'd want to back up that server config and your data, so there would be monetized services for that. Celebrities and business owners would have substantially more traffic or need to be online all the time, and as a result would need to actually pay for hosting and maintenance as a function of their utility. This introduces yet another market.

And throughout this whole system, you are now explicitly trusting your data as an object to a company for their services. You are not participating on their platform while they farm your interactions.

Any thoughts or criticisms?

mixedCase 1 day ago 2 replies      
"A decentralized web would be very slightly inconvenient to the people so forget about your freedom and privacy who cares about that."

More realistic title. It would make for a shorter article though.

skybrian 20 hours ago 2 replies      
The problem with this plan is that it doesn't really tackle the first rule of the Internet: spam and abuse make everything suck.

The last really popular decentralized service was email, and users migrated to large providers because they had better spam filters. Also, setting up your own mail server has become increasingly difficult, due in part to anti-spam measures. As a side effect, email has become pretty lossy.
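As a concrete sketch of why self-hosting mail has become hard (placeholder domain, address, and key below), these are the kinds of DNS records large providers typically expect before accepting your messages, on top of the MX record itself:

```
; SPF -- which hosts may send mail for example.org
example.org.                  IN TXT "v=spf1 ip4:203.0.113.5 -all"

; DKIM -- public key used to verify signatures on outgoing mail
mail._domainkey.example.org.  IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"

; DMARC -- what receivers should do with mail that fails SPF/DKIM
_dmarc.example.org.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
```

Matching reverse DNS (PTR) for the sending IP is usually needed as well, and misconfiguring any one of these can silently route your outgoing mail to spam folders, which is part of why email delivery feels lossy.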

Centralized services are far from perfect on this, but they have a somewhat easier time of it. They can fund dedicated teams to deal with abuse. So, the result is feudalism, where you join a larger organization that provides some measure of protection. (Not enough protection, according to many people.)

I'm hopeful that Sandstorm (or something like it) will make it easier for people to run their own server-side software, but it seems more for private use; you can publish a blog but it's not designed for large-scale sharing like a social network. That still seems like an unsolved problem.

oconnor663 15 hours ago 0 replies      

> We got to the first production version of IP, and have been trying for the past 20 years to switch to a second production version of IP with limited success. We got to HTTP version 1.1 in 1997, and have been stuck there until now. Likewise, SMTP, IRC, DNS, XMPP, are all similarly frozen in time circa the late 1990s. To answer his question, that's how far the internet got. It got to the late 90s.

> That has taken us pretty far, but it's undeniable that once you federate your protocol, it becomes very difficult to make changes. And right now, at the application level, things that stand still don't fare very well in a world where the ecosystem is moving.

> Indeed, cannibalizing a federated application-layer protocol into a centralized service is almost a sure recipe for a successful consumer product today.

BjoernKW 1 day ago 3 replies      
Ironically, GitHub is a prime example of why centralisation is harmful. If GitHub suddenly disappeared tomorrow (or even just went offline for a few hours), many software build processes in the world would grind to a halt. Sure, because Git is a decentralised protocol eventually it'd all be brought back up again but it'd cause a major disruption and at the very least create enormous costs.
Alex3917 1 day ago 8 replies      
The real issue is that we need to make it illegal to give out free content, unless it's distributed under an open license. Otherwise advertising-supported businesses will always be able to outcompete everyone else via dumping, and these businesses always benefit from being increasingly centralized. That's why we don't have liberal newspapers anymore in the U.S., because ad supported papers put them all out of business.

At the very least people need to be trained that it's unethical to surf the web without using an adblocker.

greenyouse 20 hours ago 2 replies      
This is awesome! I hope they're able to bring their vision of a decentralized web to fruition!

The biggest outstanding problem that nobody seems to be talking about though is how to monetize a decentralized business. With the current web there are lots of options that have varying degrees of nastiness for their users but ultimately pay developer salaries: 3rd party ads, IP/API licensing, pay for product, data collection, in-app purchases, e-commerce, etc. Since not all of these carry over well to a decentralized web, how can it be profitable for companies?

The problem of building a generic, distributed platform (or at least some technology for decentralized services) probably comes first but putting money behind its development couldn't hurt. I think if engineers won't be able to make money with it, then it will be much harder to sustain development in the long run.

Any ideas for this part?

milansuk 22 hours ago 1 reply      
I think the article is missing one important thing: "Web 3.0" needs a killer app! Something that is not simple to duplicate on the current centralized web and is still very attractive to web users.
foobarbecue 1 day ago 3 replies      
It seems to me that one significant barrier is that most people don't have a static IP. If we can move to IPv6, will ISPs start issuing static IPs for everyone by default?
amelius 18 hours ago 1 reply      
I think the European union should fund the research and development of a decentralized web. Because right now all their information is flowing to the US.
shmerl 15 hours ago 0 replies      
Also, interoperability. The major problem with all these major services is not just centralization, but the fact that they can't even communicate with each other. Their stupid walled-garden nature and the fact that they deliberately avoid open standards prevent it. That's what disgusts me the most about WhatsApp, Hangouts, Skype, etc. They are stuck in the non-interoperable stone age of computing, while e-mail, a much older technology, managed to break through years ago as a federated and interoperable standard.
dannyrosen 1 day ago 2 replies      
The nature of the net is to centralize and then decentralize; it's a cycle. See 90s AOL vs. the open net as an example. At this point it's more about usability than it is about access. The open tools that are being built still have a ways to go toward fluid user on-boarding and compelling retention experiences.
hhnn 1 day ago 0 replies      
It's nice, how alive and kicking IRC still is. If you are into startups, there is a channel called #startups on freenode that has a lot of likeminded people.
z3t4 16 hours ago 0 replies      
Many people think Facebook is the web, just like people think the web is the Internet. I'm sure both the web and the Internet will outlive Facebook, though. Facebook has introduced a lot of new people to the web/Internet. It's just a matter of time, though, until those people discover that there's more, like YouTube for example.
stevewilhelm 22 hours ago 1 reply      
In my experience, one of the largest hurdles decentralization faces is agent identity. Or simply put, how do you determine who is who on the network. The original Internet failed to address this problem.

The successful solutions I have seen to this issue have employed centralization: SSL certificates from Verisign, and login with Google Sign-In, so consumers can trust your website.

Don't confuse this with distributed authentication. Such systems exist, but I would say they are difficult to use.

I am talking about identity.

hkt 15 hours ago 1 reply      
This remains a bit of a silly argument. It is like saying we can wipe out poverty if only more people were able to get rich: the way that people progress is through collective provision.

That means the state, or associations of people who value privacy (who may not all be technically savvy!) who can pool their resources and offer something normal people can use. This, unlike the decentralised web, could engender the same kind of network effect that WhatsApp and Facebook benefit from.

I've written about this and heard more people make the same argument lately: it isn't technology that will bring about privacy on the internet, but a democratic organisation, founded by concerned citizens, governed by anyone who has an interest in promoting privacy and run on behalf of anyone who wants to benefit from it.

There are a lot of benefits to this approach: you lose the engineering challenges of VC backed companies (no data mining operation) and of cyberpunks setting out to let everyone be their own little island (protocols and amateur operators). Jurisdictions can offer different advantages, like company forms with regulated purposes written into their founding documents, or more favourable privacy laws.

Off the shelf software can scale all the way that is required for this sort of thing (think apache kafka as the basis for a messaging system like WhatsApp) and the organisation would only have to be financially sustainable like Mozilla, rather than profitable like Facebook.

It also puts privacy into the hands of normal people, who can take part in governance and protect their data that way, rather than having to learn techie things they don't care about.

Talking about the decentralised web is great, but we often forget what motivates people to want to decentralise in the first place: the fact that their desires are not something the market can cater for, and that what they believe should be their right is simply not available any other way. If we want privacy, we should think about all the means by which we can provide it in the general case, including by collectively owning the means of communication.

erikb 23 hours ago 0 replies      
The internet is decentralized. The thing is that you need to get the data from person A to person B. And neither A nor B can alone afford the infrastructure to enable that.
wineisfine 20 hours ago 0 replies      
We should not be worried about chat apps, but rather the cancer that is called Facebook
mtgx 1 day ago 2 replies      
Because Techcrunch didn't link to any of the projects they're talking about, here they are:





j2kun 20 hours ago 0 replies      
In addition to spam/abuse and the difficulty of setting up a server, another area of centralization that hasn't yet been solved is payments. If we want a truly decentralized web, we need a way for computers to possess and spend money without human intervention. This compounds the difficulty of spam/abuse and server setup, but is still a huge factor in why the web is the way it is today.
Animats 20 hours ago 1 reply      
So far, nobody has a federated social network that 1) doesn't suck, and 2) has a worthwhile user base.

Someone in college should take one of the federated systems, polish it up, and market it to fraternities and sororities at name schools as FratNet. This would give all the cool kids a private social network that they controlled.

Paul-ish 20 hours ago 0 replies      
One area that may be ripe for the picking is video chat, as this article starts out describing. I think everyone is sick of having half a dozen different applications to talk to their friends. Why can't this be a protocol like email?

I think the technologies exist and have existed for a long time; RTC, SIP, ICE, etc... The hurdles are social and political.

Animats 19 hours ago 0 replies      
We need handset-to-handset encrypted communication without a server. How good is IPv6 inbound connectivity?
LukeB42 21 hours ago 0 replies      
Here's a decentralised caching proxy I made earlier this year: https://github.com/psybernetics/synchrony

Future plans include a C port (already in the works) and dialing down on the contacts API to bootstrap a three.js CAD tool.

brador 1 day ago 3 replies      
A decentralised web is not viable until we can create an NP soft or better mesh network. Until then the network load to manage itself crushes it before it can walk.
EGreg 1 day ago 1 reply      
The example of GitHub they use is actually telling. Git is decentralized, so why use GitHub? The answer may be surprising:

It's because social has not been decentralized.

Yes, status.net existed for a while but it was just about publishing short updates. Also, a few years ago several guys from my college raised money to make Diaspora. That didn't fulfill the promise of a decentralized social web either.

Bitcoin is decentralized money. Email is decentralized communication. The Web is decentralized publishing. But what about decentralized social?

Ideally, it should start with the Web and enhance it. It should:

+ Work on every device out of the box, and take advantage of the special characteristics of each device (eg mobile phones are private to the user, and work as endpoints for text message invitations with auto-confirming links: https://qbix.com/platform/features/invitations)

+ Be as easy to install as WordPress by organizations, who can choose the hosting provider to grow their community, and be able to move it anytime

+ Seamlessly support user identity AND contact lists across domains. Be able to sync with personal address books, social network friend lists, etc. (https://qbix.com/platform/features)

+ Seamlessly support a standardized access control model, ideally where roles correspond to contact groups or friend lists. (https://qbix.com/platform/features/contacts)

+ Be modular, so developers working for organizations can easily use components, developed and maintained by different developers, in the organization's social apps.

+ Allow people to subscribe to certain streams of information. Take care of real time updates via WebSockets for online users while delivering offline notifications for people who subscribed, via text, email or native notifications. (https://qbix.com/platform/features/streams)

+ If done correctly, such a system would decentralize search engines and social networking sites, allowing local communities to get massive value from being networked without having to send all their signals to California and back, or connect to Facebook's "web for India" or "web for Africa" just to organize meetings or talk to someone next door. (https://qbix.com/platform/features/distributed)

That is what we built. And we are slowly rolling it out. And it's free and open source. You can download it right now and play with it:


(It took us about 5 years to make it, so some screenshots are a bit dated.)

If anyone here knows Tim Berners-Lee and can introduce us, that would be extremely helpful.

applecore 21 hours ago 1 reply      
If Twitter led the charge to decentralize the web, starting with their chief product, I believe the company would be worth 10-100x more than it is now (i.e., a market capitalization between $100B and $1T).
dredmorbius 1 day ago 1 reply      
Successfully attacking this problem means progress on numerous fronts: technology, connectivity, bog-simple configuration and operation, and recoverability from user or system/hardware errors. Whittling these down over time may work. Most significantly, hardware costs are not presently constraining; the limitations lie elsewhere.

Google's and Facebook's advertising support strongly tends toward centralisation.

Kicking the legs of advertising out from under the stool of publishing might be a means of attacking that particular tendency, which would make a more distributed technical model much more viable.

There's the question of just how distributed you'd want a system to be. There are problems at both low and high levels of centralisation. A fully decentralised model might have a tendency to go rogue, or be subject to petty dictators. It's been interesting to note that even a well-capitalised entity such as the NSA is reported to prioritise specific and generally small numbers -- single-digit, often single-hand -- of systems for specific attack and interest. There's an argument to be made that strategically weak targets, such as the perennially troubled Yahoo, headed by the morally compromised Ms. Mayer, can be made to crumble where a more commercially robust firm, say, Google, Apple, or Facebook, would be willing to resist. I'm mindful of this.

There's some reason to believe that perhaps a suite of Free Software alternatives would offer more robust installations, and compromises would impose high workfactors, for any given would-be surveillor. The question of creating systems by which data exfiltrations might be more readily detected is another area for exploration.

Going back to the publishing world, there's a long-standing practice of including intentionally fictitious entries within compilations, maps, etc., whose observation in the wild would indicate copying. This might be worth pursuing on a formal basis, particularly through such indicators as financial accounts, email or communications addresses, URLs, etc., extant as canaries which would reveal a data breach.

A huge problem for any widely distributed infrastructure is maintenance and administration. I'm reluctantly concluding that a world in which individuals and households host and administer their own personal data servers isn't viable. I'm not abandoning all hope, but it seems a difficult problem, and one which the Internet of Shit seems determined to prove intractable.

Much of this has to do with the financial underpinnings of the infotech world. As several recent authors have noted (Paul Romer, Jeremy Rifkin, Paul Mason), information-dense goods function poorly in a market-based economic system. They have numerous characteristics which make for poor price discovery and dynamics, with informational asymmetries, heavy up-front (and hence average & fixed) costs, low-to-zero marginal costs, susceptibility to low-cost copying by others (notably China, though this is a practice with ancient traditions -- see Ha-Joon Chang and Friedrich List), diminishing marginal returns, long-tail support obligations, and unintended and non-evident consequences (to vendors, users, and bystanders).

A few possible models for a largely-distributed personal information service suggest themselves.

One is the residential-server-as-utility model. Comcast has had such an offering for nearly a decade, that I'm aware of. Essentially, it's a set-top box which can take on additional responsibilities, including home automation and security functions. There are several other utility-type providers who might offer similar capabilities.

There might well be ad-hoc collections of friends or neighbours. A tech-oriented person could easily provide services to hundreds or thousands of others on commodity equipment. The main limitations here are trust and discoverability.

There are arguments for making email into a government-provided service, via the post office. This introduces the risks of government surveillance (already an issue with postal systems), though the protections of legislative restrictions, and public-sector union whistleblower protections, against gross abuse. The fact that physical mail delivery already ensures a brick-and-mortar point of presence in virtually any habitation means that one of the perennial problems of information technology -- establishing, asserting, and recovering identity -- can be achieved through a local visit.

The early Internet spread through a set of social institutions, largely universities. These provided points of access, administration, and accountability to populations of users ranging from a few hundreds to a few tens of thousands per site at major public universities. Whilst this was not an all-encompassing level of provisioning, it is a model of access and social organisation which might be useful to draw on for a more modern implementation.

Public libraries, as an alternative to universities, might offer another option. They already serve as an internet access option for a significant population.

Banks, schools, major retail establishments, and religious centres might be other options.

I'm not sure what exactly will work, but the dynamics I'm pointing at here involve:

Technical capabilities and equipment. A very low bar, and getting lower all the time.

Connectivity. Slightly more difficult, given distribution, land-use, and reliability concerns, but still generally tractable.

Trust. A major factor, especially in a world of eroding social institutions and values. This plays into the dynamics of various systems providers / maintainers quite heavily.

Workforce technical capabilities. Information management skills are remunerative, and could prove difficult to retain. This might change, possibly rapidly. The present market is, however, exceptionally geographically centralised. A distributed node-and-service model might allow for greater flexibility in specific node administrators and technical staff to operate in a much broader choice of areas, including those with lower housing costs than New York City or San Francisco, closer to family or hometowns, etc. I expect some de-skilling of routine obligations, but technology tends to require at least a modicum of fire-fighting capabilities, and perhaps some engineering and planning capacity as well.

Identity and pseudonymity. Both matter. Balancing the risks of data disclosure with those of data loss gets complex. Principal control over data, plus remote backups, ensures against physical loss. Encryption with some model of key escrow, in which a set of parties, several with strong and vested interests in personal privacy, can balance privacy against the problem of loss of specific tokens (physical or otherwise).

Ease of use / access. This stuff has to be bog-fucking-simple to provision, deploy, and maintain.

Copyright and copyright compliance. Ultimately, my view is that we're going to have to recognise that attempts to re-bottle the genie, and provide centralised or authorial control over reproductions is doomed. Information-as-public-good strikes me as increasingly inevitable. (Which isn't the same as "inevitable", though it's getting warmer.)

Misbehaviour. People can be asshats. Another reason for forms of key escrow and the like is to allow specific access to specific individuals' information in specific instances, subject to very strong controls.

Search. Traditionally this has relied heavily on data-center based operations. There are distributed alternatives, but none I've seen are particularly adept.

So: yes, challenges, but the fundamental capability (hardware) is well in hand. Spitballing at approaches should eventually see something stick.

bogomipz 23 hours ago 0 replies      
In my opinion, one of the disturbing trends increasingly presenting a "usability" issue is the insistence of sites on putting content behind a login wall: Pinterest, LinkedIn, Glassdoor, etc. Some of these make a subset of content viewable but then prohibit me from scrolling further until I sign up. Usually the sign-up requires selecting one of FB, Google, et al. as an OAuth provider. So by extension you kind of have centralized control of third-party content as well.
partycoder 14 hours ago 0 replies      
Well, initially the web was a fantastic place to be. People used the web for entertainment and people used to trust people online. Every website was unique and handcrafted.

Then, as evil started going online in the form of harassment, fraud, scams, and spreading malware... people became reluctant to follow any link, and started to seek out "trusted websites".

In addition, people gave up the ability to customize webpages in return for searching people by name, real-time cross-platform publishing, and photo sharing with access control lists.

jamesbercegay 1 day ago 0 replies      
A decentralized web would not hold up against nation-state-type attacks. They have too many resources. They could dilute and attack it, it seems.
ommunist 22 hours ago 0 replies      
Ehm... how can you decentralise around backbone services, which can do anything to your ISP if it does not comply? Say, tomorrow, as a counter-terrorist measure, your ISP is forced to close all encrypted p2p traffic; you won't be able to do anything to circumvent top-level traffic control policies.
drivingmenuts 17 hours ago 0 replies      
All this has happened before, and all of it will happen again.

Decentralizing will be great until people realize that they need access to information outside their enclaves, then there will be another push for centralization.

Additionally, who are these People the author writes about? All of the tools cited, while freely available to anyone, are geared toward power users in installation and operation. That doesn't sound like "The People" to me; it sounds more like an elite group (although a pretty large and diverse elite group, within itself).

If all you're going to do is stand by the highway and wave signs and shout slogans, I wish you well with that plan but I don't think it's going to work. The people you're opposing (who are actually people, too) have got better and more convincing arguments, as well as positive results that matter to their customers.

ommunist 22 hours ago 0 replies      
The only real way to 'give power to the people' and 'decentralize' is to run your own ISP and launch your own satellite, and to negotiate gating your uplink to those forever-centralized networks; otherwise your free netizens will only be able to ping each other within your enclave. And, if I may ask, what shall you do as a 'free ISP' without IP addresses, huh? Like Agent Smith told Neo: "What good is a phone call if you are unable to speak?"
ommunist 1 day ago 0 replies      
Maybe yes, maybe no. It really depends on who owns the agenda of decentralizing the web.
susurrus 1 day ago 1 reply      
Do they even have proofreaders at TechCrunch? They keep repeating paragraphs!
avadhoot 20 hours ago 0 replies      
Isn't that what blockchain is trying to achieve?
Brain training exercises might just make you better at brain training exercises bps.org.uk
512 points by ingve  3 days ago   208 comments top 47
carleverett 2 days ago 9 replies      
This is the best article I've ever read on the subject of how you can improve intelligence: https://blogs.scientificamerican.com/guest-blog/you-can-incr...

Kind of like how people have to be reminded that the best way to lose weight is to diet and exercise, the real answer for intelligence is challenging yourself frequently and putting yourself in uncomfortable situations that you need to think your way through.

At the end of the article there's a beautiful definition:

"Intelligence isn't just about how many levels of math courses you've taken, how fast you can solve an algorithm, or how many vocabulary words you know that are over 6 characters. It's about being able to approach a new problem, recognize its important components, and solve it -- then take that knowledge gained and put it towards solving the next, more complex problem. It's about innovation and imagination, and about being able to put that to use to make the world a better place. This is the kind of intelligence that is valuable, and this is the type of intelligence we should be striving for and encouraging."

grandalf 3 days ago 6 replies      
I played around with Lumosity as part of a research project a few years ago. The games were somewhat fun and it was possible to improve one's score by repeated play.

So little of what humans do is similar to those games, so I don't see how "context" could possibly be similar unless those sorts of brain speed/recognition tasks were part of your day to day work.

I'd be curious about scenarios like:

- Practice identifying the semantic bug in a 20 line snippet of code. How effectively would practicing this help a person identify real bugs in actual code?

- Chess problems with 5 pieces on the board. How helpful would practicing these be to solving problems with 6 pieces?

- Essay writing. Suppose for a moment that a human essay could be judged accurately enough to create a 500 word essay trainer. How effective would training on it be to quickly being able to articulate one's thoughts?

Similarly, I'd be curious about a sunk cost rationality trainer, logical proof trainer, and reading comprehension trainer. These would be harder to build than the simple video games on Lumosity, but I suspect the competency obtained could be a bit more useful in real world tests of ability.

However, there are fairly few cases where specific characteristics of human intellect or rationality are measurable by longer term human performance. If your job entails long term project planning, trade-offs, etc., it's pretty hard to prepare in a meaningful way using short-term training sessions.

j2kun 2 days ago 5 replies      
My wife teaches math at a community college level, meaning she often needs to teach students who are paralyzed by fear of fractions (or worse, negative numbers). Many of her students are also adults. She tells me things like "If I ask one of these adult students a question involving negative numbers, and I phrase it in terms of money and debt, they answer it immediately. If I ask them using just numbers, they have no clue." These students do not know how to take a concept they've used their entire lives to balance their checkbooks, and abstract it beyond money.

I would conjecture that in most cases of "activities that stimulate your brain", the key to generalization requires another skill: being able to abstract a skill you have learned from one situation, and then specialize it to another situation. And this skill needs practicing. I also conjecture that a small fraction of the population actively does this, and this could explain why the results aren't statistically significant. E.g. meditation actively encourages introspection and generalized mindfulness in any situation, whereas crosswords and Chess are played for their own sake.

tonystubblebine 3 days ago 9 replies      
Here's a mystery to me: the failure of brain training games vs. the success of meditation.

The science for brain training games is not encouraging. A number of studies have found that they do not produce generalizable mental improvements.

Meanwhile, every time I turn around I run into a study of mindfulness meditation that did produce a generalizable improvement to mental abilities.

For example, I was just reading one about how meditation can reduce pain perception by 40% and that measurement was backed up by MRI imaging of reduced brain activity in pain centers.

Let's call meditation a brain game that works.

The mystery then is why is there only one game that works? Will we ever find a second?

hmahncke 2 days ago 1 reply      
The article begins from the premise that psychology has shown that brain training can not cause generalized benefits, and then goes on to review more than 130 peer-reviewed publications, meticulously finding fault with each one, and concluding that nothing can be learned from the entire field. It's as if the authors went through an entire forest, finding fault with each tree, but never noticed the forest (of evidence).

I'm an author on one of the studies discussed, and for what it's worth, there are two factual errors in their review of my study alone.

The authors then go on to state that people seeking to improve cognitive function would be better served by exercising or going to college. Exercising is an excellent idea, but the evidence for cognitive improvement is certainly no better than for cognitive training, and is arguably worse [1]. College is fine as well, but from a methodological perspective there's never been a randomized controlled trial showing that college improves cognitive function, and arguably all college does is select high-achievers and then further filter them with low-performers dropping out. Endorsing college (with no RCTs) over brain training (with RCTs) suggests a biased review.

Disclosure: I work at Posit Science, where we make a brain training program. My work is specifically criticized in the article.

[1] http://www.cochrane.org/CD005381/DEMENTIA_aerobic-exercise-t...

Bartweiss 2 days ago 1 reply      
The entire brain training market seems to be based on a willful failure to understand Goodhart's Law.

The basic pitch is "performance on these games correlates with intelligence, so practice these games to become more intelligent". That's not how anything works! If you influence a metric directly, it stops being a good proxy for all the things it used to correlate with.

It's like fly-by-night companies improving retention by selling at a loss, or running up user count with expensive perks for each new client. Those numbers are important as a reflection of business health, not a cause of it.
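To make the point concrete, here's a toy simulation (all numbers invented): a latent ability drives both a game score and real-world performance, so the two correlate -- until practice inflates the game score directly, at which point the score stops tracking ability.

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Latent ability drives both measures, so they correlate.
ability = [random.gauss(100, 15) for _ in range(1000)]
game_before = [a + random.gauss(0, 5) for a in ability]  # noisy readout of ability
real_world = [a + random.gauss(0, 5) for a in ability]   # what we actually care about

# Grinding the game adds a large game-specific bonus unrelated to ability.
game_after = [g + random.uniform(0, 60) for g in game_before]

print(round(pearson(game_before, real_world), 2))  # high: score is a good proxy
print(round(pearson(game_after, real_world), 2))   # noticeably lower: proxy degraded
```

The exact figures depend on the made-up noise levels, but the direction is the point: optimizing the proxy directly injects variance that has nothing to do with the underlying trait, so the correlation that justified the proxy in the first place weakens.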

DiffEq 2 days ago 0 replies      
Instead of games that may or may not map to some useful thing and some cognitive improvement, why not just learn more math and/or a second language? The time invested will at the very least provide you with a useful skill or a better understanding of your world.
Sanity_ 2 days ago 0 replies      
I know people who just don't make critical thinking a part of their daily routine, and they tend to be sloppy and thoughtless when confronted with any sort of mental exercise. If these "Brain training exercises" can get people thinking critically and dedicating time of the day to using parts of their brain they wouldn't have otherwise, then it seems like a good thing to me.
snarf21 2 days ago 1 reply      
The weird thing is that there is an initial lift but only because you are learning something new. You have to figure it out, it takes effort and creates new pathways, etc. Once you have learned it, you tend to be on auto pilot. The thing to do is keep learning new things: photography, baking, woodworking, etc. It is the challenge and discovery that are important.
innerspirit 3 days ago 2 replies      
Good. Now perhaps make a study about sensationalized article titles like this one. The conclusion in the title is inaccurate.

FTA: "Overall, Simons and his colleagues conclude that the evidence [...] is inadequate."


"it's possible future research will provide new evidence that is more favourable to brain training"

erdevs 2 days ago 0 replies      
This is a shoddy write-up which sensationalizes the more rigorous underlying scientific work.

The actual research here appears to find methodological shortcomings in many papers purporting a broader effect on intelligence or problem-solving ability from some popular brain training games. It does not, however, conclude that there is no effect.

It is unfortunate (though very common) that the article oversimplifies and sensationalizes. The headline "brain training exercises just make you better at brain training exercises" is far too definitive. I'm glad the "might" qualifier at least was added here on HN. Similarly, the article states definitively that "the same is not true" for the benefits of physical versus mental exercise games. Examples abound throughout the article, and it reads as sloppy, biased, and exaggerated. Not uncommon today, but always unfortunate.

It's a shame many brain-training companies make exaggerated claims themselves. So, in some ways, it's fine to see some counterfire in online press. Unfortunate if that's what it comes down to, though.

whatup 2 days ago 0 replies      
I wonder if part of the reason brain training appears to have little benefit is similar to the reason most people who visit the gym make little gains in athleticism: they don't push themselves hard enough. A lot of people feel like simply showing up is all that's required, and they never break a sweat.

This is supported by the fact that many of these brain training apps/games have addictive traits, similar to other video games, which might lead to their usage being rewarded regardless of effort.

I used one of them, and the goal (unduly rewarded) was simply to come back and, essentially, play each day. It didn't matter how hard I worked.

This could be tested by measuring brain activity during brain training (compared to some control activity) and seeing if the delta positively correlates with increased cognitive function.

danielweber 2 days ago 0 replies      
Nothing about n-back, which is the one brain-training task that had evidence behind it.
jackcosgrove 2 days ago 1 reply      
I worked at an academic neuroscience software company back when Lumosity first emerged on the scene. Lumosity was spurred on by what I believe were statistically significant results showing the n-back test improving working memory. There was a lot of hope that other games/tests might be designed to improve other cognitive functions. However, the n-back test is the only one I have ever heard of having a significant effect on cognition, and even then it's temporary. There was a lot of disappointment in these results. Nevertheless, Lumosity et al. continued to market minigames as health elixirs.
rixarn 2 days ago 0 replies      
I've been working on "brain training" stuff for almost 8 years and just recently (around a year ago) decided to make a startup out of it. I'm a psychologist and software engineer and my cofounder is a clinical psychologist and neuroscientist. We've done our own research (and have a published paper), read a lot of the stuff that's out there and have evaluated and kept track of the existing products for years.

The first thing that baffles me about most commercial products is the frequency and duration of each training session. Let's think for a minute. On average you have these short sessions that range from 5 to 15 minutes. Even in something as evident as physical activity, how fit can you get by doing 15 minutes of low-intensity jogging, on average, 3 times a week? How fair is it to say, based on that example, that jogging is "worthless"?

That said, most critiques of brain fitness are correct.

If I could summarize what we've learned so far:

- Transfer (i.e., gaining benefits outside of the activity you're training) is hard to achieve. It takes a lot of time, has to be an n-back or other heavy working-memory activity (the only kind I can think of with a chance of producing transfer), and it doesn't work all the time. In our experience, probably 20% of the population will never benefit from something like this. Also, transfer is limited to some executive functions, not all of them.

- Training duration and frequency matter. Think of a physical activity like jogging, but instead of doing 15 minutes of low-intensity jogging 3 times a week, do 35 minutes of high-speed jogging 4 times a week for two months. You will get fit.

- Training activity matters. Most n-back and working-memory tasks are boring and hard for players. It's difficult to stay engaged with a "game" like that, and the dropout rate is very high.

Most of the games bundled into brain training packages are based on activities that have not been shown to have any transfer effect.

Our mobile app will focus mostly on working memory and sustained attention games (fewer, but with more complex game dynamics than current ones). And I don't think we will advertise it as a "life changer" or a way to "make you smarter". We just want to build a suite of games for people to be challenged and have fun while doing so.

Vernetit 2 days ago 0 replies      
I have schizophrenia, and I created a program that combines mnemotechnics with 3D n-back and a facial expression memory system.

http://vernetit.blogspot.com.ar/2016/10/eo-and-peo-memory-sy...

With 15 minutes of training my mood changes, I understand the emotions in sitcom TV series and the intentions and emotions of the people around me, and I experience more willingness to do things. The program link is above.

Sorry for my poor English. I am from Argentina.

rebootthesystem 2 days ago 1 reply      
Yup. Just like being great at Chess just makes someone great at Chess, and not much more.

I taught my kids to play chess early on. My first-born was beating 12-year-olds when he was 6. They had to move him up the age groupings. I insisted they keep moving him up until he lost; knowing how to deal with losing is very important.

In all cases I pulled my kids from competitive chess after a few seasons and a good balance of winning and losing. Past a certain point, getting better at chess requires becoming a human database engine. That, for me, is where chess demonstrates the idea that getting better at some of these games teaches nothing broadly useful.

BTW, I apply this to the type of programming puzzles typically used in interviews. It's pure nonsense that says nothing about how creative someone can be at solving new problems. Anyone can become a human database with enough effort. True creative intelligence is quite a different matter.

surrey-fringe 2 days ago 0 replies      
I don't doubt it, but people shouldn't take this to mean that there aren't any possible exercises to strengthen mental ability. You've just got to listen to people who are actually successful, and not some corporate-funded research that might be null in three years. I don't see anyone at the World Memory Championships citing Lumosity.
briandw 2 days ago 1 reply      
Dual n-back training is the only thing that I've read about that shows transference to unrelated tasks.


There are some good apps out there:

Desktop -> http://brainworkshop.sourceforge.net

Android -> https://play.google.com/store/apps/details?id=com.tyrske.dua...

And I make a pitch for my app, IQ boost for iOS -> https://itunes.apple.com/us/app/iq-boost/id286574399?mt=8
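For readers unfamiliar with the task: in an n-back, you watch a stream of stimuli and respond whenever the current item matches the one n steps earlier. A minimal sketch of the task logic (my own illustration, not code from any of the apps above):

```python
def nback_targets(sequence, n):
    """Indices where the current item matches the item n steps back."""
    return [i for i in range(n, len(sequence)) if sequence[i] == sequence[i - n]]

def score(sequence, n, responses):
    """Score a set of response indices against the true targets.

    responses: indices where the player pressed the 'match' button.
    Returns (hits, misses, false_alarms).
    """
    targets = set(nback_targets(sequence, n))
    responses = set(responses)
    hits = len(targets & responses)
    misses = len(targets - responses)
    false_alarms = len(responses - targets)
    return hits, misses, false_alarms

# Example: a 2-back over letters.
seq = list("ABABCACA")
print(nback_targets(seq, 2))  # → [2, 3, 6, 7]
```

The "dual" variant runs two such streams at once (e.g. a spoken letter and a grid position), which is what makes the working-memory load so heavy.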

devy 2 days ago 5 replies      
Honest question: so if brain training exercises don't work, then what else can genuinely improve brain functions?
robg 2 days ago 0 replies      
Title change suggestion: bad brain training has limited scope. The question remains what good brain training would be. Paired with electrical or magnetic stimulation? It's almost all fertile ground here and, I believe, our great advances as humans this century will come from better ways to grow brains. Consider mobile apps that adapt to your cognitive strengths and weaknesses, to when you haven't slept well or are under stress or need coffee, and where learning becomes lifelong and personalized. By building our intelligence we also become better at developing artificial forms of it. To me it's the problem of our time; these early attempts just help us develop better approaches.
dghughes 2 days ago 0 replies      
It wasn't my intention to train my brain but I started using DuoLingo (it's free) language learning and I find my concentration has improved.

Maybe it's just because I do a different activity at a regular rate per day but I do feel brainy plus I am learning three languages.

hamhamed 3 days ago 2 replies      
Does this research also consider actual brain training games like Elevate? It teaches you techniques like how to get better at estimating, improving your WPM, your vocabulary, etc. Nothing to do with memorizing a pattern or serving coffee to customers as fast as possible.
dschiptsov 2 days ago 0 replies      
Why, any training, it seems, has the same nature. A highly specialized task such as juggling will make you better only at juggling (and perhaps improve the coordination of your hands), while training for, say, a triathlon will make one generally stronger, more coordinated, with better stamina and endurance.

Similarly, mindfulness (awareness) and concentration (focusing) meditation techniques will in general make one better at a variety of tasks, while specialized training, let's say math tricks, will only affect specialized areas of the brain.

The training of a musician requires a lot of listening to classical music, not just trying to play "mechanically". All this has been known for ages by the Greeks and the Indians.

taneq 2 days ago 0 replies      
A guy I used to play soccer with always said "train as you play!" Now, he was upset because on cold rainy Thursdays a lot of us were wearing tracksuit pants to training, and he felt we should be wearing shorts... and for some reason he felt that shin pads were exempt from the "train as you play" ethos. Which was funny, because he was a good lad but he was always getting injured. I think he played three matches the whole time I was at the club, and that was a few years, because the rest of the time he was sidelined by injury. But I digress.

I think "train as you play" is excellent advice. We don't learn the training. We learn the playing.

fsiefken 2 days ago 2 replies      
It seems that meta-analysis confirms the usefulness of things like dual-N-back training. Does anyone disagree?


For an n-back-like exercise: http://cognitivefun.net/test/22

thr0waway1239 2 days ago 0 replies      
A friend of mine believes that the reason she didn't do her best in academics is that the "subject was presented in a very boring way". I tried to persuade her that overcoming that feeling of boredom is already a good sign that you are learning the material well. Granted, it is really hard to read poorly written stuff, but academic material isn't exactly a heart-pounding thrill ride, and it is quite a task to make it interesting, particularly as the material gets more advanced. But you know you can rarely change people's views on such abstract things.

I once mentioned a plan to develop a little learning app (based on spaced repetition) for her kids and asked her if she felt it was something worth paying for. She pointed out this new app called Lumosity and asked me "Why don't you make something like this? It is very interesting, and kids will actually want to use it." I sort of gave up at that point because I didn't quite believe that it was all that effective. After a little while, the topic of this article started floating around the internet and last I heard, my friend had stopped using Lumosity.

On a more cheery note, I am surprised to find no one has mentioned the book "Make It Stick" by Peter Brown et al., which would probably be a hard pill to swallow for Lumosity fans. Someone should find a way to "appify" the principles in the book. It would be one seriously boring app, but very effective. :-)
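The spaced-repetition principle behind "Make It Stick" is straightforward to "appify". As one concrete scheduling rule, the classic SM-2 algorithm (from SuperMemo, not from the book — shown here as a hedged sketch) fits in a few lines:

```python
def sm2_review(quality, reps, interval, ease):
    """One SM-2 scheduling step.

    quality:  self-graded recall, 0 (blackout) .. 5 (perfect)
    reps:     consecutive successful reviews so far
    interval: current inter-review interval in days
    ease:     easiness factor (conventionally starts at 2.5)
    Returns the updated (reps, interval, ease).
    """
    if quality < 3:
        # Failed recall: restart the repetition schedule, ease unchanged.
        return 0, 1, ease
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    ease += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    return reps + 1, interval, max(ease, 1.3)

# A card recalled well three times: intervals stretch 1 -> 6 -> ~16 days.
state = (0, 0, 2.5)
for q in (5, 5, 4):
    state = sm2_review(q, *state)
    print(state)
```

The expanding intervals are exactly the "desirable difficulty" the book argues for: each review lands just as the memory is about to fade.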

Y201K 2 days ago 1 reply      
A friend showed me Lumosity and I thought "there's no way this can work": when in life do you have to distinguish the direction that moving colored arrows are pointing? I came to this conclusion so quickly because of the concept of specificity in strength training -- your exercises have to be picked so that they support your goals. I imagine that an approach like this in the brain-enhancing-games space would look a lot more like what we think of as homework.
svjsdjfnskj 2 days ago 0 replies      
If you really want to increase your intelligence, just get a good night's sleep, eat well, and go outside and walk.
trendia 3 days ago 1 reply      
Commenting on HN just makes you better at commenting on HN.
sebringj 2 days ago 0 replies      
I've always found that if I want to get good at some endeavor, partaking in that endeavor does the trick. I wonder how much time and brain power is wasted on preparing. Maybe that's why the really good ones leave school early.
SubiculumCode 2 days ago 1 reply      
Opportunity costs are damning. Even if brain training games were modestly effective, they carry an opportunity cost: the time could have been spent on cardiovascular exercise, which has demonstrated robust positive effects on cognition.
faragon 2 days ago 0 replies      
It should be similar for IQ tests, I guess. So if new generations have better "IQ", some part of it could come just from being more prepared/trained for those kinds of tests.
h4nkoslo 2 days ago 0 replies      
Interestingly, this is the primary explanation for the Flynn effect, and why culture-loaded questions do not show the same improvements that completely abstract, non-culture-loaded questions do.
EddieSpeaks 2 days ago 0 replies      
Just like playing chess makes you better at playing chess: no generic intelligence boost from specific activities with narrow problem spaces.
maheshs 2 days ago 0 replies      
I think this proves that if we practice anything, we will improve at that thing.
kruhft 2 days ago 2 replies      
Kinda like practicing math just makes you good at math?


perseusprime11 2 days ago 0 replies      
This has to become some kind of meme...reading books might just make you better at reading books :)
kefka_p 2 days ago 0 replies      
But then ... is life much more than a brain training exercise?
tetrep 2 days ago 0 replies      
incentives incentivize...

The entire "I'll find a mental analogue that's easier/more fun than real exercise" industry smacks of an industry dedicated to perpetual motion. Extreme scientific discoveries aside, it's not going to happen.

afinlayson 2 days ago 0 replies      
Anyone who has used these could tell you that. If they are mindful enough to notice.
projektir 2 days ago 0 replies      
Maybe those brain training exercises are just not very good?
davidgerard 2 days ago 1 reply      
Neither headline nor URL contains the word "might".
zump 2 days ago 0 replies      
Well, what about spaced repetition?
andrewguy9 2 days ago 0 replies      
Just like the SAT.
emodendroket 2 days ago 0 replies      
Wow, shocking.
mridulmalpani 2 days ago 0 replies      
I knew it :)
Farm grows vegetables in a desert using sun and seawater newscientist.com
536 points by rmason  3 days ago   184 comments top 31
dlss 3 days ago 13 replies      
From OP:

> Sunshine and seawater. That's all a new, futuristic-looking greenhouse needs to produce 17,000 tonnes of tomatoes per year in the South Australian desert

> The $200 million infrastructure makes the seawater greenhouse more expensive to set up than traditional greenhouses, but the cost will pay off long-term, says Saumweber. Conventional greenhouses are more expensive to run on an annual basis because of the cost of fossil fuels, he says

From the #1 google result for "cost of a ton of tomatoes" http://ucanr.edu/blogs/blogcore/postdetail.cfm?postnum=15889:

> Processors agreed to pay growers $83 per ton in 2014, up from $70 per ton last year.

So assuming 100% profit margins (ie the tomatoes grow themselves, no human labor needs to be paid, nothing needs repairing or replacing, tomatoes deliver themselves to processing plants, etc, etc), the 17,000 tonnes produced would yield ~$1.4M annually. That's an awful (0.7%) annual return on $200M. Much less than you could get by investing the $200M in an index fund.

Which is to say there's a 0% chance it will "pay off in the long-term".
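The back-of-the-envelope above, spelled out (using only the figures quoted in the thread; the 5% index-fund return is a round-number assumption of my own):

```python
capex = 200_000_000          # reported infrastructure cost, USD
tonnes_per_year = 17_000     # reported tomato output
price_per_tonne = 83         # 2014 processor price quoted above, USD/ton

revenue = tonnes_per_year * price_per_tonne
annual_return = revenue / capex
naive_payback_years = capex / revenue

print(f"revenue:        ${revenue:,}/yr")       # $1,411,000/yr
print(f"annual return:  {annual_return:.2%}")   # 0.71%
print(f"naive payback:  {naive_payback_years:.0f} years")

# For comparison, the same $200M at an assumed 5% index-fund return:
print(f"index fund:     ${capex * 0.05:,.0f}/yr")
```

One caveat: a later comment notes these tomatoes retail at roughly A$7/kg, far above the processing price used here, so the revenue figure is a lower bound; the shape of the argument is unchanged.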

achow 3 days ago 2 replies      
Seems like this idea has been a couple of decades in the making...

A seawater greenhouse is a greenhouse structure that enables the growth of crops in arid regions, using seawater and solar energy.

The technology was introduced by British inventor Charlie Paton in the early 1990s and is being developed by his UK company Seawater Greenhouse Ltd.

The technique involves pumping seawater (or allowing it to gravitate if below sea level) to an arid location and then subjecting it to two processes: first, it is used to humidify and cool the air, and second, it is evaporated by solar heating and distilled to produce fresh water. Finally, the remaining humidified air is expelled from the greenhouse and used to improve growing conditions for outdoor plants. https://en.wikipedia.org/wiki/Seawater_greenhouse

Making Namibia's desert green using seawater: https://www.newera.com.na/2016/07/08/making-namibias-desert-...

DavidWanjiru 3 days ago 4 replies      
Where are the nutrients for plants coming from?

They say plants are grown in coconut husks instead of soil. Does the coconut husk act merely as a plant holder, or is it the one that provides the nutrients that the plant would otherwise obtain from soil (with/without fertilizer)? That part is not clear to me.

Edit: "There is no need for pesticides as seawater cleans and sterilises the air..." What does this even mean? Do we normally use pesticides to clean and sterilize the air? Are the pests that we fight with pesticides eliminated when the enclosed air in a greenhouse is cleaned and sterilized in this manner?

femto 3 days ago 2 replies      
sun, seawater .... and fertiliser.

As made clearer in other articles [1], they are adding nutrients to the water.

In response to delbel: As mentioned in [1], the farm has a 10-year contract with Coles, one of Australia's big supermarkets, and that was what enabled the farm to be funded. Consequently, you can go to Coles and buy these tomatoes for about A$7 per kg. [2]

[1] http://www.abc.net.au/news/2016-10-01/sundrop-farms-opens-so...

[2] https://shop.coles.com.au/online/SearchDisplay?storeId=10601...

pilooch 3 days ago 1 reply      
For those interested in these matters, the books by Masanobu Fukuoka (1) are a must. They may challenge some of the HN crowd's views on progress, technology and religion, as they definitely challenged mine, but they carry significant scientific truths. His focus on fighting desertification was both genius and ahead of its time.

(1) https://en.m.wikipedia.org/wiki/Masanobu_Fukuoka

bjelkeman-again 3 days ago 5 replies      
There are several things which aren't covered well in the article. I have spent quite a lot of time looking at the feasibility of the Seawater Greenhouse in multiple locations, including a visit to this Australian location a few years ago. [1] I haven't been involved since the pilot installation in Port Augusta, so they may have changed a few things described below.

The Seawater Greenhouse design is not a conventional greenhouse. It cools rather than heats a crop, it is an open design, rather than a conventional "closed" design.

The system, as designed by Charlie Paton, uses evaporative cooling in the greenhouse. Essentially they have a cardboard wall, sort of like a thick honeycomb with holes through it, over which they pour seawater. The air is pulled through the wall by large, relatively slow fans. When the air moves through the "honeycomb" wall, it changes direction (30-degree-angle channels). Particles and insects in the air essentially get stuck in the seawater. Seawater is particularly bad for insects and other small pests, as the salt clogs up the exoskeleton and the breathing channels when the water evaporates.

Even though the greenhouse stood in an area of vegetation with a significant insect population outside, we hardly saw any insects on the inside. But you could often find quite a few insects in the seawater tanks used to hold the water for the evaporators. The stable climate created in the greenhouse and the seawater "barrier" created by the evaporators mean the pest insect pressure is low, and you can easily control it with natural enemies (biocontrol). (There were venomous spiders in the canopy of the crop, but they don't affect the crop and act as biocontrol. Just don't let them bite you.)

Plants grow much better in a cool, high-humidity environment. The plants don't have to put so much effort into transpiration to keep an acceptable temperature for photosynthesis. The evaporators with seawater handle both of those things.

The temperature during my visit peaked on Christmas Day at 43 C, but inside the greenhouse it was a much more acceptable 35 C. (We had our Xmas dinner just behind the evaporators, the most pleasant place in Port Augusta at that point.) The energy used for this cooling primarily comes out of the water and the surrounding air. Some energy is used for pumping water. Without the evaporative cooling the temperature in the greenhouse would have been at a level which would have killed the plants. During my visit the plants and crop grew so fast that we had to help harvesting to keep up.

The evaporators are also covered in sea salt, which is hygroscopic (absorbs water), which means that when the temperature drops at the end of the day you don't get water (dew) collecting on the plants. This is important, as dew on the plants and produce allows botrytis (mold) to grow and potentially destroy the produce. It also avoids having to burn sulphur in the greenhouse to kill the mold.

[1] pictures from my wife during the visit https://www.flickr.com/photos/ankertje/7044271777/in/album-7...
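The cooling described above (43°C outside, 35°C inside) is bounded by the wet-bulb temperature of the incoming air. A rough sketch using Stull's (2011) empirical wet-bulb approximation — my choice of formula, not something from the comment, and the ~20% RH and 45% pad effectiveness are guessed assumptions:

```python
from math import atan, sqrt

def wet_bulb_stull(t_c, rh_pct):
    """Stull (2011) empirical wet-bulb temperature from dry-bulb
    temperature (deg C) and relative humidity (%). Valid roughly for
    RH 5-99% and T from -20 to 50 deg C."""
    return (t_c * atan(0.151977 * sqrt(rh_pct + 8.313659))
            + atan(t_c + rh_pct)
            - atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * atan(0.023101 * rh_pct)
            - 4.686035)

def evap_cooler_out(t_c, rh_pct, effectiveness=0.45):
    """Direct evaporative cooler outlet temperature: the pad closes a
    fraction (its 'effectiveness') of the gap between the dry-bulb and
    wet-bulb temperatures."""
    tw = wet_bulb_stull(t_c, rh_pct)
    return t_c - effectiveness * (t_c - tw)

# Port Augusta on that Christmas Day: ~43 C air, assuming ~20% RH desert air.
print(f"wet bulb: {wet_bulb_stull(43, 20):.1f} C")
print(f"pad exit: {evap_cooler_out(43, 20):.1f} C")
```

Under those assumed numbers the pad exit lands right around the mid-30s reported in the comment; the point is simply that dry desert air leaves a lot of wet-bulb headroom for this kind of cooling.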

rmason 3 days ago 2 replies      
When I saw the headline I knew instantly how they did it. But the one thing unexplained in the story was their claim that the seawater piped in eliminated the need for pesticides.

I would bet it was the desert climate that reduced the need for pesticides. I am willing to predict that they will eventually need to use pesticides.

molteanu 3 days ago 0 replies      
In case you missed the article from yesterday, which was also posted here, named "The Dizzying Grandeur of 21st-Century Agriculture":


ph0rque 3 days ago 1 reply      
So, this is a hydroponic greenhouse. It would be neat if they went one step further and made it an aquaponic greenhouse. The use of salt water means they could consider growing saltwater fish (or shrimp, or fish for caviar). The only caveat is whether it would be easy to separate the salt from the water while keeping the other nutrients (nitrogen, etc) in the water for the plants.
foxhop 3 days ago 1 reply      
If this catches your interests, please research greening the deserts using permaculture.

Here is one example but there are many examples and projects:

https://www.youtube.com/watch?v=D4Nb-rqGfWI&spfreload=10 | Reversing Desertification With Sticks, Rocks, and Ancient Wisdom

Permaculture is more practical in the long run, doesn't require much tech.

mikehines 3 days ago 2 replies      
Perhaps we could colonize the deserts on earth before Mars.
didsomeonesay 3 days ago 0 replies      
> no soil, pesticides, fossil fuels or groundwater

What about fertilisers? I would assume they still use conventional ones made from fossil fuels. Anybody got more information? Did I miss something in the article?

jrpt 3 days ago 1 reply      
So it's basically a desalination process in a greenhouse powered by solar? I'm not sure why this is better than leveraging existing infrastructure to get freshwater and electricity. Also, farming somewhere other than a desert, then transporting the food to the population (if they live in a desert).
tuananh 3 days ago 0 replies      
pcmaffey 3 days ago 1 reply      
For a headline that explicitly omits the third essential component of growing (after light and water: nutrients), the article mentions nothing about how these plants are getting nutrients.

I assume they're supplementing liquid nutrients, as they're growing hydroponically in coco husks? If so, that makes the headline seem ridiculous...

The coolest thing in the story for me is using salty air to repel pests. Though I wonder about the longevity of that solution. I've never seen a bullet-proof solution to pest control, and especially in self-contained environments, it's usually just a matter of time. Nevertheless, makes me wonder about a saltwater humidifier....

Nanzikambe 3 days ago 0 replies      
The headline claim is demonstrably poorly researched and as others point out the project seems to be incredibly poor value.

There's a lower tech approach that's been extensively proven and in production since 1998, it produces not just veg, but firewood/biomass, shrimps, fish, fresh water and more:


ars 3 days ago 1 reply      
This would be awesome for Israel!

And if they could only fix their political problems (to put it mildly) Gaza as well. It could make a real difference in their future.

the_watcher 2 days ago 0 replies      
This is fascinating. I grew up about 30 miles from the Salton Sea, which is in the middle of the desert of Southern California. We get tons of sun. I'd be very interested if these researchers are communicating with those investigating how to best clean up the Salton Sea. While those efforts are still in their infancy, there could be some overlap in application of the research.
EGreg 3 days ago 2 replies      
Wow, simply wow.

Humanity is turning the world into farms, destroying natural forests but then engineering new types of habitats.

I wonder what the Earth would look like in 200 years.

What I am worried about is the rapid growth of the human population and its energy use. You can't have exponential growth forever. Most of the issues come from that. Farms and monocultures sound nice, but are they sustainable? Are the resources being recycled somewhere, or are we going to be plugging holes with bacteria digesting plastic and producing plant food?

baq 3 days ago 0 replies      
> "It's a bit like crushing a garlic clove with a sledgehammer," he says. "We don't have problems growing tomatoes in Australia."

that's an epic way to miss the point completely.

apendleton 2 days ago 0 replies      
I wonder what they're doing with their desalination outflow? The brine produced during desalination can be nasty, toxic stuff: https://en.wikipedia.org/wiki/Desalination#Environmental
donarb 3 days ago 1 reply      
Or instead of wasting energy trying to remove the salt, you could just use diluted seawater.


dx034 3 days ago 1 reply      
Why do they use concentrated solar power? Isn't that much more expensive and risky at current prices?
oh_sigh 3 days ago 0 replies      
...and most importantly, fertilizer

(this is covered in the article, but the headline forgets that part)

bnolsen 2 days ago 0 replies      
They've built themselves a prime bird-frying machine. Insects like heat, the birds follow them, and voila, instant cooked meat! Extra bonus for this type of solar.
lolive 3 days ago 0 replies      
Solar mirrors need to be extremely clean to be efficient, and that requires a lot of soap and clean water. Could that be a deal-breaker in this kind of system?
kbart 3 days ago 1 reply      
"The $200 million infrastructure makes the seawater greenhouse more expensive to set up than traditional greenhouses, but the cost will pay off long-term"

It's a really nice experiment, but I don't see how that's going to pay off. You need to sell a hell of a lot of tomatoes to earn $200 million back.

wodencafe 3 days ago 0 replies      
>First farm to grow vegetables in a desert using only sun and seawater

And Coconut Husks.

nikolay 2 days ago 0 replies      
Just because something looks like a tomato doesn't make it one! In order to have nutritious produce, you need a lot more than just macro ingredients!
maplechori 3 days ago 0 replies      
This should definitely be tried in the UAE!
delbel 3 days ago 4 replies      
To pay for the infrastructure, they will probably need to sell the tomatoes for $100 a pound for twenty years.
There's no easy way to say this reddit.com
566 points by bexcite  5 days ago   223 comments top 30
sulam 5 days ago 3 replies      
At the risk of repeating a comment I made further down in a thread, the average QA Engineer salary in Mexico is literally 10X more than one of these devs was making. Anyone trying to justify this in terms of the Mexican minimum wage is trying to sell you ocean front property in Arizona.
jewbacca 5 days ago 0 replies      
Interesting. I didn't realize, before trying to figure out what this means for KSP just now, that Squad is not first and foremost a game developer: they're basically a marketing company, for which KSP was originally a 20%-style project.

I hope this development is not so much a product of internal company bullshit, but now I'm very worried it is. KSP is one of the most genuinely important games out there right now. I would be surprised if much less than 100% of the next generation of spacetravel-involved scientists and engineers counted KSP as part of their journey.

dtparr 5 days ago 3 replies      
Can someone give me a little background here? What exactly is 'Squad'? Is this just some of the devs leaving? A particular subgroup? They're true KSP devs and not mod devs?
Paul_S 5 days ago 3 replies      
Ah, it's a pretty awesome game - the right amount of complexity and fun. If it were open source it could continue forever, like OpenTTD. Then maybe career mode could be completed.

Honestly, this is one of the very few games I've enjoyed in the last 10 years.

technion 5 days ago 0 replies      
Some incredible open source projects came out of KSP.

This one: https://alexmoon.github.io/ksp/

Is a great example. This is the one game I've spent hours on as an adult. I wish the team the best.

t_fatus 5 days ago 0 replies      
Seems like everybody forgot to say this:

THANK YOU GUYS ! you've made an incredible game with what you had, and inspired a lot of people to write amazing mods on top of it.

orbitingpluto 5 days ago 0 replies      
For those who have purchased it on Steam, I'd archive your current version. I have a feeling the quality is going to go downhill from here.

Wow, just wow. The company must be run by absolute sociopaths.

erikb 5 days ago 2 replies      
I spent less than 5 hours playing this game. But I think it is very, very important for gaming, for space travel simulation, flight simulation, and space exploration. Would buy it again and feel sad that development stops.
qwertyuiop924 5 days ago 4 replies      
First the lead, now half the team. What is it with people jumping ship on KSP?
whybroke 5 days ago 1 reply      
So the main team that made KSP is now all available on the job market?

Whoever does not immediately hire that team en masse has missed a huge opportunity.

Animats 5 days ago 4 replies      
The devs should apply to the PR side of NASA or SpaceX.
mrfusion 5 days ago 1 reply      
This title really provides no information about what this is about.
fma 5 days ago 0 replies      
At those wages, they can get work off Elance, oDesk, etc. and make a heck of a lot more money. I guess that's the gaming industry for you. Treat people poorly because they are passionate about what they do and can't find that kind of work elsewhere.
jcrites 5 days ago 1 reply      
Kerbal Space Program is a really fantastic game that rekindled my love of flight (and space) simulation.

For those who don't know about it, it's a spaceflight simulation game. You design a spacecraft from parts, assembling rocket engines, fuel tanks, thrusters, command modules, etc. into a design, and then test it and try to get it into orbit; or from there, to other planetary bodies. Multiple spacecraft if you want: you can dock and coordinate them, or build space stations or moon bases.

It's got an incredible amount of detail, modeling a whole solar system with various planets and moons and asteroids. Remember how the staff working on "No Man's Sky" made claims about how "all other video games are fake, they have a skybox, the planets and sun in our sky are real and you can fly to them" (claims which turned out to be largely false)? Well, Kerbal Space Program actually delivers on that experience. You can rocket into space, dock in orbit with something you've put up there previously, gravity-slingshot yourself to another planet, parachute a lander down to the surface and roam around, etc.

The game has realistic space flight physics and orbital mechanics (though tuned to be very generous to players compared to real life). You can learn a lot about the basic mechanics of spaceflight just by playing it; you begin to intuitively understand delta-v, apoapsis and when to apply thrust, etc. If you want to dock with something then you need to plan an appropriate launch window. Maneuvering in orbit is very interesting and initially counter-intuitive (if a spacecraft is "ahead" of you in orbit, in which direction should you boost to "catch up" to it? If you boost directly toward it, you'll increase your orbital speed and thus change the shape of your orbit, taking you away from it in a different dimension!). Getting to other astral bodies is tricky and requires more planning. KSP manages to make all of this challenging but fun.
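That rendezvous paradox is easy to check with a toy two-body calculation (plain Keplerian physics, not KSP's actual engine; the constants and numbers below are illustrative, using Earth rather than Kerbin): boosting prograde raises your apoapsis and lengthens your orbital period, so you actually drift further behind the craft ahead of you.

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter (m^3/s^2), stand-in for Kerbin

def circular_speed(r: float) -> float:
    """Speed of a circular orbit of radius r."""
    return math.sqrt(MU / r)

def period(a: float) -> float:
    """Orbital period for semi-major axis a (Kepler's third law)."""
    return 2 * math.pi * math.sqrt(a ** 3 / MU)

def semi_major_axis(r: float, v: float) -> float:
    """Semi-major axis from the vis-viva equation: v^2 = MU * (2/r - 1/a)."""
    return 1.0 / (2.0 / r - v ** 2 / MU)

r = 6_771_000.0            # ~400 km circular orbit
v0 = circular_speed(r)
v1 = v0 + 100.0            # burn 100 m/s prograde, i.e. "toward" the craft ahead

a1 = semi_major_axis(r, v1)
# The burn raised apoapsis, so the new orbit is bigger and takes LONGER per lap:
print(a1 > r, period(a1) > period(r))  # True True — you fall further behind
```

To catch up, you burn retrograde instead: a lower, faster orbit brings you around to the target.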

If you'd like to learn more about it, or are even just curious what the fuss is about (the game itself, not the drama), I'd direct you to videos by Scott Manley [1]. Here's a video of a fairly sophisticated mission starting with liftoff from the launch pad, made by another YouTuber [2] (skip to 13:00 to see him planning orbital maneuvers like circularizing his orbit). Manley's "Interstellar Quest" mission has even more complex orbital planning (5:00) [3].

The depth of KSP is astonishing and there's not much else out there like it. It's in the same ballpark as Minecraft in terms of the amazing creative sandbox it provides, with a world that has a ton of depth to explore. There's a wonderful atmospheric feel with the music and artwork that happens when you successfully lift off into space, going from the thrill of launch to the serenity of orbit. It's a beautiful feeling and one that isn't easily captured by recordings.

So it's sad to hear that the company and/or developers who made the game aren't carrying on. The game may not be a commercial success on the scale of Minecraft, but the artistic and conceptual achievements are on par or greater.

[1] https://www.youtube.com/user/szyzyg [2] https://youtu.be/RzbDyx4Tpdc?t=10m7s [3] https://www.youtube.com/watch?v=FSzj_uk1fRQ

JoeDaDude 5 days ago 0 replies      
Will this have any impact on the educational version of KSP [1]?

[1] http://kerbaledu.com/
Waterluvian 5 days ago 0 replies      
Being able to end development is really important. You don't want it to just putter out.
phil248 5 days ago 1 reply      
As a casual fan of KSP, I'm not up to date on all the haps. Can someone tell me what to expect in the future? Will there be more development work done?
mindcrash 4 days ago 0 replies      
Some more information here: https://www.reddit.com/r/KerbalSpaceProgram/comments/55zzmg/...

Most interesting points: "Another point: Squad has been actively censoring the official forums. Any content related to the resignation of the 8 devs was immediately removed. This was done by Squad staff, not the regular forum mods. With this in mind, it's also pretty obvious that the latest Devnote is full of shit. They don't want anyone to think that something is wrong."

And: "Currently, there are 2-3 developers left. Two of them were not held highly by their fellow devs, and the third one is RoverDude, who only work part-time."

Fantastic. I love(d) KSP :(

meric 5 days ago 0 replies      
I would suggest they start a kickstarter for donations for the time they've already spent working on KSP at a 97% below market wage.
iplaw 5 days ago 0 replies      
Wow, $2400 a YEAR? No wonder they just lost their entire dev team. That'd be a week's pay in the US for an experienced developer.
JoeDaDude 5 days ago 0 replies      
Sad and unfortunate, but I expect KSP has left an influential mark on space related games for years to come. For example, see the orbital mechanics segment of Children of a Dead Earth [1].

[1] https://www.youtube.com/watch?v=tiIh4Xw2bnQ
hoodoof 5 days ago 0 replies      
I would have thought some wealthy game company would want to buy the company behind a game this popular.
jacquesm 5 days ago 0 replies      
Is there a way to donate directly to these guys? They've done some pretty good work and it would be a nice gesture to say thanks by directly improving their conditions and to bypass their employer.
vintermann 4 days ago 0 replies      
That's OK, but surely there's an easy way to write a more informative headline?
abricot 5 days ago 0 replies      
This is how spiritual successors are made.
EJTH 5 days ago 0 replies      
I am actually surprised, I always held high thoughts of Squad.
Multiplayer 5 days ago 1 reply      
quick scan - not sure what is happening here. My 10 year old LOVES this game. He watches YouTubers play this for HOURS AND HOURS. This confuses me.

I think it's pretty cool.

There. :)

SilasX 5 days ago 1 reply      
Could you make the title more informative? And can someone give the background on Kerbal and why this is significant?
brokentone 5 days ago 2 replies      
Can we change the headline to something less clickbaity? "Kerbal Space Program: There's no easy way to say this" etc...
Ask HN: What's your favorite tech talk?
819 points by mngutterman  5 days ago   252 comments top 154
Malic 5 days ago 7 replies      
*grin* Here we go...

For "laughing at ourselves" and oddities of computer languages, there is "Wat" by Gary Bernhardt: https://www.destroyallsoftware.com/talks/wat

For an opinion on the Sun to Oracle transition, there is "Fork Yeah! The Rise and Development of illumos" by Bryan M. Cantrill, Joyent. His Larry Ellison rant makes me smile: https://youtu.be/-zRN7XLCRhc?t=33m00s

peterkelly 5 days ago 3 replies      
"The Last Lecture", by Randy Pausch. While it's by a well-known CS professor (who was dying of cancer at the time), it's not a technical talk, but about life and work, and how to make the most of it. One of the most inspiring things I've ever seen.


Another fantastic one is Steve Jobs' 2005 commencement address at Stanford:


azeirah 5 days ago 4 replies      
By far my favorite talk is and has been for a very long time Bret Victor's inventing on principle, for me, nothing comes close, except for some of his other work I suppose.


pacomerh 5 days ago 2 replies      
qwertyuiop924 5 days ago 1 reply      
Linus Torvalds on Git. It's funny, and it really does tell you a lot about why Git is the way it is.

Bryan Cantrill's 2011(?) Lightning talk on ta(1). It's fascinating, but it also shows you just how long-lived software can be.

Randall Munroe's Talk on the JoCo cruise. Because it's effing hilarious, and teaches everybody the important art of building a ball pit inside your house.

Finally, an honorable mention to three papers that don't qualify, but which I think you should read anyway.

Reflections on Trusting Trust: This is required reading for... Everybody. It describes a particularly insidious hack, and discusses its ramifications for security.

In the Beginning Was The Command Line: If you want to get into interface design, programming, or ever work with computers, this is required. It's a snapshot of the 90's, a discussion of operating systems, corporations, and society as we know it. But more importantly, it's a crash course in abstractions. Before you can contribute to the infinite stack of turtles we programmers work with, you should probably understand why it's there, and what it is.

Finally, The Lambda Papers. If you've ever wondered how abstractions work, and how they're modeled... This won't really tell you, not totally, but they'll give you something cool to think about, and give you the start of an answer.

arjunnarayan 5 days ago 3 replies      
> what's that one talk that changed the way you think and you feel everyone needs to see?

Growing a Language by Guy Steele.


madmax108 5 days ago 2 replies      
I see a couple of Bret Victor videos here, but the one I loved the most was "The Future of Programming": https://vimeo.com/71278954

Really set me on a path of re-examining older ideas (and research papers), for applications that are much more contemporary. Absolute stunner of a talk (and the whole 70's gag was really great).

"What would be really sad is if in 40 years we were still writing code in procedures in text files" :(

sebg 5 days ago 0 replies      
Some previous posts:

"Ask HN: What are your favorite videos relevant to entrepreneurs or startups?" -> https://news.ycombinator.com/item?id=7656003

"Ask HN: Favorite talks [video] on software development?" -> https://news.ycombinator.com/item?id=8105732

pdkl95 5 days ago 1 reply      
Y Not - Adventures in Functional Programming by Jim Weirich https://www.youtube.com/watch?v=FITJMJjASUs

The Coming Civil War over General Purpose Computing by Cory Doctorow http://boingboing.net/2012/08/23/civilwar.html

Cybersecurity as Realpolitik by Dan Geer https://www.youtube.com/watch?v=nT-TGvYOBpI (transcript: http://geer.tinho.net/geer.blackhat.6viii14.txt)

rdtsc 5 days ago 2 replies      
Pretty much anything by David Beazley or Bryan Cantrill

Discovering Python (David Beazley)


David finds himself in a dark vault, stuck for months sifting through a deliberately obfuscated pile of old code and manuals. All seems lost, but then he finds Python on a vanilla Windows box.

Fork Yeah! The Rise and Development of Illumos (Bryan Cantrill)


History of Illumos, SunOS, Solaris, the horribleness of Oracle

These are not technical, but they are entertaining.

KhalilK 5 days ago 0 replies      
Bret Victor - Inventing on Principle https://vimeo.com/36579366

We can argue on some of the points he makes but we can all agree that the demos are very impressive.

corysama 5 days ago 1 reply      
Alan Kay is my favorite tech curmudgeon.

1) Alan Kay: Is it really "Complex"? Or did we just make it "Complicated"? https://www.youtube.com/watch?v=ubaX1Smg6pY

Take note that he is not giving the talk using Windows & PowerPoint, or even Linux & OpenOffice. 100% of the software on his laptop are original products of his group. Including the productivity suite, the OS, the compilers and the languages being compiled.

2) Bret Victor: The Future of Programming https://www.youtube.com/watch?v=IGMiCo2Ntsc

jjp 5 days ago 0 replies      
Hans Rosling's original Ted talk, which has so much passion about data visualisation and making information accessible - http://www.ted.com/talks/hans_rosling_shows_the_best_stats_y...
cgag 5 days ago 1 reply      
Simple made easy is my favorite but I'd also just generally recommend everything by Rich Hickey, Gary Bernhardt, and Jonathan Blow.
kethinov 5 days ago 2 replies      
My current favorite is Jake Archibald's offline-first progressive web apps talk at Google I/O 2016: https://www.youtube.com/watch?v=cmGr0RszHc8

It's a terrific window into the future of web application development.

sssilver 5 days ago 2 replies      
Raymond Hettinger's talk about good code reviews -- https://www.youtube.com/watch?v=wf-BqAjZb8M

Carmack's talk about functional programming and Haskell -- https://www.youtube.com/watch?v=1PhArSujR_A

Jack Diederich's "Stop Writing Classes" -- https://www.youtube.com/watch?v=o9pEzgHorH0

All with a good sense of humor.

bajsejohannes 5 days ago 0 replies      
Jon Blow's "How to program independent games": https://www.youtube.com/watch?v=JjDsP5n2kSM

It's about much more than games. To me, it's about identifying and not doing unnecessary work.

The second half of this video is a Q&A session, which I would skip.

myth_buster 5 days ago 0 replies      
Richard Hamming's "You and Your Research".


okket 5 days ago 1 reply      
Linus Torvalds talk about git


dcre 5 days ago 0 replies      
I already see a bunch of people posting and upvoting Bret Victor's "Inventing on Principle", but I think his "Media for Thinking the Unthinkable" is better.


kornish 5 days ago 1 reply      
Right now it's Boundaries, by Gary Bernhardt. He details the importance of separating out pure business logic from the plumbing code that brings it input and directs its output ("functional core, imperative shell").


grose 5 days ago 2 replies      
Lexical Scanning in Go by Rob Pike


I love everything about this talk. It walks you through building a lexer from scratch in a simple and elegant way, through a very interesting use of coroutines. I appreciate the bits of humor in the talk as well.
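For anyone who hasn't watched it yet, the central trick of the talk is that each lexing state is a function that returns the next state function, so the machine's control flow *is* the call graph. A toy Python rendition of that Go pattern (not code from the talk; the token names and grammar here are made up for illustration):

```python
from typing import List, NamedTuple

class Token(NamedTuple):
    kind: str
    value: str

class Lexer:
    """Holds the input, the scan position, and the tokens emitted so far."""
    def __init__(self, source: str):
        self.source = source
        self.pos = 0
        self.tokens: List[Token] = []

# Each state function consumes some input, may emit tokens,
# and returns the next state function (or None when done).

def lex_number(lx: Lexer):
    start = lx.pos
    while lx.pos < len(lx.source) and lx.source[lx.pos].isdigit():
        lx.pos += 1
    lx.tokens.append(Token("NUMBER", lx.source[start:lx.pos]))
    return lex_start

def lex_start(lx: Lexer):
    if lx.pos >= len(lx.source):
        return None                  # end of input: stop the machine
    ch = lx.source[lx.pos]
    if ch.isdigit():
        return lex_number            # hand off to the number state
    if ch in "+-*/":
        lx.tokens.append(Token("OP", ch))
    lx.pos += 1                      # operators and whitespace both advance
    return lex_start

def run(source: str) -> List[Token]:
    lx = Lexer(source)
    state = lex_start
    while state is not None:         # drive the state machine to completion
        state = state(lx)
    return lx.tokens

print(run("12 + 34"))  # NUMBER '12', OP '+', NUMBER '34'
```

The talk runs the state machine in a goroutine feeding a channel so the parser can consume tokens concurrently; this sketch just collects them in a list.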

ChicagoBoy11 5 days ago 0 replies      
Peter Norvig on the "Unreasonable Effectiveness of Data": https://www.youtube.com/watch?v=yvDCzhbjYWs

I think it is so easy for us to discuss the impact of big data and quickly get into the weeds, but I think in this talk Norvig does an especially great job of making you truly appreciate the seismic impact that the availability of massive quantities of data can have on the way you think about problems. This is one of the first things I ever saw of him, and I've been in love ever since.

dudul 5 days ago 4 replies      
Big fan of Rich Hickey. I found most of his talks really great, and applicable beyond the Clojure universe. My favorites: "Are we there yet?" and "Simple made Easy".
dragonbonheur 5 days ago 1 reply      
The mother of all demos by Douglas Engelbart https://www.youtube.com/watch?v=yJDv-zdhzMY

How I met your girlfriend: https://www.youtube.com/watch?v=O5xRRF5GfQs&t=66s

SonOfLilit 5 days ago 0 replies      
"The Birth and Death of JavaScript" by Gary Bernhardt (probably the most talented speaker on tech) at https://www.destroyallsoftware.com/talks/the-birth-and-death...

I'd mention Bret Victor's work before (maybe Drawing Dynamic Visualizations?), but Bret cheats by writing a lot of amazing code for each of his talks, and most of the awesome comes from the code, not his (great nonetheless) ability as a speaker.

Then you have John Carmack's QuakeCon keynotes, which are just hours and hours of him talking about things that interest him in random order, and it still beats most well prepared talks because of how good he is at what he does. HN will probably like best the one where he talks about his experiments in VR, a bit before he joined Oculus (stuff like when he tried shining a laser into his eyes to project an image, against the recommendations of... well, everyone): https://www.youtube.com/watch?v=wt-iVFxgFWk

unimpressive 5 days ago 0 replies      
These aren't necessarily my absolute favorite talks, but they're great mind-altering talks a little off the beaten path so I'd like to highlight them:

"Writing A Thumb Drive From Scratch" by Travis Goodspeed - https://www.youtube.com/watch?v=D8Im0_KUEf8&nohtml5=False

Excellent talk on the hardware side of security, goes into some really cool theoretical hard disk defense stuff, incredibly insightful and introduces a hardware security tech toy so fun you'll want to go out and order it the moment you're done watching. The speaker is entertaining as all heck to boot.

"Programming and Scaling" by Alan Kay - https://www.youtube.com/watch?v=YyIQKBzIuBY&nohtml5=False

Interesting talk on the theoretical limits of code size and engineering versus tinkering. Also talks a lot about Alan Kay's philosophy of computer science which analogizes systems to biological systems, which are the systems with the largest proven scaling on the planet.

"The Mother Of All Demos" by Douglas Engelbart - https://archive.org/details/XD300-23_68HighlightsAResearchCn...

This talk is so prescient you won't believe your eyes. Given in 1968, Douglas demonstrates just about every major computing concept in use today on a modern machine, along with some ones that are still experimental or unevenly distributed such as smooth remote desktop and collaborative editing.

mwcampbell 5 days ago 1 reply      
A few of Bryan Cantrill's talks have already been mentioned here, but this one about DTrace, from 2007, is a gem:


I especially like the part in the middle where he tells the story of how an awful GNOME applet was killing a Sun Ray server, and how he tracked down the culprit with DTrace.

archagon 5 days ago 0 replies      
I don't really have a favorite, but recently I really enjoyed "8 Bit & '8 Bitish' Graphics-Outside the Box"[1]. The name didn't catch my eye, but then I learned that it was a lecture by the very same Mark Ferrari who made these [2] unbelievably beautiful color-cycling pixel art animations. A master of his art, definitely worth listening to!

[1]: http://www.gdcvault.com/play/1023586/8-Bit-8-Bitish-Graphics

[2]: http://www.effectgames.com/demos/canvascycle/

shahar2k 5 days ago 0 replies      
https://www.youtube.com/watch?v=sI1C9DyIi_8 "the greatest shortcoming of the human race is our inability to understand the exponential function"

not a high tech talk, or particularly technically complex, but it shows a common blindspot in a way that is both clear, enlightening and frightening.
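The quoted point is about compound growth: a constant percentage growth rate doubles the quantity on a fixed schedule, which intuition badly underestimates. A quick sketch (the "rule of 70" shorthand is a standard approximation, used here for illustration):

```python
import math

def doubling_time(percent_per_year: float) -> float:
    """Years for a quantity to double at a fixed percentage growth rate."""
    return math.log(2) / math.log(1 + percent_per_year / 100)

# The rule-of-70 shorthand: 70 / growth-rate approximates the doubling time.
for rate in (1, 7, 10):
    print(rate, round(doubling_time(rate), 2), 70 / rate)
# e.g. 7% annual growth doubles in ~10.24 years; the shorthand 70/7 = 10 is close.
```

So "modest" steady growth of 7% per year means eight doublings, a 256x increase, in under a century; that blind spot is the whole point of the lecture.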

nommm-nommm 5 days ago 0 replies      
Elevator hacking (seriously) https://youtu.be/oHf1vD5_b5I
mrob 5 days ago 0 replies      
CppCon 2014: Mike Acton "Data-Oriented Design and C++"


Detailed discussion of how to get the most out of your memory cache and memory bandwidth, focusing on games development. It's full of examples of how understanding both the problem and the hardware, and working in a straightforward way, can give you huge performance gains over using poorly suited abstractions. It shows how low level thinking is still important even with modern compilers. I recommend people interested in performance optimization watch it.

anondon 5 days ago 0 replies      

This was the first time I watched pg give a talk. It was the talk that brought about the biggest change in the way I think about the world, my ambitions. The talk was the beginning, reading more about pg, I came across his essays and then HN.

cconroy 5 days ago 0 replies      
Doing with Images Makes Symbols, Alan Kay: https://www.youtube.com/watch?v=p2LZLYcu_JY

The title says it all. It's really a summary of several software systems with good ideas abound. I believe all the software is 80s or prior.

Edit: I also forgot to mention some psychology and math.

lukewrites 5 days ago 0 replies      
Mine is "The Internet With A Human Face", by Maciej Cegowski: http://idlewords.com/talks/internet_with_a_human_face.htm

It's what I direct non-technical people to when they ask what the big deal about internet privacy is.

runT1ME 5 days ago 0 replies      
Propositions as Types: https://www.youtube.com/watch?v=IOiZatlZtGU

I think this can really really change how we look at everyday programming tasks everywhere from the type of tooling we choose to how we approach problems.

joeclark77 5 days ago 0 replies      
First, the "Mother of all Demos" by Doug Engelbart: https://youtu.be/yJDv-zdhzMY
This was in 1968, at a time when most people thought about computers as being machines for solving computation problems, like processing payrolls or calculating rocket trajectories. Engelbart and his students had the radical idea that computers could be used for human "knowledge worker" productivity. In one 90 minute presentation, he introduces everything from the idea of a GUI, to the mouse, to word processing, hypertext, computer graphics, and (simulated) videoconferencing. You have to be able to put yourself in the shoes of the audience that has never seen this stuff before, and it'll blow you away.

Something more recent: Martin Fowler's great introduction to NoSQL: https://youtu.be/qI_g07C_Q5I
Not so technical, this is a great overview of the reasons why (and when) NoSQL is valuable. He crams a lot into a short speech, so it's one of the rare videos I've required students in my database classes to watch.

Now, really getting away from the technical, I have to recommend watching the IDEO shopping cart video: https://youtu.be/taJOV-YCieI
This is the classic introduction of Design Thinking to the world, in 1999. If you're using the Lean Startup or an Agile method, but have never heard of IDEO's shopping cart, you may be able to get along fine at work, but you should be kind of embarrassed like a physicist who's never read Newton.

monksy 5 days ago 0 replies      
Agile Is Dead: By Dave Thomas https://www.youtube.com/watch?v=a-BOSpxYJ9M

I love his talks for a few reasons:

Often times...

  1. He's anti-hype
  2. He's controversial
  3. He's right.

pradeepchhetri 5 days ago 3 replies      
One of my favourite talks is by James Mickens at Monitorama 2015: https://vimeo.com/95066828
VLM 5 days ago 1 reply      
Aside from the typical, I watched Damian Conway "Standing on the shoulders of giants" from YAPC 2016 last week and found it interesting. Always fun to see a modern feature full language collide with history and algorithms.


intelekshual 5 days ago 0 replies      
Keyframe 5 days ago 0 replies      
Too many great talks to mention, but if I had to pick one it would be Ted Nelson's few minutes of demonstration of Xanadu. Demonstration is lacking, but what he said about the concept/idea is what stuck with me. Deep and referential(?) content. https://www.youtube.com/watch?v=En_2T7KH6RA
taeric 5 days ago 0 replies      
https://www.infoq.com/presentations/We-Really-Dont-Know-How-... is by far my favorite technical talk right now.

Sussman goes over some interesting ideas on the provenance of calculations and asserts that "exact" computation is possibly not worth the cost.

indexerror 5 days ago 0 replies      
My favourite talk is:

"What the heck is the event loop anyway?" by Philip Roberts


petr_tik 5 days ago 1 reply      
1. Martin Thompson busting myths about hardware and explaining why it's important to know. Mechanical sympathy makes you better, because you know how the code actually runs on the machine and interacts with different layers of memory.


2. Matt Godbolt (the man behind GCC explorer) - Emulating a 6502 system in Javascript

Great talk about BBC micro and much more


3. Matt Adereth - Clojure/typing

History of keyboards and a custom keyboard written in Clojure


I like all three for their content and how each speaker presented the background and their project/hack/ideas.

Highly recommend

philbo 5 days ago 0 replies      
Joshua Bloch: How to design a good API and why it matters


0xmohit 5 days ago 2 replies      

  How To Design A Good API and Why it Matters [0]
  The Principles of Clean Architecture [1]
  The State of the Art in Microservices by Adrian Cockcroft [2]
  "The Mess We're In" by Joe Armstrong [3]
[0] https://www.youtube.com/watch?v=aAb7hSCtvGw

[1] https://www.youtube.com/watch?v=o_TH-Y78tt4

[2] https://www.youtube.com/watch?v=pwpxq9-uw_0

[3] https://www.youtube.com/watch?v=lKXe3HUG2l4

danblick 5 days ago 0 replies      
I think Alan Kay's "Doing with Images makes Symbols" talk from 1987 might make my list:


It's mostly about the history of HCI up to that point.

agentultra 5 days ago 0 replies      
We Really Don't Know How To Compute! [0] is probably my top... next to the christmas tree lectures.

[0] https://www.infoq.com/presentations/We-Really-Dont-Know-How-...

agumonkey 5 days ago 4 replies      
After lots of talks I started going to the library and found out it's a lot more effective to grow knowledge. Maybe I'm too ADHD-able when watching videos.
beyondcompute 5 days ago 0 replies      
Bret Victor is pretty interesting though a bit philosophical.

The best practical talk is of course this:

https://www.youtube.com/watch?v=asLUTiJJqdE - Robert "Uncle Bob" Martin, Clean Architecture and Design

jonbaer 5 days ago 0 replies      
Richard Feynman: Fun to Imagine (BBC Series, 1983) - https://www.youtube.com/watch?v=v3pYRn5j7oI&list=PL04B3F5636...
samcal 5 days ago 0 replies      
James Mickens at Monitorama: https://vimeo.com/95066828

Aside from the comedic aspect (which makes the talk incredible), Mickens is a genuinely brilliant thinker and has a marvelous way with words.

evilgeneralist 5 days ago 1 reply      
Can I just say anything with Bryan Cantrill?
hackaflocka 5 days ago 1 reply      
Paul Buchheit - Startup School Europe 2014


Anjana Vakil: Learning Functional Programming with JavaScript - JSUnconf 2016


Bret Victor - Inventing on Principle


Philip Roberts: What the heck is the event loop anyway? | JSConf EU 2014


kruhft 5 days ago 1 reply      
Growing a Language by Guy Steele (video and transcription):


utefan001 5 days ago 0 replies      

InfoSec talk. Best lines from the talk:

"Basic lessons are not learned such as know thy network"

"You have to learn your network, you have to have skin in the game"

"Defense is hard, breaking stuff is easy"

"If you serve the God's of compliance you will fail"

"Compliance is not security"

"Perfect solution fallacy"

"People are falling over themselves not to change, shooting great ideas down."

"Perfect attacker fallacy, they don't exist, they are a myth!"

"Attackers are not that good because they don't need to be that good."

Speaker is Eric Conrad

raspasov 5 days ago 1 reply      
mtmail 5 days ago 0 replies      
"Avoiding Burnout, and other essentials of Open Source Self-Care" https://www.youtube.com/watch?v=RbeHBnWfXUc
IntelMiner 5 days ago 1 reply      
Not quite as low-level as some of the other talks, but I love watching LazyGameReviews' "Tech Tales" series whenever a new one comes out.

It's fairly high level, but he really burrows into computer history and it's simply fascinating to watch, helped by the fact the person is extremely passionate about what he does https://www.youtube.com/watch?v=gB1vrRFJI1Q&list=PLbBZM9aUMs...

dorianm 5 days ago 0 replies      
Aaron Patterson talks (aka @tenderlove): https://www.youtube.com/watch?v=B3gYklsN9uc
antouank 5 days ago 0 replies      
Rich Hickey - Simplicity Matters: https://www.youtube.com/watch?v=rI8tNMsozo0
JoshTriplett 5 days ago 1 reply      
For reasons completely unrelated to the content, Identity 2.0: https://www.youtube.com/watch?v=RrpajcAgR1E

Watching that talk brought me over to the "a picture or a few words per slide" style of presentation, rather than the "wall of bullet points" style. It also helped me move from "stop talking, change slides, start talking again", to smooth transitions while talking.

jack9 5 days ago 0 replies      
What We Actually Know About Software Development, and Why We Believe It's True


jordanlev 5 days ago 0 replies      
As a web developer, my favorite recent talk is "Modern Layouts: Getting Out of Our Ruts" by Jen Simmons


...very inspiring if you're bored with the way websites have been looking for the past few years.

dantle 5 days ago 0 replies      
Indistinguishable From Magic: Manufacturing Modern Computer Chips.

Explains a lot of recent mass-market innovations that keep the semiconductor manufacturing industry rolling, and goes into detail about the many tricks used to ensure scaling down to the 22nm node.


raglof 5 days ago 0 replies      
Bret Victor's "Inventing on Principle" [1] or Rob Pike's "Concurrency Is Not Parallelism" [2].

[1] https://vimeo.com/36579366
[2] https://www.youtube.com/watch?v=cN_DpYBzKso

antigremlin 2 days ago 0 replies      
Temporally Quaquaversal Virtual Nanomachine is another gem by Damian Conway: https://yow.eventer.com/events/1004/talks/1028
fivealarm 5 days ago 0 replies      
I'm relatively early in my career, and I feel like I've learned a ridiculous amount of useful stuff from talks given by these people:

Brandon Rhodes

Raymond Hettinger

David Beazley

Sandi Metz

Avdi Grimm

exarne 5 days ago 0 replies      
It's an old talk but I really enjoyed it at the time, Paul Graham on Great Hackers: http://web.archive.org/web/20130729231533id_/http://itc.conv...
zerognowl 3 days ago 0 replies      
Always refreshing to hear one of Haroon Meer's talks:


Jake Appelbaum's Digital Anti-Repression Workshop is de rigeur listening too:


m0llusk 4 days ago 0 replies      
Google TechTalks Personal Growth Series: William Dement on Healthy Sleep and Optimal Performance: https://www.youtube.com/watch?v=8hAw1z8GdE8
wyldfire 5 days ago 0 replies      
wedesoft 2 days ago 0 replies      
Dr Meister: Using Lisp, LLVM, and C++ for molecular programming: http://www.youtube.com/watch?v=8X69_42Mj-g
x0x0 5 days ago 0 replies      
Cliff Click was the JVM architect at Sun, then spent a decade at Azul Systems as their JVM architect. The talk is "A JVM Does That?"

It's well worth watching if you are interested in VMs at all.


makmanalp 5 days ago 0 replies      
Aside from a lot of the classics here, one that stands out is this AMAZING live demo at pycon by David Beazley:


The simple and followable progression to more and more complex ideas blows my mind every time.

recmend 5 days ago 0 replies      
People don't buy what you do, they buy why you do it, by Simon Sinek: https://www.ted.com/talks/simon_sinek_how_great_leaders_insp...
andycroll 5 days ago 0 replies      
Slightly self-serving as the organiser but Sarah Mei's talk at Brighton Ruby this year was terrific.


agconti 5 days ago 0 replies      
Mike Bostock's talk on visualizing algorithms is one of my favorites: https://vimeo.com/112319901

> Visualizing Algorithms: A look at the use of visualization and animation to understand, explain and debug algorithms.

cvwright 5 days ago 0 replies      
Gary McGraw: Cyber War, Cyber Peace, Stones, and Glass Houses https://www.youtube.com/watch?v=LCULzMa7iqs

I like how this talk cuts through a lot of the BS in security. One of his points is that the US and other rich Western countries have a lot more to lose from a possible "cyber war" than our potential adversaries do.

Another key point is that we'll never make much progress unless we can somehow start building better systems in the first place, with fewer vulnerabilities for an adversary to exploit.

I think the second point has become a lot more widely accepted in recent years since McGraw started giving this talk. Unfortunately it sounds like a lot of government folks still haven't got the memo on point #1.

akkartik 5 days ago 0 replies      
Moxie Marlinspike at Blackhat 2010 on how we lost the war for privacy in spite of winning the Crypto Wars of the 1990's-early 2000's: https://www.youtube.com/watch?v=unZZCykRa5w
anoother 5 days ago 0 replies      
"How to Speed up a Python Program 114,000 times." - https://www.youtube.com/watch?v=e08kOj2kISU

Humour, serious technical insight and a good reminder of why being a generalist is an advantage.

daveguy 5 days ago 0 replies      
Geoffrey Hinton "The Next Generation of Neural Networks". A google tech talk from 2007 about this newfangled "deep neural network" thing:


vvanders 5 days ago 0 replies      
Herb Sutter, Modern C++ - https://channel9.msdn.com/Events/Build/2014/2-661

Great overview of value types, performance and how hardware that runs things still matters.

teamhappy 5 days ago 0 replies      
Keith Winstein presenting mosh at USENIX 2012 is easily the most entertaining tech talk I've ever seen: https://www.youtube.com/watch?v=XsIxNYl0oyU

Scott Meyers' talks are fun to watch too.

drizze 5 days ago 0 replies      
David Beazley's, "Discovering Python": https://youtu.be/RZ4Sn-Y7AP8

A fascinating tale about using python during the discovery phase of a trial. Very fun watch. Anything by David Beazley is great!

simscitizen 5 days ago 0 replies      
"An Introduction to SQLite" by Richard Hipp (who wrote the library) is actually a pretty good intro on to how to build your own DB engine.


ebcode 5 days ago 0 replies      
John Holland is always worth watching, and not very many people have seen this one: https://www.youtube.com/watch?v=a_u_d-KLEsE#t=1183.549186
sideb0ard 5 days ago 0 replies      
I love the Ted Nelson "Computers For Cynics" series - https://www.youtube.com/watch?v=KdnGPQaICjk

He is kinda awesome in Herzog's recent 'Lo and Behold' too.

amelius 5 days ago 0 replies      
Rupert Sheldrake, "The Extended Mind, Experimental Evidence", Google Talks 2008, https://www.youtube.com/watch?v=hic18Xyk9is

If you are in for something out of the ordinary.

ajankovic 5 days ago 0 replies      
I like this one because it's a good reality check: Opening Keynote: Greg Young - Stop Over-Engineering https://www.youtube.com/watch?v=GRr4xeMn1uU
vonklaus 5 days ago 0 replies      
Ashton Kutcher--Startup School

I like it because it is the intersection of so many things. He starts slow, clearly intimidated by the audience, who are obviously super skeptical that the clown from That '70s Show has any useful information they could learn from. He finds his footing with a great motivational story (albeit laden with a few cliches) about a forgotten entrepreneur and how he built some lasting value.

For me, this is a great talk. The story is extremely motivational and has some interesting bits of history & entrepreneurial genius-- but the entire experience is extremely educational. About bias, drive & success.

I liked it for what it wasn't.

peelle 4 days ago 0 replies      
Clay Shirky on Love, Internet Style. He has several great talks.


thegeekpirate 5 days ago 1 reply      
Black Hat USA 2015 - The Memory Sinkhole: Unleashing an x86 Design Flaw Allowing Universal Privilege Escalation


johnhenry 5 days ago 0 replies      
Douglas Crockford's series of 8 videos, "Crockford on JavaScript", really helped me gain an understanding of the language and a better understanding of programming in general. If you don't like or understand JavaScript, this will definitely change that. He's an excellent speaker and the talks are quite enjoyable. Here is the first video: https://www.youtube.com/watch?v=AoB2r1QxIAY. If you like it, the other 7 are available in the suggested section.
djfdev 5 days ago 0 replies      
I always enjoyed Ryan Dahl's casual at-home talk on the history of Node.JS: https://www.youtube.com/watch?v=SAc0vQCC6UQ
geichel 5 days ago 0 replies      
Zed Shaw's presentation, It's Not You, It's Them: Why Programming Languages Are Hard To Teach -- https://vimeo.com/53062800
stewartw 4 days ago 0 replies      
Lawrence Lessig's 'free culture' from OSCON 2002: https://randomfoo.net/oscon/2002/lessig/

Anything at all by Richard Feynman: https://www.google.co.uk/search?q=%22richard+feynman%22&tbm=...

nicwest 5 days ago 0 replies      
The Clean Code Talks - "Global State and Singletons": https://www.youtube.com/watch?v=-FRm3VPhseI
joshux 5 days ago 0 replies      
Damien Katz - CouchDB and Me: https://www.infoq.com/presentations/katz-couchdb-and-me

The talk is about how Damien quit his job to hack on open source software. It shows his struggle and doubt while embarking on the project and then finally invented CouchDB. It's a passionate and human account of the process of creating something significant. I recommend every hacker watch this.

boulos 5 days ago 0 replies      
In addition to Linus's git talk, I really enjoyed Jeff Dean's EE380 retrospective on Building Systems at Google (http://m.youtube.com/watch?v=modXC5IWTJI). Many people have mentioned Jeff's basic premise elsewhere ("Design a system for 10x your current need, but not 100x, rewrite it before then") but this talk gave several useful examples where tipping points occurred (at least with Search).
vayarajesh 5 days ago 0 replies      
TED talk - Elon musk - https://www.youtube.com/watch?v=IgKWPdJWuBQ

D10 conference - Steve jobs and Bill gates - https://www.youtube.com/watch?v=Sw8x7ASpRIY

TED talk - Bill gates (Innovation to Zero) - https://www.youtube.com/watch?v=JaF-fq2Zn7I

miiiiiike 5 days ago 0 replies      
Chuck Rossi - How Facebook releases software: https://vimeo.com/56362484 I remember thinking "Dr. Cox as release manager."
Veratyr 5 days ago 0 replies      

How Google backs up the internet.

At the time it changed how I thought about backups/reliability.

Philipp__ 5 days ago 0 replies      
Everything by Mr. Bryan Cantrill! This one is special: https://www.youtube.com/watch?v=l6XQUciI-Sc
glitcher 5 days ago 0 replies      
One in particular comes to mind that really changed the way I think about the larger problem of security in computer science and what a mess our current state of affairs seems to be in:

"The Science of Insecurity" by Meredith L. Patterson and Sergey Bratus (2011)


Warning: speaker likes to use profanity (which I enjoy :) but possibly NSFW if you're not on headphones

ciroduran 5 days ago 0 replies      
I love Kevlin Henney's talks, he's very entertaining and informative at the same time, here's one called "Seven Ineffective Coding Habits of Many Java Programmers", very useful even if you don't use Java - https://vimeo.com/101084305

The rest of his channel is full of his talks https://vimeo.com/channels/761265

oleksiyp 5 days ago 0 replies      
Google I/O 2009 - The Myth of the Genius Programmer

One of the best talks about code reviews and similar things


1057x31337 5 days ago 0 replies      
Therapeutic Refactoring by Katrina Owen https://www.youtube.com/watch?v=J4dlF0kcThQ
tboyd47 5 days ago 0 replies      
"Being Awesome By Being Boring": https://www.youtube.com/watch?v=Iheymi5QFEY
michaelmcmillan 5 days ago 0 replies      
Fast test, slow test by Gary Bernhardt: https://www.youtube.com/watch?v=RAxiiRPHS9k
sedachv 5 days ago 0 replies      
QueueTard's Manufacturing Modern Computer Chips at HOPE number nine: https://www.youtube.com/watch?v=NGFhc8R_uO4

Guy Steele's How to Think about Parallel Programming: Not! at Strange Loop 2011: https://www.infoq.com/presentations/Thinking-Parallel-Progra...

danpalmer 5 days ago 0 replies      
I find Simon Peyton Jones to be an excellent educator. He talks mostly about Haskell and the GHC compiler, but his talks are very accessible to a wide audience of programmers.
augustk 5 days ago 0 replies      
Edsger Dijkstra's Turing Award Speech:


theviajerock 4 days ago 0 replies      
My favorite is this one about drones and AI. One of the best:


EvanAnderson 5 days ago 0 replies      
I very much enjoyed the talk John Graham-Cumming gave "The Great Railway Caper: Big Data in 1955": https://www.youtube.com/watch?v=pcBJfkE5UwU

Any of Jason Scott's talks given at various hacker cons are usually historically informative and always a lot of laughs (but they're decidedly not "technical").

romper 5 days ago 0 replies      
Secret history of silicon valley: https://youtu.be/hFSPHfZQpIQ
jagermo 5 days ago 0 replies      
"Pwned by the Owner" (https://www.youtube.com/watch?v=U4oB28ksiIo), a DefCon 18 talk about a stolen Mac that one day popped back up on the owners DynDNS service, he was able to connect to it and had some fun afterward.

Not a technical deepdive, but entertaining.

lumannnn 5 days ago 0 replies      
by Dave Thomas (PragDave)

"LoneStarRuby 2015 - My Dog Taught Me to Code by Dave Thomas" - https://www.youtube.com/watch?v=yCBUsd52a3s


"GOTO 2015 - Agile is Dead - Pragmatic Dave Thomas" - https://www.youtube.com/watch?v=a-BOSpxYJ9M

lewisl9029 5 days ago 0 replies      
The Front-end Architecture Revolution by David Nolen: http://www.ustream.tv/recorded/61483785

It completely changed the way I approach front-end development (Not that talk in particular though. I saw an earlier, similar talk on Youtube but this one has much higher quality).

ericssmith 5 days ago 0 replies      
Not at all high-brow, but I revisit the in-the-trenches case study of "Scaling Pinterest" on Infoq from time to time because I find their fighting through the pain inspirational for my own scaling troubles.


exawsthrowaway 5 days ago 0 replies      
It's not publicly available, but it was an internal AWS talk and very-deep-dive on the design & implementation of S3. A real eye opener for what it meant to build at global scale.

It's worth joining a global-scale tech company (AWS, Google, Azure, Facebook) just to have your mind blown by some of the internal materials.

davur 5 days ago 0 replies      
Cal Henderson "Why I Hate Django" DjangoCon 2008 Keynote - https://www.youtube.com/watch?v=i6Fr65PFqfk. Not that it is the most educational talk, but it's really funny (edit: added youtube link).
zengr 3 days ago 0 replies      
My personal favorite is "The ACL is Dead" by Zed Shaw https://vimeo.com/2723800
rimantas 5 days ago 0 replies      
Anything by Sandi Metz.
ruairidhwm 5 days ago 0 replies      
Hacking with Words and Smiles by James Lyne https://www.youtube.com/watch?v=KrNo0XpQxBk

He was a co-speaker at TEDxGlasgow with me and I thought his talk was brilliant. Cyber-crime is a really interesting area.

jboynyc 5 days ago 0 replies      
I like all of Carin Meier's talks, but I think the one that made the most lasting impression was "The Joy of Flying Robots with Clojure."


jacques_chester 5 days ago 0 replies      
Stop Building Products by David Edwards.

A deeply thoughtful discussion of the impact of metaphors on how we think about software development.

Skip to 0:40 if you don't want to hear the MC.


NetStrikeForce 4 days ago 0 replies      
ECCHacks - A gentle introduction to elliptic-curve cryptography [31c3]


peoplee 5 days ago 0 replies      
The Pixel Factory by Steven Wittens: https://www.youtube.com/watch?v=4NkjLWAkYZ8

For those who like computer graphics (or want to learn), this is pure gold.

vinkelhake 5 days ago 0 replies      
"Desktop on the Linux" by Wolfgang Draxinger (guest appearance by Lennart Poettering):https://www.youtube.com/watch?v=ZTdUmlGxVo0
c0l0 5 days ago 0 replies      
Artur Bergman, creator of the Fastly CDN, at Velocity 2011 - giving a (very) short talk about SSDs: https://www.youtube.com/watch?v=H7PJ1oeEyGg
lukego 5 days ago 0 replies      
superplussed 5 days ago 0 replies      
React-motion, the react animation package that boils all of the animations down to one concept, a spring.


0xmohit 5 days ago 0 replies      
I have a list of interesting talks on Haskell/OCaml [0].

(Plan to organize and add more categories.)

[0] https://github.com/0xmohit/talks

unkoman 5 days ago 0 replies      
Eric Brandwine at AWS talking about how they solved the networking part of the cloud: https://www.youtube.com/watch?v=3qln2u1Vr2E
d1ffuz0r 5 days ago 0 replies      
krsna 5 days ago 0 replies      
"When We Build" by Wilson Miner: https://vimeo.com/34017777

It completely changed my perspective on how design shapes our world.

nickysielicki 5 days ago 0 replies      
DEFCON 20: Owning Bad Guys {And Mafia} With Javascript Botnets https://www.youtube.com/watch?v=0QT4YJn7oVI

This guy is just too funny.

simula67 5 days ago 0 replies      
"Greg Wilson - What We Actually Know About Software Development, and Why We Believe It's True"


tehwebguy 5 days ago 0 replies      
That guy fat from the Bootstrap team - What Is Open Source & Why Do I Feel So Guilty?


jpetitto 5 days ago 1 reply      
Deconstructing Functional Programming by Gilad Bracha:


samblr 5 days ago 0 replies      
There is a sort of palpable energy in Ryan Dahl's original node.js presentation.


edit: +Ryan Dahl

RodericDay 5 days ago 0 replies      
I really liked "The Life and Death of Javascript" by Gary Bernhardt
sunils34 5 days ago 0 replies      
Resilience in Complex Adaptive systems by Richard Cook at Velocity Conf 2013:


bluefox 5 days ago 0 replies      
Dynamic Languages Wizards Series - Panel on Runtime: https://www.youtube.com/watch?v=4LG-RtcSYUQ
jentulman 5 days ago 0 replies      
Dan Abramov sort of introducing Redux in this talk. https://youtu.be/xsSnOQynTHs
verandaguy 5 days ago 0 replies      
I'm a fan of "Knocking my neighbor's kid's cruddy drone offline" by Robinson and Mitchell from DEF CON 23.

 [0] https://www.youtube.com/watch?v=5CzURm7OpAA

rhgraysonii 5 days ago 0 replies      
Closure, by @steveklabnik


So many lessons in a short, beautiful piece.

fitzwatermellow 5 days ago 0 replies      
Well. There's enough quality content in this thread to start a dedicated cable television channel, a la Viceland ;)

Not sure if it's my favorite. And the subject is more technology than "tech". But the talk that keeps haunting me is Michael Dearing's lecture from the Reid Hoffman "Blitzscaling" class at Stanford:

Heroes of Capitalism From Beyond The Grave


Dearing draws upon an obscure letter by Daniel McCallum, superintendent of the New York and Erie Railroad, written to his bosses in the 1850s. In the report, McCallum bemoans the stress and frustration of operating a railroad system spanning thousands of miles. All of the joy and magic he used to revel in whilst running a fifty mile stretch back in his home town has long since dissipated. Furthermore, the unit cost per mile seems to be exploding rather counter-intuitively!

Dearing goes on to elucidate the absolute necessity of the railroads ("the thing to know about the railroads is: they were startups once") themselves. As guarantors of civilization and progress. Beacons bringing light and reason to the dark swamps of ignorance and inhumanity. And not just in the physical transport of goods, people and ideas across the continent. But as the wealth created from that creative destruction remains the best cure for all of our other inimical maladies: poverty, injustice, disease and stagnation.

So, no pressure. But civilization depends upon you!

Links to References in the Talk:

Estimates of World GDP: From One Million BC to the Present


The Process of Creative Destruction by Joseph Schumpeter


The Visible Hand: The Managerial Revolution in American Business by Alfred D. Chandler, Jr.


Report of D. C. McCallum to the stockholders of the New York and Erie Railroad


Things As They Are In America by William Chambers


MailChimp's founders built the company slowly by anticipating customers' needs nytimes.com
482 points by kellegous  5 days ago   253 comments top 29
echelon 5 days ago 20 replies      
The Atlanta tech scene is blossoming, just like our film industry. We have a couple of incubators, including a few that are funded/supported by Georgia Tech. The cost of living here is super cheap, and there are brilliant and talented people everywhere.

We have satellite offices for tons of major tech companies, so there are traditional tech jobs too. Earning $200k here while the cost of living is so low is phenomenal.

You can comfortably live in the city with a roommate and pay only $500-600 rent. Just outside the city, you can get a 1200 sqft apartment for $700.

Our music scene is amazing, and the local food is fantastic.

I try to convince my friends in SF to come out here and give Atlanta a look, but nobody bites. I think this city is an incredible opportunity, especially for an early stage startup that wants to focus on growth prior to investment. The talent is here, the city is amazing, and the rent isn't absurd.

jasode 5 days ago 6 replies      
>In fact, it's possible to create a huge tech company without taking venture capital, and without spending far beyond your means. It's possible, in other words, to start a tech company that runs more like a normal business than a debt-fueled rocket ship careening out of control.

The author, Farhad Manjoo, is romanticizing a bootstrapped business as "good" and (via his prosaic examples of restaurants and dog walking) dismissing the VC-backed businesses as "bad."

It should be obvious that the opposite can be true: a bootstrapped business can also be dysfunctional and a VC-backed firm can be disciplined with its money.

Bootstrapping is great strategy especially if you're company that doesn't benefit from "network effects" such as Mailchimp/Sendgrid. You acquire customers one at a time and offer a good enough value proposition for them to subscribe or pay. A lot of SaaS/enterprise companies and lifestyle businesses can grow that way.

Venture capital is really helpful when you need to deliberately grow exponentially faster than bootstrapping will allow because you're trying to build a giant footprint for the network effects. Snapchat is a good example of this. It wouldn't make sense to try to sell the Snapchat app on App store for $4.99 each so it can be cashflow positive and pay for programmer salaries. The first users of their apps were teens in high school and they can't just purchase an app like that without their parents' permission. If Snapchat charged money for the app, they wouldn't even know that teens were the leading edge of that trend. In that case, you need to wisely use vc funding to pay the bills while you grow the audience. Hopefully, Snapchat will end up profitable like Facebook instead of losing money like Twitter.

If you're a "network effects" startup that insists on bootstrapping as the only funding, you will be beat by the competitors that are willing to live with $0 revenue for a few years while their equity financing allowed them to build their user base faster.

taf2 5 days ago 1 reply      
It's called doing something kinda boring, consistently and doing it well. It's hard to say in software when you really need huge capital if you can start with the following:

1. Some financial understanding of how to invoice, pay taxes and write contracts - the business side

2. Manage people, setting expectations and holding people accountable, while empowering them to be successful in their job as clearly defined when hired

3. Take action to address the immediate needs of the customer while always keeping an eye on the longer-term needs - always be available and responsive to needs of the customer - email, phone, chat

4. Have a solid foundation in the technical aspects of what you are building and operating

If you have these 4 things and a product that is a good fit in an emerging market, then raising capital is probably not necessary, because you have the resources and skills to make it happen. I think probably a 5th requirement is having enough personal capital to pay for your living expenses until the business is making enough money. Also avoid hiring until you can pay for double the salary of the first hire... this way if you are wrong you have some padding, and it proves you can work through hard times. I remember thinking before our first hire that this was way too much stress and it would be so much easier when we had more people. Now at 16 employees, it's an order of magnitude harder, but I'm much more prepared than I was back then. The children analogy is a good one: when you first have a child you think this is going to be hard, but they grow with you, so it's not so bad; it's even kind of fun.

ilamont 5 days ago 1 reply      
MailChimp's path was circuitous, and it came without the glory of enormous funding rounds.

It's time to retire the idea that raising money equals glory. It's not a measure of business success as much as it is a measure of founders being able to convince rich people to back them. As we know from the "XX is shutting down" stories that regularly grace HN, many if not most tales of massive fundraising success will eventually become business and investment failures. Yet the TechCrunch/Fortune/BI coverage angles, pushed relentlessly by the investment community and hired PR people, almost always emphasize the former over the latter.

ohnoesmyscv 5 days ago 2 replies      
Depends on what you mean by 'making it' as a startup - is it the revenue growth? burn rate? profit growth? Number of users? Number of employees? A SF office? How well funded you are?

It's hard to grow a business without going the conventional SV route and getting VC funding. Unless you have a revolutionary product, the bigger competition will likely stomp over you unless you have resources to grow your team and product and marketing. Or if you are comfortable with a small market share but a profitable one.

Not saying it is impossible, but just hard. I know a few SaaS out there like Roninapp and Reamaze that like Mailchimp are not VC funded and are growing well and are run by a small but effective team, but the question is would startups like these benefit from funding and be in a better position with regards to growth and user base than without vc funding?

More often than not, when startups receive funding they move away from satisfying the customers to making investors happy. As the company starts to hire, get a nice office, and increase spend on things like office perks, ads, marketing, etc., it might contribute to growth, but it doesn't necessarily work well for the end user. You go from lean to bloat more often than not. I guess that depends on how you manage resources, but it isn't exactly easy with investors breathing down your neck

I have a company that is bootstrapped and while there are well funded competitors out there, I'm perfectly fine with my startup running lean and being profitable, albeit slowly. At least I am my own boss and I answer to myself, and that in my world is 'making it'.

hitekker 5 days ago 7 replies      
I'm curious if there's interesting data, information, or anecdotes about bootstrapped, well-intentioned, well-executed startups being utterly floored by VC-funded competitors. Particularly if the "average" or most common reason behind a bootstrapped failure diverges significantly from the 'common sense' reason, i.e., "bootstrapped startup couldn't move fast enough/expand quick enough/hire great talent because bootstrapped startup didn't have the money".
jonstokes 5 days ago 5 replies      
And then somehow they managed to totally disregard their customers' needs and screw everyone with the Mandrill changeover. Seriously, has everyone forgotten that fiasco?


I'll never trust them or use them again, after that. No way no how.

nathancahill 5 days ago 0 replies      
Too bad. After the Mandrill/MailChimp price hike, every company I've worked with is moving off of them as fast as possible, towards Mailgun/Sendgrid.
brightball 5 days ago 0 replies      
I don't see why this is a surprise. From my observation, there are typically two types of startups out there.

1. Startup with a clear revenue model created by generating tangible value for businesses.

2. Startup without a clear revenue model that is doing something interesting and will probably be acquired if successful.

The first is a successful business like MailChimp that can grow itself from its own revenues and doesn't need funding. The second is the type of business that needs funding because they are essentially investing in building technology to sell to a larger company OR are building a large pool of users to sell to a larger company.

combatentropy 5 days ago 0 replies      
A similar message is in Getting Real, a free PDF by the makers of Basecamp: https://gettingreal.37signals.com/
fideloper 5 days ago 1 reply      
Interestingly in the marketing space (in particular, email-based marketing), all the tools I gravitate towards have been bootstrapped rather than funded:

Mailchimp, ConvertKit, Drip (although Leadpages bought it and is actively funding its growth), curated.co (I'm not 100% sure on Curated - is that funded?), Edgar

These types of apps can actively and easily translate into $$ for businesses, so it's no wonder they can bootstrap rather than take on funds - individuals and businesses are willing to pay to make money!

sametmax 5 days ago 3 replies      
Then it's not called a start up, but a normal company. How is that a revelation?
wslh 5 days ago 0 replies      
We really need this contrarian view of startup creation. In a way, having a few millions in profits can't be called a lifestyle business: your business can't scale more, but you are happy with what you created and control. Also, just the process of getting initially funded is very time consuming (e.g. look at the Kickstarter videos).
Ayraa 4 days ago 0 replies      
As a regular user of all the main email marketing services:

I feel Mailchimp is missing the boat by focusing entirely on email and not offering a way to contact customers via:

1. In-app messages
2. SMS
3. Push notifications

It's also difficult / impossible to set up advanced automation sequences with it. For example, if Customer X does Y on your site, direct them to another branch with a different sequence.

Of course, their main target customers are small businesses, so they may not need these advanced features, but these customers would benefit tremendously from being able to, for example, text certain messages to customers instead of only being able to email them.

pitchups 4 days ago 1 reply      
Another example of a company that has grown quite large without using venture funding is Zoho.com - based out of Chennai, India. They are even larger than Mailchimp in terms of revenues - clocking over $1 Billion and with over 3000 employees.

[1] https://pando.com/2014/10/14/anti-burn-how-bootstrapped-zoho...

gopi 4 days ago 0 replies      
I think such a story is difficult now unless it's a niche small market. Let's say someone starts a company and stumbles upon a massive market but decides to grow slowly, financed by revenue. The problem is others are going to copy the idea and grow fast with VC money and crush the original company. Maybe possible in the next downturn when VC money dries up

Read Blitzscaling by Reid Hoffman - https://hbr.org/2016/04/blitzscaling

ungzd 4 days ago 1 reply      
Glory to the company, but shame on the modern internet that you have to use one of a few "email providers" instead of just installing Postfix; otherwise all your emails go to the spam folder.
yoamro 4 days ago 0 replies      
If you can continue growth and profitability without taking outside investors, great for you and I recommend it. The reality is, a lot of times founders are faced with the problem of funding/paying bills and are left with no other option than to take VC money. If you go down that route, just make sure that everyone on-board has the interests of your users in mind.
traviswingo 4 days ago 0 replies      
"Believe it or not, start-ups dont even have to be headquartered in San Francisco or Silicon Valley."

Lol. This was a great read. But yeah, don't take money unless you literally cannot finance your growth. A real business builds itself.

qwrusz 5 days ago 1 reply      
The article seems to acknowledge Silicon Valley is good and prolific at startups to the point of metonymy. SV is the standard way, and so you get descriptions like "Un-Silicon Valley Way" instead of say the "Atlanta Way" (another startup from Atlanta called Coca-Cola appears to be doing OK too ;))

But why does this article have this tinged negativity toward SV? Why not just highlight MailChimp's success without the jab on VCs? Clearly both VC or bootstrapping approaches can work for a company (though both approaches fail in the majority of cases and journalism is in love with survivorship bias).

I'm not in SV, but it's obviously the place important innovation has/is/will be coming from (and some crap too). Innovation and growth is needed and should be encouraged in this economy.

Just frustrating to see big journalism knock SV for no reason.

Better story: "Chimps and the Un-Silicon Valley Way to Make it as a Primate".

ex3ndr 5 days ago 0 replies      
Aren't VCs needed for building such a company in 2-3 years instead of 16?
rsp1984 5 days ago 0 replies      
What a lot of people miss (including the author) is the time factor. Yes you can bootstrap a business and grow things organically once you have found decent product/market fit. The problem is that it will just take a much longer time until you reach certain milestones than with a VC-based approach.

"There is perhaps no better example of this other way than MailChimp, a 16-year-old Atlanta-based company that makes marketing software for small businesses."

This just kind of proves the point. If you have a company with great product/market fit and lots of VC in the bank you would either reach their numbers much quicker or have higher numbers after 16 years of operation.

pjlegato 4 days ago 0 replies      
> a tech company that runs more like a normal business than a debt-fueled rocket ship

Most tech startups are funded with equity, not debt.

Animats 5 days ago 1 reply      
Spam is profitable.

(And yes, Mailchimp is a spammer, based on the Spamhaus definition of spam.[1] It may be legal, but it's still spam.)

[1] https://www.spamhaus.org/consumer/definition/

kareemsabri 5 days ago 0 replies      
Works better in a market that is not "winner take all".
Taylor_OD 5 days ago 0 replies      
and by buying ad space on every podcast ever.
balls187 4 days ago 0 replies      
gallerytungsten 5 days ago 7 replies      
You know what word doesn't appear in that story?

Spam.

Mailchimp is a company that provides lots of unsolicited commercial emails. In other words, spam. You can dress it up and call it "marketing software for small businesses" but that doesn't change the essential fact: Mailchimp is a spammer. Is it any surprise that spamming is profitable?

I've received hundreds of Mailchimp emails. Not once did I sign up for any of those lists.

Does Mailchimp make it easy to unsubscribe? Sure. But that doesn't change the fact that they are spammers, and that if you want to send spam with some semi-plausible deniability that you're a spammer, Mailchimp is probably a good choice.

Of course, this story, like nearly all "business news" stories, is very likely the work of a highly paid public relations agency. That is one more reason that the word "spam" does not appear in this story.

veryhungryhobo 4 days ago 0 replies      
I'm looking to write a PowerShell script for creating lists and updating subscribers for the MailChimp API v3. However, I'm kind of lost. I only found old code samples: http://poshcode.org/3479 and http://poshcode.org/3351. Does anyone know any new code samples for the MailChimp v3 API for PowerShell?
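
Not PowerShell, but the v3 API is plain REST over HTTPS, so the same calls translate to any language. A minimal Python sketch (the list ID and API key below are placeholders; the endpoint shapes follow MailChimp's documented v3 conventions - basic auth with the key, datacenter taken from the key's suffix, and members addressed by the MD5 of their lowercased email):

```python
import base64
import hashlib
import json
from urllib import request

def base_url(api_key: str) -> str:
    # The datacenter is the suffix after the last dash in the key, e.g. "...-us6".
    dc = api_key.rsplit("-", 1)[-1]
    return f"https://{dc}.api.mailchimp.com/3.0"

def member_hash(email: str) -> str:
    # v3 addresses list members by the MD5 hash of the lowercased email address.
    return hashlib.md5(email.lower().encode()).hexdigest()

def put_subscriber(api_key: str, list_id: str, email: str,
                   status: str = "subscribed") -> None:
    # PUT upserts: creates the member if absent, updates it otherwise.
    url = f"{base_url(api_key)}/lists/{list_id}/members/{member_hash(email)}"
    body = json.dumps({"email_address": email, "status_if_new": status}).encode()
    req = request.Request(url, data=body, method="PUT")
    # HTTP Basic auth; the username part can be any string, the password is the key.
    token = base64.b64encode(f"anystring:{api_key}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Content-Type", "application/json")
    request.urlopen(req)
```

Porting this to PowerShell is mostly a matter of swapping `urlopen` for `Invoke-RestMethod` with the same URL, headers, and JSON body.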
IRC v3 ircv3.net
553 points by bpierre  1 day ago   188 comments top 28
prawnsalad 1 day ago 4 replies      
There's a lot of neat stuff in IRCv3 to bring IRC up to date, but most importantly it is standardising a lot of the existing protocol and patching up some of the existing warts that make pushing IRC difficult - all while being backwards compatible.

Developing kiwiirc.com over the past few years to cover many different IRC servers in all different languages and using many different services/auth services has been a real pain. It won't improve overnight but these IRCv3 extensions making their way into many different IRC server projects are really helping to smooth things out.

There is currently a re-write of the Kiwi IRC project to experiment and make use of the entire IRCv3 extension set, along with some other features to make web based IRC clients just as friendly and modern as people expect from messaging applications today. This will be a huge boost to the millions of people using IRC via Kiwi IRC.

IRC is getting interesting again and IRCv3 is up there making a lot of this possible.

forgotpwtomain 1 day ago 7 replies      
I second a comment made below regarding https://matrix.org/. I've used IRC for years and still use it almost daily - but come on Mosh + tmux + ec2 just to have permanent chat history? I seriously can't advocate this crap to anyone in 2016. It's too little, too late.
c3RlcGhlbnI_ 23 hours ago 2 replies      
My favourite bit is how they skipped adding length negotiation and it is causing them problems already. See the brief discussion of size limit on http://ircv3.net/specs/core/message-tags-3.2.html

Adding tags forced them to increase message length because of how IRC messages are hilariously limited to 512 bytes in all directions. In the existing protocol you already have to guess how long your messages are allowed to be because the server tacks on a prefix to your message before relaying it, which can require it to truncate characters to fit the modified message into the length limit. Having some amount of tags tacked on as well would have made it impossible to guess how much room you had.

Now if they had started with fixing the actual protocol they wouldn't have had to deal with that. Honestly this feels more like a standard "let's add cool stuff" push than intelligent stewardship of the protocol. I guess that is kind of cool too though.
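The guessing game described above can be made concrete. A small sketch (with hypothetical nicks and hostmasks) of the byte budget left for PRIVMSG text once the server prepends the sender's ":nick!user@host " prefix, under the classic 512-byte line limit that includes the trailing CRLF:

```python
MAX_LINE = 512  # RFC 1459 limit, counting the trailing "\r\n"

def text_budget(nick, user, host, target):
    """How many bytes of PRIVMSG text survive relaying without truncation.

    The server prepends ":nick!user@host " before relaying, so the sender
    has to know (or guess) its own visible hostmask to stay under the cap.
    """
    prefix = f":{nick}!{user}@{host} "
    frame = f"{prefix}PRIVMSG {target} :"
    return MAX_LINE - len(frame.encode("utf-8")) - 2  # minus "\r\n"

# A longer (e.g. cloaked) host silently shrinks the budget for the same message:
short = text_budget("alice", "a", "host.example", "#chan")
long_ = text_budget("alice", "a", "gateway/web/cloak/ip.198.51.100.7", "#chan")
assert long_ < short
```

Tacking a variable-length tag block onto the front without a negotiated size limit would have made this already-fuzzy budget unguessable, which is the problem the parent describes.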

k__ 21 hours ago 3 replies      
I used IRC for over 10 years, but after using Slack for 1 it just felt outdated to me.

Offline messages and simultaneous logins from multiple places are just two features I was missing in IRC and didn't even know about till I had them, haha.

But yeah, IRC networks just worked as message brokers and not as databases, so I never thought about it.

kodisha 1 hour ago 0 replies      
We, a small gaming community, were using IRC as a matchmaking "service" where we would all add ourselves to a queue, and a bot would then create teams based on player rankings.

That worked quite nicely for ~15 years, and a whole community grew around it; diehard users used BNCs and terminals, but lately everyone is switching to Discord.

Main matchmaking is still on IRC, but other sub-groups migrated to Discord long ago, and that scares me, because no one knows what will happen with Discord a year or two from now.

jwr 23 hours ago 2 replies      
I am very glad this is happening. IRC was one of the first applications I started using on the internet. These days it has nearly disappeared because of closed silos like Facebook, Twitter and Slack. This effort could help bring back a decentralized service that is not under the control of any single company.
qwertyuiop924 1 day ago 1 reply      
I'm not sure if this is the best approach. Matrix might be a better way to go, at least for some things. If you really want persistent history and persistent identity, I'm not sure why you bother with IRC at this point: both will always be second class there. Try XMPP, Matrix, Psyc, or whatever. Just so long as you can convince your friends to use it...
Jizzle 23 hours ago 0 replies      
In many ways, the idea of IRC chat has grown significantly beyond the conventional client. From the ubiquity of chat for productivity in applications like HipChat to the hugely successful Twitch.tv platform, IRC has proven to be a viable backbone in these scenarios. It's quite nice then to see a concerted standardization effort. Hopefully, many of the proprietary features we might recognize today will be widespread and freely available in the future.
drcross 19 hours ago 4 replies      
I run an XMPP server for myself and a few friends and have had no end of trouble getting OTR or any level of encryption working reliably before friends lose interest.

Please, someone, just offer us an open-source Slack clone that runs on an R-Pi and has a walkthrough wizard covering 99% of typical install requirements.

Even as someone who works in the industry, I don't have time to learn the bells and whistles that running a chat server requires.

lucb1e 21 hours ago 1 reply      
They've been working on this for years and I'm seeing very little of it. And beyond that, it's missing everything we take for granted nowadays: scrollback, keeping note of where you were and media sharing.

Yeah media sharing can be done with, what's it called, CTCP? That's just piping stuff through a short message service, like Twitter or DNS. It doesn't properly display media either, but it allows for small file transfers.

And of course scrollback can be done with bouncers. But it's not something I see my mother using. Facebook Messenger is what I see my mother using.

Is there anything new on the website, or is it just being linked again?

jbk 1 day ago 2 replies      
That's a great idea.

Maybe also backport some features of Slack, like editing of messages (with a flag marking them as edited), a common way to have plugins/extensions, or tagging links as images.

znpy 19 hours ago 1 reply      
I know I'll be downvoted to oblivion but I'll say it anyway...

You advocate for privacy, you advocate for openness, you advocate for transparency, you advocate for interoperability, you advocate for host-it-yourself, you advocate for customisability, you advocate for scriptability...

But then all of a sudden "oh no IRC, not enough colours, geez, running a bouncer, who wants to do it?"


I'd design irc-v3 to be bouncer-friendly, and maybe to integrate some bouncer-y features into IRC servers as well (most IRC servers keep logs anyway)

jfe 3 hours ago 0 replies      
Do a substantial number of IRC users actually care that the current version of IRC lacks the features that IRCv3 proposes, or are we just modernizing because we want to make writing IRC bots more complicated?
bborud 18 hours ago 0 replies      
I ran an IRCnet server for 15 years or so and when the PSU crapped out I decided 15 years would be enough.

If I were to give one piece of advice: forget IRC. If you want to make a free, large scale chat system, just forget you ever saw IRC. There are no solutions in that neighborhood.

rwmj 1 day ago 3 replies      
IRC (current version) works fine for me. The trick is to front it with ZNC or similar software.

Anyway, why isn't this being done through the IETF?

meira 23 hours ago 0 replies      
Is IRC v3 better than XMPP? Is there any ircd as good as ejabberd?
pmarreck 8 hours ago 0 replies      
I still use irccloud.com. Fave feature is preserving history so I can drop right into a conversation.
unixhero 18 hours ago 0 replies      
UnrealIRCd is incredible, as far as IRC servers go. I ran it with 300 concurrent users at a local LAN and was able to do a lot of exciting things like load balancing between servers and other cool things, already back in 2001.
vesinisa 1 day ago 1 reply      
Interesting development. Apart from the original IRC specification published in 1993 and revised in 2000 (supposedly the namesake of "IRC v3") there has been quite little real standardisation work in IRC protocol, with various clients, servers and platforms each implementing their custom extensions.
tete 22 hours ago 2 replies      
Apologies for being off topic. I am happy about this story now being on the front page.

However, I don't really understand how reposts work. I always thought that URLs are a unique key.

Is this a way to kind of "retry" them? In this case I'd be curious about how they work.


rachelbythebay 22 hours ago 2 replies      
It's still a graph with single points of failure throughout, right?
singularity2001 23 hours ago 0 replies      
same for voice would be nice. webrtc seems to fail as skype replacement (so far, in our network)
tarancato 1 day ago 8 replies      
I don't feel like typing much because it's nap time but IRCv3 is an almost closed group of friends, mostly znc core developers, who have decided they can choose what the future of IRC looks like.

They have put lots of pressure on and harassed other developers of clients and networks, sending them patches and infiltrating their devs if necessary, so their ideas are actually implemented.

If you complain about those ideas and specs, they'll tell you to refer to their github issue tracker, but they mostly ignore those who are not part of that core group I mentioned.

Iggyhopper 16 hours ago 1 reply      
Apps like Discord are the new IRC.
grizzles 18 hours ago 1 reply      
To fix irc, the best thing they could do would be to add first class support for js and websockets.
nocarrier 1 day ago 4 replies      
Why isn't encryption baked in by default? It's optional from what I can tell. I don't know how you can design a new protocol and not include encryption.
JshWright 1 day ago 6 replies      
I am a long time IRC user (irssi is my 'daily driver'), but I'm pretty sure IRCv3 is called "Slack".
paradite 1 day ago 3 replies      
Looking at the homepages of various IRC clients, I feel like I am back 5 years ago, when jQuery UI was still a thing: http://ircv3.net/software/clients.html

Equally out of date are the screenshots of the clients on the homepages. I don't think I will ever see an IRC desktop or web client that has a decent UI by today's standards. Then again, I am probably too young to be their target audience.

Edit: Okay, there are some IRC clients that actually look good, such as Riot, but it's not on the list of IRCv3 clients.

IP Spoofing popcount.org
501 points by majke  3 days ago   130 comments top 22
mrb 2 days ago 1 reply      
I will never understand why some people disregard IP spoofing as a real risk. For example when I reported a vulnerability to the nginx developers (http://blog.zorinaq.com/nginx-resolver-vulns/) about their DNS stub resolver using predictable transaction IDs, they refused to consider it a vulnerability, effectively saying no one could exploit it because spoofing the IP of the DNS server can't be done on the Internet. And yet https://spoofer.caida.org/summary.php shows ~25% of network prefixes can be spoofed... sigh
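For context on why predictable IDs matter: a DNS reply is accepted if its 16-bit transaction ID (and addressing) matches the outstanding query, so an off-path attacker spoofing the server's IP only has to guess the ID. A minimal sketch of the difference between a sequential and a random ID scheme (illustrative only, not nginx's code):

```python
import secrets

def predictable_txid(counter):
    # Sequential IDs: an off-path spoofer who observes or guesses one
    # query's ID knows every subsequent one.
    return counter & 0xFFFF

def random_txid():
    # Cryptographically random 16-bit ID: the spoofer must win a
    # 1-in-65536 race per forged reply (source-port randomization
    # adds roughly 16 more bits on top of this).
    return secrets.randbelow(1 << 16)

ids = {random_txid() for _ in range(1000)}
assert all(0 <= i < 65536 for i in ids)
```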
AndyMcConachie 2 days ago 4 replies      
A couple things:

1) There is no evidence that the recent giant DDOS attacks on Brian Krebs used IP Spoofing. In fact, there is every reason to believe that they did not since the generators of the packets were low powered IoT devices. There is increasingly little reason for attackers to even bother with IP spoofing given how easy it is becoming to capture giant herds of low power IoT devices. The attackers don't care if some of their herd gets taken offline due to effective attribution.

Amplification/reflection attacks will still require IP spoofing. What I'm curious about, and only time will tell, is how much IP spoofing will continue to play a part in large DDoS attacks. Why bother spoofing IPs if your botnet herd is already large enough to take someone offline?

2) Go and download CAIDA's Spoofer application. Test it and give them bug reports. I gave them one a few weeks ago. https://www.caida.org/projects/spoofer/

IgorPartola 2 days ago 2 replies      
I have used IP spoofing for good in the past: I had a large number of sensors reporting real time data to our servers. As we wanted to migrate to a completely new infrastructure we wanted to have replication from the old servers to the new. Instead of setting up some kind of higher level system, I wrote a tiny service in C which received the datagrams and then re-sent them to the new servers but spoofed the source IP so it matched the sensor. This worked incredibly well, and the tool was later used for various other purposes.

Of course toward the end of it I learned that I could have done this all with iptables, but I like my way better because I got to learn a lot.
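For the curious, the low-level mechanics of that trick look roughly like this: build your own IPv4 header with the sensor's address in the source field and send it through a raw socket (Linux, root required, and only on a network without egress filtering). This is an illustrative sketch, not the parent's actual code; the UDP portion of the packet is omitted:

```python
import socket
import struct

def checksum(data):
    # Standard internet checksum (RFC 1071): one's-complement sum of 16-bit words
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def ipv4_header(src, dst, payload_len, proto=socket.IPPROTO_UDP, ttl=64):
    """A minimal 20-byte IPv4 header with an arbitrary source address
    (the spoofing part): sent via a raw socket, the packet appears to
    come from `src`."""
    hdr = struct.pack("!BBHHHBBH4s4s",
                      0x45, 0,            # version/IHL, DSCP
                      20 + payload_len,   # total length
                      0, 0,               # identification, flags/fragment
                      ttl, proto,
                      0,                  # checksum placeholder
                      socket.inet_aton(src), socket.inet_aton(dst))
    csum = checksum(hdr)
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:]

# Sending (Linux, as root; udp_part = UDP header + datagram payload, omitted here):
#   s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
#   s.sendto(ipv4_header(sensor_ip, new_server_ip, len(udp_part)) + udp_part,
#            (new_server_ip, 0))
```

`sensor_ip` and `new_server_ip` are hypothetical names for the spoofed source and the replication target.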

lossolo 2 days ago 7 replies      
There is no excuse for running a network that allows spoofing from it. Most of the big players like Leaseweb or OVH do not allow it, but there are some providers that still let you spoof the source IP address. There should be consensus about dropping routes at the BGP level to networks that send packets with source IPs they do not announce. It's really simple to drop packets on switches/routers that do not originate from your network. It would make DDoSers' lives harder.
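The check being described (BCP38-style egress filtering) is conceptually tiny. A sketch with Python's ipaddress module, using documentation prefixes as stand-ins for whatever a network actually announces:

```python
import ipaddress

# Prefixes this network announces (documentation ranges used as examples)
ANNOUNCED = [ipaddress.ip_network(p) for p in ("203.0.113.0/24", "198.51.100.0/25")]

def egress_permitted(src_ip):
    """BCP38-style egress check: only let a packet out if its source
    address belongs to a prefix we announce."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ANNOUNCED)

assert egress_permitted("203.0.113.77")   # our address space: forward it
assert not egress_permitted("8.8.8.8")    # spoofed source: drop it
```

Real deployments do the equivalent in hardware ACLs or with unicast reverse-path forwarding on the edge router.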
zmanian 2 days ago 3 replies      
Netflow is a great example of the dual use aspects of tech between surveillance and defense. Making Netflow data more widely available looks like it is going to be essential for defending that Internet but at the same time Netflow data can threaten the anonymity of Tor users.[0][1]

[0] https://blog.torproject.org/blog/traffic-correlation-using-n...

[1] https://gitweb.torproject.org/torspec.git/tree/proposals/251...

nodesocket 2 days ago 0 replies      
I consider myself pretty technically savvy... Developer and DevOps.

However, when I see tricks like in this talk using iptables with BPF bytecode[1] to block SYN packet floods, I get completely humbled. I know that I know nothing.

[1] https://idea.popcount.org/2016-09-20-strange-loop---ip-spoof...

biot 2 days ago 1 reply      
"Operating a content neutral service in today's internet is a tough job. Some people dislike some websites, and they want to stop them from being available on the internet. The easiest way to do so is to launch a DDoS attack."

... which you can conveniently buy on one of the CloudFlare-protected DDoS service websites! I know this point has been hammered to death before, but I find it curious that despite their strong advocacy of being content neutral and not removing a site unless they receive a court order, their Terms of Service explicitly allows them to shut off service based solely on their opinion:

 "SECTION 11: INVESTIGATION CloudFlare reserves the right to investigate you, your business, and/or your owners, officers, directors, managers, and other principals, your sites, and the materials comprising the sites at any time. These investigations will be conducted solely for CloudFlare's benefit, and not for your benefit or that of any third party. If the investigation reveals any information, act, or omission, which in CloudFlare's sole opinion, constitutes a violation of any local, state, federal, or foreign law or regulation, this Agreement, or is otherwise deemed to harm the Service, CloudFlare may immediately shut down your access to the Service."
Does CloudFlare ever make use of this provision?

superkuh 2 days ago 1 reply      
Am I the only one that believes that the kind of non-preferential routing and anonymity that the current exchange setups provide is a benefit? In terms of society as a whole this far outweighs the downsides of DoS attacks using IP spoofing.

"Solving" the "problem" of ip spoofing is only a benefit for centralized authorities and services. The loss of privacy is also serious. People advancing this idea are advancing it to better their commercial interests rather than the interests of individuals using the 'net.

MOARDONGZPLZ 2 days ago 2 replies      
Can someone explain the consequences of DDoS attacks to me? My understanding is that the worst case is that the target server goes offline for the duration of the attack.

If that's indeed the endgame, it seems like a lot of work on the attacker's part to disable a company's servers for a bit, but maybe I'm missing something? The article did mention servers boiling, but that was likely hyperbole unless there's a way to physically damage servers with DDoS.

tuxidomasx 2 days ago 1 reply      
Since IP spoofing could be done by a few malicious hosts, is it really accurate to consider a DoS via ip spoofing 'distributed'? Sure the source IPs may appear to come from all over, but that's just an illusion.

When I think DDos, I envision thousands of compromised hosts all over the internet making requests to a target to consume resources.

Thaxll 2 days ago 2 replies      
UDP spoofing is one thing but the latest and largest attacks are TCP based.
bikamonki 2 days ago 0 replies      
Am I correctly understanding that big/most attacks are against unwanted content? For instance, in my country the two most popular political blogs that strongly oppose the government are regularly DDoSed. If that is the case, maybe an easier solution would be to make this a software problem and not a network one. For that, I think we already have the tech to make P2P websites feasible. My phone and data plan cannot serve hundreds of requests per second, but I'm sure I can get ALL the content I read on a given day from peers as well as pass it along to others. Clearly not a solution for all Internet use cases, but a viable alternative for blogs.
lukasm 2 days ago 0 replies      
nodesocket 2 days ago 0 replies      
This is a great talk. Marek does an awesome job of explaining something that is very technical in a clear and casual manner. I definitely learned some new things.


LyalinDotCom 2 days ago 0 replies      
I've been a software engineer for 18 years, but this amazing post reminded me how little I know about how the internet works lol. Going to read this a few times for sure!
scurvy 2 days ago 1 reply      
Large bandwidth attacks might look sexy, but they're trivially easy to block. Network operators care about pushing packets, not bits. The OVH attacks look huge to the average AWS user (ZOMG, a terabit!), but to even an average tier 2 transit provider it's a trivial attack to block. Especially when the attackers are hitting a single endpoint.

To be honest, attackers are not very smart. They almost always use the same old attacks, attacks that are easily stoppable, and attacks against a single IP.

100gig coherent is easily turning volumetric attacks into old hat. Ending spoofed attacks today is a tactic that's 10 years too late.

indolering 1 day ago 0 replies      
If only we had a method for authenticating packets : /
equalunique 1 day ago 0 replies      
The slide showing the Internet Exchange's switch says those are Ethernet cables, but they're actually fiber.

I'm really impressed with this presentation, but I wish that one flaw could be fixed.

grogenaut 2 days ago 1 reply      
Warning: Sales pitch masquerading as a technical talk.
davedx 2 days ago 0 replies      
Fantastic article.
happytrails 2 days ago 0 replies      
Time to spoof!
complaint 2 days ago 2 replies      
That was an excellent presentation, very informative.

Except, why put that ridiculous meme in the middle of it? It's cringeworthy seeing an excellent technical presentation littered with such childish imagery.

(Not that I agree with this bastardization of "meme" to mean "silly image with text overlaid in capital letters", but unfortunately that is what everyone is calling these things.)

A Brief History of Who Ruined Burning Man burningman.org
433 points by jseliger  3 days ago   240 comments top 40
mseebach 3 days ago 7 replies      
Two semi-random thoughts that struck me when reading this:

First, this Douglas Adams quote:

I've come up with a set of rules that describe our reactions to technologies:

- Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.

- Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

- Anything invented after you're thirty-five is against the natural order of things.

The second was this very insightful article on the dynamics of the evolution of sub-cultures: https://meaningness.com/metablog/geeks-mops-sociopaths

For the record, I've never been to Burning Man, nor seriously tempted to change that.

akiselev 2 days ago 7 replies      
I've been to Burning Man several times and this article is spot on. No matter which year I go, there is always some new group "ruining" the event according to the "true" burners (and since BM is all about inclusivity, this group seems to grow exponentially). The first few years I came, the moaning was about "sparkle ponies", people in their 20s who came to party for a week with drugs, costumes, and glitter... without bringing a week's worth of food or water and with no tent or shade structure. This evolution is inevitable and it doesn't change the fact that there is no event like Burning Man on the planet. Standing in the middle of a desert as a city as vibrant and alive as the Vegas strip pops up around you is an experience I'd recommend for anyone.

In my opinion, the only group that is on the path to actually "ruining" BM is the organization itself. From the rapid price hikes to the parking passes to general incompetence running infrastructure for such a large popup event to clashing egos, every time I talk to a current or former employee I get the sense that BM is run by children, half of whom want to run it like it's the 90s when the event was a tenth of the size it is today. The worst part has been the antagonism between the BM org and the BLM/local law enforcement. Every move the org makes seems to make life harder for attendees (although the cops seem to have stopped arresting for drugs, giving out heavy fines to recoup lawsuit costs instead, so I guess that's a plus).

wpietri 3 days ago 8 replies      
I would like this article a lot better if they included some of the legitimate criticism about what Burning Man has become.

The number one "ruined" point I hear about from early attendees is when it went from a volunteer effort to something with a year-round staff. Seeing a small band in a pub is different than a giant, commercialized stadium show. Neither is wrong, but nobody is served by pretending that they're the same. As most developers know, things change when you go from "I do this for the love" to "this is my job".

I don't think Burning Man was "ruined" by that transition, but I don't like how this article mocks the whole notion that something might have been lost along the way. Of course, it was written by somebody whose job is Burning Man, and as Upton Sinclair says, "It is difficult to get a man to understand something, when his salary depends on his not understanding it."

FreakyT 3 days ago 8 replies      
I love this breakdown; it really demonstrates how the "ruining" of Burning Man is no different than the sort of aversion to change that occurs with basically anything that people are enthusiastic about.

* Bands? "They used be all about the music, then they went mainstream"

* Video games? "Now they just cater to beginners; they don't care about real gamers anymore"

* Movies? "They're so focused on the special effects, they don't even bother with the story"

...and the list goes on. Name just about anything that people enjoy, and you'll find a vocal contingent complaining about how recent changes have "ruined" it.

gwbas1c 2 days ago 0 replies      
I went to Burning Man twice. What I found the last time was that it was so biiiiig that it was hard for me to have fun. Large traffic jams getting into and out of an event just aren't my cup of tea. I also stayed in a tent for a very long time; if I were to go back for longer than 2-3 days, I'd need to stay in an RV.

What I did realize, though, is that in order to really get the most out of Burning Man, I'd need to really get into the culture. People get into Burning Man like they get into their churches, where there are lots of social events. Ultimately, the groups that enjoy Burning Man the most are large groups of extended friends, such as can be found in a small church.

The hardest thing with Burning Man, IMO, is the participatory prep. Meaning, the easy part of the prep is like planning for a long camping trip. What ultimately is required for me to be more than a tourist is much more than what I was able to do as an introverted person who just doesn't like to make artwork or run games for other people.

vhost- 2 days ago 1 reply      
I grew up in Nevada along a main highway that people take to Burning Man. I really wish "leave no trace" included places outside Black Rock.

People would also routinely leave their playa covered and broken down vans, RVs and cars in my neighborhood. In the desert by my parent's house we would find numerous trailers abandoned full of trash, empty water bottles and old clothes. You knew it was from burning man because the stuff was covered in playa dust.

projektfu 3 days ago 2 replies      
As soon as I heard about burning man, back in 1996, it had definitely jumped the shark. If this guy knows about it, it's no longer hip.
donmb 3 days ago 5 replies      
Swap [Burning Man] with any modern or long lasting festival. Here in Germany it would be "Rock am Ring" or "Southside". Very well written. We all will sooner or later act like our grandparents and say "in the past everything was better". If we are honest, it wasn't.
heartsucker 3 days ago 1 reply      
I feel like this takes the term "ruining" too literally. Yes, clearly nothing was completely ruined with each subsequent change, but the current incarnation is certainly not the same as the first, and people are right to criticize the change (just as people are right to defend it). No one change might have been the death, but all together, it is very understandable that some people might call it "dead" and have moved on.
protomyth 3 days ago 1 reply      
I am impressed, this was a really good article that I thought was just another rant going in.

The universal rule of groups: it's always the next guy who ruins the coolness of a group

As to Burning Man itself, it always felt like one of those events that I was not cool enough to go to. Camping in the desert would be interesting, and probably a lot less stressful than some of the camping I did during high school.

acomjean 3 days ago 1 reply      
I know someone that goes to Burning Man a lot (they call themselves "burners"). I asked how it was this year and was told one of the best years ever. Last year wasn't so good, according to this person. It's probably about who you are and what you're looking for. For her, it's all about the art installations.

The logistics and planning of getting everything out to the desert from the east coast were more significant than I had imagined.

I borrowed her popup tent that was covered in Burning Man dust; it's weird, gray and alkali. It's remarkable that Burning Man actually happens.

IsaacL 1 day ago 0 replies      
I've never been to Burning Man but for a few years I was heavily into meditation, dabbled in psychedelics, and was reading Timothy Leary, Robert Anton Wilson, and any other mind-expanding literature I could get my hands on.

I get the sense Burning Man used to attract people like my younger self. From my Facebook and Instagram feeds, I now see many startup scenesters I know attending. If I went now, I feel I'd have to be much more careful to suppress my "weird" side. I can understand why the original attendees feel the event has become too diluted to be appealing.

Not having been, I don't know, but from a distance it looks like a pattern I've seen play out in other communities. One thing I've noticed is that many people who are genuinely cool, interesting, curious, adventurous and so on, don't see themselves that way at all - and many people who see themselves as cool and creative are in reality vapid and shallow.

Chathamization 3 days ago 3 replies      
I first became aware of Burning Man in the late 90's. It sounded interesting, but a lot of people said it wasn't worth going to anymore because all the new people had ruined it and it was nothing like it used to be. The argument doesn't seem to have changed much since then - "it's gone downhill over the last few years and is nothing like it was 8-9 years ago when I first went."

It wouldn't surprise me if in 2026 people are talking about how terrible Burning Man has become and how it's nothing like the wonderful Burning Man of 2016.

whybroke 2 days ago 1 reply      
The confusion stems from the idea that Burning Man is still a cultural phenomenon rather than just entertainment/art installation. An erroneous belief that the promoters seem to have internalized to their own confusion. Indeed, part of the entertainment value is the perception that just showing up in costume means you are engaged in significant culture production. So easy!

Disappointment stems from attendees expecting a cultural phenomenon and getting a show: the expectation that Burning Man is still important, and the discovery that it's just a show. You don't get this problem at Disneyland because any talk of 'Disney spirit' is tongue in cheek.

Here are some more substantial questions than the sort of feel-good straw man in the article. What new ideas have come out of it in the last 15 years? How do you create culture by showing up for a weekend to a catered campground and show? Is there any overlap at all in what attendees think about the meaning/point of Burning Man? So if you come west, just add it to the Vegas-Disneyland circuit, because that is all it is now.

Indeed there is one advantage to Vegas and Disney, the cops don't shamelessly hassle gay people.

rburhum 2 days ago 0 replies      
Huh... and here I was thinking that Burning Man was ruined when they decided to run their infrastructure on Oracle and Salesforce
bdrool 2 days ago 0 replies      
The very first thing I ever heard about Burning Man was how it wasn't cool any more. This was in the 1990s.

I've come to believe that the real purpose of Burning Man is to have these kinds of meta-discussions. The actual gathering in the desert is nothing but a side effect.

droopyEyelids 3 days ago 2 replies      
But each time, what it "was" did get "ruined" ... at least enough to change it into something different.

It's like the old koan about a radio: if you replace the speakers, then the housing, then the circuitry, then the controls, is it the same radio?

sixQuarks 2 days ago 1 reply      
I've been to burning man 8 times, first was 2001, most recent was this year (hadn't been in 7 years).

the shark has definitely jumped higher since the last time I went. Hardly saw any naked people this year. There were lots more "coachella" types - doing a lot of instagramming, but the worst for me was that there really wasn't a lot of room to be by yourself out on the playa. There were always people around. In past years, I could be out there by myself if I went out far enough.

That being said, BM is still incredible and I'm not against it evolving. The spirit is still alive and well.

jseliger 3 days ago 0 replies      
There is an intriguing consilience between this post and sama's "We're in a Bubble" (http://blog.samaltman.com/were-in-a-bubble).
JoeAltmaier 3 days ago 1 reply      
The most powerful line is the last one:

 "The 10 Principles are at their most powerful when given to strangers."
So quit resenting new folks joining; that's kind of the point of the whole thing.

davesque 2 days ago 0 replies      
I guess this must be true of any strong subculture. Subcultures are, by definition, "sub" or differentiated from some other mainstream culture. So the subculture's very existence is based on this boundary between itself and the mainstream culture. Participants in the subculture are obviously concerned with who is closer to or further away from this boundary.
Semiapies 2 days ago 0 replies      
No mention of This Is Burning Man and other media coverage in the early 90s? I'm vaguely surprised.
engx 2 days ago 0 replies      
There are some interesting parallels between Burning Man and the Cremation of Care. The latter started at the Bohemian Grove in 1881.


op00to 3 days ago 0 replies      
The way this relates to FOSS communities and the culture that surrounds them is really interesting.
brianolson 3 days ago 0 replies      
pfft. It was better next year.
njharman 2 days ago 0 replies      
Time aka change ruins everything.

Every wave of people (SF residents, phone phreakers, immigrants, every generation) bemoans that the next wave of people is ruining it.

bertiewhykovich 2 days ago 0 replies      
Burning Man is an enabler of some of Silicon Valley's deepest pathologies and I'd prefer to never hear about it ever again.
jokoon 2 days ago 0 replies      
I have not yet been to Burning Man (lack of funds or motivation) and I don't plan to go. Am I, too, ruining Burning Man?
beatpanda 2 days ago 0 replies      
God, I know, it was so much better next year
pasbesoin 2 days ago 0 replies      
The same people who ruined the Internet and every other unconventional new thing.
frik 2 days ago 0 replies      
Next year's headline: "HN post ruined Burning Man" ;)
computerwizard 2 days ago 0 replies      
Burning man was pretty awesome this year. Can't wait for next year :D
scelerat 2 days ago 0 replies      
"Nobody goes there anymore. It's too crowded"
swayvil 2 days ago 0 replies      
He could have simply summed up his "who ruined it" assertions.

He could have said "Allowing an uncontrolled influx of new participants ruined burning man".

And that is precisely right.

Some clubs should be exclusive.

amdlla 3 days ago 2 replies      
So what is the newer untainted version?
thomasmarriott 2 days ago 0 replies      
'Next year you'll say last year was so much better.'
lintiness 3 days ago 2 replies      
those sound like the same people who ruined my indy band.
SeanDav 2 days ago 0 replies      
I predict overseas tourists will ruin Burning Man soon
d--b 2 days ago 0 replies      
This guy just ruined burning man...
vegabook 2 days ago 0 replies      
Ooh the delicious irony... The very demographic (overpaid valley types) who ruined burning man now upvotes a story about Who Ruined Burning Man.
A Javascript journey with only six characters jazcash.com
544 points by Jazcash  2 days ago   111 comments top 22
qwertyuiop924 2 days ago 3 replies      
I'm a fan of Javascript. It has proper lambdas, true lexical scope, will soon have TCO, and is a really flexible language.

But it's not without its warts, and this is one of the worst. Although it's sometimes fun to mess with, nonetheless.

To see this taken to one of its logical extremes, check out If Hemingway Wrote JavaScript's entry for Douglas Adams:

  // Here I am, brain the size of a planet, and they ask me to write JavaScript...
  function kevinTheNumberMentioner(_){
    l=[]
    /* mostly harmless --> */
    with(l) {
      // sorry about all this, my babel fish has a headache today...
      for (ll=!+[]+!![];ll<_+(+!![]);ll++) {
        lll=+!![];
        while(ll%++lll);
        // I've got this terrible pain in all the semicolons down my right hand side
        (ll==lll)&&push(ll);
      }
      forEach(alert);
    }
    // you're really not going to like this...
    return [!+[]+!+[]+!+[]+!+[]]+[!+[]+!+[]];
  }
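For anyone squinting at the tokens in that snippet: they all reduce to a handful of standard JavaScript coercions. A minimal decoder ring (this is plain language semantics, nothing specific to the book's code):

```javascript
// +[]  : unary plus coerces [] through "" to the number 0
console.log(+[]);         // 0

// !+[] : logical NOT of 0 gives true
console.log(!+[]);        // true

// !![] : arrays are truthy, so double negation gives true
console.log(!![]);        // true

// true + true coerces both booleans to 1, so this is 2
console.log(!+[] + !![]); // 2

// +!![] : unary plus on true gives 1
console.log(+!![]);       // 1
```

Every number in the function body is built by stacking these five primitives.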

madflame991 2 days ago 1 reply      
The title is incorrect; you need "!" and the only reason why I clicked on the article was that doing it without "!" would be a BIG deal. The title of the original article is "A Javascript journey with only six characters" and the topic has been posted/reposted and explained more times than I can count on HN
tempodox 2 days ago 3 replies      
This is an extreme demonstration of the validity of a delightfully snarky blog post by Robert Harper on how dynamic typing is actually static typing using a unitype:


A string is a Boolean is a number is a function, and braindead conversions can happen without anyone noticing. How does one keep their sanity using a language like that?
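For the curious, here are a few of the silent conversions in question, all valid, warning-free JavaScript:

```javascript
// + with a string operand concatenates: the number becomes a string
console.log(1 + "2");     // "12"

// * only works on numbers, so both strings become numbers
console.log("3" * "4");   // 12

// + on an array and an object coerces both to strings
console.log([] + {});     // "[object Object]"

// + on booleans coerces both to numbers
console.log(true + true); // 2

// loose equality coerces both sides before comparing
console.log("" == 0);     // true
```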

shp0ngle 2 days ago 2 replies      
Why do I feel like I have read this article a few years ago? I remember it, but it has 2016...
Centime 1 day ago 0 replies      
Without parentheses (requires 8 characters though):


JelteF 2 days ago 2 replies      
The article does not explain how it gets the {}, which is used to get the Object constructor string. Other than that it's very clear.
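A sketch of the likely mechanism (the article's exact derivation may differ): adding an object to an array coerces both operands to strings, which produces "[object Object]", the string the letters are then indexed out of.

```javascript
// Object-plus-array coerces both sides to strings
console.log([] + {});        // "[object Object]"

// Individual letters come from numeric indexing into that string
console.log(([] + {})[1]);   // "o"

// Coerced booleans provide more letters: !![] + [] is "true"
console.log((!![] + [])[0]); // "t"
```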
lell 2 days ago 0 replies      
Reminds me a bit of the quest for alphanumeric shell code.
novaleaf 2 days ago 4 replies      
Does anyone know a good sandboxing technology to execute user written javascript in a safe way? (like, on other user machines or on the server)? I have some ideas like "learn programming" that would benefit from this immensely.
sjclemmy 1 day ago 0 replies      
This is quite timely. I was looking at a library the other day which had IIFEs preceded by '+'. I wondered what the purpose was. Now I know!
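The trick, in brief: a leading unary operator forces the parser to treat `function` as an expression rather than a declaration, so the trailing call parentheses become legal. A small sketch:

```javascript
// A function *declaration* can't be invoked in place; this line
// would be a SyntaxError:
//   function () { return 42; }();

// A leading unary operator (+, !, -, ~) puts the parser in expression
// context, so the same text parses as a function expression and the
// trailing () becomes a call:
var result = +function () { return 42; }();
console.log(result); // 42 (unary + applied to the returned number)

// ! works the same way; it discards the return value and is one
// character shorter than wrapping the function in parentheses:
!function () { console.log("ran"); }();
```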
joshschreuder 1 day ago 0 replies      
I saw a similar video recently called Code Without Keywords which was quite interesting.


pvdebbe 1 day ago 0 replies      
I sighed with relief that the characters were not emoji.
fibo 1 day ago 0 replies      
Wow, JavaScript is also an esoteric Language.

I remember similar strange and interesting stuff in Perl, like the spaceship operator.

keyle 2 days ago 0 replies      
I knew about these languages but I've never understood how they were made. This is a fun, fantastic article! These articles make me excited about technology. Even bad ones.
CiPHPerCoder 1 day ago 1 reply      
> Javascript is a weird and wondeful language that lets us write some crazy code

Wondeful! A typo six words in.

amelius 2 days ago 0 replies      
I'm getting a browser security warning on the url on Android.
a_c 1 day ago 3 replies      
I don't see the point of a programming language allowing itself to brain-fuck its users (developers) for serious use
thanatropism 2 days ago 9 replies      
mooveprince 2 days ago 0 replies      
One more reason for the world to hate JavaScript ? :(
kyriakos 2 days ago 3 replies      
this is insane. someone should create a js obfuscator that converts human readable code to this.
eweise 1 day ago 0 replies      
So glad I don't write in a "weird and wondeful language that lets us write some crazy code that's still valid"
catscratch 2 days ago 2 replies      
This doesn't make me want to use JS. The power of JS is in two things: it's in every major browser, and it doesn't completely suck. JS syntax kind of sucks. The power of JS is that it's dynamic and lets you send functions around, but defining functions is much uglier than defining a lambda in Ruby:

-> {anything goes here}


->(a,b,c) {anything goes here}

The problem with Ruby is that you then have to .() the lambda vs. (), so that is more verbose than just calling the function.

If browsers were to embrace a language that was more Ruby-like and less clunky than JS, I'm sure I'd use it more.
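For what it's worth, ES2015 arrow functions (already landing in browsers around the time of this thread) close much of that syntax gap, and unlike Ruby's `.()` they are called plainly:

```javascript
// Ruby-ish lambda syntax; no special call operator needed
const shout = s => s.toUpperCase() + "!";
const add = (a, b, c) => a + b + c;

console.log(shout("hi"));  // "HI!"
console.log(add(1, 2, 3)); // 6
```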

Reflections of an Old Programmer bennorthrop.com
477 points by speckz  3 days ago   320 comments top 59
delinka 3 days ago 9 replies      
I'm a bit older than the author. Every time I feel like I'm "out of touch" with the hip new thing, I take a weekend to look into it. I tend to discover that the core principles are the same: this time someone has added another C to MVC; or they put their spin on an API for doing X; or you can tell they didn't learn from the previous solution and this new one misses the mark, but it'll be three years before anyone notices (because those with experience probably aren't touching it yet, and those without experience will discover the shortcomings in time).

When learning something new, I find that this group implemented with $NEW_THING in a completely different way than that group did an implementation with the exact same $NEW_THING. I have a harder time understanding how the project is organized than I do grokking $NEW_THING. And when I ask "why not $THAT_THING instead?" I get blank stares, and amazement that someone solved the problem a decade ago.

Sure, I've seen a few paradigm shifts, but I don't think I've seen anything Truly New in Quite Some Time. Lots of NIH; lots of not knowing the existing landscape.

All that said, I hope we find tools that work for people. Remix however you need to for your own edification. Share your work. Let others contribute. Maybe one day we'll stumble on some Holy Grail that helps us understand sooner, be more productive, and generally make the world a better place.

But nothing's gonna leave me behind until I'm dead.

carsongross 3 days ago 7 replies      
I get where the guy is coming from, I'm right there as an old guy.

On the other hand, I think there is a bit too much fatalism in the article. Sometimes the kids are being stupid, and they need to be told so.

The vast majority of web apps could be built in 1/10th the code with server-side rendering and intercooler.js. All this client-side crap is wasted when you are trying to get text from computer A into data-store B and back again. It's the front-end equivalent of the J2EE debacle, but with better logos and more attractive people.

And people are starting to wake up[1][2]. It's up to us old guys to show the way back to the original simplicity of the web, incorporating the good ideas that have shown up along the way as well as all the good ideas[3] that have been forgotten. Yes, we'll be called dinosaurs, out of touch, and worse.

Well so what? We're 40 now. And one of the great, shocking at first, but great, things about that age is you begin to really, truly stop giving a fuck what other people think.

Besides, what else are we going to do?

[1] - https://medium.com/@ericclemmons/javascript-fatigue-48d4011b...

[2] - https://hackernoon.com/how-it-feels-to-learn-javascript-in-2...

[3] - http://intercoolerjs.org/2016/01/18/rescuing-rest.html

bsenftner 3 days ago 5 replies      
It is plain and simple, Kids. I'm 52 - been programming professionally since the 70's when I started writing C code and getting paid for it in 5th grade. Our "profession" is writing glue code, and how it is done and what hoops are jumped through simply do not matter: all that matters is that the final shipping product, widget, or logical doodad works for the immediate marketing moment.

I speak from enviable experience: game studio owner at 17, member of original 3D graphics research community during 80's, operating system team for 3DO and original PlayStation, team or lead on 36+ entertainment software titles (games), digital artist, developer & analyst for 9 VFX heavy major release feature films, developer/owner of the neural net driven 3D Avatar Store, and currently working in machine intelligence and facial recognition.

Our profession is purposefully amateur night every single day, as practically no one does their computational homework to know the landscape of computational solutions to whatever they are trying to solve today. Calling us "computer scientists" is a painful joke. "Code Monkeys" is much more accurate. The profession is building stuff, and that stuff is disposable crap 99% of the time. That does not make it any less valuable, but it does make the mental attitude of 90% of our profession quite ridiculous.

Drop the attitude, write code freely knowing that it is disposable crap, and write tons of it. You'll get lazy and before you know it, you'll have boiled down whatever logic you do into a nice compact swiss army knife.

And the best part? Because you've stepped off the hype train, you'll have more confidence and you'll land that job anyway. If they insist or require you to learn and know some new framework: so what? You're getting paid to do the same simple crap over again, just more slowly with their required doodad. Get paid. Go home and do what you enjoy. This is all a huge joke anyway.

keithnz 3 days ago 2 replies      
I started programming when I was 7, I'm 45 next month :)

The one thing in the programming world that is almost 100% applicable to almost every article like this ( and many other topics ) is..... it depends.

I'm fortunate in that for most all my career I have spanned many technologies from embedded systems to the latest crazes on the web. Mostly what becomes redundant is language syntax and framework. If your programming career is largely centered around these then you become redundant pretty quick (or super valuable when critical systems are built with them then need maintenance forever ).

Frameworks come and go so if you spend a lot of time creating solutions that shuffle data from a DB to a screen then shuffle data back into a DB.... then a majority of your programming skills will become redundant relatively quickly. ( maybe a half life of 4 years? ). But often when you are doing this, the real skill is translating what people want into software solutions, which is a timeless skill that has to be built over a number of projects.

If you work in highly algorithmic areas, then not a lot of your skills become redundant. Though you may find libraries evolve that solve problems that you had to painfully do from scratch. However that deep knowledge is important.

Design, the more complex a system is to engineer (that isn't provided to you via a framework), the more likely you will have skills that won't go redundant. Design knowledge is semi timeless. My books on cgi programming circa the mid nineties are next to useless, but my GOF Design Patterns book is still full of knowledge that anyone should still know. OOSC by Bertrand Meyer is still full of relevant good ideas. My books on functional programming from the 80s are great. The Actor model, which has its history in the 70s, is getting appreciated by the cool kids using Elixir/Erlang.

Skills in debugging are often timeless, not sure there's any technique I'd not use anymore. ( though putting logic probes on all the data and address lines of a CPU to find that the CPU has a bug in its interrupt handling is not often needed now )

thesmallestcat 3 days ago 8 replies      
Hm. The author works for a web/mobile development agency and uses React Native and GWT as examples of the new and the old, respectively. I hope it isn't news to anybody here that this sort of work is a race to the bottom and has such turnover precisely because it's mostly being done by junior developers. Linux systems programming arcana, for instance, doesn't disintegrate so quickly as the ten years the author cites. That's why, after getting into the industry as a frontend web dev, I will only do that sort of work now as a last resort to pay the bills (the other reason is because it's easy/boring as hell, apart from the greater opportunity for mentoring). Doing that sort of work now feels like I am sabotaging my career.
mafribe 3 days ago 1 reply      
If somebody had found himself in Edinburgh in 1986 and bumped into a tall gentleman called Robin, who was a bit familiar with this new-fangled thing called computers, and had asked Robin what kind of programming language one should learn to use these computer thingies, what would Robin have said? Not sure, but maybe something along the lines of "well ... there are many interesting languages, and different languages are suitable for different purposes. But if you are interested, I'm dabbling in programming language design myself. Together with my students I've been developing a language that we call ML, maybe you find it interesting. With my young colleagues Mads and Robert, I'm writing a little book on ML, do you want to have a look at the draft?"

Maybe such a person would have chosen to learn ML as first programming language. If this person had then gone on to work in programming for 3 decades, and if you'd asked this person 30 years later, i.e. today, what's new in programming languages since ML, what might have been his answer?

Maybe something along the lines of: "To a good first approximation, there are three core novelties in mainstream sequential languages that are not in ML:

- Higher kinded types (Scala, Haskell).

- Monadic control of effects (Haskell).

- Affine types for unique ownership (Rust)."

Could I be that somebody?

dkarapetyan 3 days ago 5 replies      
This statement is false

> Half of what a programmer knows will be useless in 10 years.

and the rest of the article seems to be based on it so it negates much of what is said.

Foundational knowledge does not decay. Knowing how to estimate the scalability of a given design never gets old. Knowing fundamental concurrency concepts never gets old. Knowing the fundamentals of logic programming and how backtracking works never gets old. Knowing how to set up an allocation problem as a mixed-integer program never gets old.

In short, there are many things that never get old. What does get old is the latest fad and trend. So ignore the fads and trends and learn the fundamentals.

smoyer 3 days ago 1 reply      
Apparently old is a matter of perspective ... To me not quite 40 is still a young'in.

I'm over fifty and just got back from presenting at a major conference. I've managed to stay current through 35 years of embedded systems design (hardware and software) as well as a stretch of software-only business. It's really not that hard if you understand your job is to continually be learning. I must be doing it right because often those I'm teaching are half my age.

As an aside, I've done the management track and moved back to the technical track when I found it unfulfilling.

iamleppert 3 days ago 1 reply      
You don't need to learn React or Angular or another framework. Spend your time getting really good in your preferred stack of choice. That could be a framework or something of your own creation. Do not go to work for a company that only wants to hire people familiar with a specific framework. It's a huge red flag. The work will be boring and the team mediocre. More often than not there will also be culture issues.

Great companies who have interesting projects will want to see what you've built in the past; the technology is just a tool. They will trust you to use the right tools for the job, and will respect you enough to let you pick those which you prefer.

For legacy systems, it's helpful to have some experience but it's not like you won't be able to be effective if you're good given sufficient ramp up time.

In my experience it's far better to hire the smart, motivated engineer who can actually get stuff done and has created high quality software before than someone who is an expert in a specific framework.

Also I avoid going to tech conferences about web stuff, unless it's a legitimately new technology. A new way to organize your code and conventions are not new technology, it's just some guy's opinionated way of doing things. And most of the talks are less about conveying useful information that will help you and more about the speaker's ego and vanity.

vacri 3 days ago 0 replies      
> The doctor at 40 doesn't seem to be worried about discovering that all his knowledge of the vascular system is about to evaporate in favor of some new organizing theory. The same goes for the lawyer, the plumber, the accountant, or the english teacher.

And the same is true of programming. There are still variables, arrays, syntax errors, IDEs and so forth - the underlying algorithms don't change that much. Lawyers and accountants have to keep up to date to keep their credentials, and doctors almost always do (but not actually always - just like some programmers don't update their skills). Fads come and go in teaching as well.

I know less about plumbers, but there are few white-collar professionals where you don't have to keep on top of things throughout your career. From architects to engineers to social workers to pilots to biologists to meteorologists to managers, things change in your profession and you need to adapt. It's just that usually those changes don't have the ridiculous levels of hype and fanfare that they have in our bubble (management being an exception here as well, lest I draw the wrath of a six-sigma black belt!)

oldprogrammer52 3 days ago 0 replies      
One of the consequences of this wide-spread ageism is the amount of unnecessary, ill-conceived, and often dangerous wheel-reinvention that 20-something hipster programmers get away with.

Exhibit A would be NoSQL. Little more than a rehash of the hierarchical and network (graph/pointer) databases popular in the 1960s before the ascent of relational databases, these systems enjoy increasing popularity despite few, if any, advantages over relational databases besides allowing 20-something hipster programmers to avoid learning SQL and the ins-and-outs of a particular relational database (like PostgreSQL) and allowing VC-backed tech companies to avoid paying senior developers who already possess that knowledge what they're actually worth.

If these new data stores were at last as reliable as the older relational databases they are supplanting, it wouldn't be so bad. But they aren't. Virtually all of them have been shown to be much less reliable and much more prone to data loss with MongoDB, one of the trendiest, also being one of the worst[1].

And these systems aren't even really new. They only appear that way to young developers with no sense of history. IBM's IMS, for example, is now 50-years-old, yet it has every bit as much a right to the label "NoSQL" as MongoDB does--and amusingly, it's even categorized as such on Wikipedia.[2]

1) https://aphyr.com/posts/322-call-me-maybe-mongodb-stale-read...

2) https://en.wikipedia.org/wiki/IBM_Information_Management_Sys...

ams6110 3 days ago 2 replies      
To me, it seems a bit like JSPs of 15 years ago, with all the logic in the presentation code, but I'm "old", so I assume I just don't "get it".

No, you get it. It's the people who get excited about stuff we tried and abandoned two decades ago that don't get it.

minipci1321 3 days ago 0 replies      
Very surprised and, honestly, even shocked. Not sure what to think about this post, and I am a good deal older and have been working longer too. Maybe learning is hard for him? He has 2 degrees.

In his sig he says he likes to write about making decisions, but not a word about intuition, how it builds up in you over the years from that seemingly pointless going round in circles. Very little about how we improve in relationships with people (going from 0% skill for many of us), and accomplish even more by getting others to pull in the right direction.

I have never wanted to do anything else than what I do. Differently, yes, but no farming / opening a restaurant or an art gallery. Maybe that is the real culprit?

Last thing, knowledge does not "afford increased measure of respect and compensation". Adding value, helping people, and solving their problem does. If you have this trail on your CV, maybe that long list of the technologies is less needed.

transfire 3 days ago 1 reply      
You know, if truth be told, we really haven't come very far. You'd probably be surprised at just how well a modern COBOL system can operate. (http://blog.hackerrank.com/the-inevitable-return-of-cobol/)

In fact, in many ways we've made things worse because not only does the sand keep shifting, there is now way too much sand. Young people come into the field and they want to make their mark. So we are constantly going through "next big thing" phases, some big like OOP, some smaller like React, only later to realize that what seemed so very interesting was really a lot of navel gazing and didn't really matter that much. It was just a choice, among many.

I can only hope one day some breakthrough in C.S. will get us past this "Cambrian Explosion" period and things will finally start to settle down. But I am not holding my breath. Instead I am learning Forth ;)

autognosis 3 days ago 0 replies      
The fundamentals of computers have not changed all that much. Every assembly language I've learned is still valid, and their respective architectures are still widely deployed.

I'd suggest not building a house on sand, and learn the fundamentals of how computers and programming language works. Don't learn anything closed-source. It isn't worth the brainspace.

osullivj 3 days ago 2 replies      
Very sad that the author aspires to be Martin Fowler. Fowler is an adept populariser of other people's ideas, and he does that as marketing effort for Thought Works. AFAIK he has not originated any innovation over the last twenty years, whether it be patterns, enterprise architecture, microservices, generators, refactoring or agile. Basically he's a corporate shill trolling round the conference circuit drumming up consulting gigs for Thought Works by banging on about the latest trend. If you want to aspire to be someone in the software world how about Brad Cox, Steve Wozniak, Guido, Carmack, or Kay, Ingalls and Goldberg?
dwarman 1 day ago 0 replies      
This comes up approximately annually. I used to answer at length. But now at 69 brevity seems more productive. I am probably nearing the end of my accidental, unplanned drunkard's-walk career, one that started in 1967 when I was dropped (an unskilled 19 yr old college hippie drop-out) into the inside (literally) of a mainframe and told to "make it work". I subsequently wandered through probably every computing field, and lately do audio DSP work inside game consoles. Inside the inside of a current SOC inside a black box.

My conclusion: there is no formula for staying relevant. Perhaps an understanding of the roots and rapid skill acquisition, but beyond that every second I spent learning a new framework just because has been a second wasted. By the time I was somewhere it was possibly relevant, it was already dead and replaced, or I was too far ahead of the time and had to write my own.

Further, and sadly, after 50 some years in the biz, I still understand the insides of current SOC chips, and I shouldn't - no progress has been made in practical computing theory at all. Lots of embellishment, lots of band-aids, nothing really different. Otherwise I would not be able to do this job.

Yes, really, brief this time. $0.01 instead of a full $.

d23 3 days ago 0 replies      
At 27, I still feel old in the same regard as a lot of the ways the author is talking about. A lot of things I'm seeing reeks of being a fad. I'd rather avoid naming any technology or framework, but my instinct has been to avoid planting my seeds in soil that's churned up every 6 months and keep an eye toward that which has been solid for a decade and is likely to continue. I don't mind learning a new language -- there are a couple I'm hopeful toward and think could be long-term winners. But I'm not about to waste grey matter on things I suspect will be obsolete before I can even reach mastery.
mml 3 days ago 1 reply      
This is the first time in history there are a huge number of "old", nay, wizened, programmers around in comparison to young ones.

Make of that what you will.

As a 40+ programmer, who knows what becomes of those who move into management, I am seeing lots of my cohort falling back into actually making things, as a way to preserve our hard-won value.

This makes me happy. And you whippersnappers better watch yourselves ;)

jrapdx3 2 days ago 0 replies      
A good article that makes valid points about the difficulties of keeping up in rapidly-changing fields. Since I'm "old", and have a foot in the worlds of medicine and programming, it's apparent to me there's not that much difference in the "aging curve" in these occupations.

The parallels include the explosion of new knowledge, or at least variations on the old knowledge, that a practitioner needs to keep up with. In programming it's languages and frameworks, in medicine it's discoveries (basic science), new drugs and techniques, and aspects of the regulatory environment. In either case a few years out of school/training it becomes daunting to keep up.

I should add here a particular peeve, the proliferation of abbreviations and acronyms is way out of control. It's nearly impossible to read an article without encountering an avalanche of incomprehensible ABBRs. What's worse, the same ABBR is often used to mean entirely different things one article to the next. Cross-field usage is a naturally incongruous extension of the confusion, though at times it's humorous.

What the article doesn't emphasize is the blizzard of details to keep up with is just one part of the experience. As years in the trenches becomes decades the value of "time in grade" becomes evident. The ability to size up the demands of a complex problem, to have a clear idea of where to enter the path of its management, and calm assurance growing out of having been down the road before are all won only by virtue of real experience.

Having done what I have for 40 years, it took me only 33 years to realize I didn't know what I was doing, and that's when I got really good at it. Therein is an essential wisdom that time and effort alone confer, and can't be gained in any other way.

edpichler 2 days ago 0 replies      
A guy from Oracle advised me in a chat we had, in the early days of Google Talk. His name was Matthias Weendorf (I don't forget people I am grateful to) and the advice was something like this: "Study software engineering, it's more difficult to change than technology, and you will use it for all your life".

I was lucky to get this advice 10 years ago. I did a Master's degree in this area and my life changed: my software has high quality, evolves fast, and I sleep well every night with all the "chaos" under control. I can change languages and development processes fast and painlessly. I think the point is: you have to understand the abstract concepts of technology rather than languages or frameworks. As an example, if you understand object orientation, new languages will appear and disappear while your abstract concept will still be completely useful and applicable. If you study software engineering, it doesn't matter whether you use Scrum or RUP, you will get it fast because you already have the base.

bungie4 3 days ago 0 replies      
Mid 50's here. I've been coding professionally for about 30 years. All of this is pretty much spot on.

More so, the ability (and desire) to learn the latest/greatest has waned. I fear that we only have the ability to jam so much new knowledge into our heads. At some point, we must discard the old to make room for the new.

I'm just afraid I'm gonna delete the ability to control my bowels.

splicer 3 days ago 0 replies      
I got a bug report from a 76 year old developer the other day.
artellectual 3 days ago 0 replies      
I don't think experience is replaceable. There are certain things in software that don't change.

- solving the problem at hand.

- solving it in the quickest time possible

- the solution to the problem should not introduce new problems

I think for an experienced programmer such as yourself, the knowledge that you "lose" or that "decays" doesn't actually become useless. I think it will serve you in making better future decisions, like what you are saying now. Recognizing whether things are 'fads' or 'foundational ideas' is a big asset for an experienced programmer.

The same goes for doctors. Their tools are changing rapidly but the underlying concepts are still the same. A doctor may not know about all the new tools, but she understands how a heart works, and because of that she can gauge whether a new tool is just a 'fad' or will change how things are done fundamentally.

I think every career path has this in some degree or another.

keefe 3 days ago 0 replies      
I think it's time for this meme about engineering being a young man's game to die. There are plenty of events in the Olympics with good competitors in their 30s and early 40s. The point being that we certainly haven't aged out of any physical requirements once we hit our 40s. Sure, I have a little less energy than I did in my 20s, but overall I am more productive and make fewer mistakes.

I do agree that keeping skill levels up across a long career is difficult. Maybe these memes come up because it's a convenient excuse not to put the effort in? It's very easy to get complacent if you are smart, get things done and have a comfortable home life. We have to train to get that brass ring, to stretch the analogy ;)

progx 3 days ago 0 replies      
"The doctor at 40 doesn't seem to be worried about discovering that all his knowledge of the vascular system is about to evaporate in favor of some new organizing theory. The same goes for the lawyer, the plumber, the accountant, or the english teacher."

This is where the thinking goes in the wrong direction. All of these jobs need basic knowledge (a developer needs it too), and all of these jobs need tools or regulations to do the work.

A doctor needs knowledge about new medicines and new instruments. A good teacher doesn't teach the same way for 40 years. ...

React or whatever are tools. And yes, most tools reinvent the wheel, and this is not development-specific; it's true for many jobs.

kkanojia 3 days ago 0 replies      
A lot of old guys (40 is old, eh!) I meet are in managerial/advisory roles, and even though they won't know the underlying details of the framework, it doesn't take them much time to understand it, because there is always something similar from their time.

The time I spent mastering Adobe Flex, Java's Struts framework, GWT, and the like could seem like wasted time. But in the larger scheme of things it just made me smarter. I know what worked for them and what didn't, and that helps me understand future frameworks better.

zelos 3 days ago 1 reply      
"...invest most in knowledge that is durable. My energy is better spent accumulating knowledge that has a longer half-life - algorithms, application security, performance optimization, and architecture"

That's the key quote, I think. There seems to be far too much focus on 'programming knowledge' being about new frameworks, languages etc. That's just ephemera. Picking up React Native takes what, a couple of weeks? The basics are still the same, and still far more important.

dcw303 3 days ago 0 replies      
I'm about to hit 40 as well, and I completely identify with the pressure to keep up to date with new languages and frameworks, with the fact that I've lost memory of many things I haven't used recently, and that despite the proliferation of the new new thing, there really aren't that many new ideas out there.

But the thing is, I really like that I must always be learning. I just have that kind of brain that is attracted to learning new things, so for me this career has always been a natural fit.

In the last few years I learnt Meteor.js and built an issue tracker from scratch; I made several attempts at gaming projects using C++, C#, Swift, and others; I played Microcorruption and Cryptopals and Starfighter to learn a bunch about assembly, reverse engineering, crypto and security; I learnt Go and built a compiler with it; and right now my attention is moving towards lua to do some pico8 games. I did all this on the side of my boring corporate java developer job, and for no other reason than I wanted to learn new things. (OK, maybe I had big dreams of a startup with the issue tracker, but the others were purely for fun.)

I'm probably never going to be well known for any of those things, and I really haven't built up the chops to be considered an expert in any of them. But I'm content being a dilettante. Perhaps one day I'll get exhausted of exploring new things, but until then it's just fun to just dabble in whatever takes my fancy.

ensiferum 2 days ago 0 replies      
This is why you let the javascript fanboys come and go with their "angularjs". You can focus on tools that do not change so much. Just to name a few: C++ (updates slowly), C, POSIX, Qt, and many other native technologies that have a good "shelf life" of at least 5-10 years with only occasional updates.

Furthermore, the core of computer science has an even better shelf life. It basically never expires.

Personally, I split things in two: the stuff I need to learn right now to get my current work done, and the stuff that matters in the long term. The former can change quickly and I don't fuss about it. If it's buzzwords and the latest tech gimmicks, or just a new technology I haven't used before, I learn as much as I need to and as much as sticks naturally over the course of my work, but I don't always actively try to retain it.

The latter part however is the real "gold". Once you know the core computer sciency stuff you can always build on that later on using whatever tools and technologies.

patkai 3 days ago 0 replies      
I'm also a bit older and have had a 10-year academic career followed by 10 years of software contracting. The biggest difference between academia and current web development (I'm not sure about "other" software development like embedded code or systems languages) is that in academia:

0. You went to school and did some homework (not that it's all so useful, but at least you have a background in what you are doing, and a common denominator / language with your peers).

1. Before you start something new, you do a thorough review, or read a lot of review papers, so you do know what was done before and why.

2. You do get mentorship, even if PhD students / postdocs often complain about the lack of it. But directly or indirectly you do test your ideas on people who know more and who have been there longer.

3. You start to send preliminary workshop / conference papers for review, and also funding applications.

4. At this point you at least know why SQL (or whatever else) is there, and in some cases you might even learn some humility.

I guess my conclusions are trivial. Many of us have amazing technical skills, but our education and experience are not on par. The result is a lot of wasted time and quality.

staticelf 3 days ago 0 replies      
I understand the analogy with the doctor but I don't think it is true at all. I don't think programming is different from most fields actually.

Perhaps concepts in programming change more rapidly than in other fields, but technology advances there too. For example, I've heard dentists discuss new tech and methods they use as if it were a new web framework with a different way of thinking.

Doctors need to learn about new methodologies all the time since science and technology discover new shit all the time and develop new methods to finding and fighting diseases for example. I think most people would be extremely disappointed if they visited a doctor that gave them medical advice that was 40-50 years old and wasn't updated with more modern medicine.

All technology is a means to an end. You don't have to learn the new tools to complete the job if you can have the same outcome and I think many times people are so afraid to become less relevant that they learn stuff they don't actually need.

If you really benefit from learning something, that's when you should learn it and use it.

agentultra 3 days ago 0 replies      
I find that this pace is a symptom of the Javascript culture of popularity. It is a hallmark achievement in the career of a Javascript developer to be the maintainer of a popular library or framework and monetize their popularity by way of training videos, talks, books, and buy-in from companies building upon their work.

It's not that frightening to me, a mid-30's developer, at this point. I find the fundamentals are more important than the fads and it's relatively easy for me at this point to separate the wheat from the chaff. Is Redux or the Elm architecture good? Yes -- it's a left-fold over a state tree; great! I want that.
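
The "left-fold over a state tree" point can be made concrete in a few lines. This is a language-agnostic sketch in Python rather than Redux's actual JavaScript API, and the action names are invented for illustration:

```python
from functools import reduce

# A "reducer" is a pure function (state, action) -> new_state,
# which is exactly one step of a left-fold.
def counter(state, action):
    if action == "INCREMENT":
        return state + 1
    if action == "DECREMENT":
        return state - 1
    return state  # unknown actions leave the state unchanged

# The action history is just a list; current state is the fold of
# the reducer over that history, starting from an initial state.
actions = ["INCREMENT", "INCREMENT", "DECREMENT", "INCREMENT"]
final_state = reduce(counter, actions, 0)
print(final_state)  # 2
```

One consequence of this framing is that replaying the same action list always reproduces the same state, which is what makes time-travel debugging in these architectures possible.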

Are new things coming out constantly? Yes. Some of them are incremental improvements. That's a good feature to have. It means there are a horde of passionate people constantly improving the tooling and libraries available. I wish some of my preferred languages received even a fraction of the attention that JS gets.

We live in interesting times.

eikenberry 2 days ago 0 replies      
> To me, it seems a bit like JSPs of 15 years ago, with all the logic in the presentation code, but I'm "old", so I assume I just don't "get it".

He seems to ignore that his experience just paid off. It is not that his knowledge of JSP is out of date and not useful; it is that back in the day he learned the anti-pattern and can apply that knowledge now. Programming has the same long-term advantages as any profession. Most of what I know after 20-some years is not any specific tech, but ways of doing things, recognizing good and bad habits, patterns, etc. The specific techs come and go, but the real knowledge transcends them all and builds on itself. His 3-stages graph shouldn't be logarithmic but exponential.

rb808 3 days ago 0 replies      
I recently went for a C++ job which I hadn't done for 10 years. Most of the questions I was asked were the same ones we had in interviews in the 90s. It actually felt really nice, I just wish there were more good C++ roles around - would be nice to live in a world that doesn't radically change every few years.
szines 3 days ago 0 replies      
Nice article. Thank you. We should never stop learning...

Interestingly, I also found this piece, which is about new frontend frameworks: https://medium.com/@edemkumodzi/how-to-choose-a-javascript-f...

About how to choose a framework... it suggests that if you're already an experienced dev who prefers OO patterns and believes in serious computer science, you should use Ember.js. If you are a designer, Angular is good for you. However, if you are young and don't have any experience, go with React, because it is easy to learn... like PHP back in the day... I'm afraid React will be the new PHP, because we will see a full generation growing up mixing logic with view, and they will follow these kinds of patterns... :)

dep_b 3 days ago 0 replies      
I don't know. I'm in this still somewhat hot new thing called mobile, and now I'm supposed to understand how to debug C code and all that "old bullshit". It even drops me into a view sometimes that's straight from my C64 assembler cartridge, full of labeled MOV and LDA instructions and all that stuff. I really wish I had dug a bit deeper back then, instead of just writing adventures in BASIC!

I don't think knowing how a computer actually works will ever go out of fashion. Now there's the Phalcon framework for PHP, for example, full of speedy functions written in C by smart people who actually knew what was happening beyond the stuff they typed into their .php files.

mti27 2 days ago 0 replies      
"And then one day you find, ten years got behind you. No one told you when to run, you missed the starting gun..."

To the author: Hang in there, man! You're just feeling the time crunch, now that you have kids and other responsibilities. Based on your age, it would have been the mid-1990s when your professional career started. Back then, the economy was a lot better and outsourcing hadn't yet taken over at large companies. It's a little more dog-eat-dog economically, but your brain probably still works fine. Just take a breath and keep going.

markbnj 3 days ago 0 replies      
As a working developer and SRE at 55 years of age I can't help being just a little amused at the author. I guess you really _are_ as old as you feel. The points about the competence cycle over the span of a career are dead on, of course.
muzster 3 days ago 0 replies      
I remember those novice days with fond memories. I've observed, in the twilight hours or when I'm playing with my kids, that I am attracted to things that make me feel like a novice. However, this is not compatible with my daytime job, where my paid expertise is often required. sigh

It would be interesting to see the graph of the career stages with happiness overlaid.

Source: http://www.bennorthrop.com/Essays/2016/career-stages-program...

whybroke 3 days ago 0 replies      
We work in a surreal field where knowing a bit of Node.js and nothing else is considered superior to knowing a bit of Node.js and a lot of .NET.

Obviously you may substitute any fashionable/unfashionable language pair in the above.

prewett 3 days ago 0 replies      
I make a point of not learning the new framework du jour. (Back in The Day it was the next UI framework Microsoft was putting out.) If I keep hearing about it for a few years, I figure it might be worth looking into. I started writing new sysadmin scripts in Python instead of Perl after I kept hearing about it, for instance. Other than that, I tend to learn on a need-to-know basis. I feel like that has led to little churn in my knowledge. But then, I also try to avoid working in areas that have high churn, which has led to my experience being in areas of low churn.
rmason 3 days ago 1 reply      
Only in two professions, pro athletics and computer programming, is forty considered old.
partycoder 3 days ago 0 replies      
Well, first of all, new shiny things are not really new at all. The principles behind them have been around for decades.

Learning 30 different imperative languages (c, pascal, ada and descendants) might not add as much value as learning an imperative language, a functional language, a logic programming language, a language emphasizing concurrency (go, erlang), etc... meaning, learning paradigms and high level design constructs not syntax.

Try to stay in touch with new paradigms, instead of just new applications of them.

sbt 3 days ago 0 replies      
The second piece of advice, investing in durable skills, is key. In addition to what the author mentions, I would point out that more change happens higher in the stack. There is relatively little change at the x86, C, and operating-system level, or in entrenched protocols. But once you start getting into the higher-level languages, and in particular the web, the churn is much greater. So personally I'm trying to stay away from those higher levels.
euske 3 days ago 0 replies      
I recently found that every programmer has to discover what it is like to create a new exciting thing and watch it fade into obscurity.

i.e. Life is all about reinventing your own wheel.

LeanderK 3 days ago 0 replies      
As a CS student, I cannot imagine going into a profession that you can just learn at some university and then simply work in. To me it seems rather absurd that you could ever stop learning. You just have to accept that there is always something new to learn and something you know going obsolete. That's the way life works, at least in the view of a CS student.
oldmanjay 2 days ago 0 replies      
Reading over the comments here makes me want to hug everyone and carefully explain that complaining that you want to dedicate less time to your craft and still get the outsized rewards you feel you deserve just for being old isn't going to convince anyone to hire old programmers.
justinhj 3 days ago 0 replies      
I'm 45. Feels like there are 1000 directions I could go to improve my skills, from soft skills to different programming models and industries. As long as someone will pay me to, I will program for money, and when they stop paying me I'll keep on learning and coding for fun anyway.
patsplat 3 days ago 0 replies      
As an "old" programmer myself am a bit disappointed in the takeaway that jsx is a templating language.

State management is the more important topic and the React tool chain has some great options for addressing it.

flamelover 2 days ago 0 replies      
So get a copy of SICP; there is almost nothing new to you. (Yes, I am going to start a fire :), bite me honey)
tempodox 3 days ago 0 replies      
I roughly concur with the OP's thoughts on the matter. Which makes programming more than just a profession to me. It is, if you will, a way of life.
m3kw9 3 days ago 0 replies      
Depends where you work; some places will help you gradually learn new stuff. But if you are a contractor, you need to learn to keep up. That's why they are paid more.
samfisher83 3 days ago 0 replies      
Only in tech would late 30s be considered old.
nirav72 3 days ago 0 replies      
If this guy thinks he's old at just shy of 40... I must be really old at almost 45.
br3w5 3 days ago 0 replies      
Is this a Freudian slip? "feeling apart of a community of technical"
DanielBMarkham 3 days ago 1 reply      
I mostly agree with the author. The only quibble I might have is with this: "We realize that it'll require real effort to just maintain our level of proficiency - and without that effort, we could be worse at our jobs in 5 years than we are today."

If by "worse", you mean more forgetful of the details of latest fads? Sure. But definitely not less able to put together solutions (Not if you've been spending your time doing that, of course)

When you're a kid and fresh into programming, everything you pick up has some magical power to do all sorts of awesome and cool stuff you've never done before. It's a grand adventure and you're just collecting all the trinkets you can on the way there. You look to other programmers to see which ones have the most potential. Whatever job comes along, you've already got the solution in your toolkit.

Over time you begin to realize that many, many problems have been solved hundreds of times. You note that there is an ecosystem around tools and frameworks, and as a developer? You are a market for lots of people who want you to use their stuff. That there's quite a bit of social signaling going on around which languages and tools people use. I'll never forget the first time I heard somebody say about another programmer "He's a nice guy, but he's just a VB programmer."

Actually he was one of the best programmers I knew at the time. He programmed in many different languages. It was just that for the work he was doing, VB was the right tool. But that's not the way it looked to the cool kids.

Know what's sad? It's sad when you look back 10 or 20 years and remember a ton of effort and pain you went through to fuck around with WhizBang 4.0 only to see it replaced by CoolStuff 0.5 -- and then you realize that CoolStuff really wasn't all that much of an improvement. And then you realize that CoolStuff is no longer cool. And then you think of the hundreds of thousands of manhours coders spent mastering all of that and comparing notes with each other. Looking down on those poor folks who never made the switch. Makes you kinda feel like an asshole.

I think you lose a lot of detail recovery ability as you get older, no doubt. I keep very little implementation detail active in my memory and only dig it back out as needed. But we are communicating on this wonderful little forum that, last I checked, was built using html tables! Yikes! And using a language that's a derivative of LISP! Yet somehow the world keeps spinning around.

I have no doubt that as you finally smarten up and focus on the important stuff that you will appear to other, perhaps younger programmers as losing it. I just don't think they know what the hell they're talking about.


abritinthebay 3 days ago 2 replies      
Dunning-Kruger right here folks.
EpicEng 3 days ago 4 replies      
Please. Reduction to absurdity isn't a valid argument. Are those folks working at large scales? Are they tuning DB's for applications which have to handle hundreds of thousands or millions of transactions per second? I don't imagine you actually know what you're talking about here.
Artificial Intelligence Lecture Videos mit.edu
518 points by BucketSort  1 day ago   46 comments top 16
jeyoor 1 day ago 2 replies      
On just a quick glance, the breadth of the topics covered here is stunning.

I also liked this quote from Lecture 23.

> A lot of times we ... confuse value with complexity.

> And many of the things that were the simplest in this subject are actually the most powerful.

> So be careful about confusing simplicity with triviality and thinking that something can't be important unless it's complicated and deeply mathematical.

kennethfriedman 1 day ago 1 reply      
Professor Winston (the lecturer in these videos) also teaches a higher-level, seminar-based AI reading class called "The Human Intelligence Enterprise".

It too, is an incredible class. Here's the schedule & linked papers from last semester: https://courses.csail.mit.edu/6.803/schedule.html

(Disclaimer: Professor Winston is my current advisor)

todd8 1 day ago 2 replies      
I took this class given by Prof. Patrick Winston a while back, 43 or 44 years ago. I liked it so much back then, it served me well in grad school and for many years after. Next week I'm visiting with an AI company on behalf of potential investors.
icpmacdo 1 day ago 1 reply      
Lecture 15 is really, really worth watching, even if you're not familiar with the previous lectures.


antirez 1 day ago 0 replies      
If you are interested in zooming into neural networks, at Coursera Hinton himself will teach you a great deal of things: https://www.coursera.org/learn/neural-networks
zardo 1 day ago 0 replies      
This is a terrific class. I also recommend: https://ocw.mit.edu/courses/electrical-engineering-and-compu...
inovica 1 day ago 3 replies      
I have a side project to monitor the homepage of every domain, and I want to detect the type of site: ecommerce, blog, forum, etc. I've just started on it, and ultimately I want to be able to automatically extract data, such as product information from ecommerce sites. Would these videos help here, or is there anywhere else someone would recommend? I know I sound very naive here... and I am, but there might be someone here who can give me a steer. I've started looking at AI/ML... or whatever it's now called, and I'm getting a bit confused!
matmatmttam 1 day ago 3 replies      
Anyone have lecture 20? - "Lecture 20, which focuses on the AI business, is not available."
o2l 1 day ago 1 reply      
Does anyone think these lectures would be a good place to start for someone with web development experience (LAMP + JS) and zero AI & ML knowledge?

If yes, what is missing from these lectures (related to AI, ML, or Deep Learning) that has been discovered or developed recently and should be learned at the start?

monfrere 1 day ago 1 reply      
I took this class a couple of years ago. It's outdated and overrated imho, with nowhere near enough mathematical rigor to be useful. For example, the discussion of support vector machines (and classification in general, if I recall correctly) was limited to two dimensions so that you didn't need linear algebra. The class also spends a lot of time on problems like path finding that you should be able to solve with your standard CS algorithms toolkit or just "logic", rather than needing to reach for anything that deserves the name "artificial intelligence" (at least today). Prof. Winston furthermore spends way too much time on vague truisms that may sum up or organize what's in his brain but aren't helpful to students. ("What if the answer doesn't depend on the data at all? Then you're trying to build a cake without flour.")

I hate to dismiss something as ambitious as this course and just tell people to blindly follow trends, but my honest advice would be to just skim these notes if you're interested and go take a normal machine learning course instead.

msie 1 day ago 0 replies      
Wow! Taught by Patrick Winston, he was the author of my AI textbook many moons ago! You can still meet many of the legends of Computing Science...
sagivo 1 day ago 0 replies      
you can find all of the videos in this youtube playlist - https://www.youtube.com/watch?v=TjZBTDzGeGg&list=PLUl4u3cNGP...
zelon88 1 day ago 0 replies      
Amazing! I hope you don't mind but I cross-posted to /r/artificial. Thanks for the videos! This will make sure I get nothing done tonight, but I should be smarter in the morning!
sotojuan 1 day ago 0 replies      
One day I hope to have enough time to go through a majority of OCW!
sudhirkhanger 1 day ago 0 replies      
What are requirements for this course?
Epiphany-V: A 1024-core 64-bit RISC processor parallella.org
424 points by ivank  4 days ago   230 comments top 40
Coffeewine 4 days ago 4 replies      
This is fascinating:

The Epiphany-V was designed using a completely automated flow to translate Verilog RTL source code to a tapeout-ready GDS, demonstrating the feasibility of a 16nm silicon compiler. The amount of open source code in the chip implementation flow should be close to 100%, but we were forbidden by our EDA vendor to release the code. All non-proprietary RTL code was developed and released continuously throughout the project as part of the OH! open source hardware library.[20] The Epiphany-V likely represents the first example of a commercial project using a transparent development model pre-tapeout.

thesz 1 hour ago 0 replies      

 Cray had always resisted the massively parallel solution to high-speed computing, offering a variety of reasons that it would never work as well as one very fast processor. He famously quipped "If you were plowing a field, which would you rather use: Two strong oxen or 1024 chickens?"
I cannot see how this thing can be programmed efficiently (to at least 70% of computing capacity, as most vector machines can be programmed for).

adapteva 4 days ago 17 replies      
I am here, if anyone has questions. AMA!

Andreas
valarauca1 4 days ago 3 replies      
Two things immediately jump out

 Custom ISA extensions for deep learning, communication, and cryptography
 DARPA/MTO autonomous drones
 Cognitive radio
The radar geeks are gonna love to get their hands on a ~250GFLOP, 4-watt processor.

zelon88 4 days ago 3 replies      
Did I read the specs wrong, or are they claiming a 12x-15x performance improvement over the Ivy Bridge Xeon in GFLOPS/watt, in a <2W package? http://www.adapteva.com/wp-content/uploads/2013/06/hpec12_ol...
Tistel 4 days ago 2 replies      
I wonder if the Erlang/BEAM VM could take advantage of it. Erlang would be a beast on it. If any of the pure functional languages get running on it (for easy parallelism), watch out. Nice work!
mechagodzilla 4 days ago 1 reply      
The linked paper mentions a 500 MHz operating frequency, as well as mentioning a completely automated RTL-to-GDS flow. 500 MHz seems extraordinarily slow for a 16nm chip - was this just an explicit decision to take whatever the tools would give you so as to minimize back-end PD work? Also, given the performance target (high flops/w), how much effort did you spend on power optimization?
cordite 4 days ago 1 reply      
But can I run Erlang on it?
sargun 4 days ago 0 replies      
Would anyone be interested in an Epiphany dedicated servers a la Rasberry Pi collocation (https://www.pcextreme.com/colocation/raspberry-pi)?

I've always wanted to play with these units, but buying one doesn't make a lot of sense for me (where would I put it?). I would be super interested in making them accessible to folks.

weatherlight 4 days ago 1 reply      
What are the benefits/advantages of choosing something like this over a traditional Arm/x86 or a GPU? My knowledge in this area is limited. :)
adapteva 4 days ago 0 replies      
PAPER: https://www.parallella.org/wp-content/uploads/2016/10/e5_102...

(access until we resolve the hosting issues, wordpress completely hosed...)

convolvatron 4 days ago 3 replies      
I read through the PDF summary and it doesn't look as if the shared memory is coherent (which would be silly anyways). But I couldn't find any discussion about synchronization support. Given the weak ordering of non-local references, it seems difficult to map a lot of workloads. My real guess is that I haven't seen part of the picture.
loeg 4 days ago 2 replies      
What's the practical application of a chip like this?
tiggilyboo 15 hours ago 0 replies      
Good to see this here! I actually wrote a paper analyzing this architecture for one of my bachelor classes, been a few years but: http://simonwillshire.com/papers/efficient-parallelism/
witty_username 4 days ago 1 reply      
Prepend cache: to the URL to view Google's cached version of this website.
rpiguy 4 days ago 1 reply      
Wow, from Kickstarter to DARPA funding! How did I miss that?
kirrent 4 days ago 0 replies      
For those interested, Andreas did an interview on the Amp hour a while ago. http://www.theamphour.com/254-an-interview-with-andreas-olof...

Congrats to everyone at adapteva. I remember talking to a couple of researchers who were using the prototype 64 core epiphany processor who seemed excited at how it could scale. I wonder how excited they'd be about this.

AnimalMuppet 4 days ago 6 replies      
1024 64-bit cores? Cool. Very impressive.

64 MB on-chip memory? For 1024 cores? That's 64 K per core. That seems rather inadequate... though for some applications, it will be plenty.

thechao 4 days ago 1 reply      
Is there a mirror anywhere?
Animats 4 days ago 0 replies      
So each processor has 64KB of local memory and network connections to its neighbors?

The NCube and the Cell went down that road. It didn't go well. Not enough memory per CPU. As a general purpose architecture, this class of machines is very tough to program. For a special purpose application such as deep learning, though, this has real potential.

algorithm314 4 days ago 1 reply      
Is the ISA Epiphany or RISC-V?
pjc50 4 days ago 1 reply      
Interesting, but for a very specialized market, somewhere in the corner between GPU and FPGA. Closest existing offer might be Tilera?

Site is currently slashdotted so I can't comment on details like how much DRAM bandwidth you might actually have.

jokoon 4 days ago 1 reply      
What I don't understand about computer chips is how relevant the FLOPS unit really is, because in most situations what limits computation speed is the memory speed, not the FLOPS.

So for example a big L2 or L3 cache will make a CPU faster, but I don't know if a parallel task is always faster on a massively parallel architecture, and if so, why that is the case. It seems to me that massively parallel architectures are just distributing the memory throughput in a more intelligent way.
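
The FLOPS-vs-memory-bandwidth tension can be sketched with a roofline-style back-of-envelope calculation: attainable throughput is capped by min(peak compute, bandwidth × arithmetic intensity). The numbers below are illustrative assumptions, not measured figures for any real chip:

```python
# Roofline-style estimate. Attainable FLOP/s for a kernel is bounded by
# min(peak_flops, bandwidth * arithmetic_intensity), where arithmetic
# intensity is flops performed per byte moved from memory.
peak_flops = 250e9   # 250 GFLOP/s peak compute (hypothetical accelerator)
bandwidth = 25e9     # 25 GB/s off-chip memory bandwidth (assumed)

def attainable(flops_per_byte):
    return min(peak_flops, bandwidth * flops_per_byte)

# A streaming dot product does ~2 flops per 16 bytes loaded: 0.125 flop/byte.
print(attainable(0.125) / 1e9)  # 3.125 GFLOP/s -- memory-bound
# Well-blocked dense matrix multiply can reach ~10 flop/byte.
print(attainable(10.0) / 1e9)   # 250.0 GFLOP/s -- compute-bound
```

This is why caches and on-chip memory matter so much: they raise the effective flops-per-byte of off-chip traffic, moving kernels from the bandwidth-limited regime toward the compute-limited one.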

protomyth 4 days ago 0 replies      
The website is erroring out for me, so I wonder what the motherboard situation will be like for this chip. It would be really nice to be able to buy an ARM like we can buy an x86.
mhurd 4 days ago 0 replies      
Truly inspirational in showing what largely one person can do even in these times of huge fabs, expensive masks, and difficult, modern design rules


wbsun 4 days ago 0 replies      
The website is down. Maybe a good opportunity to demonstrate the scalability improvement with such a 1024-core processor?
bra-ket 4 days ago 1 reply      
How do I connect external RAM to it, and what would the CPU-to-memory bandwidth be in that case?
jdmoreira 4 days ago 0 replies      
that's going to provide some interesting race conditions for sure :D
imre 3 days ago 0 replies      
What are the possible applications? I.e., how would you potentially make use of all the cores? Is it more like GPU programming?
noelwelsh 4 days ago 0 replies      
Tying in to earlier discussion on C (https://news.ycombinator.com/item?id=12642467), it's interesting to imagine what a better programming model for a chip like this would look like. I know about the usual CSP / message passing stuff, and a bit about HPC languages like SISAL and SAC. Anyone have links to more modern stuff?
api 4 days ago 0 replies      
Wish I was still working on genetic programming and digital artificial life. This would be barrels of fun.
tempodox 4 days ago 0 replies      

 Error establishing a database connection
Overload thru request storm?

rjammala 4 days ago 1 reply      
Seems like this URL is really popular; I get this connection error:

Error establishing a database connection

erichocean 4 days ago 0 replies      
Any chance of adding 16-bit floating point support in Epiphany-VI?
laxk 3 days ago 0 replies      
403 error now for the entire site.
bikamonki 4 days ago 0 replies      
Error establishing a database connection
the_duke 4 days ago 0 replies      
Page is overwhelmed.

Can anyone provide a summary?

liveoneggs 4 days ago 1 reply      
too bad the entire site is returning a 500 error now
new299 4 days ago 0 replies      
I think my thoughts on the parallella stuff still hold:


Basically this is a recurring theme in computing, but the whole custom massively parallel thing rarely works out.

Opposition to Galileo was scientific, not just religious aeon.co
445 points by rfreytag  4 days ago   208 comments top 40
edko 4 days ago 4 replies      
There is no question that there was religious opposition to Galileo within the Catholic church. However, it was not unanimous, and there were some notable exceptions.

José de Calasanz (saint), the founder of the Piarist order, was a friend of Galileo, and had some of the teachers of his congregation study with Galileo, so that the science they learned could be taught in his schools.

Moreover, when Galileo fell in disgrace, Calasanz instructed members of his congregation to assist him. When Galileo lost his sight, Calasanz ordered the Piarist Clemente Settimi to serve as his secretary.

smallnamespace 3 days ago 9 replies      
Stupid question: given General Relativity, and in particular that all reference frames are equally valid, including rotating and accelerating ones, in what sense is heliocentrism more 'true' than geocentrism?

E.g. you can pick a geocentric reference frame and all the math works out (albeit you'd need to define large fictitious forces, etc., but again, they are only 'fictitious' from the perspective of a different reference frame; in the chosen frame they look quite real!).

Isn't the choice between geocentrism and heliocentrism purely one of convenience, and we can simply pick and choose whichever reference frame is most convenient for calculation purposes?

For example, if I'm interested in things happening in daily life, I pick a reference frame at rest with respect to the ground beneath my feet. If I am figuring out satellite orbits, I use a geocentric frame. If I'm calculating a trip to Mars, I use a heliocentric frame (perhaps a rotating one).

If so, why do we still define heliocentrism to be more correct? Is the argument really an implicit throwback to Newtonian absolute space and time, which Relativity rejected?

jkot 4 days ago 7 replies      
The Earth-centric model was actually scientifically better at the time; by Occam's razor you would have preferred it.

- Even the church agreed that the Earth is not static, but rotating.

- Nobody had observed stellar parallax; a major proof for the Copernican model was missing until the 19th century.

- The Ptolemaic model, with its epicycles, provided better predictions.

- The Copernican model is also wrong: planets orbit the center of gravity, which is outside of the Sun.

santaragolabs 4 days ago 4 replies      
This is basically the thesis of Paul Feyerabend who used it as his main argument against there being a scientific method. His book "Against Method" is one of the best works on Philosophy of Science I've read during my university education.
jordigh 3 days ago 2 replies      
I tl;dr'ed the article (sorry), but I wanted to say one thing about epicycles. They are not wrong and certainly can be used to explain motion, but the problem with them is that you can keep adding epicycles to fit any continuous orbit at all (this is actually because a Fourier series can converge to any continuous periodic function). This is my favourite visualisation of this fact:


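The "epicycles can fit any orbit" point can be sketched directly: treat a closed path as a complex-valued periodic function, keep only the lowest-frequency terms of its discrete Fourier series (each retained frequency is one epicycle, a circle of radius |c_k| spinning k times per revolution), and watch the fit tighten as circles are added. This is an illustrative sketch, not from the article; the square test path is an arbitrary choice.

```python
import math, cmath

M = 256  # samples along one period of the closed curve

def square_path(s):
    """Point on a unit square centered at the origin; s in [0, 4)."""
    if s < 1: return complex(s - 0.5, -0.5)
    if s < 2: return complex(0.5, s - 1.5)
    if s < 3: return complex(2.5 - s, 0.5)
    return complex(-0.5, 3.5 - s)

pts = [square_path(4 * t / M) for t in range(M)]

def coeff(k):
    """Discrete Fourier coefficient c_k: one epicycle's radius/phase."""
    return sum(p * cmath.exp(-2j * math.pi * k * t / M)
               for t, p in enumerate(pts)) / M

def max_error(n):
    """Worst-case gap between the true curve and a (2n+1)-epicycle fit."""
    ks = range(-n, n + 1)
    cs = {k: coeff(k) for k in ks}
    recon = [sum(cs[k] * cmath.exp(2j * math.pi * k * t / M) for k in ks)
             for t in range(M)]
    return max(abs(a - b) for a, b in zip(pts, recon))

# Adding epicycles always tightens the fit:
errors = [max_error(n) for n in (1, 3, 5)]
assert errors[0] > errors[1] > errors[2]
```

With enough circles the reconstruction converges on the square, corners and all, which is exactly why epicycles could never be falsified by fit alone.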
dragonwriter 4 days ago 1 reply      
It was also personal and political, not just scientific or religious. There were, of course, people that were opposed to Galileo publishing because of his models (whether for religious or scientific reasons), but there were also (and perhaps more critically) people that were against his model being published because of Galileo.
amoruso 3 days ago 0 replies      
The opposition to Galileo was not really religious or scientific. More than anything, it was political. You can't understand what happened outside the context of the religious wars of the time. His findings undermined the authority of the Catholic Church.


There are plenty of examples of political suppression of science in our own time. The Nazis and Communists were two extreme examples.

In our own society, religion doesn't have this kind of power any more. But there are still political pressures on researchers to be PC. I'll let you think up some examples yourself.

woodchuck64 3 days ago 1 reply      
> But seen from Earth, stars appear as dots of certain sizes or magnitudes. The only way stars could be so incredibly distant and have such sizes was if they were all incredibly huge, every last one dwarfing the Sun.

What?? Oh, Wikipedia fills in some crucially missing info in this hypothesis.

> However, early telescopes produced a spurious disk-like image of a star (known today as an Airy disk) that was larger for brighter stars and smaller for fainter ones. Astronomers from Galileo to Jacques Cassini mistook these spurious disks for the physical bodies of stars, and thus into the eighteenth century continued to think of magnitude in terms of the physical size of a star. https://en.wikipedia.org/wiki/Magnitude_(astronomy)

eric_the_read 4 days ago 0 replies      
Mike Flynn has a very long and detailed story of Galileo and more generally geocentrism vs. heliocentrism at http://tofspot.blogspot.com/2013/10/the-great-ptolemaic-smac...

Among the things I learned:

* The Copernican model had more epicycles than the Ptolemaic

* Galileo thought tides were caused by those same epicycles.

emodendroket 3 days ago 2 replies      
The idea that science and religion are distinct spheres, or even things that might be in opposition to each other, has been so thoroughly established in our modern consciousnesses that it's easy to forget that in the past they didn't really make such a distinction.
c0ff 3 days ago 1 reply      
Of course there was scientific opposition to Galileo. Scientific ideas get hammered out through debate and disagreement. New ideas in science often take decades or even centuries to fully develop and reach broad acceptance.

That is why it is important that new ideas can be discussed freely, which wasn't the case in Galileo's time.

michaelsbradley 3 days ago 1 reply      
Many persons who were very serious about religious faith in that time period, e.g. Catholic priests, also had great enthusiasm for rigorous mathematics and scientific advancement.

For example, Fr. Paul Guldin, a Jesuit priest, was a mathematician and astronomer. He was interested in and supportive of the work of Johannes Kepler, and provided him with a telescope when Kepler was experiencing financial difficulties[+].

[+] http://www.faculty.fairfield.edu/jmac/sj/scientists/guldin.h...

triplesec 3 days ago 0 replies      
This article serves as quite a fascinating reminder of the problems of new scientific paradigms (see the quote pulled out below). All the argumentation about the history of the church seems a lot less relevant to me than this: if YOU had been an astronomer at the time, how sceptical would you have been of this weird new thing?

Excerpt: "Copernicus proposed that certain oddities observed in the movements of planets through the constellations were due to the fact that Earth itself was moving. Stars show no such oddities, so Copernicus had to theorise that, rather than being just beyond the planets as astronomers had traditionally supposed, stars were so incredibly distant that Earth's motion was insignificant by comparison. But seen from Earth, stars appear as dots of certain sizes or magnitudes. The only way stars could be so incredibly distant and have such sizes was if they were all incredibly huge, every last one dwarfing the Sun. Tycho Brahe, the most prominent astronomer of the era and a favourite of the Establishment, thought this was absurd, while Peter Crüger, a leading Polish mathematician, wondered how the Copernican system could ever survive in the face of the star-size problem."

bbctol 3 days ago 0 replies      
This may either calm or further motivate people concerned about the issues with replicability in science, the current "reproduction crisis" in biology and psychology. Scientific progress today is certainly messy, slow, filled with political drama, and lacking a good philosophical footing, but it would be a mistake to think it was ever the noble act of discovery we sometimes get nostalgic about.

The whole metaphor of "discovery" in science is incorrect. You don't suddenly "see" the truth once you get better telescopes or a new imaging method. Everything you see is an accurate depiction of universal laws, as filtered through the distorting layers of our own internal models. Every new "discovery" in science must be slowly generated, models emerging and feuding for generations, before future scientists have enough research to look back on the past and deem some visionaries and others crackpots.

lootsauce 3 days ago 0 replies      
By far the best writing on this subject I have found is over at the fantastic Renaissance Mathematicus blog.

Galileo, the Church and Heliocentricity: A Rough Guide. https://thonyc.wordpress.com/2014/05/29/galileo-the-church-a...

Galileo, Foscarini, The Catholic Church, and heliocentricity in 1615 Part 1 the occurrences: A Rough Guide. https://thonyc.wordpress.com/2014/08/13/galileo-foscarini-th...

And part 2: https://thonyc.wordpress.com/2014/08/27/galileo-foscarini-th...

Acceptance, rejection and indifference to heliocentricity before 1610. https://thonyc.wordpress.com/2012/08/16/acceptance-rejection...

antognini 3 days ago 1 reply      
Strangely, the article omits a reference to the preferred model of the time, namely the Tychonic system [1]. In the Tychonic system, all the planets revolve around the Sun, but the Sun and moon revolve around the Earth. It was seen to be a very tidy theory that took advantage of the best aspects of the Copernican theory, but resolved the problem of the lack of observed parallax.

[1]: https://en.wikipedia.org/wiki/Tychonic_system

Houshalter 3 days ago 0 replies      
A somewhat better article on this is here: http://lesswrong.com/lw/lq6/the_galileo_affair_who_was_on_th...

There were also politics involved. Galileo insulted the pope and did some other controversial stuff. He wasn't persecuted for his scientific beliefs.

jnordwick 3 days ago 2 replies      
What about the Orthodox?

"The Church" is really an anachronism. You are speaking of the Roman church, which covered only about half of Christianity at the time: Western Europe.

What about the other half of the Church? Eastern Europe, Russia, Asia, Northern Africa, the Middle East, Greece?

How did they receive Galileo?

I've always been fascinated by this total amnesia over the Orthodox Church as if it didn't exist or was only a few percent.

geodel 4 days ago 2 replies      
I watched a course on Coursera and it made a similar argument regarding the opposition to Galileo by the Church. I think it makes a very important point, one noticeable every day, where deeply political arguments are passed off as scientific or logical.
richman777 3 days ago 0 replies      
The amazing thing about the entire discussion is how it really was a scientific discussion that evolved over time.

There are clearly scientific ideas that we take as simple truths now that will be disproved in the future. They are clearly going to be a little more nuanced than planetary motion, but it's great to see how the scientific community has evolved and will continue to evolve.

pippopascal 3 days ago 0 replies      
Ultimately, the argument for the helio-centric model is aesthetic. It is more elegant than the geo-centric when describing the dance of the stars.

So, are both systems equally right and equally wrong?

- Ultimately we're talking about a change in reference frame, which is a vector subtraction. They're mathematical transformations of each other.

- Since the Sun doesn't have infinite mass, it in fact also orbits the Earth.

- Neither system is an inertial reference frame. If we assume the Earth is infinitesimal, at the very least the Sun orbits the Jupiter-Sun barycenter (which is almost outside the Sun proper). So if anything we should speak of a "J-S centric model"

- Both are useful. The geo-centric model is quite useful and still used in astronomy. (Never mind tracking satellites; try understanding your coordinates in the heliocentric reference frame.)

- The "corrections" of the geo-centric model are higher-order harmonics that can fit any motion; it's an early application of harmonic analysis. In fact, they're not actually corrections to the model: they are motions that naturally arise when describing circular motion wrt a point outside its axis.

- What's wrong with non-inertial reference frames anyway? Consider them "fictitious" or consider them real, we can calculate and consider the non-inertial forces.

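The "J-S barycenter" point above is easy to sanity-check with round numbers. A sketch using approximate textbook values (the figures below are my own assumptions, not from the comment):

```python
# Approximate physical constants (round textbook values):
M_SUN = 1.989e30   # kg, solar mass
M_JUP = 1.898e27   # kg, Jupiter's mass
A_JUP = 7.785e8    # km, Jupiter's semi-major axis
R_SUN = 6.957e5    # km, solar radius

# Two-body barycenter: distance of the center of mass from the Sun's center.
r_bary = A_JUP * M_JUP / (M_SUN + M_JUP)

# The Sun-Jupiter barycenter sits just outside the solar surface,
# roughly 1.07 solar radii from the Sun's center.
assert r_bary > R_SUN
```

So "almost outside the Sun proper" slightly undersells it: by these numbers the barycenter is a bit beyond the photosphere.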
nsxwolf 3 days ago 0 replies      
The comments here are very good and I'm glad people care about the real story and its nuances. For too long people have been taught that Galileo dared to contradict the bible, the church threw him in prison, and that's that.
jhbadger 3 days ago 0 replies      
The problem with saying that there was scientific in addition to religious opposition to Galileo is that it kind of misses the point. That would be like saying there was scientific in addition to ideological opposition to genetics in the Soviet Union in the Lysenko years. The reason why both Galileo and Soviet biologists were put on trial was that the power structure at the time opposed them, not legitimate scientific concerns.
acqq 3 days ago 0 replies      
Oh, please. First, the article quotes the opposition of only one person, and its title doesn't follow from that at all. A realistic title would be "The opposition to Galileo also, at least once, appeared as scientific, not just religious."

Now why do I say "appeared" there? Because the argument quoted in the text, "looking through the telescopes it appeared that epicycles existed," isn't a "scientifically" meaningful argument. As soon as we accept the relativity of motion, it's clear how meaningless the statement "it looks so from here" is.

Moreover, "scientific opposition" doesn't result in the house arrest by the church.

Both the Bible and that-other-newer-book-which-is-not-politically-correct-to-be-named have verses that reflect a false understanding of nature, and that is indisputable. It's true that there are enough people today who don't take these verses seriously, but in fact reasonable people already did so, as seen in the article, at least some 400 years ago.

Good for us, because otherwise most of us would be peasants today.

louprado 3 days ago 0 replies      
This reminds me of the story of Hippasus, who was sentenced to death by his fellow Pythagorean philosophers because he discovered irrational numbers.
jomamaxx 3 days ago 1 reply      
An amazingly reasonable thread for what could be just ignorant mud-slinging.
neves 3 days ago 0 replies      
The ironic thing in this debate is that almost all arguments in Galileo's "Dialogue" were wrong. http://www.pbs.org/wgbh/nova/earth/galileo-big-mistake.html

Do 2 wrongs make a right?

yawaramin 3 days ago 0 replies      
So, here's the funny thing. The Church has been vilified over this for centuries, but as it turns out they weren't actually wrong to believe in geocentrism; in a relativistic worldview either belief is equally correct.
Amorymeltzer 3 days ago 0 replies      
Anyone who is interested in Astronomy and development of historical models of the heavens should check out this season of the Scientific Odyssey podcast (http://thescientificodyssey.typepad.com/). I found it through HN and Chad Davies does a fantastic job of creating a narrative from early civilization, and we are just now discussing Galileo. It's very niche but fairly accessible, and one of my favorites.
gsmethells 1 day ago 0 replies      
The opposition was not scientific. Read the book "The War on Science" and keep the Enlightenment from slipping away.
meetri 3 days ago 0 replies      
Do you have to believe that the sun orbits the earth in order to believe that the earth is the center of the universe? Is it possible that the Earth is the center of the universe, given that all light in the universe points in the direction of Earth?
lazyant 3 days ago 0 replies      
I found the book "Galileo's Daughter: A Historical Memoir of Science, Faith, and Love" by Dava Sobel fascinating, really clarifies many misconceptions about Galileo.
imaginenore 3 days ago 1 reply      
Let's not dilute the term "scientific". The scientific method is relatively recent and consists of very specific sets of steps. So no, Galileo's opposition was not scientific.
Numberwang 3 days ago 0 replies      
Scientific opposition is a good thing. Religious is not.
gpvos 3 days ago 0 replies      
The importance of this article for our current times is in the last two paragraphs.
Iv 2 days ago 0 replies      
I am tired of the Catholics trying to rewrite history. I got deep into this issue several times.

First, go to the source. We have the documents from Galileo's trials, so first read what was actually said: http://www.tc.umn.edu/~allch001/galileo/library/1616docs.htm Here is the most relevant part:

 Proposition to be assessed: (1) The sun is the center of the world and completely devoid of local motion. Assessment: All said that this proposition is foolish and absurd in philosophy, and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture, according to the literal meaning of the words and according to the common interpretation and understanding of the Holy Fathers and the doctors of theology. (2) The earth is not the center of the world, nor motionless, but it moves as a whole and also with diurnal motion. Assessment: All said that this proposition receives the same judgement in philosophy and that in regard to theological truth it is at least erroneous in faith.
Here, "philosophy" more or less means science. So yes, Galileo was criticized on scientific grounds, which is totally fine, as indeed there were some problems with his theory (like the lack of observed movement of the stars).

But Galileo was called a heretic because he contradicted the literal meaning of the bible. "Formally heretical since it explicitly contradicts many places the sense of Holy Scripture, according to the literal meaning of the words and according to the common interpretation and understanding of the Holy Fathers and the doctors of theology": this is your regular creationist saying that the bible has to be interpreted literally and that you are a heretic if you don't. Problem: said creationist can throw you in jail if you don't abide by his worldview.

That's why from this time (from a bit earlier actually) universities found it primordial to gain independence from the clergy and that science and religion diverged from each other.

Is this religious assessment coherent with others? Of course not! Religion is not much about coherency. Copernicus' heliocentrism was well accepted by the church, as he was less confrontational, richer, and more religious.

Did Galileo act like an asshole? Maybe, though the church did need some trolling at the time. Was his condemnation political? Most certainly! But the salient point is that while the motivation may have been political, the justification was religious. And it was unacceptable to scientists that you could justify dogmatism and rewrite science books for political motives. The refusal of this is what gave us modern science.

imagist 3 days ago 0 replies      
> What were those problems? A big one was the size of stars in the Copernican universe. Copernicus proposed that certain oddities observed in the movements of planets through the constellations were due to the fact that Earth itself was moving. Stars show no such oddities, so Copernicus had to theorise that, rather than being just beyond the planets as astronomers had traditionally supposed, stars were so incredibly distant that Earth's motion was insignificant by comparison. But seen from Earth, stars appear as dots of certain sizes or magnitudes. The only way stars could be so incredibly distant and have such sizes was if they were all incredibly huge, every last one dwarfing the Sun. Tycho Brahe, the most prominent astronomer of the era and a favourite of the Establishment, thought this was absurd, while Peter Crüger, a leading Polish mathematician, wondered how the Copernican system could ever survive in the face of the star-size problem.

This is a fascinating observation, and given the information they had at the time, I can see where Locher is coming from. Given two possibilities, one involving absolutely enormous stars and one involving an Earth that circled the sun, both extraordinary claims, and no sure way (yet) to evaluate which was true, it's human that he supported the more comfortable hypothesis, and he wasn't provably wrong given the information they had.

> That is unfortunate for science, because today the opponents of science make use of that caricature. Those who insist that the Apollo missions were faked, that vaccines are harmful, or even that the world is flat (whose voices are now loud enough for the War on Science to be a National Geographic cover story and for the astrophysicist Neil deGrasse Tyson to address even their most bizarre claims) do not reject the scientific process per se. Rather, they wrap themselves in the mantle of Galileo, standing (supposedly) against a (supposedly) corrupted science produced by the Scientific Establishment.

The problem with this is that it conflates the public social debate with an internal scientific debate. Galileo vs. Pope Paul V is not the same debate as Galileo vs. Locher. The former is a debate driven by social needs that tries to drive opinion starting from what the pope wanted rather than observation (which is in fact a rejection of scientific process). The latter is two competing scientific hypotheses.

Likewise, picking one of the example debates (do vaccines cause autism?) there are two possible debates. The public debate is driven by social needs--mostly people trying to find meaning in the suffering caused by their child's autism, and people trying to take advantage of that need. This is absolutely a rejection of scientific process: scientific process attempts to explain phenomena, not find explanations that make people feel better. The internal scientific debate is largely not a debate, because the evidence that vaccines don't cause autism is, at this point, so overwhelming.

"Wrapping oneself in the mantle of Galileo" IS inherently an unscientific position: being pro- or anti-establishment is irrelevant to scientific process. The fact that Galileo happened to be anti-establishment at the time is irrelevant to the fact that ultimately his hypotheses were proven correct.

The real problem here is that a large part of the scientific community doesn't recognize that the social debate and the scientific debate are two different debates. Scientific evidence which is persuasive in a scientific context (studies have shown no correlation between vaccines and autism) does not persuade everyone in a public social context. Emotional approaches are also necessary (would you rather your child died of whooping cough? Or as one person with autism said, "It's painful that some people would rather have a dead child than a child like me.").

jabbanobodder 3 days ago 1 reply      
Scientific opposition is expected; that is how science works. Scientists want proof for a claim; the church didn't want to be wrong.
drzaiusapelord 3 days ago 3 replies      
To be fair, the Church's influence in what we would call science back then was incredibly powerful. I think it's unrealistic to see 17th-century science as the kind of secular institution we have today. Of course the 'scientists' of the age followed a church-friendly narrative; it was in their interest to do so. I think we will never really understand the chill on speech and research the Church imposed during the medieval and later periods. I would say it was significant, considering that the ancient Greeks (Aristarchus of Samos) were able to figure out the heliocentric model, arguably because they didn't have a large Christian Church structure working against them.

I find that modern revisionism to make religion seem less villainous is fairly common nowadays. I don't know where this is coming from or why it's on social media so frequently, but it's concerning. I think splitting hairs to make the Church look good is a questionable narrative and a form of feel-good politics for certain religious people and the sort of habitual contrarian the internet is so fond of. I imagine we're witnessing a pendulum shift towards more religiosity, considering how the west has swung the other way for so long.

Regardless, it's still wrong. The hundreds of years of fighting to secularize science and to progress past religiously acceptable models weren't fought because Locher was a bad guy, but because the Pope and the religious establishment were, regardless of the individual merits of the many monks and priests, who ultimately had to toe the party line regardless of their own findings, mathematical skills, or opinions. Blasphemy was certainly a serious charge back then. The wonderful thing about secular science is that there's no serious punishment for being wrong or going against the grain, flawed as it may be. The worst you can expect is being punched by Buzz Aldrin, and even then you really have to earn it.

A bot crawled thousands of studies looking for simple math errors vox.com
415 points by MollyR  5 days ago   205 comments top 17
c3534l 4 days ago 15 replies      
> and whether online critiques of past work constitute bullying or shaming. The PubPeer comments are just a tiny part of that debate. But it can hit a nerve.

> Susan Fiske, a former president of the Association for Psychological Science, alluded to Statcheck in an interview with Business Insider, calling it a "gotcha algorithm."

> The draft of the article said these online critics were engaging in "methodological terrorism."

If these are attitudes typical of psychology, then I cannot say I consider psychology to be a proper social science. There is a fundamental misunderstanding of how knowledge is created through the scientific process if the verification step is considered offensive or taboo. That anyone in the field of psychology would even be comfortable publicly espousing a non-scientific worldview like that means that psychologists are not being properly educated in the scientific method and should not be in the business of producing research, since they do not have a mature understanding of what "scientific" implies.

6stringmerc 4 days ago 3 replies      
What a clever and, dare I say it, fantastically useful experiment!

So much less harm than even "door knob twisting" type explorations - no, this was using published works and pretty much running them through a process to verify or not verify accuracy.

Unsolicited? So what! As a practiced writer I make unsolicited judgments on language usage all the time. Are these people that completely write from their own minds and don't use a spell check or grammar check program of any sort before sending their material for editorial review? I'd strongly doubt it, because it's a tool to make communication more accurate. Math and formulas having a similar procedural check sounds quite constructive to me.

It's not bullying to point out errors; it's bullying to use the existence of errors to belittle or insult a person. I don't see that happening here. Sure, it's a little sterile or "cold" in this fashion, but I think that's for the best if such a process / tool can gain acceptance. It just spits out results and I think that's all it should do. Neat to read about.

alanfalcon 5 days ago 8 replies      
I find it very disconcerting that people are trying to fend off criticism of previously published studies by calling it "bullying" or sometimes worse. What do feelings have to do with science?
mratzloff 5 days ago 2 replies      
Here's the GitHub page:


And if you're curious how it works, as I was:

Statcheck uses regular expressions to find statistical results in APA format. When a statistical result deviates from APA format, statcheck will not find it. The APA formats that statcheck uses are: t(df) = value, p = value; F(df1, df2) = value, p = value; r(df) = value, p = value; χ²(df, N = value) = value, p = value (N is optional; delta G is also included); Z = value, p = value. All regular expressions take into account that test statistics and p values may be exactly (=) or inexactly (< or >) reported. Different spacing has also been taken into account.

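For a sense of what that kind of extraction looks like, here is a rough Python sketch of the t-test pattern described above. Statcheck itself is an R package, so this regex is an illustration of the approach, not its actual code, and the sample sentence is invented:

```python
import re

# Match APA-style t-test reports like "t(28) = 2.20, p < .05",
# allowing exact (=) or inexact (< or >) comparators and loose spacing.
T_TEST = re.compile(
    r"t\s*\(\s*(?P<df>\d+(?:\.\d+)?)\s*\)\s*"        # t(df)
    r"(?P<comp1>[=<>])\s*(?P<stat>-?\d+(?:\.\d+)?)"  # = test statistic
    r"\s*,\s*p\s*(?P<comp2>[=<>])\s*"                # , p <
    r"(?P<p>\.?\d+(?:\.\d+)?)"                       # p value (e.g. .05)
)

text = "The effect was significant, t(28) = 2.20, p < .05, as predicted."
m = T_TEST.search(text)
assert m is not None
assert m.group("df") == "28"
assert m.group("stat") == "2.20"
assert m.group("comp2") == "<"
assert m.group("p") == ".05"
```

A checker would then recompute the p value from the extracted statistic and degrees of freedom and flag reports where the two disagree.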
munificent 4 days ago 1 reply      
> There's a big, uncomfortable question of how to criticize past work, and whether online critiques of past work constitute bullying or shaming.

Science is fundamentally reputation-driven. One of, if not the primary incentive that encourages scientists to do science work is the chance of raising their prestige. Citations are one very quantifiable yardstick for this.

If positive social sanctions are a driving force for science, then it's entirely reasonable that negative sanctions should come into play too. If you can write a well-cited paper and attract fame, then a poor paper should likewise attract shame.

Otherwise you have a positive feedback loop where once a scientist has attracted enough prestige, they are untouchable. We need negative feedback to balance that out.

thampiman 5 days ago 0 replies      
This coupled with the Automatic Statistician (https://www.automaticstatistician.com/index/) will help fix a lot of biases and human errors that creep into scientific research.
radarsat1 4 days ago 0 replies      
The article sadly doesn't report on the false positive rate of statcheck. I assume the paper does?

I mean, it just uses a basic regular expression, I can see it easily performing bad checks. I assume the authors take this into account.

vinchuco 4 days ago 0 replies      
A good distinction between "peer reviewed" vs "computer verified"

>The literature is growing faster and faster, peer review is overstrained, and we need technology to help us out,

This is a problem in every field, not just Psychology.

I want someone to tell me the distribution (or average ratio) of papers read to papers written.

Every thesis written is supposed to add some delta to the state of the art. But there is no method for doing a diff between current and previous versions of human knowledge. How to make science less redundant and more efficient?

I dream of aggregators for everything.

michaelt 4 days ago 0 replies      
I can certainly understand people being nervous about academic debate moving to social media. It would be a hassle for climate scientists if every paper got a brigade of climate change deniers criticising it and you had to respond to those criticisms.

But this example - someone notifying you there's a mistake in your paper, when there really is a mistake? That seems like a strong argument /for/ academic debate via social media, not /against/ it.

serge2k 4 days ago 0 replies      
> some found the emails from PubPeer annoying [since PubPeer notifies authors of comments] if Statcheck found no potential errors

I would.

> There's a big, uncomfortable question of how to criticize past work, and whether online critiques of past work constitute bullying or shaming.

It's facts about your work. Learn to handle it or quit pretending to be a scientist.

> The gist of her article was that she feared too much of the criticism of past work had been ceded to social media, and that the criticism there is taking on an uncivil tone

Valid enough point. Criticism and correction can be done in a civil manner, and in an accepted forum.

OliverJones 3 days ago 0 replies      
Remember when writers could do spell-checking and grammar-checking by "running a program" on their text files?

Here we have numbers-checking working the same way.

I bet you this sort of feature gets built in to word processors eventually, and puts wavy red lines under the results it flags.

We've had this sort of real-time "syntax" checking in software engineering for half a generation. It seems wise for other disciplines to consider adopting it too.

It's obviously got to be discretionary, just like spell-check is discretionary in browsers.

We will get a new genre of humor, though: "statcheck fail."
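The numbers-checking idea really is as mechanical as spell-check. Statcheck itself recomputes p-values from reported test statistics, but a simpler illustration of the same spirit is a GRIM-style granularity check (the function name and example numbers here are made up for illustration): a mean of n integer-valued responses must equal some integer sum divided by n, so certain reported means are arithmetically impossible.

```python
def grim_consistent(mean, n, decimals=2):
    """GRIM-style check: can `mean`, reported to `decimals` places,
    arise from n integer-valued scores? The true mean must be s/n
    for some integer sum s, so test every sum near mean * n."""
    target = round(mean, decimals)
    s = round(mean * n)
    # Any integer sum within a couple of units of mean*n could round
    # to the reported value; if none does, the mean is impossible.
    return any(round(k / n, decimals) == target for k in range(s - 2, s + 3))

# 145/28 = 5.1786, which rounds to 5.18 -- so a mean of 5.18 over 28
# integer scores is possible, while 5.19 cannot occur for any sum.
print(grim_consistent(5.18, 28))  # True
print(grim_consistent(5.19, 28))  # False
```

A word processor could run exactly this kind of check as you type and wavy-underline the impossible statistic.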

RangerScience 4 days ago 1 reply      
Haha! What if this is actually a marketing ploy for their web-app? Stir up some shit so everyone gets talking, and provide a service.
l0b0 4 days ago 0 replies      
Why does the article focus even for a paragraph on whether egos would be bruised? If the result is a general improvement to the readers' understanding that is, as far as I'm concerned, case closed. Good on them!
yakult 4 days ago 0 replies      
While it is definitely to the benefit of all that the bot emails authors when it finds mistakes, emailing when it doesn't find anything is a dark pattern. Reminds me of those bots that spam me after scraping my linkedin.
ChoHag 4 days ago 0 replies      
Will I never guess what number 7 is?


orionblastar 4 days ago 0 replies      
I wanted to make a program like that, but considered the ethics of it.
Vim Anti-Patterns sanctum.geek.nz
455 points by microsage  5 days ago   236 comments top 33
pjungwir 4 days ago 13 replies      
I never got into the habit of using { and }. I just use H M L (high/medium/low) to get approximately in the right part of the screen, then go line-by-line. You can also do 5H or 10L to get "5 lines from the top" or "10 lines from the bottom". I make pretty good use of vim features, but I like to mix some sloppiness with the precision. I don't often count things before typing commands, because that breaks the second-nature quality of navigation. If something is more than 2 or 3 objects away, I approximate. I do use counting with t, T, f, and F a lot to hop around, including things like c2f). Very frequently that combines well with spamming j.j.j.j.j. I use . with trivial movement (like j or n) at least 10x more than @a. Another way to move fast sloppily is W and B. I guess I'm saying: learn all the cool stuff you can do, but don't feel like you have to find the most precise way to do every little thing. If you're just starting out, relax and don't try too hard. You can always just pick one new thing every couple weeks and try to add it to your habits.

Oh also: he mentions O to insert above the current line. I use that a lot, but on my systems (going back 15 years or so I think) it has always required a pause, like vim is waiting to see if I'm typing O or some longer command. If I type O and immediately start entering text, strange things happen. This doesn't happen with o. Does anyone else experience this? Maybe it's just something weird in my own setup.

EDIT: Some more "moving fast sloppily": 1G goes to the top of the file. G goes to the bottom. Also you can not-move, but scroll the visible area so that your cursor is on the top line (zENTER), middle line (z.), or bottom line (z-). I use that a lot when I am Ctrl-Fing through a file, so I can see more context.

foob 4 days ago 7 replies      
These are all good tips but most of the anti-patterns seem to skew towards beginners (e.g. don't use the arrow keys, don't navigate in insert mode). One that I think is more common among intermediate, and even advanced, users is the misuse of tabs, windows, and buffers.

A lot of people have a tendency to think of each tab as corresponding to a single open file. This is very understandable because it closely matches the paradigm of most IDEs but it's actually an anti-pattern in VIM. Tabs are really meant to be more like workspaces where you arrange multiple windows into a desired layouts. You then have one buffer for each file that you're dealing with and view them in your windows. It's perfectly fine for multiple windows to share a single buffer or to switch out the buffer that is being viewed in any given window. This StackOverflow answer [1] and this blogpost [2] both go into a fair bit more detail.

If you're trying out this approach for the first time then you probably want to add `set hidden` to your configuration in order to avoid automatically closing buffers that aren't currently being viewed in a window. Coupling this approach with fzf.vim [3] makes managing very large numbers of files a breeze compared to using one tab per file.

[1] - http://stackoverflow.com/a/26710166

[2] - http://joshldavis.com/2014/04/05/vim-tab-madness-buffers-vs-...

[3] - https://github.com/junegunn/fzf.vim

lunchboxsushi 4 days ago 7 replies      
I was originally a die-hard Emacs user, but I started to feel a bit of strain from all the holding of Ctrl+p, Ctrl+n, etc. I was trying to find an editor/plugin that could reduce keystrokes, or at least the strain. I ran across EVIL for Emacs, which is just Vim emulation and keybindings for Emacs users. Unfortunately I did not like it at first: the whole hjkl thing felt awkward, and all those modes - OMG. But after toying around with it a bit and playing the online interactive vim learning game, I started to see the benefits of modal editing.

I can easily, without leaving the home row, move a few letters to the right or back with h/l or f and F. But things started to get a lot more fun when I realized that Vim is a text-editing programming language and its beauty is in the commands. This leads to amazing things that I hated before; for example, deleting a few words back from my current position is simply d3b, instead of shift+ctrl+left arrow x 2 + delete.

Overall it's been about 2 months since I started using EVIL mode for Emacs and I love it. I'll stand by the saying that Emacs is a great OS, and Vim is great for editing text within it with its modal editing.

Lastly, modal mode really felt powerful only after I had remapped my CAPS key to ESC. I mean, throughout the past decade I don't think I've ever used Caps for anything, so I've remapped the machines I work on to have Caps as Esc.

- for those that think it's not reasonable to do so and the whole point of using vim is so that you can edit machines via ssh then use vim on that machine, my suggestion is to use tramp in Emacs with ssh or plink to get to the server and edit (you will still have the local caps to esc key mapped)

TL;DR - Vim modal editing is amazing and feels less straining than other editing layouts - IMO.

bpierre 5 days ago 0 replies      
Shameless plug: I wrote a small guide about how I switched to Vim, after having tried for years. It's not a universal approach, but maybe it can be useful for people here: https://github.com/bpierre/switch-to-vim-for-good/blob/maste...
taneq 4 days ago 4 replies      
While it's unarguably fun, I can't believe the effort required to become actually good at vim (as opposed to just using a few of the easier features) will ever pay itself off. (These days, at least - if you spend your days editing code in a terminal over a dialup connection, then it's absolutely worth it!)

Maybe there are scenarios where the busywork of text editing really is on your critical path, but even as a fluent coder who uses some verbose languages at times (my current project is C++ and IEC Structured Text, does it get any more blabby?) I still spend far more of my time looking at, and thinking about, code than I do actually typing. Any extraneous cognitive load just takes focus away from what I'm actually meant to be doing.

yes_or_gnome 5 days ago 4 replies      
This is a great resource, but the article is pretty strict on the arrow keys. I would recommend `nnoremap` over `noremap`, because it only disables the arrow keys in Normal mode.

The author explains his rationale in the next section: it's to prevent users from living in Insert mode. Fair enough. But when making several relatively close edits, the ability to tap a few arrow keys in Insert mode is far easier and less mentally demanding than any key combination that requires the user to bounce around different modes.
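For reference, the Normal-mode-only version looks like this in a vimrc (a sketch, not necessarily the article's exact configuration):

```vim
" nnoremap: arrow keys disabled in Normal mode only
nnoremap <Up>    <Nop>
nnoremap <Down>  <Nop>
nnoremap <Left>  <Nop>
nnoremap <Right> <Nop>

" noremap would additionally cover Visual and Operator-pending modes.
" Neither touches Insert mode; you'd need separate inoremap mappings
" to disable arrow keys while typing.
```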

sdegutis 5 days ago 4 replies      
> "If you have to move more than a couple of lines, moving one line at a time by holding down j or k is inefficient."

This very point is actually why I moved from Vim to Emacs years ago. After mastering Vim, I realized that Vim strongly encourages you to think a little too much about exactly how to get there or do that thing in the fewest keystrokes, and that it's incompatible with muscle memory. Even years later, I still had to think too much about it. Whereas in Emacs I can just use basic character/word/line-based movements and let my muscle memory do its thing while I let my brain focus on the code itself instead of how to use the text editor.

Philipp__ 4 days ago 6 replies      
Not trying to start a flame war here, but in recent years I am seeing Vim become more popular, and I feel its adoption among younger developers (those who weren't programming when the original Vi was around) is growing. (Ok, please forget about GUI editors; they have their audience, and I am not comparing any of these points to them.) I used Vim for 3 years, and found the philosophy and implementation of modal editing amazing. Writing and manipulating code in it went like bread and butter.

But then I told myself, hey, why not try Emacs? So I've been using Emacs for ~6 months now and cannot help but notice that the community is much more niche and humble compared to Vim's. Just look at the sheer number of color schemes available for both editors. And I had to agree, Vim was the far superior text editor, but that wasn't enough to keep me away from Emacs, since I gave the advantage to the other things (everything else) that Emacs does better.

I tried EVIL mode, and it is amazing, but something just felt wrong about using it inside Emacs. I wasn't really using either Emacs or Vim. I would often shuffle and mix commands; sometimes I would do :w, sometimes C-x C-s. So I decided to ditch Evil until I get more comfortable with the Emacs key bindings. I came to Emacs because of Lisp (and the general FP audience is much, much more based around Emacs, which makes sense) and its amazing tools and plugins, which I found more stable and better designed. It is weird to say this, but things just worked with Emacs; things like auto-completion and code navigation (ivy, ace-jump-mode) were really fast, hassle-free experiences. Disclaimer: I have never touched older versions of Emacs; I spent my time in 24, and now in 25, so many of the myths and problems that Emacs got known for over time, I think, aren't there anymore.

And to sum things up, what is really weird to me is that functional programming is on the rise and every year I see it more and more adopted, but that doesn't help the Emacs audience grow. (Maybe because I am young, and I am a nerd often found in nerd communities where things like FP are often praised, but in the real world considered a bit awkward or esoteric.) I showed up at an FP workshop a few weeks ago in a local dev/hacker community place, everybody rocking Sublime Text/Vim, but nobody used Emacs; people were looking at me like I was an alien. Spacemacs is doing a good job of attracting people, but maybe Emacs will stay that programmer/nerd phenomenon, the all-mighty Lisp OS, that in the end people often reject or overlook. And why is that so? I do not know. If somebody can sink into and learn Vim, I don't see a reason why it is a different story with Emacs.

glormph 5 days ago 1 reply      
Useful stuff. I'd add typing J to move the next line to the end of the current line. A move I normally do by being on the next line and then typing 0dwiBackspace.

Only just figured that one out by mistakenly having the capslock on when moving around.

dopeboy 4 days ago 2 replies      
> It's always useful to jump back to where you were, as well, which is easily enough done with two backticks, or gi to go to the last place you inserted text.

And here I was undoing and redoing the entire time.

JackMorgan 4 days ago 0 replies      
For anyone who wants to build muscle memory in Vim I'm writing 10 Minute Vim, a book of pre-made exercises for practising advanced macros, RegEx, and navigation. It's already helped me learn a number of new commands in just a short time.


sevensor 5 days ago 1 reply      
This is good solid advice -- I have a problem with vim advice that golfs more unusual circumstances to save a keystroke or extensively remaps the keys. (Generally the only thing I remap is F1 to ESC, because I use laptops that place them inconveniently close together.) Devoting too many cognitive resources to text entry is also an anti-pattern!
tremon 5 days ago 1 reply      
Note that d2wi and c2w are not exactly equivalent: d2w will consume trailing whitespace after the second word, while c2w will not. AFAIK, there is no exact equivalent of d2wi using c. I usually end up using c2aw, but that will also consume any word characters preceding the cursor (not a problem for me since I usually navigate using b and w).
nocman 4 days ago 6 replies      
"Hitting Escape is usually unnecessary; Ctrl+[ is a lot closer, and more comfortable."

I agree with a lot of the things in this article, but wow, I could not possibly disagree with this quote more. Ctrl+[ is WAY more uncomfortable than using the Escape key.

Granted, I have many years of vim usage that have made hitting Escape a habit, and that probably plays a big part in it, but Ctrl+[ is downright painful for me (and yes, I sat and tried it for a while in vim to see what it would be like).

I can get from the home row to Escape and back with very little effort, though I understand that is not the case for many people. Perhaps it is due to the fact that I use my ring finger to hit Escape rather than my pinky (which would be a lot more work, I think).

yumaikas 4 days ago 0 replies      
I'll just drop my 2 cents here as a vim user. I've tried emacs a number of times, but I've gotten too familiar with what vim has on offer.

I think where vim often wins over emacs is the 110 vi modes that every IDE eventually gets. Vi is an idea that can prosper in many environments.

Emacs is kinda like smalltalk. To get much benefit from it, you have to buy into it whole hog, or not at all. I can write C# in VS with vim keybindings, go in sublime text, and then just hack on a Lua snippet in vim itself. Emacs has ways to work with all those, but that requires a new skill set that I don't need at the moment. Maybe after I graduate from college, but right now isn't the time for me.

stewbrew 4 days ago 3 replies      
While these tips are good per se, in retrospect I don't think it's such a good idea to tie your muscle memory that much to a single piece of software. I'd say enjoy the cursor keys. Moving around in insert mode is OK too.
oolongCat 4 days ago 2 replies      
Instead of

I prefer

selecting with v helps me avoid making mistakes.

foo101 4 days ago 2 replies      
Moving to the far left corner of the keyboard to reach Escape has been a major annoyance for me. On the other hand, the Caps Lock key on my keyboard is almost useless; I never use it. Is there a way to map Caps Lock to Escape in Vim?
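Vim itself never sees the Caps Lock key, so the remap has to happen at the OS level rather than in your vimrc. A sketch, assuming an X11 Linux desktop (on macOS this lives under System Preferences > Keyboard > Modifier Keys):

```shell
# Make Caps Lock act as Escape for the whole session (not just Vim)
setxkbmap -option caps:escape

# Or swap Caps Lock and Escape instead:
# setxkbmap -option caps:swapescape
```

Put the command in your session startup (e.g. .xinitrc) to make it stick across logins.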
deathgrindfreak 4 days ago 1 reply      
I find that marking my place with "ma" and returning to it with "'a" is a huge productivity boost. Of course the "a" is just a label, you can use any character (this allows multiple marks as well).
SCdF 4 days ago 3 replies      
So I have used ST3 for years now, and I realise that I'm slowly moving toward Vim. I started using Vim shortcuts in Chrome, and now I use Vintageous in ST3. Vintageous is good, but it is incomplete. It can't even do the first examples on this article (ctrl+f / ctrl+b) correctly.

Thinking I should just bite the bullet. Would mean I could work on a server using Mosh + Tmux as well, which should be rather nice.

What would the current canonical guide be for getting up and running with Vim, with plugins, auto-complete[1], inline linting, multiple carets etc?

[1] Just the ST3-style of parsing out tokens in the same file

dvcrn 4 days ago 0 replies      
I can recommend vim-hardtime [0] for anyone wanting to force themselves to learn the "vim-way" of navigating. It disables n subsequent presses of hjkl where n is any number you want.

For me I set it up to not allow more than 2 steps in any given direction to remind me to use jumps instead.

[0]: https://github.com/takac/vim-hardtime

bcheung 4 days ago 1 reply      
<C-c> works like escape almost all the time. Mapping the caps lock key to Ctrl makes hitting <C-c> super easy. I usually use that instead of escape.

In general I would agree 1 character at a time is an anti-pattern but it needs to be balanced with the cognitive load of counting how many words or deciding what is or is not a boundary when there are symbol characters.

godelski 4 days ago 1 reply      
Movement is an important part of vim, but so is auto-completion. Probably the biggest thing that helped me greatly improve my speed in vim was learning the C-x keys and that they are context aware. Repeating an entire line (or similar lines) becomes quick without having to go back to that place. Why move in the file when vim can find the text for you?
mastazi 4 days ago 0 replies      
> If you happen to know precisely where you want to go, navigating by searching is the way to go, searching forward with / and backward with ?.

It's worth noting that, after searching with ?, you can still move back and forth through occurrences using N and n, in the same way you would after using / for a search.

tomc1985 4 days ago 0 replies      
While I prefer vim for text-mode editing, it is so frustrating that it just has to use its own damn keyboard shortcuts and that the ones burned into muscle memory from 20+ years of using pretty much every text editor under the Windows/DOS sun don't work without significant hacking.
bagol 4 days ago 0 replies      
My favorite is mapping '()' to 'vi)', '[]' to 'vi]', '{}' to 'vi}'. So I can read it as "grab parentheses" instead of "visual inside parentheses".
vlunkr 4 days ago 1 reply      
> don't forget you can yank, delete or change forward or backward to a search result.

Did not know this! (although I've been on vim for ~a month) That's a great trick. Much easier than trying to count words.

lucb1e 4 days ago 0 replies      
Page loads very slow and uses Wordpress. In case it goes down: http://archive.is/JaOXd
clifanatic 4 days ago 0 replies      
> Vim isn't the best tool for every task

Begone with your heresy!

choicewords 4 days ago 0 replies      
My eyes. Good content, but very hard to read.
rhinoceraptor 5 days ago 4 replies      
I have a programmable keyboard, so I just map escape to be right above tab, where tilde usually is.
lathiat 5 days ago 0 replies      
This is a really good guide.
jnordwick 4 days ago 2 replies      
I started off using Emacs a lot (like tons and tons of Emacs with tons and tons of custom modes, functions, and craziness - network pong anybody?). For almost a summer, my xinitrc at the school labs literally opened up a full-screen Emacs; it was my window manager too.

The simplicity of vim (and pretty colors) drew me in. Plus, as I learned more sed/ed, I understood vi more. That, and a slow connection from off campus really sucked. I learned it pretty well too - well enough to hack together some vim scripts, but nothing near my Emacs level. I feel like vim mode hacking is a beast you need to be specially equipped to handle (and I can write APL in any language, so it isn't the syntax).

Then Eclipse and IntelliJ came around and I only really used vim for quick one off stuff (if I didn't use printf, echo, or cat). The only time I used vim was for C/C++ or something esoteric, like KDB+/Q/K, that didn't have their own dev environment (unlike say VHDL or Matlab where I could sit in their ugly cocoons).

Now I'm growing tired of the very buggy, slow Java environments that seem to require a gaming rig to compile HelloWorld.java (or sorry, HelloWorld.xml with SpringBoot). And I see all these poor Emacs clones playing catchup with 2000. You have editors written in JavaScript inside a web browser with modules transpiled to JavaScript (for some reason JavaScript is too low-level to write text editing packages in now -- God help us all). That, and they don't support a tenth (I'm being generous) of the functionality that Emacs does and they probably never will.

What is so hard about an extensible text editor? Just getting the basic hooks down for self-insert and movement without having to go to swap?

I remember when Emacs was called "Eight Megabytes and Constantly Swapping". I now see Atom routinely take up over 800 MB. And it still can't play Pong.

Now with Rust and other languages, I'm back home in Emacs, but the keystrokes do tend to bother me a little. I liked the HJKL movement keys in vim - I just hated the modality and think I spent more time trying to figure out "my cursor is here, but I need move it over there - so first I need to jump on that log, shoot the rocket to knock down flowerpot then run quickly while the line is falling to catch it and K it to the next line" -- like some sort of weird text editing puzzle (I wonder if you could make a vim code golf puzzle set).

Emacs has these bindings that feel like finger yoga, even when I've remapped Caps, control, half my F-keys, etc. What I really need to do is remap my toes to my hands, I think.

It would have been really nice to see C-[hjkl] style movement (with maybe CS-[HJKL] as the distance mods or something). It's too late now. You can of course remap those keys, but too much of that behavior is baked into people.

Maybe one day when I'm old and gray I'll do the Knuth thing and start a new text editor, but before that I'll probably need to redo the monitor, mouse, and keyboard and that is just too much right now.

Lawsuit: Yahoo CEO Marissa Mayer led illegal purge of male workers mercurynews.com
419 points by prostoalex  3 days ago   358 comments top 25
iamleppert 3 days ago 16 replies      
Just reading this brought back memories of working at LinkedIn, and why I'll never again work for a company who has institutional performance review processes. That pretty much excludes all big tech companies and I'm perfectly fine with that. The cookie-cutter performance review process is impersonal and has absolutely nothing to do with helping people do their best work.

In my case, I had a manager who simply didn't like me because I'm gay and used the performance review process, and eventually put me on an action plan and forced me to quit.

Most of the big tech companies will put you on something called a PIP, which is a "Performance Improvement Plan". It basically means they are preparing to fire you, but they give you an option: quit now, and you can have some severance, or you could try and stay and complete the PIP, but still run the risk of being fired for any reason, and in that case, you get no severance. It's exactly what happened to me, and I decided it wasn't worth the stress to try and stay and fight it so I just quit.

It was the most demoralizing experience ever, and really showed me that these processes are in place so managers can just get rid of people they don't want or like.

rdtsc 3 days ago 3 replies      
> We believe this process allows our team to develop and do their best work. Our performance-review process also allows for high performers to engage in increasingly larger opportunities at our company"

Ok so has that worked out well for Yahoo? Clearly enough time has passed by now to evaluate Yahoo's practices in hindsight and say something like: "Yeah, thanks to these great management practices we have reconquered market share / are the new exciting place where everyone wants to work / lead this revolutionary research." Instead, it ended up being owned by a phone company.

So if Yahoo is a failing company, what they did there will be associated with failure. It seems they effectively moved the cause of promoting women in technology fields backwards. When someone says "we should find a way to promote women more" ... "Oh, right, Yahoo was heavily into that, yeah that was ugly, the lawsuit and all...".

It is a bit like the crazy person advocating for your favorite language or framework, it's nice to have a fan, but because they are crazy, they are pushing everyone away with their behavior.

> as well as for low performers to be transitioned out.

"Transitioned out" ... there is almost a positive ring to it. "We've reached out to them, found their pain points and helped them transition out to a new stage." Is that how everyone talks now? Or is it just me who finds it grating?

pcurve 3 days ago 5 replies      
..less than 20 percent female. Within a year and a half those top managers were more than 80 percent female,

Even if they were not deliberate about it, they must have talked about how this might be perceived by employees.

As bad as this issue is, I think hiring friends/referrals/former colleagues, especially en masse, is a much bigger issue that's rarely talked about because it's not necessarily illegal.

However, I cannot overemphasize how demoralizing it is, especially when they aren't proven to be any better.

joeax 3 days ago 5 replies      
I was once a fan of Mayer when she first took over Yahoo, until she led a crusade against remote workers. As a remote worker myself I can tell you this crusade sent ripples throughout the industry. A lot of tech companies (especially outside the bay area) want to emulate the cool kids like Yahoo, Google, and Apple, and even the company I work for started questioning its WFH policies.

So no, I don't feel sorry for her one bit.

aq3cn 3 days ago 1 reply      
Wow, so many different news items about the same tech company within a month, and now this one too. They could not have pulled a better stunt to compete with what Apple and Google are doing in their keynotes.

I almost felt sorry for them when I read articles saying it's the saddest deal in history, as they could have gotten much more money earlier. Now, I feel good that they are gone. fuff .. gone ..

I need to take care of my Flickr account now.

dreta 3 days ago 4 replies      
There's a reason why people make tons of money playing the stock market against companies that artificially force gender or racial equality.

Practices like these are going to ruin your business the same way being sexist or racist will. Either you hire the best person for the job, or you'll be beaten by companies that do.

hueving 3 days ago 4 replies      
This is ultimately the result of things like 'implicit biases' training that tell you to accept that you are unfairly subconsciously advantaging majorities and that you should explicitly disadvantage them to make up for your biases. Disgusting.
meddlepal 3 days ago 3 replies      
Yahoo is in full self-destruct mode this week.
argonaut 3 days ago 3 replies      
I'm extra-suspicious of this lawsuit because of the opportunistic timing. He was fired in January 2015, more than 1.5 years ago. And yet the lawsuit is only filed now during Verizon's process of closing its acquisition of Yahoo. Whether or not he has a case, his motives now have an antagonistic taint because he no doubt timed this to maximize the PR damage and his chances of getting a quick settlement.
staticelf 3 days ago 2 replies      
Marissa Mayer seems like a really bad CEO.
grawlinson 3 days ago 2 replies      
With all the hubbub surrounding Mayer/Yahoo, why hasn't the board or shareholders fired her?
leaveyou 3 days ago 6 replies      
I'm deeply offended by this and personally I will cease to use Yahoo Mail. I urge any self-respecting males to stop using any of Yahoo services as a form of protest for this despicable act of discrimination. And I'm serious; too much is too much.
drawkbox 3 days ago 0 replies      
We live in a "free" and democratic society but when you work at mid-large companies they are all basically idealistic dictatorships with brown shirts running the ranks. It really is a conundrum in some places.
zhai88 3 days ago 0 replies      
Worked for a high-tech company in the US that hired mostly ethnic Asian people, especially immigrants from China, Japan, and Korea, because most were single or came over alone and would work 12 hours, weekends, etc. They did not protest, maybe could not protest because of whatever visa they were on. The CEO was from China, family members in senior positions at the company, etc.
dschuetz 3 days ago 3 replies      
Does anyone here even use Yahoo's products or services? I wonder why Yahoo is still significant and why there are stockholders dumb enough to stay. Yahoo is a sinking ship, imho.
Moshe_Silnorin 3 days ago 0 replies      
This seems to be a self-policing crime.
jtedward 2 days ago 0 replies      
> Yahoo's diversity reports indicate that the percentage of women in leadership positions at the company rose slightly to 24 percent in 2015 from 23 percent in 2014.

This is buried in the very last line of the article.

inmemory_net 3 days ago 0 replies      
As an example of the rot in Yahoo, the Yahoo Finance Message Boards don't work in IE 11.
roflchoppa 2 days ago 0 replies      
Does anyone have links to the court documents that have been filed?
cmdrfred 2 days ago 2 replies      
phyzome 3 days ago 5 replies      
melindajb 3 days ago 3 replies      
bajsejohannes 3 days ago 0 replies      
Better title: "Lawsuit: Yahoo CEO Marissa Mayer accused of illegal purge of male workers". An accusation is very different from a verdict.
Steeeve 3 days ago 0 replies      
There's a bizarre amount of negative Yahoo news lately, and a lot of it references Mayer directly.

I wonder if Verizon isn't trying to shed her and her departure bonus.

JoeAltmaier 3 days ago 3 replies      
This whole thread is deeply disturbing. The bald statement in the headline is deeply disturbing. The fundamental assumption that women can't possibly be deserving of management positions. That any objective measure of performance would certainly favor men. That any man that loses his job to a woman has been treated unfairly.
Graph-Powered Machine Learning at Google googleblog.com
362 points by tim_sw  2 days ago   66 comments top 11
bnchrch 2 days ago 14 replies      
While there is a lot to get excited about with ML, both as a consumer and as a software developer, I can't help feeling a pang of sadness.

Big data, and in this case the relationships (graphs) between big data points, are what's needed to make great ML/AI products. By nature, the only companies that will ever have access to this data in a meaningful way are going to be larger companies: Google, Amazon, Apple, etc. Because of this, I worry that small upstarts may never be able to compete on these types of products in a viable way, as the requirements to build these features are so easily defensible by the larger incumbents.

I hope this is not the case but I'm getting less and less optimistic when I see articles like this.

nicolewhite 2 days ago 0 replies      
I'm a big fan of the smart replies in Allo and Inbox so this was a fun read. I did something similar in grad school where I and some other students manually labeled a handful of sentences and then used graph-based semi-supervised learning to label the rest for the purposes of using it as a training dataset. It would be neat to hear what technology they used for the graph-based learning; perhaps Cayley? We used Python's igraph at the time but it was pretty slow. It would also be interesting to try this in Neo4j.
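The graph-based semi-supervised idea can be sketched in plain Python. This is a toy label-propagation pass, not Google's Expander (which is built for billions of nodes); `propagate_labels` and its parameters are made up for illustration. Labels diffuse from a handful of seed nodes along graph edges, with seeds periodically clamped back toward their known label.

```python
from collections import defaultdict

def propagate_labels(edges, seeds, iters=50, alpha=0.75):
    """Toy graph-based semi-supervised learning via label propagation.

    edges: undirected (u, v) pairs; seeds: {node: label} for the few
    hand-labelled nodes. Each node ends up with the label whose mass,
    diffused from the seeds along the edges, is largest."""
    neighbors = defaultdict(set)
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)

    labels = sorted(set(seeds.values()))

    def seed_dist(n):
        # One-hot distribution for seeds, all-zero for unlabelled nodes.
        return {l: float(seeds.get(n) == l) for l in labels}

    dist = {n: seed_dist(n) for n in neighbors}
    for _ in range(iters):
        new = {}
        for n in neighbors:
            # Average the neighbors' current label distributions.
            avg = {l: sum(dist[m][l] for m in neighbors[n]) / len(neighbors[n])
                   for l in labels}
            if n in seeds:
                # Clamp seeds back toward their known label each round.
                new[n] = {l: alpha * seed_dist(n)[l] + (1 - alpha) * avg[l]
                          for l in labels}
            else:
                new[n] = avg
        dist = new
    return {n: max(d, key=d.get) for n, d in dist.items()}

# On a chain 0-1-2-3 with the ends labelled, interior nodes pick up the
# label of the nearer seed:
print(propagate_labels([(0, 1), (1, 2), (2, 3)], {0: 'a', 3: 'b'}))
# -> {0: 'a', 1: 'a', 2: 'b', 3: 'b'}
```

Weighted edges would just turn the neighbor average into a weighted average, which is closer to what the knn-with-edge-strength comments below describe.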
BinRoo 2 days ago 0 replies      
The algorithm is well known. The main contribution is that they were able to scale it up to a huge graph. Google doesn't go into much detail about that because it's proprietary. So what's the point of this blog post? Oh yea, free Allo/Inbox advertisement.
harigov 2 days ago 2 replies      
This is really cool. The underlying graph optimization algorithm seems similar to how graphical models are trained. Is that correct? Can someone please help me understand the difference?
mark_l_watson 2 days ago 0 replies      
Great stuff. I have been using machine learning since the late 1980s. Semi-supervised learning is new to me, and it is exciting to be able to mix labeled and unlabeled data.
zitterbewegung 2 days ago 1 reply      
Anyone know if they are going to release code in TensorFlow that works with public data? I have been using the reply feature in Inbox and it's very useful.
onetwo12 2 days ago 0 replies      
I see this as a kNN approach in which the distance is a function of the strength of the edge between the vertices.

In the retrofitting paper cited in the comments there is a process of smoothing, that is, feeding back the message or information to update the states of the graph (an example is in the modern AI book). It doesn't seem like anything new.

sandGorgon 2 days ago 0 replies      
is there something in Expander that makes this happen... or can something like Apache Spark be used as well ?
b3n 2 days ago 0 replies      
As a software engineer who knows nothing about ML, where's the best place to learn?
prohor 2 days ago 0 replies      
Is there an implementation released, maybe? Similar to what they did with TensorFlow.
yalogin 2 days ago 1 reply      
Unrelated, but none of the Google blog links open up in Safari on iOS. The link opens but the text doesn't show.
Shenzhen I/O zachtronics.com
429 points by yumaikas  3 days ago   125 comments top 28
alcari 2 days ago 0 replies      
This was released about 26 hours ago.

Steam says I've played 15 hours.

Send hel--more microelectronics.

It's TIS-100, but with a native multiply! and unsynchronized broadcast! and digit get/set!

Still supremely difficult.

Paul_S 2 days ago 2 replies      
Why doesn't real embedded development work this way?! This is awesome and I want it to be real.

These are the distilled greatest moments of embedded development, because in reality I spent today ~2 hours doing a code merge, ~2 hours on work logging and other bureaucracy, ~1 hour helping testers, and ~2 hours reading schematics and hunting down hardware issues with an oscilloscope and spectrograph.

Filligree 2 days ago 0 replies      
There was a bug in the game, which collaborated with a bug in NixOS to stop it from working at all.

I mailed the backtrace to support. Three minutes later, he'd mailed me back saying he'd release a patch shortly. The problem was simple, admittedly, but nice going!

contingencies 2 days ago 3 replies      
Title of the post is wrong. Shenzhen (深圳) is a city in China, spelt Shenzhen, not Shenzen. (Edit: Looks like someone just fixed it.) An interesting game concept. Honestly though, even there virtually nobody codes ASM anymore... as a hardware startup founder in China I just bought the game anyway. :)

(Edit: This game is really quite good. I never could be bothered with assembly, but now I am entertained. I don't think it's super accessible to people who haven't at least dabbled in assembly before though, but it's certainly a good way to learn.)

danso 2 days ago 1 reply      
Feels like an auto-buy for me. Any developer that can make Assembly into a fun, polished game (TIS-100) is probably going to hit this one out of the park.
veganjay 2 days ago 1 reply      
FYI - TIS-100 is on sale (50% off) on Humble Bundle: https://www.humblebundle.com/store/tis100

I only heard about this game recently after I read the HN submission: "My Most Important Project was a ByteCode Interpreter", which led to an article about other simulators, which led to CoreWars, which led to an article about programming games, which mentioned TIS-100 :)

qwertyuiop924 2 days ago 1 reply      
Let's see. A new game from Zachtronics that's a clear spiritual successor to TIS-100, one of my favorite games from the company, with an actual printable manual, and it looks like it's really cool?

I hate to use memes in most cases, but...


k_sze 2 days ago 1 reply      
I wonder if it will make you debug and identify shady counterfeit components from time to time, since it has a backstory of being based in Shenzhen.

(I'm Chinese and I love to make fun of my own people.)

Udo 2 days ago 1 reply      
I just happened to watch a Shenzhen I/O unboxing on Scott Manley's channel: https://www.youtube.com/watch?v=UpJU3wIf-v0

Looks very, very compelling :)

yzh 2 days ago 1 reply      
This really reminds me of the super fun flash game Manufactoria, a Turing machine game: http://pleasingfungus.com/Manufactoria/
stephenmm 2 days ago 4 replies      
Looks interesting, but it's not clear to me what it is... Anyone have a more noob-friendly introduction?
makmanalp 2 days ago 1 reply      
Interesting and cool to see Shenzhen getting a shoutout in popular culture.

edit: popular meaning "relating to the populace", not "widely supported"

bogomipz 2 days ago 5 replies      
This is really interesting. Could someone who is familiar with this answer whether this might be a decent resource to teach students assembly language?
emeraldd 2 days ago 1 reply      
This is going to eat my weekend! Sheesh those time issues are tight. I'm about 90% certain that the only way I've made this stuff function is via carefully crafted race conditions ...
sowbug 2 days ago 1 reply      
I almost missed that yes, it is available on Linux. When did the penguin icon get replaced with "Steam Play" on the Steam site? Are all Steam Play games Linux-compatible?
green7ea 2 days ago 1 reply      
I really liked TIS-100, but it refreshed the screen 60 times a second even when nothing was changing, which made my laptop really hot (my normal text editors don't do that ;D). I think they used Unity, which probably limited their optimization options. Having said that, I'm buying this game; Zach makes the best geek games.
0x54MUR41 2 days ago 0 replies      
It's cool.

I think the games developed by Zachtronics are mainly puzzle games. I have never played TIS-100 before. It seems like an interesting game, though.

I have played another game from Zachtronics called SpaceChem [1]. SpaceChem is a puzzle game in which you play as a reactor engineer. The main task is to transform materials into chemical products. At first glance it's very hard to construct the chemical reactions, but you soon notice there is a pattern.

I really would recommend the games from Zachtronics.

[1]: http://www.zachtronics.com/spacechem/

jordache 2 days ago 2 replies      
how realistic is this game's portrayal of circuit building?
ah- 2 days ago 0 replies      
This needs a vim mode.
imranq 2 days ago 1 reply      
As a side note, I've been working through all the MAKE electronics books posted here a little while ago. Will this game help me in designing real life circuits?

Still impressive, didn't know you could make assembly programming into a game!

amelius 2 days ago 2 replies      
Is this an educational game?
shostack 2 days ago 0 replies      
Can someone comment on how playable this would be for someone with no background in embedded systems or electronics?
guiomie 2 days ago 0 replies      
This looks fun. Too bad it's not on PS4, and I highly doubt Steam works on my Chromebook C300.
em3rgent0rdr 2 days ago 0 replies      
I love Zachtronics games!
niedzielski 2 days ago 1 reply      
Is this open source or are there any notable similar simulations with source available?
itgoon 2 days ago 0 replies      
Saw this, bought it...and now it's past midnight.
maplechori 1 day ago 0 replies      
Great game, already bought it.
jbverschoor 2 days ago 0 replies      
Super cool
Could Twitter Be Better Off as a Nonprofit? npr.org
333 points by happy-go-lucky  23 hours ago   299 comments top 39
kharms 22 hours ago 7 replies      
This would be fantastic. Twitter's utility is in its open (inherently less profitable) nature. As a data source for academic study it's unparalleled.
sverige 19 hours ago 11 replies      
I've never been able to understand Twitter's popularity. To me, it's like a website's comment section without the attached content. And most of the stuff I've seen on Twitter is only slightly better than YouTube comments, and often worse. I know there are lots of Twitter users here -- seriously, I'm not trying to be funny or sarcastic, I just don't get the appeal. What am I missing?
dfeart3453465uf 22 hours ago 5 replies      
Twitter is the odd one out. While others built walled gardens, Twitter kept the gate open.

We now have a general catalog of human discourse for the last decade: records of great tragedies, revolutions, elections past and future.

It would be a shame if this was bought and locked up by some gardeners.

slackoverflower 19 hours ago 4 replies      
Twitter's best bet is to cut costs by bringing the employee count down to 100 (50 sales-related, 50 engineering) and moving their infrastructure to AWS/Google Cloud, which would take the company to profitability. I think once they become profitable, it opens a lot more opportunities to explore new markets. Right now they are in a scramble to get users. Being profitable and focusing on one thing at a time will do them wonders. They should start by shutting down Periscope and having a new tab in the app for Live.
luhn 18 hours ago 0 replies      
The article cites Mozilla as a "precedent in tech for a nonprofit spin-out," but that's really not relevant at all. The Mozilla Foundation took over the Navigator codebase that Netscape open-sourced, the Mozilla Foundation having been created for that sole purpose. What happens with Twitter's software is moot because Twitter's value comes from its community.
jv22222 9 hours ago 0 replies      
zymhan 21 hours ago 0 replies      
I only heard the jab on air, I didn't realize they were actually being serious

"In the second quarter it lost more than $100 million so perhaps it already is a nonprofit."

imh 21 hours ago 1 reply      
It seems weird to say that Twitter discussion was so significant in furthering different causes. From the outside (I don't use it), I've still heard plenty about those events. From the inside, I'd bet it seems like it was the original source for the events, or at least the cause of them getting so big. On reddit/imgur/9gag/4chan/etc., there seems to be an idea that whichever one you are on is the source of whichever meme is getting popular. I think it's the same kind of thing. Maybe the discussion grows "organically" and that causes it to grow on each of these platforms, instead of the platform causing the growth.
andy_ppp 8 hours ago 1 reply      
No. Twitter would be better off using machine learning to segment their users (the way Hello is trying to do manually). For example, they should know I love Nike trainers and Arsenal football club, that I'm a developer, that I'm thinking about buying a house, etc., and surface this information when I'm writing a tweet, such that it knows if I'm talking about Arsenal or my trip around Japan and gives a richer UI based on this, making my stream more interesting. I.e., when a game is on and I'm writing about it, there should be a way to message only Arsenal people...

All of this should be automatically added/tagged up and while you can remove the metadata it'll be right most of the time. The advertising potential is incredible.

laurencei 21 hours ago 5 replies      
I'm not going to pretend you can run Twitter on a DigitalOcean box for $40/mo with one guy.

But when I see things like $100m in losses, I can't help but feel there is a real opportunity for Twitter to streamline its engineering and operational costs.

Is there a breakdown available of where/how they spend their money?

M_Grey 22 hours ago 0 replies      
It would be better for everyone who uses Twitter, better for people who use it for social sciences, but probably not better for Twitter itself as a company.
Raed667 21 hours ago 1 reply      
I can't forgive Twitter for what it has done with TweetDeck after buying it.

They have taken a great tool and stripped it so that it fits their blurry vision of what the service should look like.

mathattack 21 hours ago 3 replies      
Hard to imagine Twitter working as a non-profit when so much compensation is tied to equity.
zitterbewegung 20 hours ago 0 replies      
I think a better question is: could you make a nonprofit serve the purpose of Twitter? GNU social has been successful in this regard: https://en.m.wikipedia.org/wiki/GNU_social
ghaff 21 hours ago 1 reply      
As a non-profit, it would still need to break even. So far, it hasn't demonstrated this ability, so arguing that its problems stem from pressure by investors to make outsized returns seems weak. (And the article even admits that Twitter's problems aren't all about investor pressure.)
dominotw 20 hours ago 0 replies      
This seems like the only way to end censorship on twitter.
Iv 6 hours ago 0 replies      
I see nothing in this article indicating why it would? Shareholders want profits. Why would they allow twitter to go non-profit?

If you want a non-profit twitter, make a non-profit alternative and hope for Twitter to die.

Scuds 16 hours ago 0 replies      
At the very least, Twitter should never have gone public. The stresses of a public company don't jibe with the needs of a platform like Twitter. Thousands of employees looking to monetize our tweets aren't what Twitter needs, in any case.
meira 21 hours ago 6 replies      
Twitter betrayed all the developers that relied on their API, and also helped overthrow legitimate governments and put the Middle East and Ukraine in chaos. Their failure is well deserved and should help other startups not to fuck with everybody while pretending to help (Google and Facebook, are you next?)
olivermarks 21 hours ago 5 replies      
There's no reason a Craigslist-style nonprofit version of a service similar to Twitter couldn't launch. The open Facebook alternatives never took off, though; it's very hard to get traction against deep-pocketed and connected rivals.

Doesn't look as though anyone is going to buy Twitter, which is embarrassing for them.

intended 11 hours ago 1 reply      
Yes, Reddit too.

These websites are more like gardens than amusement parks. A collective shared space, which is hard to monetize.

peatmoss 21 hours ago 2 replies      
It's worth mentioning that the more principled alternative is GNU Social: https://gnu.io/
seany 22 hours ago 1 reply      
It would only be better if they unbanned everyone and stopped trying to bend the messages sent over the platform. At the moment it seems like it might be better for it to just burn down, so something more open can replace it.
idlemind 16 hours ago 1 reply      
As a useful public resource, it could be. Along the lines of ICANN?
shmerl 18 hours ago 0 replies      
Diaspora* is surely better off that way.
Rustydave 21 hours ago 5 replies      
How about charging users? Like monthly plans, etc.
uptownhr 22 hours ago 0 replies      
They should adopt a decentralized system. It would put them back on top and give people another reason to use it again.
ilaksh 16 hours ago 0 replies      
See the #1 HN article today about returning the web (or the internet) to its decentralized origins.

The existence of Twitter and many other tech monopolies that dominate with what could basically be a shared protocol is what's holding back decentralization.

Actually I think all of the giants will eventually fall hard because of (re)-decentralization. That includes Twitter, Facebook, Google, Microsoft, Amazon, the United States, etc.

lifeisstillgood 20 hours ago 1 reply      
I think we are seeing a (vastly accelerated) version of turn-of-the-century industrial politics.

The ultimate (good) destiny of Google, Facebook and Twitter is as public utilities: their valuable data open to all and their connective abilities as useful and common to all as the road network.

Should it be a non-profit? No, it should be a utility.

h4nkoslo 20 hours ago 0 replies      
"Nonprofits", especially when they have associated revenue streams beyond pure donations (eg government contracts), have really odd organizational incentives.

In the context of something like Twitter where selective censorship / megaphone promotion is becoming a core part of their operations, it looks like reorganizing as a nonprofit is just a tax-advantaged way for the board to act how they want without being even theoretically obliged to operate for the benefit of the people that supplied them with the capital to build up their service.

They've been extremely aggressive in purging high-value users on extremely flimsy pretences in what cannot possibly be a revenue-optimal way (unless somehow you think celebrities fighting amongst themselves is bad for user engagement), but if they have a purported goal of something vague like "improving communication" that becomes a non-issue.

spikels 17 hours ago 0 replies      
Twitter should be replaced with a protocol like email or TCP/IP.
malloreon 13 hours ago 0 replies      
same with facebook please
Zigurd 20 hours ago 0 replies      
No, because you would not build it that way. You could say the same about Facebook. Diaspora already exists, as well as some other distributed social networks. If any of these alternatives prospers it may be because the idea of Twitter should be implemented as something other than an investor-owned venture.
bogomipz 21 hours ago 1 reply      
I am curious what the mechanics of this would be: take the company private and then apply for classification as a 501(c)(3)?

Looking at the guidelines, I'm guessing the type would be a "private operating foundation"?


Just as one example, 501(c)s are heavily regulated where politics are concerned. Given that Twitter is utilized heavily by politicians as well as political campaigns, would this even be a viable option for them?

_audakel 21 hours ago 0 replies      
"In the second quarter it lost more than $100 million so perhaps it already is a nonprofit."
mavdi 21 hours ago 4 replies      
puppetmaster3 21 hours ago 2 replies      
Good point. Apple is a non-profit: it does not give dividends.
zxcvvcxz 21 hours ago 0 replies      
It already is, is it not?
meerita 21 hours ago 0 replies      
Twitter needs to be acquired by Google.
Tim Berners-Lee just gave us an opening to stop DRM in Web standards defectivebydesign.org
313 points by mynameislegion  3 days ago   212 comments top 14
eveningcoffee 3 days ago 6 replies      
Everyone who is dismissive of standardizing DRM in web browsers does not see the forest for the trees.

It does not stop with movies or music.

If DRM is deeply integrated into the web, then everything will be affected by it. Already today some publishers go to great lengths to try to deter people from copying simple text and images. It will only get worse.

Currently the openness of the web has been very beneficial to people willing to make an effort to learn the web technologies. I think that this has opened the field to many talented people. You can just inspect the page and try to learn how it is made by reverse engineering it. This will go away, and you will get an inaccessible binary blob instead.

gcp 3 days ago 4 replies      
As others have pointed out, this amounts to nothing. At worst there'll be no standard, at best there'll be a standard not under W3C control.

That being said, Netflix was a big pusher for EME, as far as I know not because they wanted it, but because the studios they license from demand DRM. Yet, they seem to have lost most of their "movie studio" catalogue and are now focusing on originals.

Netflix guys, what about allowing us to see the originals even if we don't have a CDM installed? That would kill DRM/EME faster than hollow FSF & EFF victories. FSF/EFF guys, doesn't this sound like a more promising campaign to you?

lucb1e 3 days ago 2 replies      
> DRM's dark history from the Sony rootkit malware to draconian anti-circumvention laws demonstrates that integrating it into Web standards would be nothing but bad for Web users.

This is where I get scared. What if DRM does not become a web standard? What is the alternative that companies will want to use instead?

That is for me the only reason why standardization might potentially be a good thing. Not because DRM is good, but because the alternative might be worse.

Everything in the past has been broken anyway. From CSS to AACS to HDCP[1]. I was hoping Firefox (and perhaps Chromium, but Google would probably not be so kind as to open source that part of the code) would have the DRM code built in so that we can spoof the whole thing with simple modifications. Better than having to reverse engineer Sony malware.

[1] https://en.wikipedia.org/wiki/Illegal_number

grogenaut 2 days ago 0 replies      
EDIT: I'm not trying to bag on them, I just think they need to work on their messaging if they want to be effective:

That website seems about as in touch with people not of the same mindset as the back pages of NORML's website (once you get past the parts written by a PR person). It's got a rotating banner to "cancel Netflix" which links to a 2013 post about how Netflix will make you use only certain browsers. Makes the site seem either disused... or "tinfoil", as I think most consumers love Netflix.

(Note I only used NORML as an example because their site used to have (and may still have) a well-articulated argument up front which quickly devolves into what many people would see as weakly argued reasons for letting me get high. It's why, in any movement, you put your articulate people out front even if they're not the real drivers.)

Waterluvian 2 days ago 1 reply      
Isn't this just an arms race that they can never win? Regardless of source, encryption, format, etc. If a frame of a movie eventually makes it to my video card's buffer, I can get at it, right? There is no end-to-end encryption from source into my brain.

I can only see this just being a colossal inconvenience for users, developers, and many many innocent applications.

oliv__ 3 days ago 1 reply      
"Defective by Design"

Sounds like pretty much everything that's manufactured these days...

contingencies 2 days ago 3 replies      
Vote with your money. Don't buy Apple or Google devices, don't pay for Netflix or similar DRM streaming systems, don't buy Kindle books, don't buy Steam games. Buy unlocked media only, and don't forget to create some of your own.
throw2016 2 days ago 0 replies      
I think DRM is anti-culture. Human history has been about sharing. We are a product of the whole. Cultural wealth has been passed down for hundreds of years. Now the storytellers and singers do not want you to repeat their stuff, which, put in perspective, is not a very cultural thing to do.

And the only reason they can do this is because interests can congregate and technology can be abused, but it seems morally and ethically questionable. You are not stealing anything; you are watching or listening to a product of our culture. You do not take anything away from anyone.

It's just a small period of 70 years before the internet when mass media and content creators could collaborate to "manufacture trends", hits and disproportionate wealth.

Before that artists went broke and risked everything just to get their stuff published and out to readers and viewers. Obviously this is not how it should be but the whole 'jetset star lifestyle' may not always be possible simply because you are an artist.

The problem is that now that kind of "trend manufacturing" is much harder to pull off. But the entire industry, from studios to artists, has got thoroughly spoilt, got used to those disproportionate returns, and is now throwing all its toys out of the pram.

Artists create, but the rest of the world is also busy creating stuff. Engineers, industrial designers, scientists, programmers, everyone is creating stuff. Can anyone just be "entitled" to extraordinary wealth simply because they create? Maybe it's their cost structures, business models and expectations that need to change.

DRM is just a tantrum backed by money; it's rent seeking of the worst kind, and our democratic institutions and systems are so compromised by special interests they will continue to get their way.

ubersoldat2k7 2 days ago 0 replies      
There seems to be a lot of confusion about what EME is, because people bring up images, text and games. First of all, EME is targeted at video and, to a lesser extent, music streaming. Streaming video content to web browsers is, currently, a mess. There are many DRM schemes, and each does its own crazy shit to try and make it work on everyone's browser. It's also expensive. So expensive that only big companies will target big platforms.

Also, EME doesn't affect only web browsers, it also affects SmartTVs which are limited to a few DRM products.

What EME and CENC try to achieve is to add simplicity to this process and to let open source products compete with closed source ones. Even small DRM products are moving in this direction because it's impossible for them to target all platforms. In this regard, even an open-source DRM scheme could be achieved and compete.

DRM, EME and CENC will happen, and this only hurts open source products like Linux and Firefox. But it will happen.

wooptoo 3 days ago 3 replies      
That's a bit disheartening. Instead of having a basic standard to start with, we will now have none.

The issue that the FSF and others appear to have is with the Content Decryption Module, which is a binary blob at the moment. Standardising/opening up the CDM spec could have been done afterwards.

If the W3C were a bit sneakier they could have played a bait-and-switch game on the content providers and pushed for a standard/open-source CDM at some point. Why couldn't there be an open-source CDM?

INTPenis 3 days ago 2 replies      
Pardon my cynicism, but there is no stopping this. Money talks, money is power, and activists lack both the power and organisation of large corporations.

I foresee a near future where only a few in society will be able to use the internet safely. There will be subcultures, small segregated pockets of people who refuse the big corporate alternatives on the internet.

We're already seeing this today, think about it. I'm speaking from a Swedish perspective but when piracy on the www was relatively new in the 90s you'd go to "your guy" with the CD-burners and they would give you the movie, game, software you needed.

Only a few people knew enough to keep up with the trends, the BBS, the FTP sites and the newsgroups. Though there was little to none legal problems there were instead technical problems to piracy.

Then we had the piracy golden age, from about 98 to 2015, or today even. When everyone and their grandmother pirated. It was so easy, and torrents made it even easier.

But now the biggest ISP in Sweden has started handing over personal information of their subscribers to foreign companies who are sending monetary demands to the customers if their IP is found on trackers. So instead of being taken to court, just pay the money right?

That's just the start, it will only get worse because corporations have all the power.

But let's look at another example less sinister than piracy. Let's look at simple tracking and web security. Even there you have to be relatively computer savvy to keep up with the new tools: Adblock is out, uBlock is in, the NoScript author is under fire, and alternatives are often hosted on GitHub.

See what I mean? Safe web browsing is being restricted to a few people savvy enough, or interested enough, to keep up with that scene.

So already, today we're seeing what the future holds for the internet. Any privacy conscious, safe browsing will be pushed to minority subcultures using different platforms, tools and networks than the rest of the population.

The internet will be just another TV or Radio, with indie broadcasters fighting to remain free in a vast sea of big corporations.

We'll most definitely always have open source browsers but the question is how well these browsers will support the new DRM internet that I foresee in our futures.

So pardon my cynicism when I see no positive outcome for DRM on the web. I see instead a majority of content under DRM protection, some of it being copied by a small minority in society and spread through other smaller networks of people who refuse the mainstream web standards.

How this is achieved is just a technicality. It is inevitable because there's money in it and as long as there's money in it corporations will pour money into lobbying to change the rules in their favor.

pc2g4d 2 days ago 0 replies      
I'm just not so sure EME is so horrible. If companies want to deliver their content encrypted, then they will do so.
oldmanjay 2 days ago 0 replies      
The politics of desire always make for fascinating attempts at rationalization
Jaruzel 3 days ago 5 replies      
Unfortunately, it's people's very nature to avoid paying for things if they don't have to. While I don't support the wholesale DRMing of everything, I do support content creators' right to be remunerated for their work.

Without DRM, people will steal stuff without regard for the creator's survival. This was most visible in the pop music industry. Piracy was so rife that indie musicians were considered too big a risk for the labels, who turned to low-risk, low-cost "music factory"-style churning out of the same low-quality pap that the popular charts are now peppered with.

If we don't protect artists (by this I mean, musicians, game designers, visual artists, program makers etc.) from the people trying to steal from them, there will be no quality content going forward, and the only form of entertainment will come from the mega-corps trying to peddle their wares in the guise of ad-laden media.

So, in my view, a standard cross-platform secure DRM model for the web is required. If you want to consume it, you should be prepared to pay for it.

Foundations of Data Science [pdf] cornell.edu
408 points by Anon84  3 days ago   120 comments top 12
yummyfajitas 3 days ago 11 replies      
There's a great quote by some bodybuilder. "Everybody wants to get big, but no one wants to lift heavy weights."

Paraphrasing this to data science: "Everybody wants to have software provide them insights from data, but no one wants to learn any math."

The top two comments here illustrate this perfectly. Anyone who is serious about learning data science will read this book and will not shy away from learning math. You can also learn about data pipelines, but that's not a substitute for what's in this book.

There are also a variety of other algorithmically focused machine learning books. They are also not a substitute for this book.

daemonk 3 days ago 4 replies      
The high theory stuff is great, but a significant portion of the job is being a data janitor. Being experienced and fast at manipulating data structures, recognizing patterns in text datasets, understanding common formats used in the field and just having domain knowledge in what you are analyzing should be more emphasized in my opinion.
onetwo12 3 days ago 1 reply      
You can read something similar to chapter two, on high-dimensional spaces, at https://jeremykun.com/2016/02/08/big-dimensions-and-what-you...

Also, chapter three, about SVD, is covered in https://jeremykun.com/2016/04/18/singular-value-decompositio...

and https://jeremykun.com/2016/05/16/singular-value-decompositio...

The advantage is that you have the Python code available.


The book seems to be interesting.

itissid 3 days ago 1 reply      
I was surprised to see so little attention paid to regression. Regression is quite powerful and a key part of a DS's toolkit.

For example:

- What is the distribution of the residuals, and how does it change over time as data comes in? How Gaussian are they (or not)? Analyzing weirdness/oddities, especially around the tails.
- What kinds of features offer the most significant signal to the model, and which ones do not?

These skills are even applicable to SVM and other classification analysis.
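As a rough illustration of the residual checks described above (my own sketch in plain numpy, not material from the book), on synthetic data with a known Gaussian noise model:

```python
import numpy as np

# Synthetic data: a true linear model with Gaussian noise.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 1.0, 200)

# Ordinary least squares fit via lstsq on [1, x].
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# Crude checks on the residuals: OLS with an intercept forces the mean
# to (numerically) zero, and for Gaussian noise the standardized
# skewness should be near zero as well.
mean = residuals.mean()
std = residuals.std()
skew = np.mean(((residuals - mean) / std) ** 3)
```

In practice one would also plot the residuals against the fitted values and over time, and inspect the tails with a Q-Q plot rather than a single summary statistic.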

mmcclellan 2 days ago 2 replies      
"Background material needed for an undergraduate course has been put in the appendix."

So just being honest, the Appendix is still rather terse and advanced for me. Does anyone have suggestions for prerequisite reading that would help get someone prepared for this text?

Jcol1 3 days ago 1 reply      
What sort of foundational mathematics is required to fully consume this book?

I assume multivariate calculus and linear algebra?

mizzao 2 days ago 0 replies      
Despite being familiar with most of the material here and agreeing that it is generally useful to know, I still don't know if I'd call this book "Foundations of Data Science". It feels more like "Assorted topics in algorithms, machine learning, and optimization": data science from the perspective of a computer scientist.

Notably missing are causal inference, experiment design, and many topics in statistics--causal inference being one of the primary things we'd want to do with data.

qqzz6633 3 days ago 0 replies      
Wow, book from Avrim Blum. Definitely worth reading.
Luftschiff 2 days ago 2 replies      
Would working through this book be sufficient to be able to start doing data science/analysis work?

Some context: I did my undergraduate degree in Economics (in a pretty math intensive university), have been working in marketing for the last 2 years and want to go back to do work in something more analysis centered.

blahi 3 days ago 2 replies      
That seems more about computer science and graphs than what the average analyst would be doing.
graycat 2 days ago 0 replies      
It's a book, a real book, quite long.

It has a lot of topics on applied math.

Some of its main topics are linear algebra, probability theory, and Markov processes.

Really, the book just touches on such topics. Usually in college each of those topics is worth a course of a semester or more. So, what the book has on such topics is much less than such a course. E.g., for linear algebra, the book gets quickly to the singular value decomposition but leaves its treatment of eigenvalues for an appendix and otherwise leaves out about 80% of a one semester course on linear algebra. Similarly for probability and Markov processes.

Some of the topics the book has or touches on are unusual with, likely, few other sources in book form. E.g., early on the book has Gaussian distributions on finite dimensional vector spaces where the dimension is larger than is common.

So, for the topics rarely covered in book form, the book could be a good reference.

For topics such as from linear algebra, a reader might get misled without an actual course in linear algebra from any of the long popular books, e.g., Halmos, Strang, Hoffman and Kunze, Nering, or more advanced books by Horn, Bellman, or others.

Usually in universities, probability and Markov processes quickly get into graduate material with a prerequisite in measure theory and, hopefully, some on functional analysis, e.g., to discuss some important cases of convergence.

So, the book seems to have some good points and some less good ones. A good point is that the book is a source of a start on some topics rarely in book form. A less good point is that the book gives very brief coverage of topics otherwise usually covered in full courses from popular texts.

A student with a good math background could use the book as a reference and maybe at times get some value from the coverage of some of the topics rarely covered elsewhere. But I would suspect that students without courses in linear algebra, probability, etc. would need more background in math to find the book very useful.

E.g., early in my career, I jumped into various applied math topics using very brief treatments. Later when I did careful study of good texts with relatively full coverage, I discovered that the brief treatments had been misleading. E.g., no one would try to learn heart surgery in a weekend and then try to apply it to a real person. Well, for applied math, maybe learning singular value decomposition, etc. in a weekend might not be enough to make a serious application.

It is good to see a book on applied math try to be a little closer to real, recent applications than has been traditional in applied math texts. I'm not sure that being closer is crucial or even very useful for making real applications, but maybe it will help.

dandermotj 3 days ago 2 replies      
Does anyone know if this is available as a hard copy?
The eye of hurricane Matthew passes directly over a weather buoy noaa.gov
384 points by matt2000  5 days ago   150 comments top 11
gilgoomesh 5 days ago 11 replies      
That's a beautiful graph but NOAA's insistence on non-metric units frustrates me endlessly. Knots for speed are bad enough but "inches" for pressure is multiple levels of wrong.
dluan 5 days ago 2 replies      
If anyone wants to see where it is, the hurricane's moved further up north away from the buoy's location.


jpalomaki 4 days ago 1 reply      
I was surprised some time ago when I found out that modern wind speed measuring devices work without moving parts. I had expected them all to have those spinning ping-pong-ball halves.

To do it without moving parts, they use, for example, ultrasound. These devices have pairs of transducers placed 10-20 cm apart, and the device then measures how long it takes for the sound to pass through that distance.
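The arithmetic behind that is simple enough to sketch (illustrative numbers only, not any particular device): sound travels faster downwind than upwind, and the difference of the reciprocal transit times gives the wind speed without even needing to know the speed of sound.

```python
# Time-of-flight principle behind an ultrasonic anemometer (sketch).
d = 0.15          # transducer spacing in metres (typical 10-20 cm)
c = 343.0         # speed of sound in m/s (only used to simulate the times)
v_true = 12.0     # wind component along the path, m/s

# Simulated transit times: faster downwind, slower upwind.
t_down = d / (c + v_true)
t_up = d / (c - v_true)

# Recover the wind speed from the two measured times alone:
# (d/2) * (1/t_down - 1/t_up) = (d/2) * ((c+v)/d - (c-v)/d) = v
v_measured = (d / 2.0) * (1.0 / t_down - 1.0 / t_up)
```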


matwood 5 days ago 1 reply      
Being in the eye of a hurricane is an odd feeling. Growing up I went through Hugo, and when the eye finally passed over, it went from chaos to an eerie dead calm. We went outside to see the destruction, but knew there was not much we could do yet. A tree had clipped the corner of the house, so my parents did what they could to protect the exposed inside against the rain of the back half of the storm. Beyond that, we just prepared to hold tight for another few hours.
isomorphic 5 days ago 2 replies      
One of the things I take away from this graph isn't just the peak wind speed, but the amount of time you'd be under hurricane-force winds.

A couple minutes of 90mph winds is one thing... hours of 90mph winds is entirely another thing.

avs733 5 days ago 2 replies      
I would be immensely curious to see the raw data given how the point spacing changes (assuming those aren't actual data measurements). The drop and recovery around the eye are so staggeringly smooth.
sathackr 5 days ago 2 replies      
Wonder why we don't see winds at the reported speeds (130mph+) in the data?

Maybe it's a limit of the measuring device?

webkike 5 days ago 0 replies      
I love this graph
dandelany 5 days ago 3 replies      
Call me a cynic, but my first thought on looking at this was "we can't do any better than a datum per hour?!" :) Must be a bandwidth constraint.
sp527 5 days ago 0 replies      
It's biuriful *tears up*

And now back to our regularly scheduled React component.

How We Built the World's Prettiest Auto-Generated Transit Maps medium.com
409 points by ant6n  2 days ago   59 comments top 21
Chris_Newton 2 days ago 0 replies      
That's a fascinating case study. Thanks for sharing.

Not so long ago, I was designing visualisations for a different type of underlying graph structure, but one that had some similar elements in terms of close edges that could be drawn together, a desire to minimise edge crossings, and the like. I spent quite a while studying Beck's famous London Underground map and its modern derivatives, and then experimenting with some of the same ideas mentioned in the article here. I too found that rendering clean, practical diagrams to show messy, real-world underlying data can be surprisingly difficult! I have a lot of respect for the Transit App team if they've successfully implemented algorithms that can produce output as beautiful as those examples in their general case.

red_admiral 2 days ago 2 replies      
Looks lovely. But their comparison with apple and google (https://medium.com/transit-app/transit-maps-apple-vs-google-...) is missing something: "works on a desktop computer". I have a surface 3 pro with chrome/firefox/edge installed, I'd love if there were some html5 web view onto this for planning trips in advance, where I quite like the larger screen size and the ability to screenshot/print just in case.
wlievens 1 day ago 0 replies      
Awesome material. I really love automated renderings that attempt to approach handmade quality.

A couple of years back I attempted to automate our production planning. The effort failed, alas (complexity and feature creep killed it), but I did make nice visualizations along the way.

Example: https://www.dropbox.com/s/jxiareohe0wi4z9/planning-example-1...

clemsen 2 days ago 1 reply      
I would love to get more details on the linear-integer solve methodology, as it sounds impressive. Was the problem formulated to also work as a linear problem where the binary or integer variables were first treated as positive non-integer variables, and then checked using branch-and-cut (that's how I would do it)? Or did you do something differently?

 Fortunately, we found a different plan of attack: one which allowed the integer-linear-programming solver to explore the problem space more efficiently and find optimal solutions faster. What previously took an hour, now took 0.2 seconds.
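For what it's worth, the relax-then-branch idea the question describes can be sketched on a toy knapsack in pure Python (my own illustration; the article doesn't publish the Transit App formulation): solve the relaxation where integrality is dropped, use it as a bound, and branch on the integer decisions.

```python
# Toy branch-and-bound with an LP-relaxation bound (fractional knapsack).
values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

def lp_bound(value, weight, idx):
    """Greedy fractional relaxation: treat remaining items as divisible."""
    for i in sorted(range(idx, len(values)),
                    key=lambda i: values[i] / weights[i], reverse=True):
        if weight + weights[i] <= capacity:
            weight += weights[i]
            value += values[i]
        else:
            # fractional part -- this is where integrality is relaxed
            value += values[i] * (capacity - weight) / weights[i]
            break
    return value

def branch(value, weight, idx, best):
    if weight > capacity:
        return best                      # infeasible branch
    if idx == len(values):
        return max(best, value)          # complete integer solution
    if lp_bound(value, weight, idx) <= best:
        return best                      # prune: relaxation can't beat incumbent
    best = branch(value + values[idx], weight + weights[idx], idx + 1, best)
    return branch(value, weight, idx + 1, best)

best = branch(0, 0, 0, 0)  # -> 220 (take the 100 and 120 items)
```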

moseandre 2 days ago 0 replies      
This is a tour de force in data science style computational geometry. I really loved the story. And very nice maps. Wow. :)
aninhumer 2 days ago 2 replies      
It's an interesting problem, and it's cool that they've found a decent solution...

But I can't help wondering how often these kind of complicated transit junctions actually occur in practice? There aren't really that many complicated metro systems in the world (to my continuing disappointment), and even they often still have fairly simple junctions.

Maybe it would be cheaper and more effective to just hire a graphic designer to hardcode solutions to the worst examples?

mcam 1 day ago 1 reply      
This is fascinating stuff, and nicely polished work.

We would like to develop something similar, but a non map overlay version, using our transit data at Rome2rio. Basically code to auto generate something like this for all of our 4,000 transport operators: http://content.tfl.gov.uk/standard-tube-map.pdf

We've been talking about it for a while but don't have the constraint layout expertise to do it internally.

Anyone interested in working for us to tackle this problem?

maxerickson 2 days ago 2 replies      
They are deriving a database from OpenStreetMap. Has anybody found where they share that data? It's likely that the OSM data license (the ODBL) requires them to publish the derived database.
Overv 1 day ago 0 replies      
Why did you jump straight to MILP for ordering the lines of segments? It seems like it would be more obvious to first try ordering the routes by id for each segment. That would guarantee that routes are always ordered the same way relatively to each other and would eliminate the problem in the before/after image with much less effort.
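The parent's suggestion amounts to something like this (hypothetical segment data, my own sketch):

```python
# Sketch of the simpler heuristic: give every segment the same global
# ordering of its routes (here, sorted by route id), so any two routes
# sharing several consecutive segments keep the same relative position
# and never swap sides between them.
segments = {
    ("A", "B"): ["red", "blue", "green"],
    ("B", "C"): ["green", "red"],
    ("C", "D"): ["blue", "green", "red"],
}

draw_order = {seg: sorted(routes) for seg, routes in segments.items()}

# "green" now precedes "red" in every segment where both appear -- though
# unlike an optimizing solver, nothing here tries to minimise crossings
# at junctions where the sets of routes differ.
```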
cbhl 1 day ago 1 reply      
Have you had any reports where the algorithmically snapped-to-OSM route maps diverge from the actual bus/train routing, and if so, how do you handle that?
kuschku 2 days ago 1 reply      
In this example image: https://d262ilb51hltx0.cloudfront.net/max/2048/1*bSjX6T0OaMX...

Why doesn't the integer linear ordering put the orange line completely inside the loop?

That would remove 3 crossing sections, and look better.

ryanbertrand 2 days ago 0 replies      
Great write up! I just downloaded the app and found a nice Easter egg. You can say you are the first app to support Hyperloop? :)
buckhx 2 days ago 1 reply      
Pretty neat. I like the idea of using pixel space. They could break it out to tiles to parallelize things instead of handling things globally if that's a bottleneck for them.
bunderbunder 2 days ago 2 replies      
Beautiful, yes, but I'm a form follows function guy, and at least from looking at their screenshots of the Chicago Loop, I'm left thinking their version of the map is quite a bit less useful than the (admittedly unsightly and cluttered) official one[1].

Some things that are iffy or missing from a functional perspective:

Any indication that the red and blue line stations are connected by tunnels at Lake/Washington and Jackson.

For that matter, the fact that the Jackson blue line station exists in the first place - it's obscured by the B in the street label for Jackson Boulevard. Same goes for the LaSalle blue line and State red line stations, and the Washington blue line station is also iffy. That's over 1/4 of the stations in the map's area hidden under street labels.

Which stations have elevators? Most of the ones downtown don't.

Which directions are the trains traveling in? With the exception of the green line, all of the elevated trains go only one way around the loop.

Color matters, especially on a system where all the trains are identified by color. Why did they use a dark mauve to indicate the pink line? In the real system it's indicated by a bright bubblegum pink. They've created a big opportunity for confusion with the purple line.

A human touch might be able to make some better arrangement choices. The CTA map crosses the blue line over the green and pink lines a little outside of the loop, at the point where it diverges from the other two. That's a much better choice than trying to do it right in the middle of the already jumbled mess that is the confluence of all of the trains in Chicago's transit system (save one small spur line out in the suburbs) along a three block section of Lake street.

Clearly indicating that the purple line operates differently from the others is also useful, and might save someone who's unfamiliar with the system from a lot of time spent waiting on the wrong side of the station while watching a bunch of brown lines pass by.

I'll grant the loop section of the L system may well be the most fiddly, nit-picky light rail mapping problem in the world, and the datasets they were working with might not have given all the detail they needed. (On the other hand, that thing with hiding stations under street names feels pretty egregious to me.) I guess what I'm really going for here is, when it comes to drawing maps, I still think involving a human hand in the process can make an enormous difference in the quality of the final product.

[1]: http://www.transitchicago.com/assets/1/clickable_system_map/...

coldcode 2 days ago 1 reply      
Amazing process to build such an impressive result. Man, I wish I could work on problems like this. This would make coming to work fun again.
cbhl 1 day ago 0 replies      
Are there any plans to make these programmatically-generated transit maps available as posters or dead-tree paper maps?
huhtenberg 1 day ago 1 reply      
ant6n, there's something weird going on with images on your homepage -