Hacker News with inline top comments (16 Aug 2017)
Ask a Female Engineer: Thoughts on the Google Memo ycombinator.com
560 points by cbcowans  5 hours ago   895 comments top
hedgew 5 hours ago 19 replies      
Many of the more reasonable criticisms of the memo say that it wasn't written well enough: it could have been more considerate, used better language, or had better presentation. In this particular link, Scott Alexander is used as an example of better writing, and he certainly is one of the best and most persuasive modern writers I've found. However, I cannot imagine ever matching his talent and output, even if I practiced for years to try to catch up.

I do not think that anyone's writing ability should disbar them from discussion. We cannot expect perfection from others. Instead we should try to understand them as human beings, and interpret them with generosity and kindness.

APIs as infrastructure: future-proofing Stripe with versioning stripe.com
297 points by darwhy  6 hours ago   33 comments top 17
organsnyder 6 hours ago 1 reply      
I work on a SOA team at a large healthcare enterprise. We currently write mostly SOAP APIs (yeah, I know, 2017 and all that...), and follow a pattern much like what Stripe describes: whenever we do a version bump (which is extremely frequent, since a WSDL breaks a contract the moment you sneeze at it), we create an XSLT transform from the new version back to the old. So if you're calling version 2, but the current version is 10, there will be transforms for 10->9, 9->8, 8->7... all the way back to 2. It works well enough.

We're in the early stages of deploying a new RESTful stack, and versioning is a hot topic (along with getting people out of the RPC mindset and into a resource-based paradigm). While version bumps should be much less common, we'll probably end up doing something similar to our cascading transformations. Essentially, the old version becomes a consumer of the new version, and as long as the new version continues to hold to its API contract, everything should work with minimal fuss. Of course, that's assuming that we don't change the behavior of a service in ways that aren't explicitly defined in the API contract...
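The cascading idea described above can be sketched in a few lines. This is a minimal illustration in Python, not the commenter's XSLT stack: each transform downgrades a response by exactly one version, and serving an old version just means folding the chain from the current version down to the requested one. All field names here are invented for the example.

```python
# Each function downgrades a response payload by exactly one version.

def downgrade_10_to_9(resp):
    # Pretend v10 split "name" into "first_name"/"last_name"; v9 expects "name".
    resp = dict(resp)
    resp["name"] = f"{resp.pop('first_name')} {resp.pop('last_name')}"
    return resp

def downgrade_9_to_8(resp):
    # Pretend v9 added "created_at"; v8 never had it.
    resp = dict(resp)
    resp.pop("created_at", None)
    return resp

# Registry keyed by the version each transform downgrades *from*.
TRANSFORMS = {10: downgrade_10_to_9, 9: downgrade_9_to_8}

def render(resp, current=10, requested=8):
    """Apply transforms from the current version down to the requested one."""
    for v in range(current, requested, -1):
        resp = TRANSFORMS[v](resp)
    return resp
```

The appeal of the scheme is that business logic only ever deals with the current version; old versions are a pure function of the newest response.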

ianstormtaylor 5 hours ago 1 reply      
Always excited to hear Stripe talking about versioning :D

For anyone else who's interested, they've written/talked about this a few times over the years, to fill out the picture:

- http://amberonrails.com/move-fast-dont-break-your-api/

- https://www.heavybit.com/library/video/move-fast-dont-break-...

- https://speakerdeck.com/apistrat/api-versioning-at-stripe

- https://brandur.org/api-upgrades

- https://news.ycombinator.com/item?id=13708927

It sounds like their YAML system has changed to be implemented in code instead, which maybe allows the transforms to be a bit more helpful/encapsulated. If anyone from Stripe is here, it would be awesome to know if that's true and why the switch?

sb8244 5 hours ago 0 replies      
I'm working on designing an API now that is based on Stripe's previous post from how they do versioning.

In general, the concepts employed by Stripe really encourage better design choices. All changes, responses, request parameters, etc. should be documented and then handled automatically by the system. We took this approach in our design, although we don't do it with an explicit "ChangeObject" like Stripe does; it's a great idea though.
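A hedged sketch of that "change object" idea: every behavioral difference between versions becomes a small, self-documenting class, so the same objects that transform responses can also feed an auto-generated changelog. All class and field names here are invented for illustration; Stripe's real internals surely differ.

```python
class VersionChange:
    """Base class: one documented behavioral difference between versions."""
    description = ""

    def transform_response(self, data):
        return data

class RemoveTaxField(VersionChange):
    description = "Responses gained a 'tax' field; older versions omit it."

    def transform_response(self, data):
        data = dict(data)
        data.pop("tax", None)
        return data

def apply_changes(data, changes):
    # Folding the list of changes downgrades a response; iterating their
    # `description` attributes is what makes documentation automation possible.
    for change in changes:
        data = change.transform_response(data)
    return data
```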

Hoping to be able to put out a blog post once we start implementing the system and getting feedback on what works and doesn't work well.

mrhwick 5 hours ago 0 replies      
I wrote a library that will assist anyone wanting to do something similar for versioning using Django Rest Framework: https://github.com/mrhwick/django-rest-framework-version-tra...
hn_throwaway_99 5 hours ago 1 reply      
I think this is one area where GraphQL really excels. It essentially handles all of this versioning for you (since clients specify EXACTLY what they want) - you just need to make sure that as you evolve your schema, existing fields are left as-is and you only add new fields (not an easy task, but no harder than what you have to do in Stripe's protocol).
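A toy illustration (plain Python, not GraphQL itself) of why explicit field selection insulates old clients: a response contains only the fields the client named, so additive schema changes are invisible to existing queries.

```python
# Record in the current schema; "fee" is a newly added field.
RECORD = {"id": 1, "amount": 500, "currency": "usd", "fee": 15}

def execute(fields):
    """Return only the fields the client asked for, GraphQL-style."""
    return {f: RECORD[f] for f in fields}

# An old client that never heard of "fee" keeps getting exactly
# what it always got.
```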
dperfect 4 hours ago 1 reply      
Does anyone know if there's a publicly-available Ruby gem for doing what's described in this blog post - i.e., cascading transformations with a nice DSL? If someone from Stripe reads this, I think you could count on some decent community interest for this framework if you ever consider open-sourcing it.
kirbypineapple 4 hours ago 1 reply      
From a Stripe developer perspective this sounds like a really clean way to handle API versioning.

From the perspective of a consumer of Stripe's APIs, doesn't this make debugging or modifying legacy code a real pain? Let's say I'm using Stripe.js APIs from a few years ago; where do I go to find the docs for that version? Do I need to look at the API change log and work backwards?

tomschlick 6 hours ago 1 reply      
To me this is one of the best things about the Stripe API. It's basically database migration files but for your API requests.

Does anyone know of packages that do this already? I have been contemplating creating one in PHP/Laravel for a long time but haven't had the time yet...

mi100hael 4 hours ago 0 replies      
I'd love to see more specifics on the documentation automation. Keeping docs straight sounds like the biggest challenge with a system like this.
atonse 5 hours ago 2 replies      
This is awesome. As someone who's built many APIs, I have always wondered how Stripe managed all those versions. I knew their code couldn't just be littered with if/thens.

This is a really smart way to do it.

One question is, over the years, wouldn't you add a lot of overhead to each request in transformation? Or do you have a policy where you expire versions that are more than 2 years old, etc? (skimmed through parts of the article so my apologies if you already answered this)

aidos 5 hours ago 0 replies      
That's a great system - I'm totally going to borrow that technique when it next makes sense.

Hey, @pc with all the spare time your team has accumulated by using this api model maybe you could put it to good use. Might I suggest it's time to divert most of your tech resources into creating the next Capture the Flag? Because those were just awesome!

I'm joking, in case it's not obvious (but I would absolutely love another Stripe CTF).

goodroot 5 hours ago 0 replies      
Most excellent write-up. I used Stripe for inspiration when writing a post that wound up here, titled 'Pragmatic API Versioning' (since renamed 'How to Version a Web API'). Stripe's approach felt like the most clever way to reconcile change and stability, compared to other major APIs.

It was a delight to get a peek behind the curtain. :)

di 5 hours ago 2 replies      
Github also versions their API via headers, but uses the `Accept` header instead: https://developer.github.com/v3/media/
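GitHub's documented media type makes this easy to see: the API version rides inside the `Accept` header rather than in the URL or a custom header. A minimal sketch of building such a header (the helper name is invented; the media-type format follows GitHub's v3 docs):

```python
def github_headers(version="v3", fmt="json"):
    """Build an Accept header selecting a GitHub API version via media type."""
    return {"Accept": f"application/vnd.github.{version}+{fmt}"}

# Usage with e.g. the `requests` library:
#   requests.get("https://api.github.com/user", headers=github_headers())
```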
miheermunjal 4 hours ago 0 replies      
I wish every company would be this open about their strategies. Between this and Netflix, it's great to see the cutting edge in action
SoMisanthrope 4 hours ago 0 replies      
Brilliant article. Whatever versioning scheme you use, you should at least start with _something_ at the outset of an application IMO
celim307 5 hours ago 0 replies      
I really like the rolling versioning approach. Makes a lot of sense.
Silhouette 6 hours ago 0 replies      
Thank you to Stripe for the efforts they make in this area. If you're building critical infrastructure around someone else's system -- and it doesn't get much more critical than the way you collect money -- then you really want this kind of stability.

I wish other payment services treated their long-time clients with the same respect (looking straight at you, GoCardless).

Selection bias is the most powerful force in education fredrikdeboer.com
183 points by gfredtech  6 hours ago   68 comments top 15
terrahutte 0 minutes ago 0 replies      
My school has a large Chinese international student body. The school is marketed as being diverse, but really those students are from wealthy families and can easily afford the 40k+ tuition. In the end, the campus isn't diverse, it is segregated. But at least the uni has more money for research.
jstewartmobile 1 hour ago 2 replies      
Parents, teachers, and students are so focused on "education" as a pedigree and job pipeline that if they don't see an immediate connection between a topic and a paycheck they're like "Who cares?! Why are you wasting my kid's time on that?"

When the whole thing is an exercise in competitive pedigreeing, of course it's going to be gamed. If it were more about human development, we'd be focusing on the deltas instead, and they'd be harder to fudge.

glangdale 2 hours ago 4 replies      
This is a good article. I wish I could force everyone who pontificates on schooling in Australia to read it.

We have an enormous private school system (part govt-funded, which is gross) that can get rid of students as they please. Then they go on to verbally dump shit on the public system, where they get to dump their problem students. Nice.

We also have a real fetish for selective schools, which are driven to a huge extent by selection bias. Any suggestion that this isn't a good idea is met by a chorus of "you must hate smart people".

bluGill 4 hours ago 3 replies      
Sending a troubled kid to a great private school probably will cause the kid to do better: because all the other students are doing well, the bad student's peers are working hard, so the bad student sees that example and is likely to succumb to peer pressure and do better.

For this to work, the troubled kid needs to be kept out of gangs and whatever else outside of school would set bad examples. However, selection bias already does this: parents who care enough to send their kid to a private school are involved enough with their kid that they would probably do this anyway. In short, these are kids who might have been troubled, but they still would have been at the top of the troubled group.

spunker540 4 hours ago 1 reply      
I think the real problem described is that people are judging schools based on the performance of their students. There are other objective factors that won't be as affected by selection bias such as: breadth of extra-curricular options in sports and in the arts. Number of AP classes offered. Teacher to student ratio. Budget per student. Does the school have newer computers, working instruments, modern facilities?
hyperion2010 4 hours ago 0 replies      
That tricky thing when dealing with humans: are they learning because of our efforts to teach them or in spite of them?
guyzero 25 minutes ago 0 replies      
This has long been my assertion about elite undergraduate engineering schools. The teaching is usually just OK but their main advantage is that they get to "skim the cream" from the applicant pool.
leongrado 4 hours ago 3 replies      
I'm from the midwest and most people who weren't aiming for Ivy Leagues just took the ACT.
gnicholas 3 hours ago 1 reply      
> People involved with the private high schools liked to brag about the high scores their students scored on standardized tests without bothering to mention that you had to score well on such a test to get into them in the first place. This is, as I've said before, akin to having a height requirement for your school and then bragging about how tall your student body is.

Not quite: kids don't get shorter over time, but they can easily get worse grades/scores. It's true that having a screening test makes it more likely that your students will score well on other tests, but it's not a guarantee (as it is with the height example).

ngrilly 3 hours ago 0 replies      
This is simply the best article I've ever read on this topic.
Nelkins 2 hours ago 1 reply      
A professor at the University of Rochester (my alma mater) has begun a project to better evaluate the value of a liberal arts education, and to try and quantify what impact it truly has on the career trajectories and future earnings of graduates[1].

[1] https://www.rochester.edu/pr/Review/V79N6/0304_lennie.html

grandalf 2 hours ago 0 replies      
I think a corollary of the author's point is that by the time someone is college age, selection bias is the most powerful force in education.

I'd be curious to see a study of younger people to see if perhaps there are some assimilation effects.

mbillie1 1 hour ago 0 replies      
Nothing of substance to contribute but I'm immensely pleased to see Freddie DeBoer writing again.
RhysU 4 hours ago 4 replies      
A simple thought experiment.

Consider bussing kids from 'good' places to bad and from 'bad' to good. See after how long their outcomes become equivalent. Do so at different grade levels to measure the convergence time versus age of displacement.

And now for the actual experiment: how do you impactfully present such results? I assume no outcome. Just the existence of an outcome.

I posit that priceless data would be worthless in American society.

jgalt212 4 hours ago 0 replies      
He was wearing my Harvard tie. Can you believe it? My Harvard tie. Like oh, sure he went to Harvard.


I Bought a Book About the Internet from 1994 and None of the Links Worked vice.com
324 points by slyall  10 hours ago   242 comments top 44
whatever_dude 8 hours ago 4 replies      
I've had a similar problem. In updating my portfolio site recently, I noticed that a vast majority of its links were dead. Not just links to live projects published maybe 3 years or more ago (I expect those to die), but also links to articles and mentions from barely one year ago, links to award sites, and the like. With a site listing projects going back ~15 years, one can imagine how bad things were.

I ended up creating a link component that automatically points every URL I've marked as "dead" at the archive.org version of the link. Dead links were so prevalent that it had to be automated like that.
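Such a component can be sketched in a few lines. This is plain Python producing an anchor tag (the commenter's site presumably uses a front-end framework; the function name and Wayback URL prefix are the only assumptions, and the `web.archive.org/web/<url>` form redirects to the latest snapshot):

```python
WAYBACK = "https://web.archive.org/web/"

def link(url, text, dead=False):
    """Render an anchor tag, swapping in the Wayback Machine copy
    when the target has been marked dead."""
    href = WAYBACK + url if dead else url
    return f'<a href="{href}">{text}</a>'
```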

Another reason why I've been contributing $100/year to the Internet Archive for the past 3 years and will continue to do so. They're doing some often unsung but important work.

CharlesDodgson 7 hours ago 6 replies      
I miss the optimism of the early web, when you could create a simple web page, join a web ring, and going online was an event. It's richer and deeper now, but the rawness and simplicity of it all was enjoyable and novel.
ChrisSD 7 hours ago 14 replies      
Related to this, I had trouble finding examples of pre-1996 web design. The Internet Archive has a lot from 1997 onwards. The oldest live examples of sites from that era that I know of are:




The BBC also donated its Networking Club to the Internet Archive: https://archive.org/details/bbcnc.org.uk-19950301

kutkloon7 9 hours ago 3 replies      
This is a very important reason why books, in general, contain better information than websites. On websites, people care a lot less about the correctness of the information. You can just update stuff later (of course, this doesn't always happen).

Also, sites are a very volatile medium. I often bookmark pages with interesting information to read later, and it inevitably happens once in a while that a site has gone down and I just can't find the information anymore.

jacquesm 8 hours ago 4 replies      
Linkrot is a real problem. Especially for those sites that disappear before the archive can get to them.

On another note, the more dynamic the web becomes the harder it will be to archive so if you think that the 1994 content is a problem wait until you live in 2040 and you want to read some pages from 2017.

amelius 8 hours ago 2 replies      
Solution: https://ipfs.io

> The average lifespan of a web page is 100 days. Remember GeoCities? The web doesn't anymore. It's not good enough for the primary medium of our era to be so fragile.

> IPFS provides historic versioning (like git) and makes it simple to set up resilient networks for mirroring of data.

sjclemmy 4 hours ago 2 replies      
I've got a book about javascript from 1995. It mentions closure once and says something like "... but you'll never need to use that feature of the language."

How I laughed.

indescions_2017 8 hours ago 1 reply      
See also: Best of the Web '94 Awards. Presented at the First International Conference on the World-Wide Web, Geneva, Switzerland, May 1994.


What's cool isn't how fast some of these technologies become obsolete, such as various Java applets and cgi-bin connected webcams. It's the static content that can survive until the end of time.

Like Nicolas Pioch's Web Museum. Bienvenue!


drewg123 7 hours ago 2 replies      
> [The Rolling Stones] actually streamed a concert on the platform in November of that year, using an online provider named MBone for some reason..

The MBone was not a "provider", it was an IP multicast network. This was the only way to efficiently stream video content to thousands of simultaneous clients before the advent of CDNs. https://en.wikipedia.org/wiki/Mbone

mfoy_ 8 hours ago 2 replies      
A similarly really annoying thing is when you find old technet articles, stack overflow questions, or blog posts that seem potentially really useful, but that have broken images, broken links, etc... so the content (possibly extremely useful at the time) is completely useless now.

It really stresses the importance of directly quoting / paraphrasing the content you want in your plain text, and not relying on external resources for posterity.

neoCrimeLabs 6 hours ago 0 replies      
Somewhat unrelated:

I noticed that the wayback machine no longer lists historical sites if the latest/last revision of robots.txt denies access. Has anyone else experienced this?

In the late '90s I helped build one of the first Fortune 500 e-commerce web sites. The website was shut down years ago, but it was viewable on the Wayback Machine as recently as a year ago. The company in question put a deny-all robots.txt on the domain, and now none of the history is viewable.

It's a shame -- I used to use that website (and an easter egg with my name on it) as proof of experience.

interfixus 8 hours ago 1 reply      
I read an internet article from 2017 and none of the stuff worked without access to all sorts of third party scripts and crap.
garethsprice 8 hours ago 4 replies      
I Bought a Book of Restaurant Recommendations from 1957 and None of them would Serve Me Dinner Any More
jedberg 6 hours ago 2 replies      
I owned (and still do own) this book! I would spend many hours as a teenager going through the links and accessing all the cool stuff in the book. This really brings back memories!

And yes, the way I got on the internet in those days was to dial into a public Sprintlink number, then telnet to a card catalog terminal in the Stanford library, and then send the telnet "Break" command at exactly the right time to break out of the card catalog program and have unfettered internet access. Good times.

umanwizard 3 hours ago 0 replies      
My uncle Pat wrote this book (and multiple others in the same series). I'm amazed Vice is talking about it over twenty years later and I'm sure he will be too once I show him the link!

I had lots of fun reading them as an Internet-addicted kid -- but several of the links were dead even before it was officially published.

panglott 7 hours ago 2 replies      
"It was possible to get on a text-based version of the internet for free in many areas, using only a modem and a phone line. An entire section of Free $tuff From the Internet offers up a lengthy list of free-nets, grassroots phone systems that essentially allowed for free access to text-based online resources."

Makes me want to try writing a Markdown-only Internet browser, one that treats native Markdown documents as the only kind of Web page.

RJIb8RBYxzAMX9u 2 hours ago 0 replies      
I don't think this is necessarily a bad thing. For one, the 80/20 rule applied back then just as it does today, so most of what's lost is crap to begin with. It's no different than in the real world: surely nobody's lamenting that you can't pick up a copy of Fordyce's Sermons anymore (and I presume that it'd be long forgotten if not for Austen). While some valuable resources were undoubtedly lost, most live on re-posted elsewhere, like memes, but in a "good" way.

Secondly, the book is more analogous to a map or dictionary, and it ought to be a descriptive source, not a prescriptive one. Some language purists may disagree, but I could care less :-). And similar to an old, outdated, map, you'd expect that the details may have changed, but the landmarks are most likely still accurate. NASA's still nasa.gov, MIT's still mit.edu -- well, IIRC www.mit.edu used to point to their library's portal, and web.mit.edu their main page; I see that's changed -- and CompuServ still...exists.

joannaz 1 hour ago 0 replies      
This is why projects to do with Internet archiving, such as https://archain.org/, interest me; just imagine all of the dead links and data that will be lost 23 years from now!
komali2 6 hours ago 2 replies      
The article indicates that the "free" stuff on the internet was hidden away in weird places - ftp servers and the like. No google to find it for you, the only way was by word of mouth, or I guess via published book.

Answers a question I always had about "Snow Crash" by Neal Stephenson. The main character, Hiro Protagonist (I still giggle at that name), sometimes did work as a kind of data wrangler - "gathering intel and selling it to the CIC, the for-profit organization that evolved from the CIA's merger with the Library of Congress" (Wikipedia).

I always wondered what made that feasible as a sort of profit model, and I guess now I know - that was the state of the internet in 1992, when the book was published. Seems like a way cooler time period for Cyberpunk stuff, I'm almost sad I missed it :(

memracom 5 hours ago 0 replies      
Did you try looking them up at archive.org? I expect that many of them will work there.

The web is ephemeral unless somebody archives it. Many companies offer an archive service for your sites for a fee, and archive.org does it to provide a historical record.

twic 2 hours ago 0 replies      
> The Rolling Stones [...] actually streamed a concert on the platform in November of that year, using an online provider named MBone for some reason

"For some reason"! That's the flipping multicast backbone you're talking about there! One of the great lost dreams of the internet!


glangdale 2 hours ago 0 replies      
The reason the book didn't mention spam all that much is that it was from 1994 and probably mostly (or entirely) written before April 12, 1994, which is when the infamous "immigration lawyer spam" hit every newsgroup. It wasn't the first spam, obviously, but it did seem to mark a line where spam (USENET and email) became more and more prevalent.
Havoc 5 hours ago 0 replies      
Yup. Recently promised a colleague a PDF. I knew what I was looking for, who wrote it, and which site it was on (a regional site of my employer). It even featured highly on Google (showed up in related searches).

Zilch. Nada... couldn't find it anymore. Gone. Something I had easily chanced upon before, I now couldn't find with directed searching. They must have restructured their site.

littleweep 8 hours ago 5 replies      
I mean, this isn't all that surprising. Not unlike buying a twenty-year-old visitor's guide to a city and finding that a number of the shops and restaurants have closed, the stadiums have different names, etc.
belak 6 hours ago 1 reply      
This reminds me of the Final Fantasy IX strategy guide. It integrated with Square Enix's Play Online service but now that FFIX was removed from Play Online, none of the links work any more and the guide is pretty much useless. I'm sure we'll start to see more of this in the coming years. It's not really sustainable to keep a website running forever.
dep_b 5 hours ago 1 reply      
Just yesterday I was helping an uncle change the Flash-based menu of his site about classic race cars to a newer one so it would actually show on phones. It was all .htm files that included the Flash menu every time; apparently he worked with some kind of 15-year-old copy of Dreamweaver that would add it to the top of every page he created, like a template.

I could have switched it to a PHP include, but that would either break all existing links, take a bit of work to make .htm files execute PHP, or require forwarding them permanently to their PHP versions. Or I could simply do the only sane thing: load menu.php on every page within an IFRAME and change his 15-year-old Dreamweaver template.

The internet has been saved! A bit at least!

stesch 7 hours ago 1 reply      
Can't see the article with NoScript. :-)
JepZ 5 hours ago 0 replies      
Well yeah, it sucks. I think the problem here is that a URL was supposed to be a stable, unique identifier for a resource (like a UUID). But at the same time humans have to enter them, and therefore they have to be nice, shiny, and up to date with the latest trends, which causes them to change constantly...

Maybe we should build a DHT containing UUIDs for all pages as alternative, stable URIs :D
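The content-addressing alternative (the approach IPFS takes, mentioned upthread) derives the identifier from the bytes themselves, so the name never has to change for cosmetic reasons. A minimal sketch, with an invented `sha256:` prefix convention:

```python
import hashlib

def content_address(body: bytes) -> str:
    """Return a stable identifier derived purely from the content."""
    return "sha256:" + hashlib.sha256(body).hexdigest()

# Identical content always yields the identical address, no matter
# where or when it is published; any edit yields a new address.
```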

vhost- 6 hours ago 0 replies      
I've been an arch linux user for a very long time. I very much praise the documentation and it always gets me 90% of the way there when trying to solve a problem or configure something. But when I need that extra 10% I can usually find someone on the forums with a similar issue and the solution is usually a huge rabbit hole of links and some are broken, which gets really frustrating because I have to hope that it was cached by archive.org.
pc2g4d 5 hours ago 0 replies      
This problem has concerned me for some time. One solution would be for websites to declare their license (Creative Commons, proprietary, public domain, etc.) and then web pages can embed the content of pages linked to when the license allows it.

A web with content-based addressing and versioning built into the protocol could also deal with this situation more gracefully, but again there are copyright issues.

bluejellybean 7 hours ago 0 replies      
I have been wanting to find or build a tool that checked if a link was dead before redirecting my browser. If the link turned out dead, just redirect to the appropriate internet archive[0] link. Problem somewhat solved as long as the archive doesn't go bust.
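One way to sketch that fallback is with the Internet Archive's public Wayback availability API (`https://archive.org/wayback/available?url=...`), which returns the closest snapshot as JSON. The function name is invented, and `payload` can be injected so the logic is testable without a network call:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

AVAILABILITY = "https://archive.org/wayback/available?url="

def resolve(url, payload=None):
    """Return the closest recorded Wayback snapshot URL, or the
    original URL if none is available."""
    if payload is None:
        payload = json.load(urlopen(AVAILABILITY + quote(url, safe="")))
    snap = payload.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"]
    return url
```

A browser extension would first check whether the live URL loads, then fall back to `resolve()`.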


gigavinyl 4 hours ago 0 replies      
I know this is sort of side-stepping the issue but isn't this what the Wayback Machine is for?
kazishariar 3 hours ago 0 replies      
Whatever happened to Project Xanadu? Wasn't that supposed to address these?
SubiculumCode 7 hours ago 1 reply      
The solution to link rot (and to privacy invasion) is to download the internet.
rffn 7 hours ago 2 replies      
Lycos and Yahoo are still around. The book really does not have them?
basicplus2 1 hour ago 0 replies      
I think it should be considered best practice to not only provide links but host a copy of what you are linking to.
aorth 6 hours ago 0 replies      
I can't even find my own tweets sometimes. We're so screwed.
ElectricPenguin 6 hours ago 0 replies      
The AOL disk in the back of the book doesn't work either.
peterwwillis 8 hours ago 4 replies      
Man, I really miss FTP. I remember when you would just FTP to the site you were using and grab a binary from their /pub/. Mirrors were plentiful, and FXP could distribute files without needing a shell on a file server.
ashwn 6 hours ago 0 replies      
LOL the link to Vice didn't even work for me.
j45 6 hours ago 0 replies      
I run into this a lot and use diigo.com to cache the sites I visit to ensure anything I found useful enough to bookmark continues to exist in my life.
anigbrowl 7 hours ago 0 replies      
Yeah I'm afraid to try this with my book which came out around the same time XD
whipoodle 8 hours ago 0 replies      
Much of the appeal of software and the internet is that they can change quickly. We would also like them to never change.
based2 7 hours ago 0 replies      
Show HN: A stop-motion video of an engine howacarworks.com
698 points by AlexMuir  13 hours ago   173 comments top 52
alexwebb2 10 hours ago 6 replies      
This is neat, but based on the title, I thought this was going to be a bit more informative about _how_ engines work, and how each subprocess contributes toward the end goal.

What I saw instead was a subset of subprocesses in isolation from each other, presented in an admittedly artistic fashion. It's impressive, and maybe the purpose is more to whet one's appetite for more information rather than be informative in itself, but that's not really what I was expecting or hoping for.

AlexMuir 13 hours ago 7 replies      
It took 2,500 photos and 4 days to shoot, followed by about 8 days of photoshopping & grading.
burntwater 12 hours ago 2 replies      
As a hearing-impaired person, I just wanted to thank you for clearly mentioning that it's subtitled. That shows it's more than just an afterthought, and seals the deal for me!
laurencei 12 hours ago 2 replies      
IMO what is really clever about this is that it is a "sales" video - except you don't realise you are being sold to until the end, by which time you've enjoyed the video so much that the pitch at the end seems reasonable.

And you've shown what the value is long before I asked myself the question "how much" - which I usually ask early in the process - but not here.

At least that is how I found it... great work.

Would be interesting to see conversion figures for something like this.

teh_klev 10 hours ago 1 reply      
This was originally a "Show HN":


And this was Alex's follow-up a year on:


Very well done Alex!

noonespecial 11 hours ago 2 replies      
Top notch! It looks so easy! But you forgot the part where you spend an hour banging, cursing, and blasting that one bolt with a torch because it. just. will. not. budge, only to have it snap off and realize you're going to spend tomorrow drilling it out and tapping that hole...
nimrody 11 hours ago 2 replies      
Beautiful work!

It's nice to see something that was designed with maintainability in mind. Designed to be disassembled, repaired and re-assembled later. Impressive engineering.

So different from most consumer products sold today which never use screws and are not designed for repairing. If it breaks down you're expected to buy a new one...

inthewoods 12 hours ago 0 replies      
What I found surprising about the video was that it made an engine seem, somehow, less complex and daunting. I'm sure that's a bit of an illusion created by the way the engine is taken apart (and the fact that it's not in a car and therefore you can rotate it as you need it, etc), but amazing work.
Spacemolte 13 hours ago 2 replies      
Wow, that was really awesome, great work! A small piece of critique: the last part of the video was really garbled/messy, and I kept trying to focus on the car parts but was unable to due to it jumping around etc. So that part messed a bit with my eyes.
radiorental 12 hours ago 1 reply      
Inspired by this viral video from a few years back perhaps? https://youtu.be/daVDrGsaDME
mrspeaker 11 hours ago 1 reply      
I've been meaning to play the hilarious-looking "My Summer Car" (http://www.amistech.com/msc/) recently as I have been hankering to learn about engines... perhaps it's wise to do the video course first!
kodeninja 31 minutes ago 0 replies      
This looks AWESOME, @alexmuir! Preordered :)!
catshirt 12 hours ago 0 replies      
dang. this is so awesome. really makes the machine feel so much more accessible.

and your ad is one of the best i've seen since MasterClass ads in my Facebook feed. i felt like the ad was basically free content. i was learning!

ryandetzel 12 hours ago 1 reply      
Wait, this course is only $20? This seems like a lot of content for $20...

Really great video too

tambourine_man 11 hours ago 0 replies      
Congratulations, I can imagine the amount of work that went into making it.

I like to take things apart, and it made me a bit nervous, as each piece was separated, that I would never be able to put it back together :)

Dowwie 13 hours ago 2 replies      
This is beautiful. Love the synthwave soundtrack :)

Where is the Reddit post of this? You're going to front page, for sure.

cdnsteve 9 hours ago 0 replies      
All the visual effects starting at 2:03 nearly made me sick, literally. The actual content was great.
subpixel 9 hours ago 1 reply      
Kudos for sticking with this project, which has not made you a ton of money overnight, since at least 2012.

I don't mean to underplay the work involved in programming and marketing this project, but just not giving up is perhaps the hardest part of things like this.

LeonM 10 hours ago 1 reply      
Really nice work!

I did have a slight giggle when the promo at the end says you explain everything about 'modern cars', while you are working on a car introduced 27 years ago.

Of course I understand that disassembling a new car does not make financial sense, I'm not trying to be negative here.

otto_ortega 7 hours ago 0 replies      
Congratulations! This is a really nice project; it seems to have the right combination of real knowledge about the car and audio-visual effects to keep people engaged.

I just subscribed to the video course, and the preorder offer is a no-brainer; skimming through the provided PDF, I can see there is enough value in it to easily make it worth the $20 by itself.

So as a suggestion: Highlight the PDF and its content on the preorder page, there is only one mention about it but it doesn't specify its contents.

awongh 12 hours ago 1 reply      
Very cool.

The motorcycle equivalent is this: https://www.youtube.com/watch?v=MkHJuU01-Wk&index=43&list=PL...

I watched about 3/4 of these ^^ videos, really learned a lot about how a combustion engine works.

martin-adams 11 hours ago 0 replies      
Very nice indeed. If you liked this video, you may also like this rebuild of an engine over 11 months: https://www.youtube.com/watch?v=daVDrGsaDME
creeble 9 hours ago 0 replies      
Fantastic video, and timely -- just spent a few days cursing at a Mazda Z5 engine (in a 97 Protege) myself!

Really, it's a pretty great engine, but with 233k miles a little grumpy.

chaostheory 8 hours ago 0 replies      
I really like the parallaxed, knolled layout of a car's parts that you did for your ad for your 'Ultimate Video Course' (middle of https://www.howacarworks.com/basics/the-engine). How long did that take to finish?
jcoletti 7 hours ago 0 replies      
Very cool, well-shot video. As others have said, I was more interested in a detailed explanation of the inner-workings. After finishing the video and exploring others, I found my way to the video course preorder page. $20 paid! Great concept, best of luck. Plan to start watching this weekend.
j-me 11 hours ago 1 reply      
Very cool video! Purchased.

I'm really interested in seeing where you go with the 3D modeling. As a coder/DIY mechanic (one of many I'm sure), I'm pretty psyched by how this tech could be used.

I also want to say that I appreciate your price point. I think it's at a good point where it might be less than the potential value of the product, but attracts those who would otherwise drop out of the purchase or seek other means to obtain the media.

dave7 4 hours ago 0 replies      
This looks awesome!

How is the course delivered? Downloadable or streaming only? Can I watch it on Linux?

kikkoman23 6 hours ago 0 replies      
Pretty cool. Learning more about cars is something I need to definitely brush up on, especially when it comes to things under the hood.
kazinator 9 hours ago 0 replies      
Very nice; and playful. How some of it is done is not obvious, like the pistons "taking off" out of the cylinders without the support mechanism for taking the shots being obvious.

Nice tip of the hat to Luxo Jr. at the end there.

aembleton 4 hours ago 0 replies      
Just 16! I've pre-ordered, that's a bargain.
NIL8 10 hours ago 0 replies      
Great job! I wish there were sites like this for more... things.
nichochar 7 hours ago 0 replies      
This is impressive marketing, I purchased the class, am happy about it, AND liked the way it was marketed to me.
kerbalspacepro 10 hours ago 0 replies      
I don't want to sound like a fake commenter, but this somehow sold me on the course concept even though the part of a car I least car(e) about is the ICE.
AlexMuir 9 hours ago 0 replies      
OT: Youtube hasn't tracked a single view from this being embedded on the site. Does anyone know if Youtube doesn't count embedded views?
edpichler 12 hours ago 0 replies      
What a beautiful work, very satisfying to watch. As people commented, only at the end do you realize it's a promotional video for the course, and not just a piece of art.
sharpercoder 13 hours ago 0 replies      
Great video! Really enjoyed watching it. Great lighting. Towards the end, the images get flashy (don't do this please filmmakers, it makes my head hurt).
lordshiny 5 hours ago 0 replies      
Was sold almost immediately. Beautiful work!
vincnetas 9 hours ago 2 replies      
And the first thing I thought when seeing this: and that's why electric motors are the future.
lucaspottersky 9 hours ago 1 reply      
nice ad. however, the tech side doesn't seem to have received as much love.

#1 PayPal returned me to an invalid URL after finishing the payment

#2 I've paid & logged in, nevertheless the website still shows me links to "buy the course".

rootsudo 8 hours ago 0 replies      
Nice! A Mazda Miata Engine!
westmeal 12 hours ago 0 replies      
The sound design was excellent! Well done.
peoplee 12 hours ago 0 replies      
For someone like me who spent hours watching engine videos on youtube, this was an instant buy. Well done!
d-roo 12 hours ago 1 reply      
Really cool idea, but my eyes actually hurt watching the end and I had to stop the video.
pmarreck 12 hours ago 1 reply      
The raw complexity here is astounding.

Something something electric motors are far simpler. ;)

bhudman 13 hours ago 0 replies      
Wow. I imagined it took tons of patience (well - 4 days' worth). Amazing.
aerovistae 10 hours ago 0 replies      
The foley on that video must have taken some time.
bitL 12 hours ago 1 reply      

How did you make those flying parts? Photoshopping out the holders?

alkz 12 hours ago 0 replies      
shut up and take my money! :)
mschuster91 13 hours ago 0 replies      
That's just awesome. Back when I had a VW bus, I disassembled it and did most maintenance myself - but never down to THAT level of detail. In particular, I had to spend around 40 bucks on new screws because I misplaced the old ones (or they broke off due to old age).
TheOtherHobbes 12 hours ago 0 replies      
Wow. That was superb.
perilunar 13 hours ago 0 replies      
matt_wulfeck 9 hours ago 3 replies      
Very cool. It's amazing how complex the combustion engine seems, especially compared to an electric motor. No wonder cars have a limited lifetime of typically some 150-200k miles.

It's going to be really interesting to see how our purchasing and maintenance patterns change for EVs.

Gates Makes Largest Donation Since 2000 with $4.6B Pledge bloomberg.com
369 points by adventured  13 hours ago   214 comments top 26
braydenm 9 hours ago 4 replies      
Although there are many cynics, it's quite remarkable what an impact donations can have on the world. Here's what their foundation has actually been doing with the money: https://www.gatesfoundation.org/Who-We-Are/Resources-and-Med...

I wouldn't be surprised if private donations will eventually be responsible for the eradication of Malaria (1000 deaths daily, much more suffering and cost to society).

If you're in tech you're likely to be in a great position to create value beyond your company. For example, donating equity from your startup or a fraction of your income to the charities that can prove they are having the most cost-effective impact on the world: https://founderspledge.com/ and https://www.givingwhatwecan.org/pledge/

SEJeff 11 hours ago 5 replies      
I'm surprised that people are asking how / why he is giving money away and no one mentioned his Giving Pledge:


Bill Gates and Warren Buffett pledged to give half of their net worth to charity during their lives or at death. They're practicing what they preach.

pmoriarty 7 hours ago 3 replies      
A different view on Gates' charity work: [1]

Some highlights:

"The first question concerns accountability... The Foundation is the main player in several global health partnerships and one of the single largest donors to the WHO. This gives it considerable leverage in shaping health policy priorities and intellectual norms..."

"Depending on what side of bed Gates gets out of in the morning, he remarks, it can shift the terrain of global health..."

"Its not a democracy. Its not even a constitutional monarchy. Its about what Bill and Melinda want..."

"In 2008 the WHOs head of malaria research, Aarata Kochi, accused a Gates Foundation cartel of suppressing diversity of scientific opinion, claiming the organization was accountable to no-one other than itself."

"As Tido von Schoen Angerer, Executive Director of the Access Campaign at Mdecins Sans Frontires, explains, The Foundation wants the private sector to do more on global health, and sets up partnerships with the private sector involved in governance. As these institutions are clearly also trying to influence policymaking, there are huge conflicts of interests... the companies should not play a role in setting the rules of the game."

"The Foundation itself has employed numerous former Big Pharma figures, leading to accusations of industry bias..."

"Research by Devi Sridhar at Oxford University warns that philanthropic interventions are radically skewing public health programmes towards issues of the greatest concern to wealthy donors. Issues, she writes, which are not necessarily top priority for people in the recipient country."

More in the article...


dopamean 10 hours ago 2 replies      
Pledging to give away so much of your wealth has interesting side benefits. You get to keep your money out of the hands of the government and if you're donating to your own charity then you also get to keep your family in control of the money.

I definitely don't mean to diminish the contribution of the Gates Foundation though. I often hear that they're one of the good ones.

brianbreslin 11 hours ago 2 replies      
So what is his net worth made up of if he's donated all but 1.5% of his MSFT stock? His remaining MSFT stock is worth about $8B. What's the rest of his $82B made up of?

- Edit- Nevermind, found it on here: https://en.wikipedia.org/wiki/Cascade_Investment

alexandercrohde 6 hours ago 0 replies      
Hooray for Bill. When I was young the joke was how he was a crazy man trying to monopolize/take over the world. Ironically, it seems he may be the single man who has given the most to the world.

Hopefully other billionaires can take inspiration from him and recognize that helping the species is a more fulfilling game than "How many 0s in my net worth."

dalbasal 11 hours ago 1 reply      
No mention of what projects the funds are going towards.

There have been some good words from the foundation regarding the (health, primarily I believe) programs in Tanzania. I wonder if this is towards scaling those projects.

Anyone have the scoop?

escot 8 hours ago 1 reply      
> Gates remains the richest person on earth after the donation with a fortune the Bloomberg Billionaires Index valued at $86 billion as of 10:40 a.m. Tuesday in New York. His donation once again puts Amazon.com Inc. founder Jeff Bezos close to the top spot, with a net worth of $84.5 billion.

Keeping a bit of wiggle room.

grandalf 5 hours ago 3 replies      
I have a lot more respect for someone like Elon Musk who invests his money back into new ideas than someone who simply gives most of it away to charity.

Malaria, low literacy rates, etc., are the byproducts of failed political systems and corruption.

Musk's impact on electric vehicle technology will drain a great deal of despotism from the middle east as dependence on oil wanes, far more effectively than any philanthropic contribution he might have made would have.

There are a number of technologies that can drastically change the dynamic between the elites (officials) and everyone else worldwide. Our most gifted thinkers and entrepreneurs should be inventing the next printing press or cotton gin, not attending charity functions.

downandout 2 hours ago 1 reply      
Can anyone explain how he only owns 1.3% of Microsoft now, and yet is worth $86 billion? Almost all of his net worth came from Microsoft shares. Based upon the MSFT market cap of $561 billion, he is only worth about $7.3 billion.

What accounts for this monstrous difference? He has cashed some out over the years, but not ~$80 billion worth.

zitterbewegung 12 hours ago 4 replies      
I am not sure if I recall correctly, but wasn't there a reason why Gates wouldn't donate all of his wealth at once and needed to do it in stages?

Also, maybe they can't utilize all that cash at once. Therefore it would be best to be illiquid until you need the liquidity.

2_listerine_pls 10 hours ago 7 replies      
He is donating it to The Bill & Melinda Gates Foundation, which is controlled by himself. Isn't it just a different form of ownership?
kareldonk 5 hours ago 0 replies      
When you've hoarded so much cash and denied others a better life, it's easy to 'donate' like this.
nepotism2018 3 hours ago 0 replies      
I hope it won't make the news like the money raised for Grenfell Tower Fire survivors
joeblow9999 5 hours ago 0 replies      
Don't forget that Zuck has also pledged 99% of his wealth.
freddealmeida 9 hours ago 0 replies      
Is there any data on what he has given in actual dollars from his pledge to this one?
gigatexal 6 hours ago 0 replies      
Likely didn't cost him much (no disrespect to his huge donation meant) since MSFT shares rose something like 50% in the last few years.
jtx1 7 hours ago 3 replies      
4010dell 9 hours ago 1 reply      
canoebuilder 7 hours ago 11 replies      
revelation 10 hours ago 2 replies      
Read: has donated shares of Microsoft to his own foundation.

Maybe we can give Gates the benefit of the doubt but for everyone else, this is just a tax scheme.

anonymousiam 9 hours ago 0 replies      
So he donated a small percentage of the stock he holds to a charity that he and his wife control to avoid taxes on dividends. Big deal.
balozi 9 hours ago 0 replies      
Even Gates doesn't trust the government with his money. If he did he would render unto Caesar and have him build roads and fund foreign despots. Like the rest of us.
kensai 9 hours ago 0 replies      
Bezos reacted! That was fast!! :D


givan 8 hours ago 1 reply      
Instead of using the money to research renewable or more efficient energy production, reduce pollution, etc., his foundation uses the money for vaccines and health services, which surprisingly have as a target, as he says in a TED video (link below), decreasing human population, which he sees as the major problem of today.


"The world today has about 6.8 billion people. That's headed up to about 9 billion. Now if we do a really great job on new vaccines, health care, reproductive health services, we could lower that by, perhaps, 10 or 15%"

humanrebar 12 hours ago 6 replies      
Hypothesis: From an analytical perspective, he must believe that the upside of Microsoft is limited enough, at least over the time horizon he's concerned about, to start cashing in Microsoft stock. He's a smart guy. He'd be trying to convert his holdings at the top of a value curve, right?
SVG can do that? slides.com
46 points by funspectre  1 hour ago   6 comments top 5
kevinb7 5 minutes ago 0 replies      
While SVG can do a lot, there are certain things that it isn't optimized for, in particular animating lots of shapes simultaneously. The animation of the globe exploding into a bunch of triangles from the slides is a good (bad?) example of this. Also, there can be inconsistencies in the rendering of SVGs between browsers.
flamedoge 0 minutes ago 0 replies      
Very cool. SVGs can be a very sensible choice for certain applications that canvas doesn't cover well.
Jack000 0 minutes ago 0 replies      
coming soon, full SVG websites featuring:

- loading bar until website is fully loaded

- animated buttons that bounce and flash

- full screen 2 second transitions from page to page

- all in one page, no urls!

feels like 2010 again : ]

nimish 17 minutes ago 1 reply      
I pray for screen readers.
khanhkid 10 minutes ago 0 replies      
It's awesome. Last time, I used SVG to import images into PDF files (using TCPDF). The quality is clear & beautiful. Now, thank you for your introduction to other things SVG can do on the website.
MongoDB has filed confidentially for IPO techcrunch.com
71 points by abhi3  1 hour ago   21 comments top 9
leothekim 1 hour ago 1 reply      
They've made their share of mistakes, but they've stuck it out and have definitely improved their product over the years. I wish them all the best for their IPO. What would be really good to see as a measure of their business health is an indication of how their cloud business is doing with respect to their enterprise sales business. A healthy cloud business would signal less volatility in the face of high revenue garnered from fickle enterprise sales relationships, which have been their bread and butter until the past couple of years.
hendzen 10 minutes ago 0 replies      
Both Hortonworks & Cloudera are the most similar recent IPOs. Hortonworks is below the offering price, Cloudera has been basically flat. And this is in the context of an insane bull market for tech stocks and the market in general.

So we will see how this turns out. It's been a couple of years since they last raised funding, so it's possible they didn't really have another choice. Chances are if the numbers on the S-1 were truly great they wouldn't have done it confidentially.

olegkikin 59 minutes ago 1 reply      
A side note: MongoDB now dominates the famous framework benchmark in the read benchmarks.


a13n 49 minutes ago 1 reply      
Wow I had no idea they were a for profit corp, rather than an open source project. And my startup uses mongo. Mind blown! Good for them.
nodesocket 1 hour ago 2 replies      
Wow interesting. Anybody have their S-1 filing? Interested to see how their hosted/managed offering Atlas[1] is doing vs support contracts.

[1] https://www.mongodb.com/cloud/atlas

notyourwork 1 hour ago 2 replies      
I have never heard of a confidential IPO, how does this differ from any other IPO that is in the news?
api 42 minutes ago 0 replies      
This is huge -- we haven't actually had that many open source based IPOs!
perseusprime11 22 minutes ago 1 reply      
Will meet a similar fate to Twitter, Snapchat...
arrty88 46 minutes ago 1 reply      
Get ready for another SNAP!
The Daily Stormer, Online Speech, and Internet Registrars stanford.edu
35 points by walterbell  1 hour ago   11 comments top 5
ryanlol 1 minute ago 0 replies      
This article neglects to mention that anyone can relatively easily become a domain registrar.
lstroud 3 minutes ago 0 replies      
Free speech is not a sometime thing. Just ask 1926 Germany.
SirLJ 4 minutes ago 1 reply      
First they came for the Socialists, and I did not speak out, because I was not a Socialist.

Then they came for the Trade Unionists, and I did not speak out, because I was not a Trade Unionist.

Then they came for the Jews, and I did not speak out, because I was not a Jew.

Then they came for me, and there was no one left to speak for me.

ocdtrekkie 23 minutes ago 2 replies      
I've been really uncomfortable with this whole thing. I get why offensive people get booted out of communities and social networks, but I don't feel like infrastructure/platform providers should be in the business of content moderation.

From what I read, it was suggested that it would be hard to even figure out what The Daily Stormer could've done 'wrong' according to Google Domains' Terms of Service.

sigmar 24 minutes ago 1 reply      
>If we ask these small companies to take on content removal obligations, we should not expect nuanced decision-making or robust appeal processes. We should expect legal and important sites from across the political spectrum to go down because someone complained about them.

This is a ridiculous statement. Domain registrars are already required (by ICANN) to receive, investigate, and respond to abuse complaints.

The HDFS Juggernaut shodan.io
94 points by josephscott  7 hours ago   39 comments top 4
pweissbrod 6 hours ago 1 reply      
Hadoop (and hence HDFS) is a stack of services designed to work together to serve a file system and manage jobs. The Hadoop stack has pluggable authentication/authorization by design. And yes, the default is "no security".

Given the distributed nature, HDFS runs on multiple machines. On Linux, distributed service security fits well with Kerberos. Normally if you want a "secure" HDFS you must "kerberize" the services such that any Hadoop operation requires a valid/authorized TGT.

To most people, kerberizing a Hadoop cluster is a major barrier to getting Hadoop running. I don't see this changing, but certain vendor Hadoop distros break down some of the barriers.

Sometimes it is OK if you run a cluster insecure. Please don't do it if you're handling my financial or medical records though. As Mr. T once said, "don't write checks that yo ass can't cash".
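For anyone curious what "kerberizing" looks like at the config level, the switch in plain Apache Hadoop starts in core-site.xml. This is a minimal sketch only; a real deployment also needs a KDC, per-service principals and keytabs, and matching settings in hdfs-site.xml:

```xml
<!-- core-site.xml: replace the default "simple" (no security) mode with Kerberos -->
<property>
  <name>hadoop.security.authentication</name>
  <!-- default is "simple", i.e. trust whatever username the client claims -->
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <!-- enable service-level authorization checks -->
  <value>true</value>
</property>
```

These two properties are only the entry point; the bulk of the work is in provisioning keytabs for every daemon on every node, which is why it is seen as such a barrier.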

sbarre 7 hours ago 8 replies      
Why do all these products have "insecure by default" configurations, anyways?

Didn't we learn anything from register_globals?

iamjochem 5 hours ago 2 replies      
even if node-to-node communication in a cluster (hadoop or otherwise) itself is not secured, is it not reasonable to secure external access to the cluster itself (i.e. with a firewall)?

from an outsider perspective (I've never used/run hadoop) I cannot see much reason for exposing the cluster to the outside world - either a web-app acts as an intermediary or access can be provided via VPN/ssh-tunnel/etc

... just curious why a fully/publicly exposed cluster would be a "requirement"? Or does it come down to the fact that firewalling an AWS environment is as painful (if not more) than "kerberizing" a [Hadoop] cluster? (I kind of assumed AWS has firewalling functionality that is fairly plug'n'play... a quick search doesn't really back that up though)

Danihan 6 hours ago 2 replies      
This was back in May, I wonder how it has changed / if anyone parsed some of this data..
Dung Beetles Navigate via the Milky Way nationalgeographic.org
129 points by komuW  9 hours ago   31 comments top 13
pavlov 5 hours ago 3 replies      
Ancient Egyptians considered dung beetles sacred, and believed that they were responsible for rejuvenating the sun during the night. Egyptians also had a keen spiritual and scientific interest in astronomy.

Now it's revealed that dung beetles can perceive the galaxy. Coincidence? I think not.

Obviously dung beetles are descended from a race of astronavigators who taught the Egyptians everything. They are the ancient astronauts. [Cue theremin music]

njharman 5 hours ago 2 replies      
This is less surprising if you imagine evolution to be a reinforcement machine learning system. The dung beetle actors are given all the sensory inputs of their environment. Those that used the inputs (which happened to include our galaxy) were better at satisfying the goal and thus had a higher selection rate for the next iteration of the system. The actors, much like machine learning AI, don't have any logic nor any reasoning. They simply are a ludicrously complex, but deterministic, state machine of inputs -> mess -> outputs. The mess being seemingly unintelligible, not rational, with lots of "cruft".
zoom6628 17 minutes ago 0 replies      
Dung beetles 1, Humans 0. There are so many things left to discover and understand on the planet. Great time to be a scientist, or a maker.
vermontdevil 5 hours ago 0 replies      
Saw this in the twitter thread where this topic was started

A dung beetle goes into a bar. He doesn't order a drink. He just takes a stool.

olegkikin 6 hours ago 3 replies      
But how do they know it's not based on just a few bright stars? The Milky Way is pretty hard to see with our large human eyes. Insect eyes are good for panoramic views, but much less efficient at low-light acuity.
rexfuzzle 9 hours ago 0 replies      
For context: TED talk where navigation via the sun is discussed and shown- https://www.ted.com/talks/marcus_byrne_the_dance_of_the_dung...
komuW 8 hours ago 0 replies      
I have found the link to the original paper: http://dx.doi.org/10.1016/j.cub.2012.12.034
joelrunyon 5 hours ago 0 replies      
Here's the tweet thread that brought this to the forefront today - https://twitter.com/GeneticJen/status/897153736669356032
mcone 4 hours ago 0 replies      
lai 4 hours ago 0 replies      
That dung beetle helmet is hilarious.
Moshe_Silnorin 2 hours ago 0 replies      
What wonders we see in nature.
samstave 9 hours ago 0 replies      
"little cardboard hats"

I never thought a dung beetle could sound so cute.

ythn 9 hours ago 0 replies      
Wow, they put the beetles in a planetarium? Nice
The Rising Return to Non-Cognitive Skill [pdf] iza.org
101 points by luu  7 hours ago   35 comments top 8
zitterbewegung 6 hours ago 3 replies      
I had to look up the definition of non-cognitive skills. Below is an example.


Once I understood what they were, I realized that I have encountered the behavior in my life anecdotally. I also realized which ones I am deficient in.

EDIT: The study has a definition that I missed when reading it. See tboyd47's comment below.

exelius 5 hours ago 1 reply      
In my experience, non-cognitive skills are definitely a big boost to your career. However, the real need is for people with both cognitive AND non-cognitive skills: if you can understand something intrinsically through your cognitive skills, you can leverage your non-cognitive skills to lead a group of people with high cognitive skills.

That leadership piece is what's in demand; and IMO that requires strength in both areas.

thanatropism 6 hours ago 5 replies      
I refuse this terrible term and invite everyone (following zitterbewegung's udemy link) to use "metacognitive skills" instead.
tboyd47 6 hours ago 0 replies      
Fascinating study & highly relevant to most people here (since many of us "hackers" are in highly cognitive job roles)


> Thus, the labor market appears to increasingly value individuals possessing high non-cognitive relative to cognitive skills over time.

jackcosgrove 6 hours ago 1 reply      
The buzzword a year or two back was "grit".

From my own experience non-cognitive skills are becoming rarer, so increasing returns make sense.

aerovistae 5 hours ago 1 reply      
> Workers with an abundance of non-cognitive skill were increasingly sorted into occupations that were intensive in: cognitive skill; as well as abstract, non-routine, social, non-automatable and offshorable tasks. Such occupations were also the types of occupations which saw greater increases in the relative return to non-cognitive skill.

Isn't this a contradiction? "Increasingly sorted into occupations that were intensive in cognitive skill" and simultaneously "greater increases in the relative return to non-cognitive skill"? So are we just saying they saw the heaviest rise in skills in general?

I didn't read past the abstract, maybe the body is clearer.

bitL 5 hours ago 2 replies      
How do you increase your stress tolerance? Imagine you are working on bleeding edge stuff at some top company with crazy pace and you don't want to end up with burnout. What would you do in order to improve this aspect? I noticed ramping up fitness training actually decreases my stress resistance as I stress my body and consequently brain much more and am more prone to getting mentally exhausted than when I am not training at all.
Boothroid 3 hours ago 0 replies      
So then surely we need some measures to prevent discrimination against those lacking in non-cognitive skills?
Rustgo: Calling Rust from Go with near-zero overhead filippo.io
249 points by FiloSottile  12 hours ago   50 comments top 8
masklinn 11 hours ago 1 reply      
> But to be clear, rustgo is not a real thing that you should use in production. For example, I suspect I should be saving g before the jump, the stack size is completely arbitrary, and shrinking the trampoline frame like that will probably confuse the hell out of debuggers. Also, a panic in Rust might get weird.

> To make it a real thing I'd start by calling morestack manually from a NOSPLIT assembly function to ensure we have enough goroutine stack space (instead of rolling back rsp) with a size obtained maybe from static analysis of the Rust function (instead of, well, made up).

> It could all be analyzed, generated and built by some "rustgo" tool, instead of hardcoded in Makefiles and assembly files.

Maybe define a Go target to teach Rust about the Go calling conventions? You may also want to use "xargo", which is specially built for stripping or customising "std" and to work with targets without binary stdlib support.

danenania 7 hours ago 4 replies      
I haven't tried Rust yet, but I've been building libraries in Ruby, Node, and Python that call into a shared Go core, and my experience has been that the best approach is to simply compile static executable binaries for every platform, then call out to them in each language via stdin/stdout. I tried cgo, .so files, and the like, but this was a lot more trouble and had issues on both Windows and Alpine-flavored Linuxes.

Is there some issue with this approach that I'm missing? Is the additional process overhead really enough that it's worth bending over backwards to avoid it?
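The calling side of this pattern (a static binary spoken to over stdin/stdout) is only a few lines. This is a sketch, not the commenter's actual code; the JSON request/response framing and the helper name are assumptions for illustration:

```python
import json
import subprocess

def call_core(binary, request):
    """Invoke an external core binary: JSON request on stdin, JSON response on stdout.

    The binary path and the JSON protocol are hypothetical; any line- or
    stream-based framing works the same way.
    """
    result = subprocess.run(
        [binary],
        input=json.dumps(request).encode(),
        stdout=subprocess.PIPE,
        check=True,  # raise CalledProcessError if the binary exits non-zero
    )
    return json.loads(result.stdout)
```

Process spawn overhead is on the order of a millisecond per call, usually negligible next to disk or network I/O; if it ever matters, keeping one long-lived subprocess and streaming newline-delimited JSON over its pipes avoids repeated spawns without giving up the portability of this approach.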

uluyol 11 hours ago 0 replies      
Building the rust code into a syso file might make the (user) build process easier here. This is used for the race detector (based on tsan) and there's an example of building and using one in the dev.boringcrypto branch. This would require a package author to create syso files for all GOOS, GOARCH combinations they care about. Although GOOS might not matter depending on whether any syscalls can be made from rust.


drej 10 hours ago 0 replies      
This is crazy. I love it.
dm319 11 hours ago 1 reply      

 Go strives to find defaults that are good for its core use cases, and only accepts features that are fast enough to be enabled by default, in a constant and successful fight against knobs
Made me chuckle.

kmicklas 6 hours ago 3 replies      
If you're already writing Rust, why would you even bother writing Go?
artursapek 9 hours ago 3 replies      
Is rust actually faster than go? I had no idea.
smegel 4 hours ago 0 replies      
What has this got to do with Rust? Nothing. He could have called any C library and it would have been exactly the same. I am pretty sure there is a crypto library or two written in C.

He is just writing a more direct manual version of CGo in assembly that bypasses a lot of what CGo does, to be much faster.

> Before anyone tries to compare this to cgo

The only meaningful message in this blog is that it is possible to write a faster CGo, that's it. Comparing it to CGo is the only useful possible outcome, but...

> But to be clear, rustgo is not a real thing that you should use in production. For example, I suspect I should be saving g before the jump, the stack size is completely arbitrary, and shrinking the trampoline frame like that will probably confuse the hell out of debuggers. Also, a panic in Rust might get weird.

So when you actually fix all those things you might be back where CGo was at the beginning.

This guy comes across as a classic "but i wanna be cool" hacker who discovers that when you bypass all the normal protections in a library and make some kind of direct custom call, things can be faster.

I guess so what?

Ev Williams helped create the open web, now he's betting against it (2016) theatlantic.com
62 points by Tomte  8 hours ago   17 comments top 5
overcast 7 hours ago 1 reply      
What's amazing is that none of his ventures were ever profitable that I know of. He made all of his cash off of acquisitions.
retube 3 hours ago 1 reply      
> and a communication tool so powerful that it could abort war.

Ha. If anything the clash of cultures that the web facilitates is more likely to start wars.

jtraffic 5 hours ago 0 replies      
I wonder how this article would be different had it been written after Medium introduced its new funding model. For one thing, a bunch of those publications, like The Awl, are no longer there.
igravious 6 hours ago 1 reply      
(A similar sentiment sparked the creation of public broadcast media in the 1970s.)

Something to ponder. May be the only alternative option. But on a global scale, cuz the internets know no national boundaries.

Also, notice how so not focused on the actual tech this article is. How it's all about ideals and grand sweeping narratives. It is this that turns text-boxes into something with cultural value. It's simultaneously bathetic and comical.

okket 7 hours ago 1 reply      
"Shitty pop reference headlines for 100, please"
An Introduction to Quantum Computing, Without the Physics arxiv.org
95 points by lainon  10 hours ago   17 comments top 6
daxfohl 2 hours ago 1 reply      
The article on QC I got the most out of: http://www.scottaaronson.com/blog/?p=208

Very visual example of how Shor's algorithm works to solve factoring. Nothing more than basic arithmetic required.

The big takeaway for me was, it's not just "try every combination at once" as per pop lit on the subject. QC doesn't really do that. To get QC to work any better than traditional computing for any task, you need to get lucky and stumble across an algorithm that QC can excel at for that task. Just from reading Scott Aaronson's article, it seems likely that most tasks simply don't have a QC optimization, so perhaps QC won't change much at all. (Well, except cryptography, which may change everything...)

s-macke 4 hours ago 0 replies      
Consider these two simulators, if you want to try the examples in the paper:

1. https://www.research.ibm.com/ibm-q/

This is IBM Quantum experience. Click on "experiment" to start. It has a nice tutorial.

2. http://algassert.com/quirk

I like this one much better, because you can see the internal state of the machine at any moment. And it has many more options and is much faster.

Strilanc 7 hours ago 1 reply      
After quickly skimming it, it seems well done. Leans towards mathematical rigor, which may not be ideal for some people. On the other hand, it'd be pretty hard to avoid math: quantum computing is linear algebra incarnate.

The only thing that caught my eye as off was totally minor. They say the many-controlled-Z gate used by Grover's algorithm can be done in O(n^2) constant-sized gates with an argument-by-reference, but with that type of argument you might as well give the tight bound of Θ(n).

Koshkin 4 hours ago 0 replies      
> Without the Physics

Well, is there much "physics" in (theoretical) quantum physics anyway? It's pretty much all math - just like in this paper!

epx 7 hours ago 0 replies      
Hope I am retired before it's mainstream :)
CacheThrasher 8 hours ago 6 replies      
Thanks for this, I have been looking for a good introduction to Quantum Computing. I haven't actually read it yet, but this looks promising.
High-process-count support added to master dragonflybsd.org
70 points by tiffanyh  7 hours ago   21 comments top 5
le-mark 5 hours ago 5 replies      
My question is: why am I paying cloud providers for virtual machines with some imaginary virtual cpu count (or dynos or whatever) when I could be paying for M process running my server executable (capped at N threads per process)? Why can't I just write a server, bundle up the exe and assets, and run it somewhere? Why do I have to futz about with admining, patching, and hardening the OS when that's not what I care about?

Edit, to clarify: it just seems like an OS with the capability to host a large number of user processes, as here, would really allow an order-of-magnitude reduction in hosting cost. I.e. if a machine can host 1,000,000 paying accounts vs 10 VPS/containered apps.

tiffanyh 6 hours ago 1 reply      

 xeon126# uptime 1:42PM up 9 mins, 3 users, load averages: 890407.00, 549381.40, 254199.55
Seeing load averages of ~900,000 blows my mind.

justin66 3 hours ago 0 replies      
> With the commits made today, master can support at least 900,000 processes with just a kern.maxproc setting in /boot/loader.conf, assuming the machine has the memory to handle it.

They are just four bits away from hitting a really big number.

theandrewbailey 6 hours ago 0 replies      
Just make sure that your process destruction doesn't involve a lock in kernel space. 900,000 threads waiting for a lock... yikes!


lallysingh 6 hours ago 3 replies      
6 digit PIDs? These aren't actually stored in decimal, right?
Learnings from One Year of Building an Open Source Elixir Application achariam.com
166 points by achariam  12 hours ago   17 comments top 7
agentgt 5 hours ago 1 reply      
> Elyxel was designed and built with performance in mind. Styles and any additional flourishes were kept to a minimum. My choice of Elixir & Phoenix was driven by this consideration as well. Most of the pages are well under 100 kilobytes and load in less than 100 milliseconds5. I find it's always helpful to keep performance in the back of your mind when building something.

That's much appreciated but I was kind of hoping you (the author) would go into more details about request time. Most people can (and should) do the above. However I would be more interested in what Elixir + CRUD is like, particularly for TTFB. Like does the author do streaming (I don't necessarily mean websockets or comet)?

After all if the TTFB is really slow the CSS optimizations and what not matter little.

In traditional request-per-thread (or whatever is analogous) web framework paradigms, the request is a single thread and often waits for the database to finish before moving on to display the page. I would imagine Elixir has a better answer at least for read-only pages.

deedubaya 10 hours ago 2 replies      
One of the interesting things about the Elixir/Phoenix community is that building your own authentication system seems to be encouraged. Even if you're using one of the plugs available (guardian?) you still have to do a ton of manual lifting.
brosky117 11 hours ago 0 replies      
I really enjoyed this article and the one about the ambient notification cube! Well-written. It's nice to hear someone else talk about how hard it is to stay consistent with worthwhile projects while maintaining a day job. Congrats on sticking it out!
ploggingdev 9 hours ago 1 reply      
Interesting post, more so because I am working on a project that has many characteristics of a link aggregator.

While implementing the ranking algorithm, which is very similar to the one mentioned in the article, I decided to run a periodic job every 60 seconds that updates the rank for each submission and stores it in the database so querying the ranked data is more efficient than recalculating the rank on every page request. Are you doing something similar or did you take a different approach?

Ranking all stories works well if the total number of submissions is a small number, but I imagine the approach is a little different for large sites like HN. Ranking all submissions periodically seems like a waste since people rarely view submissions beyond 10 pages. One approach is just to rank submissions from the past n days, where n depends on the average daily submission volume.
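For concreteness, the kind of gravity-based ranking being discussed can be sketched like this. The gravity constant and the -1 offset are common folklore approximations of HN's formula, not anything from the article:

```python
def rank(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """HN-style gravity score: votes push a story up, age pulls it down.
    The constants (1.8 gravity, +2 hour offset, -1 self-vote discount)
    are illustrative folklore values, not the site's actual numbers."""
    return (points - 1) / (age_hours + 2) ** gravity
```

A periodic job can recompute this every minute for recent submissions only (say, the past n days, as suggested above) and write the score back to an indexed column, so page queries are a simple ORDER BY.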

For the part that displays time since a submission was made, I implemented the HN model, where it displays only minutes, hours and days. Python code here : https://dpaste.de/5d1w
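Since the linked paste may rot, here is a comparable sketch of that display logic (my own approximation of the HN model, not the linked code; rounding sub-minute ages up to "1 minute" is an assumption):

```python
def time_ago(seconds: int) -> str:
    """Render elapsed time HN-style: only minutes, hours, or days."""
    if seconds < 3600:
        n, unit = seconds // 60, "minute"
    elif seconds < 86400:
        n, unit = seconds // 3600, "hour"
    else:
        n, unit = seconds // 86400, "day"
    n = max(n, 1)  # assumed choice: never show "0 minutes ago"
    return f"{n} {unit}{'s' if n != 1 else ''} ago"
```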

> Elyxel was designed and built with performance in mind. Styles and any additional flourishes were kept to a minimum. My choice of Elixir & Phoenix was driven by this consideration as well. Most of the pages are well under 100 kilobytes and load in less than 100 milliseconds5. I find it's always helpful to keep performance in the back of your mind when building something.

Once you start to scale, the bottleneck is rarely the application layer. For the typical crud web app it's likely to be the database.

codegeek 10 hours ago 0 replies      
Great writing. Elixir/Phoenix has been on my radar for learning something exciting considering it is faster out of the box and can handle tons of concurrent connections.

For the lazy, here is the github link from this article:


darkmarmot 11 hours ago 1 reply      
Having a sign up button or perhaps context or text on the site might've been helpful...
overcast 5 hours ago 0 replies      
It would be awesome if elyxel.com wasn't so esoteric.
FTC says Uber took a wrong turn with misleading privacy, security promises ftc.gov
66 points by artsandsci  7 hours ago   19 comments top 3
johnmarcus 4 hours ago 2 replies      
This isn't an Uber problem - it's a startup tech problem - no one wants to take security seriously because it's a cost sink that only averts risk, it does not actually make a company revenue. I have seen SSNs stored insecurely, open APIs with customer data, old frameworks and languages that no longer receive security patches. Nearly every startup says they take security seriously, because that's the right answer to say, but very very very few actually do. This is just another minor blip in the otherwise very large system of data available everywhere. Your data is not safe, trust me on that.
hkothari 4 hours ago 0 replies      
The craziest part to me:

"As a result of the failures described in Paragraph 18, on or about May 12, 2014, an intruder was able to access consumers' personal information in plain text in Respondent's Amazon S3 Datastore using an access key that one of Respondent's engineers had publicly posted to GitHub, a code-sharing website used by software developers. The publicly posted key granted full administrative privileges to all data and documents stored within Respondent's Amazon S3 Datastore."

https://www.ftc.gov/system/files/documents/cases/1523054_ube... Page 5

tareqak 6 hours ago 2 replies      
From the article:

For a particular six-month period, Uber only monitored access to the account information of a select group. Who? Certain high-profile users, including Uber executives.

What was the upshot? In May 2014, an intruder used an access key an Uber engineer had publicly posted on a code-sharing site to access the names and driver's license numbers of 100,000 Uber drivers, as well as some bank account information and Social Security numbers. The FTC says Uber didn't discover the breach for almost four months.

The proposed settlement prohibits Uber from misrepresenting its privacy and security practices. It also requires Uber to put a comprehensive privacy program in place and to get independent third-party audits every two years for the next 20 years. You can file a public comment about the settlement until September 15, 2017.

The complaint: https://www.ftc.gov/enforcement/cases-proceedings/152-3054/u...

Links from complaint:

Agreement Containing Consent Order (19.87 KB): https://www.ftc.gov/system/files/documents/cases/1523054_ube...

Decision and Order (57.66 KB): https://www.ftc.gov/system/files/documents/cases/1523054_ube...

Complaint (35.88 KB): https://www.ftc.gov/system/files/documents/cases/1523054_ube...

Complaint Exhibits A and B (1.2 MB): https://www.ftc.gov/system/files/documents/cases/1523054_ube...

Analysis of Proposed Consent Order To Aid Public Comment (56.14 KB): https://www.ftc.gov/system/files/documents/cases/1523054_ube...

Press release: Uber Settles FTC Allegations that It Made Deceptive Privacy and Data Security Claims: https://www.ftc.gov/news-events/press-releases/2017/08/uber-...

Settlement agreement quote:

Under its agreement with the Commission, Uber is:

prohibited from misrepresenting how it monitors internal access to consumers' personal information;

prohibited from misrepresenting how it protects and secures that data;

required to implement a comprehensive privacy program that addresses privacy risks related to new and existing products and services and protects the privacy and confidentiality of personal information collected by the company; and

required to obtain within 180 days, and every two years after that for the next 20 years, independent, third-party audits certifying that it has a privacy program in place that meets or exceeds the requirements of the FTC order.

On Melissa O'Neill's PCG random number generator lemire.me
68 points by ozanonay  9 hours ago   55 comments top 12
FabHK 6 hours ago 0 replies      
A few notes:

The author writes "Meanwhile, at least one influential researcher (whose work I respect) had harsh words publicly for her result", and then quotes some of these words:

 Note that (smartly enough) the PCG author avoids carefully to compare with xorshift128+ or xorshift1024*. 
However, the author fails to note that said "influential researcher", Sebastiano Vigna, is the author of xorshift128+ and related PRNG.

In the linked test [2] by John D. Cook (who uses PractRand, a test suite similar to the (obsolete) DIEHARD), xorshift128+ and xoroshiro128+ fail within 3 seconds, while PCG ran 16 hours producing 2 TB of pseudo-random numbers without any suspicious p-value detected.

On the other hand, Vigna claims that the xoroshiro family does "pass" PractRand.

I've submitted an answer to StackOverflow a while ago [1], recommending xoroshiro and PCG, thus I'd be concerned if PCG turns out to be flawed. It's actually quite hard to get academics in the field to give an authoritative recommendation (I've tried) - their response is typically along the lines of "It's complicated"...

[1] https://stackoverflow.com/questions/4720822/best-pseudo-rand...

[2] https://www.johndcook.com/blog/2017/08/14/testing-rngs-with-...

Edit: remove italics due to asterisk in PRNG name, & add link to John D. Cook's test.

Animats 6 hours ago 2 replies      
Here's the site for the random number generator.[1] It's basically a simple linear congruential random number generator (well known, but not very good) fed into a mixer. The mixer is new.

Most of the analysis is about the LCG or the final output. The suggested mixer is just

 output = rotate64(uint64_t(state ^ (state >> 64)), state >> 122);
That's simple, and the insight in this paper is that something that simple helps a lot. I would have thought that you'd want a mixer where changing one bit of the input changes, on average, half the bits of the output. The mixer above won't do that. DES as a mixer would probably be better, but it's slower. The new result here is that something this simple passes many statistical tests.

This isn't crypto-grade; both that mixer and a LCG generator are reversible with enough work.
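Translated into Python with explicit 64-bit masking, the quoted mixer looks roughly like this. This is a sketch for illustration only, assuming a 128-bit state and 64-bit output as in PCG-XSL-RR; it is not the official PCG implementation:

```python
MASK64 = (1 << 64) - 1

def rotr64(v: int, r: int) -> int:
    """Rotate a 64-bit value right by r (r taken mod 64)."""
    r &= 63
    return ((v >> r) | (v << (64 - r))) & MASK64

def xsl_rr_output(state: int) -> int:
    """XSL-RR style mixer: xor-fold the 128-bit state's high half into
    its low half, then rotate by the state's top 6 bits."""
    folded = (state ^ (state >> 64)) & MASK64  # xor-shift-low fold
    rot = state >> 122                         # top bits pick the rotation
    return rotr64(folded, rot)
```

Even this form makes the point: flipping one high bit of the state changes both the folded value and the rotation amount, which is cheap mixing but, as noted, nowhere near one-way.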

[1] http://www.pcg-random.org/

mjb 7 hours ago 0 replies      
> I wonder whether the academic publications are growing ever less relevant to practice.

I think there are two topics here. One is whether academic research and work is becoming less relevant to practice. The other is whether the formalisms of academic-style publishing are becoming less relevant in a modern world with more and more venues for publishing, rating, and discovering work.

On the former, I believe that academic work is as relevant as ever. There are some areas (like systems) where I'm doubtful about relevance from the point of view of a practitioner, but in other areas (like hardware and ML) work remains extremely relevant. I haven't noticed a trend there over the last decade, except in some areas of systems where the industrial practice tends to happen on cluster sizes that are often not approachable for academia.

On the latter, academic publication does indeed seem to be getting less relevant. There are other (often better) ways to discover work. There are other ways to tell whether a piece of work is relevant, or credible. There are other, definitely better, ways to publish and distribute work. In some sense I think this is a pity: as an academic-turned-practitioner I like academic-style publications. Still, I think they are going to either change substantially or die.

This article raises another very good point: sometimes the formalism of academic publication makes the work harder to understand, less approachable, or less valuable. That's clear harm, and it seems like this professor was right to avoid that.

sulizilxia 1 hour ago 0 replies      
I love O'Neill's work on PCG, and loved the talks by her I watched online.

As a tenured professor I want to say two things about this piece:

1. I think academic publishing will be forced to change. I'm not sure what it's going to look like in the end, but traditional journals are starting to seem really quaint and outdated now.

2. As far as I can tell from what she's written on the PCG page, the submission to TOMS is a poor example, because no one I know expects to be done with one submission. That is, no one I know submits a paper to one journal, even one reputable journal, and is done. They submit, it gets rejected, they revise it and resubmit, maybe three or even four times. After the fourth or fifth time, you might give up, but not necessarily even then.

I have mixed feelings about the PCG paper as an example, because in some ways it's great: an example of how something very influential has superseded traditional academic publishing. In other ways, though, it's horrible, because it's misleading about the typical academic publishing experience. Yes, academic publishing is full of random nonsense, and corruption, but yes, you can also get past it (usually) with just a little persistence. In still other ways, it's a good example of what we might see increasingly, which is a researcher having a lower threshold for the typical bullshit out there.

lowmagnet 7 hours ago 3 replies      
I like this because she is a professor at Harvey Mudd. They took steps to make CS more inclusive, with great results. I appreciate her attitude on accessibility, which is in keeping with that institution's philosophy.

That she ran into a paper wall doesn't seem to bother her because she's openly publishing, which is even better.

Houshalter 58 minutes ago 1 reply      
I once tried to develop my own fast random number generator using nothing but bitwise operations, on the theory they were the fastest/simplest. I had a program generate thousands of random combinations of bitwise functions, and then used statistical tests to see which ones produced the most "random" seeming behavior.

It worked as far as I can tell. But I don't trust the statistical tests. Who is to say there isn't a very obvious pattern in the numbers that I didn't test for or notice? How do you prove a random number generator is good?
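One common crude screen is a chi-square test against uniformity, which illustrates both the approach and its limitation: a generator can ace it while hiding an obvious sequential pattern, which is exactly the worry above. A minimal sketch:

```python
from collections import Counter

def chi_square_uniform(samples, bins: int = 256) -> float:
    """Naive chi-square statistic against a uniform distribution over
    `bins` buckets. A low value is consistent with uniformity; it proves
    nothing about independence or hidden structure."""
    counts = Counter(s % bins for s in samples)
    expected = len(samples) / bins
    return sum(
        (counts.get(b, 0) - expected) ** 2 / expected for b in range(bins)
    )
```

Note that `list(range(N))` scores a perfect 0.0 on this test despite being the least random sequence imaginable, which is why suites like PractRand chain together many different tests rather than relying on one.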

fwdpropaganda 5 hours ago 1 reply      
Physicist here.


> And it is not even entirely clear what really random would mean. It is not clear that we live in a randomized universe

At the quantum level it really is clear that we live in a really random universe. What's the meaning of really random? The outcome of a quantum process.

On-topic: yeah, you have to know your audience. As OP mentions, the paper not being published doesn't prevent anyone from thinking about it or even building on it. On the other hand these scientific publications have styles and target audiences, and maybe she got rejected not due to lack of relevance or rigor, but because the paper didn't match the publication's non-scientific criteria for publication.

bmm6o 6 hours ago 1 reply      
It sounds like an interesting result, I look forward to reading the paper more carefully. That said, it's clearly not written for an academic journal. Section 2.4.3 is entitled "The Importance of Code Size", and explains why shorter code is better. I think you can argue that some academic papers are excessively concise, but this is a 58-page paper about an RNG. It is clearly not a journal paper and has a ton of extraneous content. I have to sympathize with the commenter that the author has made a trade-off and written a paper that's less rigorous than it should be (for peer review). I wonder why she didn't write 2 versions.
the_stc 6 hours ago 2 replies      
As a general comment, I dislike deliberately obtuse writing in papers. In my current work, I came across a very in-depth survey of our industry (sex work). Excellent study, very helpful. But some of the sentences seemed to over-complicate the math. Example: "Consider the set P = {p1, p2, ..., pN} representing providers and the set C = {c1, c2, ..., cN} representing customers". I am pretty sure this kind of stuff is filler or pretends to make things look more rigorous than they are.

On the other hand, maybe spending more than a line explaining what the birthday paradox is should be cut out and put in a backgrounder paper or appendix so that the paper can focus on the actual novel ideas.

lisper 8 hours ago 1 reply      
This is an interesting article, but it's more about the changing landscape of academic publishing than it is about random number generators.

[EDIT] The actual paper is here: http://www.pcg-random.org/pdf/hmc-cs-2014-0905.pdf

drallison 6 hours ago 0 replies      
Melissa O'Neill gave a talk describing the PCG random number generator at Stanford in EE380. https://www.youtube.com/watch?v=45Oet5qjlms
dsacco 6 hours ago 1 reply      
I have a few comments:

1. The paper itself[1] is extremely readable by the standards of most cryptography research. On one hand, this is great because I was able to follow the whole thing in essentially one pass. On the other hand, the paper is very long for its result (58 pages!), and it could easily do without passages like this one:

Yet because the algorithms that we are concerned with are deterministic, their behavior is governed by their inputs, thus they will produce the same stream of random numbers from the same initial conditions; we might therefore say that they are only random to an observer unaware of those initial conditions or unaware of how the algorithm has iterated its state since that point. This deterministic behavior is valuable in a number of fields, as it makes experiments reproducible. As a result, the parameters that set the initial state of the generator are usually known as the seed. If we want reproducible results we should pick an arbitrary seed and remember it to reproduce the same random sequence later, whereas if we want results that cannot be easily reproduced, we should select the seed in some inscrutable (and, ideally, nondeterministic) way, and keep it secret. Knowing the seed, we can predict the output, but for many generators even without the seed it is possible to infer the current state of the generator from its output. This property is trivially true for any generator where its output is its entire internal state, a strategy used by a number of simple random number generators. For some other generators, such as the Mersenne Twister [35], we have to go to a little more trouble and invert its tempering function (which is a bijection; see Section 5), but nevertheless after only 624 outputs, we will have captured its entire internal state.

That's a lot of setup for what is frankly a very basic idea. A cryptographer being verbose in their writing might briefly remind the reader of these properties with the first sentence, but they'd still likely do that with much more brevity than this. I understand wanting to make your research accessible, but for people who understand the field this detracts from getting to the "meat." It might make it harder to get through, but a 10-30 page result is preferable to a nearly 60-page one that assumes I know nearly nothing about the field. If I don't know these details very well, how can I properly assess the author's results?

2. The author's tone in her writing is something I take issue with. For example, passages like this one...

Suppose that, excited by the idea of permutation functions, you decide to always improve the random number generators you use with a multiplicative step. You turn to L'Ecuyer's excellent paper [25], and without reading it closely (who has time to read papers these days!), you grab the last 32-bit constant he lists, 204209821. You are then surprised to discover that your improvement makes things worse! The problem is that you were using XorShift 32/32, a generator that already includes multiplication by 747796405 as an improving step. Unfortunately, 204209821 is the multiplicative inverse of 747796405 (mod 2^32), so you have just turned it back into the far-worse-performing XorShift generator! Oops.

...go a bit beyond levity. If you're trying to establish rigorous definitions and use cases to distinguish between generators, functions and permutations, this isn't the way to do it. This isn't appropriate because it doesn't go far enough to formalize the point. It makes it intuitive, sure, and that's a great educational tool! But it's a poor scenario to use as the basis for a problem statement - research is not motivated by the failure of an engineer to properly read and understand existing primitives, it's motivated by novel results that exhibit superior qualities over existing primitives.
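Incidentally, the inverse relationship the quoted passage relies on is easy to verify numerically, using the multiplier it names (the three-argument `pow(a, -1, m)` needs Python 3.8+):

```python
M = 1 << 32            # XorShift 32/32 works modulo 2^32
a = 747796405          # the multiplicative constant quoted above

inv = pow(a, -1, M)    # modular inverse; exists because a is odd
assert (a * inv) % M == 1
# Multiplying by `a` and then by `inv` is a no-op mod 2^32, which is
# exactly how the "improving" step gets cancelled in the cautionary tale.
```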

3. The biggest grievance I have with this paper is the way in which it analyzes its primitives for cryptographic security. For example, this passage under 6.2.2 Security Considerations:

In addition, most of the PCG variations presented in the next section have an output function that returns only half as many bits as there are in the generator state. But the mere use of a 2^(b/2)-to-1 function does not guarantee that an adversary cannot reconstruct generator state from the output. For example, Frieze et al. [12] showed that if we simply drop the low-order bits, it is possible for an adversary to discover what they are. Our output functions are much more complex than mere bit dropping, however, with each adding at least some element of additional challenge. In addition, one of the generators, PCG-XSL-RR (described in Section 6.3.3), is explicitly designed to make any attempt at state reconstruction especially difficult, using xor folding to minimize the amount of information about internal state that leaks out.[17] It should be used when a fast general-purpose generator is needed but enhanced security would also be desirable. It is also the default generator for 64-bit output.

That's not a rigorous analysis of a primitive's security. It is an informal explanation of why the primitive may be secure, but it so high level that there is no proof based on a significant hardness assumption. Compare this with Dan Boneh's recent paper, "Constrained Keys for Invertible Pseudorandom Functions"[2]. Appendices A and B after the list of references occupy nearly 20 pages of theorems used to analyze and prove the security of primitives explored in the paper under various assumptions.

Novel research exploring functions with (pseudo)random properties is inherently mathematical; it's absolutely insufficient to use a bunch of statistical tests, then informally assess the security of a primitive based on the abbreviated references to one or two papers.


1. http://www.pcg-random.org/pdf/hmc-cs-2014-0905.pdf

2. https://eprint.iacr.org/2017/477.pdf

Show HN: REST-Like API for Ethereum Smart Contracts etherest.io
66 points by josephros  8 hours ago   14 comments top 7
daveytea 6 hours ago 2 replies      
This is great and potentially solves a challenge I'm having right now working with geth. However, I'm not comfortable sending private keys to your API endpoint. Do you guys have a solution to address this concern? I'm sure others would probably feel the same...
mhluongo 7 hours ago 1 reply      
Reminds me of BlockCypher; they pitch themselves as the "AWS" of blockchains [0], and released an Ethereum API not too long ago [1].

This looks interesting, but we're talking about money here. If you're looking for something similar to this consider a solid infrastructure provider first. We built an app on Coinbase's shoddy early API, only to have it go down in the middle of a YC interview -_-

[0] https://www.blockcypher.com

[1] https://www.blockcypher.com/dev/ethereum/#contract-api

serveboy 3 hours ago 0 replies      
It certainly gets one thing right : what a fantastic name! Props
ringaroundthetx 7 hours ago 0 replies      
That is very cool.

I need an endpoint that gets me all address holders of a contract; it's fine if it's a little slow.

etherscan used to have one but they deprecated it, probably due to performance issues.

k__ 6 hours ago 1 reply      
Is it a REST API on a server that accesses Ethereum, or is it running on Ethereum?
snissn 5 hours ago 1 reply      
Is it open source?
freeslugs 7 hours ago 1 reply      
soon we will see graphql endpoints
The dwarfs of our vocabulary oup.com
33 points by Petiver  7 hours ago   9 comments top
badloginagain 4 hours ago 1 reply      
Shouldn't it be dwarves?
Ask HN: Is Georgia Tech's Online Master in CS Worth It?
383 points by soneca  11 hours ago   168 comments top 46
vikascoder 9 hours ago 7 replies      
Current working professional and an OMSCS student here. It highly depends on the context. Biggest pros are:

1. This is perhaps the cheapest Computer Science master's in the United States from a premier school. The degree is exactly the same as offered to the residential program and the credits acquired are all legit and transferable to other universities. I had friends who transferred from OMSCS to a regular school and skipped one full semester due to the credits earned.

2. An OMSCS qualification holds way more water than if you do random MOOC qualifications on Coursera and others.

3. The coursework is the same as the residential program. So if you don't believe in studying an MS at all, then this program is nothing special. It's a master's in Computer Science, so its pros and cons are the same as a regular MS.

4. If you are international, then having an OMSCS degree is equivalent to having a GaTech MS degree. It is a superb add-on to your profile and also qualifies you as a graduate-level tech specialist for future visa processing.

5. If you are international and looking to stay and work in your own country, then your mileage may vary depending on your circumstances. OMSCS provides no visa support and no career counselling. It does have an online portal for jobs but it's more geared towards residents.

6. Other than that, it forces you to think and study new areas of research while you work, so it's extremely enriching.

7. The program is more or less extremely well run, with regular assignments, proctored exams, 1-1 sessions with professors and what not.

8. Some companies reimburse your tuition, so it's virtually free (at least for me).

C'mon guys, a US master's for 7,000 USD? Are you kidding me? It's totally worth it. In fact I feel blessed that such a thing even exists. GaTech has been a trailblazer in this regard.

opensandwich 2 hours ago 0 replies      
I have finished the OMSCS program and in some ways I have mixed feelings about it. My background has been primarily in mathematics/statistics and I didn't come from a "traditional CS" educational background.

Did I learn a lot?

I learnt a ridiculous amount. For the time+dollar investment it is amazing. The program is definitely not easy either.

It has been amazing to learn the concepts in ML (Dr. Isbell) and AI (Dr Starner) courses and then a few weeks later think "I think I can actually use these concepts in my workplace".

Why the mixed feelings?

Not all courses were of the same quality. Off the top of my head, AI and ML were probably the best two courses. Other well-run courses I would add are computational photography, edutech, and introduction to infosec (besides the rote learning...); however, with some of the other courses I had a relatively negative experience.

The degree does suck up a lot of time and I would say it is the real deal.

Knowing what I know now, I can't say with 100% certainty that I would "re-do" OMSCS. To be fair to GaTech, I'm not sure whether the issues I describe above are due to it being an online program and I would personally be more suited to an in-person one, but the experience has definitely been better than Udacity's nanodegree and any MOOC I have sat.

Overall I would say if you do it for the sake of learning and that alone - OMSCS is worth it. For any other reason please don't do it.

bkanber 10 hours ago 2 replies      
Worth it -- based on what metric?

My wife did an online master's degree (at a legit university that also had an online program). You have to be very good at self-pacing, diligence, and learning autonomously. You have to be so good at it, in fact, that the type of person who would succeed in an online master's program is the same type of person who would succeed in self-learning without the master's program.

So if your only goal is to learn, then I say no, it's not worth it.

However, you're in Brazil and not a lifelong programmer. Credentials may work against you if seeking a job in the US. Many US companies look at South America as the "nearshore" talent, much better in quality than devfarms in India, but also still cheaper and -- because of that -- slightly lower in quality than US talent.

In that case, spending $7k and completing the program and getting the degree may help you get a $7k higher salary in your first (or next) job. It may give US companies more confidence in your abilities, as you received a US graduate school education.

So from a financial perspective and the perspective of job opportunities inside the US as a foreigner, then I think it may be worth it. If you don't care about getting US jobs then still probably not worth it.

Best of luck!

ordinaryperson 10 hours ago 3 replies      
At $5K, the price is right (my in-person master's was $22K, although my employers covered most of it), but be aware it's not the missing piece to catapult you into a superstar developer earning $170K/year.

Honestly I think your time is better spent working on real projects. In my CS master's program I met many students with no real-world experience. One was a paralegal before school, and after he graduated he became...a paralegal with a CS master's. Experience > degrees, every time.

There's value in the program (algorithms and data structures being the most applicable), but just go in with your eyes open knowing that the degree is not a glass slipper that'll turn you into Cinderella overnight. Too many IMHO falsely believed my program was a jobs program and really struggled to find work in the field.

If you can do it at night while working FT, great but don't take 1-2 years off work. It sounds appealing to be done ASAP but you're unlikely to make up that 60-120K/year in lost wages. Unless you're fabulously wealthy.

Good luck.

lemonghost 10 hours ago 2 replies      
I'm halfway through the OMSCS in the machine learning specialization. It has been a great experience so far and definitely worth it for me.

A couple of things to consider: As you mentioned, it is more focused on Computer Science than Software Engineering/Development. There are a couple of Software Engineering/Architecture/Testing courses but I haven't taken them so I can't comment on how relevant I think they are to my day job.

It's an incredible bargain... 7-8K for an MS (not an "online MS") from a top 10 school in CS. That on its own makes it worth it for me.

It's not easy and it's not like a typical Coursera/Udacity course. Depending on which courses you take it can be quite challenging (which is a good thing). You typically don't have much interaction with the Professors but there are a lot of TAs and other students to help you along the way.

Here's a reddit in case you haven't come across it that answers many questions:


And here's an awesome course review site that a student built:


forrestbrazeal 10 hours ago 2 replies      
The answer is highly context dependent. If you think the degree will magically open up a lot of job opportunities for you, you might be kidding yourself. However, if you love to learn and don't mind putting in the long hours, it can be rewarding for its own sake.

(Source: current OMSCS student, hopefully graduating in December)

I made an "informed decision tree" awhile back that goes into much more detail about my thought process when signing up for this degree:


I also reviewed the OMSCS program in detail here: https://forrestbrazeal.com/2017/05/08/omscs-a-working-profes...

Hope that helps!

throwawayaug15 9 hours ago 9 replies      
Logging in as a throwaway. The program only costs $5k but it was one of the most expensive things I've done in my life.

Got a job at Google directly because of this program (a few classes like CCA helped a lot with interviews). I'm aware of at least a couple dozen of us from OMS here.

The program cost me dearly. It cost me my relationship with the SO and it cost me my health (staying up late nights, lots of coffee).

* $5k is cheap, it's nothing; the real way you pay for it is with your time.

* The teachers like the flexibility as much as we do. Many are top notch. I took two classes from professors that work at Google (Dr. Starner and Dr. Essa), one at Netflix (Dr. Lebanon), and a few others have their own startups.

* One of the classes was taught by Sebastian Thrun, with a TA at Google, but I think that's changed now.

* The lectures are good, but you have infinite ability to supplement them with Udacity, Coursera etc.

* You learn squat by watching videos. The true learning happens at 2am when you are trying to implement something, and end up tinkering, debugging, etc. That's when things click.

* The hidden gem is Piazza and some of the amazing classmates that help you out. Lots of classmates work in industry and can explain things a lot better, e.g. actual data scientists and CTOs of data science companies taking the data science class. They were amazing and I owe my degree to them in part.

* Working full time and taking classes is not easy. Consider quitting and doing it peacefully.

* From within Google, I've heard from people that did the Stanford SCPD (I'm considering it) and also OMSCS. Lots of people that say the SCPD program wasn't worth the time and effort. No one yet that's said the same about the GT program.

I've heard from people that have done the program in-person, and they say the online lectures and materials are significantly better.

CoachRufus87 10 hours ago 2 replies      
Richard Schneeman (an engineer at Heroku) wrote a great blog post on this very topic; worth the read: https://schneems.com/2017/07/26/omscs-omg-is-an-online-maste...
crueoj 10 hours ago 0 replies      
As someone currently in the program and graduating this Spring, I have found this program to be incredibly rewarding. GT has done a fantastic job turning their on-campus courses into an online format. At first I was skeptical, but I have found this program extremely challenging and have learned a great deal. It has been fantastic in my career development as well, allowing me to land a job in ML before I have graduated.

The program does have its hiccups here and there. Some courses have been reported as being poorly organized, but this is certainly the minority. Also, you may not receive as much individual attention as you would in a on-campus program. This is aided by the fantastic community of students in the OMSCS program which provide a support system for each other through online forums/chat. If you are not much of a self-starter and need specific guidance, this program may not be for you.

nvarsj 55 minutes ago 0 replies      
Not to be harsh, but you probably won't get accepted. You'll need to do some CS nanodegrees first, or something equivalent (a full undergrad CS/maths degree is obviously ideal). I know people in similar positions, even one with a physics degree, who could not get in due to lack of academic experience.

Otherwise, I think OMSCS is totally worth it. It is hard though. Really hard. I have a family, significant engineering experience, and I find the workload intense. It puts pressure on my family at the same time because I'm not available as much. So I'm taking it very slow, no more than 2-3 courses a year.

It feels great to be 'back at school' after so many years. I love learning new stuff and the challenges of hacking away at low level things. The kind of thing you rarely get to do professionally unless you're very lucky (or not getting paid much). Almost makes me wish I had done a Ph.D.

I don't know if it will help me get a better job or whatever, but it definitely fulfills my own internal itch.

learc83 9 hours ago 3 replies      
If you haven't been working as a software developer long and you don't have a background in CS, it's going to be difficult.

I'm about halfway through and many of the classes assume that you have the equivalent of an undergrad CS degree. It's not intended to replace an undergrad degree.

That doesn't mean you can't do it, but you're going to spend a lot of time catching up. From what I've seen, the students without a CS degree, even those with significant industry experience, have had a much harder time with the more theoretical classes.

It's also a graduate program, and the classes are pretty rigorous compared to what I did in my undergrad CS degree.

Also keep in mind that admission is fairly competitive. And admission is only probationary. You have to complete 2 foundational classes with a B to be fully accepted.

mindvirus 11 hours ago 1 reply      
I graduated from the program in December and I found it incredibly rewarding. There are a lot of great classes, and I learned a ton - in particular, the machine learning and reinforcement learning courses were top notch, as were the systems programming ones.

One thing I'd warn though is that you'll get out of the program what you put into it - so it's really up to you to choose classes that will set up your career the way that you want it.

orsenthil 9 hours ago 0 replies      
I have been doing this for 2 years now. I enjoy this program. There are many students taking Gatech OMSCS who have also taken MOOCs from Coursera, Udacity, and EDX. The defining characteristic of the good students taking this course is that they are all self-learners, independent, and they want to learn Computer Science without giving up their current full-time job. I have been keeping notes for all the subjects that I have taken: http://www.coursedocs.org/gatech/index.html - have a look at it to get a glimpse of the coursework involved.

Cons: I've noticed some students who come to get their MS degree from a reputable institution because it is cheap. Due to coursework pressure, they take shortcuts, like doing group work or discussing solutions when it is prohibited, plagiarizing in assignments, etc.

hnrodey 10 hours ago 1 reply      
What are some of the prereqs to be prepared to successfully complete this degree? Asking as someone who graduated with a CS degree almost ten years ago (wow, time flies). I've been programming/developing pretty much that entire time, but I think I have forgotten most of the core math and core CS concepts that might be necessary in a CS master's degree.

It's hard for me to estimate how much prep I would need to do to come in to this program and feel comfortable with the tasks at hand.

w8rbt 8 hours ago 0 replies      
I've completed the majority of the OMSCS program. My specialization is 'Computing Systems'. I have a 4.0 GPA so far. I did not do CS as an undergraduate, but I've been programming since I was very young.

Here are my thoughts on what people need to succeed as an OMSCS student:

 * Be able to program in C, C++, Python and Java at an intermediate level. And, know one of these very well.
 * Be able to use a debugger (GDB) and valgrind.
 * Be able to administer and configure Linux systems.
 * Understand data structures and examples (std::set in C++ is RB Tree backed, std::unordered_set is hash table backed).
 * Understand basic networking concepts and key technologies (TCP, UDP, IP, switching, routing, etc.).
 * Understand the x86 computer in general.
Finally, I think some of the classes are meant to weed students out. People may think that 'Intro to Graduate Operating Systems' would be an easy first course for CS beginners. It's not (unless they've changed it). It was primarily about writing multi-threaded clients, servers, caches and proxies in C, using shared memory (IPC, POSIX Shared Memory) and various other C/thread projects until you become a halfway decent C programmer. They deduct points for working code that has any errors (memory leaks, etc.) too. So don't be surprised if a seemingly easy OMSCS course turns into "I had no idea; I'm going to have to drop this course." I saw that happen to several students.

I've done well so far, but I have the programming/logic background to do the work. If you don't, brush up on the skills listed above before enrolling.

Edit: The class projects are a lot of work. Be prepared to give-up your weekends and evenings. Even if you know the material and the language, it's a job to get through some of the projects.

schneems 4 hours ago 0 replies      
I wrote about my experiences a few weeks ago: https://schneems.com/2017/07/26/omscs-omg-is-an-online-maste...

I'm through my second OMSCS semester, and if you want to know whether I think it's worth it... you'll have to read the post ;)

rgrieselhuber 11 hours ago 0 replies      
As someone who hires machine learning / data science oriented engineers, I've looked at this curriculum pretty closely and think it looks like a great program.
el_benhameen 8 hours ago 0 replies      
I'm self-taught and have a job as a SWE. My BA is in an unrelated field. I'm considering the OMSCS because it would be the cheapest way to add credentials to my resume and because I'd rather not go back for a second bachelor's. (I don't mean to sound cynical--I'm interested in the subject matter, of course, but you can get all of that without going through a degree program.) Exchanging $7k for more legitimacy in the eyes of prospective employers is the main appeal of a formalized program. Does anyone have any experience with or thoughts on the signaling potential of the degree?
mathattack 8 hours ago 1 reply      
Only a few data points as an outside observer, but...

1 - The people I've seen doing it are learning A LOT - more than in any other online program I've seen.

2 - They're also working A LOT - it intrudes on all aspects of their personal life. It's as much or more work than doing an in person CS degree.

3 - The folks I know don't have CS undergrads, which also makes it more difficult.

Net - it can be worth it if you missed CS as an undergrad, but you'll have to work. You need to ask if there are enough people in Brazil who value the credential (or implied skills) to make it worth the time. The time investment is more expensive than the $s. (It will be thousands of hours)

fokinsean 10 hours ago 0 replies      
I've been entertaining the idea of going through this course as well. I graduated 2 years ago, BS in CS, and part of me misses being in school. Plus my employer has a decent tuition reimbursement program.

Would anyone who works full time and has gone through this program care to share their thoughts?

Edit: Just found this great article from another comment


rrmm 8 hours ago 0 replies      
I have a regular old masters in CS from GT. It's probably worth it from a career standpoint (if nothing else it signals that you care about self-improvement and take active steps to doing it). I would expect you'd miss some of the 'grad school experience' (for better or worse) and networking opportunities. The actual content itself can probably be gotten for free from other courses on the web if you take a syllabus from a CS dept to get an overall program. That path wouldn't have the benefit of access to teachers and would require a lot of discipline.

I don't know how it would be looked at in Brazil or what the economic costs/benefits are in terms of your own income. I did know a few folks from the University of Sao Paulo who did grad and postdoc work while I was at GT, though, so clearly some people are aware of GT in Brazil. That might be another avenue to get opinions from. I would be interested to hear how the costs compare to an institution local to you.

gatechnoway 2 hours ago 1 reply      
Working pro with 12 years experience. Absolutely not worth it. Most of your classmates will have not coded before.

The classes are cheap. The hours are long. In the end your grade depends on teammates who haven't been vetted. Three teammates who can't code? You get a C and don't pass.

Course content is extremely dated: UML and SDLC paradigms from the '70s, with Xeroxed PDFs distributed to "learn" from.

This is a money grab.

daok 6 hours ago 3 replies      
I have a question about the requirements to enter the program. It says you are required to have 3 people write recommendation letters. I finished my bachelor's 10+ years ago and I am not in touch with any professors. Is providing managers enough? On the website they put emphasis on not adding friends, which I can understand, but I am curious how serious they are about these letters.
josep2 9 hours ago 0 replies      
My background: B.A. In Math and Economics. Been working as a software engineer for 6 years now. I have been in the program for about 1.5 years. I've really enjoyed it and learned a ton. I've also been able to pay the full cost of tuition out of pocket. I agree with others in the thread that it depends on context.

I don't think it will have an immediate impact on my earnings or place in my company, but I think the long term value of having it far exceeds what I'm paying for it.

mukhmustafa 8 hours ago 1 reply      
I joined the program in Fall 2016, and I am halfway through now. So far, I can say that the program is very useful for workers who are looking for a part-time degree, or for people who can't afford the on-campus program. However, the knowledge you gain and the experience you get can't be compared to the on-campus program.
artmageddon 1 hour ago 0 replies      
It better be, my first class for the fall semester starts in a week!
cweagans 8 hours ago 0 replies      
Personally, I'd go for an undergrad CS degree first. uopeople.edu might be a good place to start. I'm currently working through that program, and I intend to continue to the GA Tech masters program when I'm done.
decimalst_us 10 hours ago 1 reply      
As a secondary question, for those who did complete the program, what was the general time commitment per (semester or class) vs. how long you were in the program? I see that you must take 2 classes in the first year, but didn't see any other further requirements on speed of completion.

edit: Answered my own question - you can't have two consecutive semesters "off"[1]. I.e. the slowest possible pace would be 2 classes in the first year, then 1 class every other semester. So I suppose it would be:

spring/summer 'xx: 6 credits, 24 remaining
spring 'xx+1: 9 credits
fall 'xx+1: 12 credits
etc.

[1] - per https://www.reddit.com/r/OMSCS/wiki/index
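For anyone who wants to sanity-check that arithmetic, the slowest pace can be simulated in a few lines. Note the assumptions: 10 courses at 3 credits each, 3 semesters per year, and the two foundational courses taken in the first two semesters - my own illustrative reading of the rules above, not official program policy:

```python
# Slowest possible pace, per the comment above: 2 classes in the first
# two semesters, then 1 class every other semester (never two
# consecutive semesters off). Assumes 10 courses x 3 credits and
# 3 semesters per year -- illustrative, not official.
TOTAL_COURSES = 10
SEMESTERS_PER_YEAR = 3

def slowest_pace():
    courses_done = 0
    semester = 0
    schedule = []  # (semester number, cumulative credits)
    while courses_done < TOTAL_COURSES:
        semester += 1
        # Semesters 1 and 2: one class each; afterwards, every other one.
        taking = semester <= 2 or (semester - 2) % 2 == 0
        if taking:
            courses_done += 1
            schedule.append((semester, courses_done * 3))
    return schedule

schedule = slowest_pace()
years = schedule[-1][0] / SEMESTERS_PER_YEAR  # total calendar time
```

Under these assumptions the 30th credit lands in semester 18, i.e. six calendar years at the absolute slowest pace.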

aschampion 4 hours ago 0 replies      
You may want to look at a previous recent discussion here on HN: https://news.ycombinator.com/item?id=13382263
j_s 10 hours ago 0 replies      
It came up on yesterday's launch of Lambda School (YC S17), but not sure anyone there can provide any additional info.


jondubois 4 hours ago 1 reply      
After a bachelor degree, university isn't that useful for CS unless you want to get into serious AI research. I don't think it has much effect on salary or opportunities.
maverick2 7 hours ago 1 reply      
I am a BA (Business Analyst) with mostly traditional Project Management duties. My bachelor's was in CS, and I still love to delve into the technical details of a solution. I do some data analytics for my product, but I have been interested in more analytics-driven roles and eventually finding a Product Owner/Manager role.

Does anyone have insight if doing Georgia Tech's - Master of Science in Analytics will help me land such role?

frgtpsswrdlame 10 hours ago 4 replies      
Don't you need an undergrad degree in computer science to be admitted?


omscs_is_great 4 hours ago 0 replies      
Using a temp account. I don't think I would have gotten any interviews at the best of the best tech companies without it (I had an engineering BS in another field). I was only 8 classes in. So career wise, it's definitely worth it.

The classes take a lot of time (see https://omscentral.com), but the learning has been a lot of fun. I loved it.

ncfausti 6 hours ago 0 replies      
Glad this was posted. I was admitted to Penn's MSE in CIS as well as OMSCS for the Fall. No funding for either. Penn is roughly $60k. I currently live in Philly. I'm curious to see what HN thinks would be the better option.
serg_chernata 9 hours ago 0 replies      
I applied and got rejected due to not having my BS from a regionally accredited school, though it's nationally accredited. Very confused because their page implies students from all over the world attend. Bummer.
bitL 7 hours ago 0 replies      
How is the difficulty of courses when comparing to edX's MIT's Underactuated Robotics or Stanford's Roughgarden's Algorithms?
soneca 7 hours ago 0 replies      
Just to let here my thanks for all thoughtful answers! (as I can't edit my question anymore). Lots of good insights and useful links in this thread.
cdnsteve 9 hours ago 1 reply      
Their SSL is currently broken and displaying warnings... https://www.omscs.gatech.edu/
root_axis 9 hours ago 2 replies      
Anyone know of similar programs for undergrad? i.e. an online accredited CS bachelors from a real university.
nheskia 8 hours ago 2 replies      
just wondering, is the admission process similar to other graduate programs? do you need GRE scores? letters of recommendation? what has been people's experiences around these requirements?
jinonoel 8 hours ago 1 reply      
Are there any equivalent online PhD programs that are any good?
davidreiss 9 hours ago 2 replies      
> I believe this program is a good complementary source of knowledge to become a better software developer.

That's something you could learn on your own. But your knowledge of "technologies" is more valuable to employers than a CS degree - especially if you have work experience.

The tech industry isn't like academia ( economics ) where you have to build up credentials. Work on projects that deal with web technologies or even better learn the back end ( databases ) or even the middle tier/server code if you are a front-end developer.

Becoming full-stack ( front-end, middle-tier and especially back-end ) is going to be far more important to employers than whether you know what undecidability or computational theory is.

Degrees are very important if you want to break into the industry ( especially top tier corporations ). But if you already work in the industry, employers want to see the technologies you are competent in.

If your employer is willing to pay for it and you have free time, then go for it. Learning is always a good thing. But if you want to further your career, go learn SQL ( any flavor ) and RDBMs technologies - SQL Server, Postgres, etc ( any you want but I recommend SQL Server Developer Edition if you are beginner on Windows OS as it is very beginner friendly from installation to client tools ).

A full-stack web developer is rare, and you could even sell yourself as an architect/management. That's the difference between being a $60K web developer and a $200K full-stack developer/architect.

0xa 7 hours ago 0 replies      
I'm speaking from my past experience as a hiring manager at a start up with outlier standards for performance and trajectory in software engineering and machine learning. I estimate I've screened tens of thousands of resumes and interviewed at least a thousand people in my career.

First and most important: your internships and work experience, and what you accomplished during those jobs. They should tell a story of increasing and accelerating personal growth, learning, challenge and passion. If you can share personal or class projects, even better.

After your experiences, your degrees will be considered based on the number of years each typically requires, with early graduation and multiple majors being notable.

 1. PhD, if you have one. A STEM PhD was particularly helpful for ML/Data science positions, but not required.
 2. BS/BA (3-4 year degree)
 3. MS/MEng (1-2 year degree)
Put another way, if you don't have a PhD, the MS/MEng program is a tiebreaker compared to your experience and undergrad credentials.

International students get a raw deal. The online masters will barely help you get a job or launch a career in the US. US universities appear to offer the chance to work for major US companies with a notable university (such as Georgia Tech) on your resume, only to feed their graduates into our broken immigration and work authorization system, H1-B indentured servitude and no replies from the countless companies that have an unspoken higher bar for those needing sponsorship.

To round out a few other contexts HN readers might experience:

If you are an international student considering an on-campus MS/MEng, US universities are charging full price while giving you a credential of limited value and utility. Apply the same comments above, but at a much higher price than GA Tech's OMSCS.

If you are completing/just completed a less notable undergrad degree, paying for a masters program at an elite CS school (like GA Tech) is usually a bad deal. If it is not a requirement for the positions you seek, it won't help your career chances much.

If you have an undergrad degree and your employer will pay/cover your MS/MEng at night/personal time (and that is your passion), awesome and go for it! It will be a lot of work and lost sleep to get everything out of the experience, but a lifelong investment in your growth and experience.

If you are completing/just completed a notable undergrad degree (tier-1, internationally recognized program), you don't need the masters. Feel free to get one for your learning, sense of self and building research connections while you ponder getting a PhD. The hiring and salary benefit will be very small--you are already the candidate every company wants to meet. If you decide to get a PhD, that will open some new doors but take 5+ years to get there.

At my previous company, we made it our forte and team passion to get authorization for employees--given a global pool of candidates and a hiring bar to match. I'm really proud of our effort here given the broken and unfair system. Sadly, many companies do not share this value or cannot justify the time, effort and expense, or cannot scale such a program to a larger number of employees across a less selective bar.

user5994461 4 hours ago 0 replies      
Online courses are worth nothing.

Employers will ignore you the second they find out your master's is not legit.

Real-time 3D visualization of geospatial data with Blender github.com
94 points by based2  11 hours ago   23 comments top 9
valine 9 hours ago 1 reply      
This is very cool. After a quick skim I noticed this relies on Blender's ops API. Using bpy.ops is generally considered bad practice because a lot of the bpy.ops operators depend on the state of the UI - things like which objects are selected and which object interaction mode is active. The alternative to bpy.ops is to write scripts that manipulate the datablocks directly. Using bpy.ops can save a lot of time as it maps more cleanly to the GUI, but if you use it too much things can spiral out of control. It's just something to be aware of.
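To illustrate the distinction, here's a rough sketch of the two styles (not taken from the project's code; the object and mesh names are made up, and it only runs inside Blender's bundled Python, since bpy isn't available in a normal interpreter):

```python
import bpy

# Style 1: bpy.ops -- concise, but depends on UI state such as the
# active object, the current mode, and the operator's execution context.
bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))

# Style 2: build the datablocks directly -- more verbose, but independent
# of selection/mode state, so it behaves the same in headless runs.
mesh = bpy.data.meshes.new("GridMesh")
verts = [(x, y, 0.0) for x in range(2) for y in range(2)]
faces = [(0, 1, 3, 2)]
mesh.from_pydata(verts, [], faces)
mesh.update()

obj = bpy.data.objects.new("GridObject", mesh)
bpy.context.scene.collection.objects.link(obj)
```

The second style is what makes scripts usable with `blender --background`, where there is no UI state for bpy.ops to rely on.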
stuntkite 2 hours ago 0 replies      
My girlfriend and I have been working on a startup for the last year to do realtime 3D geospatial vis on the web and mobile with a focus on AR.

This is a really cool approach! I may have to fork it to add in support for our data mixing platform.

ingenieroariel 10 hours ago 1 reply      
This is quite interesting, does anyone know if Blender can work with particles too or is it only 3d polygons?

Also, if the original poster is reading this: I am at foss4g with a 360 gopro camera rig, perhaps we can go shoot some high fps immersive video of old Harvard buildings and brainstorm about how to get that into Blender

ingenieroariel 10 hours ago 2 replies      
Could Blender be used as a lidar point cloud annotation tool?
throwaway2016a 10 hours ago 2 replies      
I had no idea Blender could even do this. Very cool.

Although, as with using Blender from the UI, reading through the code I feel like there is probably a large learning curve here.

annerajb 10 hours ago 0 replies      
Hmm this seems really useful for rendering lidar data points in a 3d mesh/map.
sanjeetsuhag 10 hours ago 0 replies      
This is amazing. Blender truly has infinite potential.
Boothroid 8 hours ago 0 replies      
Very nice.
s73ver_ 4 hours ago 0 replies      
I'm really glad they went into the detail they did on the landing page on GitHub. But, not knowing exactly what kind of diagrams/visualizations can be done, it took quite a while to find an example of what they were talking about. The very first thing I see there should be examples of the output that can be produced.
Teaching an AI to play a simple game using Q-learning practicalai.io
90 points by ml-student  11 hours ago   7 comments top 3
hervature 11 hours ago 1 reply      
One of my favourite undergraduate projects was applying Q-learning to the game Flappy Bird. http://www.mast.queensu.ca/%7Emath472/FlappyQ.pdf
pigscantfly 7 hours ago 0 replies      
If anyone is interested in learning more on this topic, Mykel Kochenderfer's "Decision Making Under Uncertainty" offers a stellar treatment of reinforcement learning from the ground up. https://mitpress.mit.edu/decision-making-under-uncertainty
CGamesPlay 9 hours ago 2 replies      
This game really is quite simple! The go-to example I use for a simple game is called 21.

- There are N (usually 21) tokens in a pile.
- A turn consists of removing 1, 2, or 3 tokens from the pile.
- The player who removes the final token is the winner.
- The opponent will always take tokens equal to n mod 4 if that is a valid move, otherwise will play randomly (this is the optimal strategy).
- The AI plays first.

You can see my write-up here: [1]. One of the most interesting things for me was visually inspecting the action scores (at the end) to see how the agent learned the optimal strategy over time. My configuration took 3000 games to reach the optimal strategy against a strong opponent (opponent epsilon = 0.1), and substantially longer as the opponent starts to play worse.

[1] https://www.dropbox.com/s/eooqlhgg98zc398/Q-Learning%2B21.ht...
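For anyone who wants to try this, the 21 game is small enough that a tabular Q-learning agent fits in a screenful of Python. This is my own sketch of the setup described in the comment above (the hyperparameters are guesses, not taken from the linked write-up):

```python
import random

ACTIONS = (1, 2, 3)

def opponent_move(n):
    """Optimal opponent: take n mod 4 if it's a legal move, else random."""
    take = n % 4
    if take in ACTIONS and take <= n:
        return take
    return random.choice([a for a in ACTIONS if a <= n])

def train(n_tokens=21, episodes=30000, alpha=0.5, gamma=1.0, epsilon=0.2):
    # Q[s][a]: estimated return for taking a tokens when s remain.
    Q = {s: {a: 0.0 for a in ACTIONS if a <= s}
         for s in range(1, n_tokens + 1)}
    for _ in range(episodes):
        s = n_tokens
        while True:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                a = random.choice(list(Q[s]))
            else:
                a = max(Q[s], key=Q[s].get)
            n = s - a
            if n == 0:                       # we took the last token: win
                reward, s2 = 1.0, None
            else:
                n -= opponent_move(n)
                if n == 0:                   # opponent took the last: loss
                    reward, s2 = -1.0, None
                else:
                    reward, s2 = 0.0, n
            # Standard Q-learning update.
            target = reward if s2 is None else reward + gamma * max(Q[s2].values())
            Q[s][a] += alpha * (target - Q[s][a])
            if s2 is None:
                break
            s = s2
    return Q

def greedy_action(Q, s):
    return max(Q[s], key=Q[s].get)

random.seed(0)
Q = train()
```

With enough episodes, the greedy policy should converge to the n mod 4 strategy (take 1 from 21, 3 from 7, and so on), and, as in the write-up, inspecting Q's values shows how that structure emerges over training.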

Launch HN: Thematic (YC S17) Customer Feedback Analysis via NLP
48 points by zelandiya  6 hours ago   20 comments top 11
randoman 25 minutes ago 0 replies      
I really enjoyed reading about your company on Idealog (https://idealog.co.nz/technologymonth/2017/08/meet-thematic-...). It was especially great to see how you bootstrapped and found product-market fit rather than just raising a bunch of cash for an idea that may or may not be applicable in the real world. Congratulations on the launch and your success with YC. Go Kiwis!
Yertis 24 minutes ago 0 replies      
Love this! When I was at one of the big tech companies years ago, we tried to do something like this for reviews for all the products, and see how they stacked up in the marketplace. Was a really challenging problem to do in an automated way at scale -- definitely wish we had a solution like this!

Do you guys see yourselves sticking to a model that spits out analysis, letting customers decide what insights to gain from the data? Or could there be a path where it eventually lets users take specific actions based on the data?

jstandard 1 hour ago 1 reply      
Great tool, this is an area I've been looking forward to more automation in.

I might have missed it on the website, how does pricing work?

Also, do you have any integrations with other tools like Intercom or Zendesk to ease data-sharing? A monthly insights report generated directly off of my main customer support tool can replace hours of manual work.

wadkar 4 hours ago 1 reply      
Congratulations and best wishes!

I for one really liked the demo and the blog - specifically, (a) I now have great exemplars for what you mean by "theme", and (b) this post[1] shows great insight into your thinking about the problem faced by your customers

> Developed on canonical text like news article or Wikipedia, they either failed to understand the variety of expressions, or were too hard to explain.

It appears to me that the current methods and resulting tools are heavily dependent on the problem formulation (or domain in general). Moreover, no matter how fancy your technique is (or "how deep is your net"), the resulting model won't work unless you take specific steps to train it on data from the domain.

Yes, what I just said sounds borderline truism. However, I am more interested in discussing why it is so. Here's my initial thinking:

Let us look at one of the definitions of Machine Learning, from Prof Tom Mitchell's textbook: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."

Here, experience E can be loosely considered as the amount of data you have for training - obviously, more data (i.e. training) should improve learning. However, the abstraction of T and P hides an important underlying problem of specification - in other words, the formulation of T (and E).
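To make E, T, and P concrete with a toy example (entirely my own, not from the post): let T be classifying integers against a hidden cutoff, E a batch of labeled samples, and P accuracy over the whole range. More experience E measurably improves performance P:

```python
import random

CUTOFF = 50  # the hidden concept: label is True iff x >= CUTOFF

def experience(n):
    """E: n labeled examples drawn uniformly from 0..99."""
    xs = [random.randrange(100) for _ in range(n)]
    return [(x, x >= CUTOFF) for x in xs]

def learn(examples):
    """The learner: place the decision threshold midway between the
    largest negative and smallest positive example seen so far."""
    lo = max((x for x, y in examples if not y), default=0)
    hi = min((x for x, y in examples if y), default=100)
    return (lo + hi) / 2

def performance(threshold):
    """P: fraction of 0..99 classified correctly by this threshold."""
    return sum((x >= threshold) == (x >= CUTOFF) for x in range(100)) / 100

random.seed(1)
small = performance(learn(experience(5)))    # little experience
large = performance(learn(experience(500)))  # much more experience
print(small, large)  # performance generally improves with experience
```

The point of the toy is the specification problem above: if T or E is formulated over the wrong domain (say, training on 0..99 but deploying on a different range), no amount of extra E fixes it.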


> I wrote a new approach [capitalizing] on my PhD and new Deep Learning approaches.

I hope we get to see some of your insights in a paper or article (or blog post :)

[1] https://www.getthematic.com/post/visualizing-customer-feedba...

tixocloud 6 hours ago 1 reply      
Congratulations on your launch! This is very interesting and it's something that has great applicability. Being in financial services, we collect a ton of feedback and audio but not enough manpower to process it all. I love the themed approach but am curious about the subthemes. Would you be able to shed any light on that?

I'm also happy to make introductions if you're ever thinking about expanding up north to Canada.

hobolord 2 hours ago 0 replies      
Congrats on the launch! I think I saw you talk recently in NZ, sounds like you have an exciting path forward
seanwilson 6 hours ago 1 reply      
How do you go about evaluating the accuracy of the themes and action items? Do you create a test set with obvious themes and actions and check the results for example?
forgotmysn 6 hours ago 1 reply      
would you say that your solution provides a sentiment analysis, similar to Quid, or do you focus on action items and things that product managers can actually address?
pj_mukh 6 hours ago 1 reply      
This is super cool, I also appreciate the demos you have on your website (airlines, MBA schools etc.). Makes the end result super clear.

I don't know much about NLP but are you only using unsupervised learning on the raw data? I would think you would need an NLP layer as well that sorts out basic synonym issues, phrasing differences etc.?

bitL 3 hours ago 1 reply      
Congrats and good luck!

I work on the same, just for my own company to automate customer interaction (well, at least 99% of it).

bberenberg 5 hours ago 1 reply      
Are you planning to provide your service as an API to other companies that want to wrap a product around your work?
Text Messages Between Travis Kalanick and Anthony Levandowski ieee.org
106 points by edshiro  7 hours ago   63 comments top 8
swang 3 hours ago 5 replies      
> A source close to Uber's operations says its engineers watched the intersection where Uber's cars were said to have run the red light, and that this text refers to them recording a number of normal, human-operated vehicles also breaking the law. Uber has never officially admitted that its software was to blame.

is the implication here that that intersection has a lot of red light runners? if so, are they so dense as to not understand that normal people running red lights is less of an issue here than a machine running that red light?

normal humans run red lights because they're either not paying attention or they're assholes. how is a machine safer or better if it can't pay attention (or even worse, is an asshole).

someone could have died because uber decided the rules didn't apply to them. it's ridiculous that they're still allowed to operate in california.

loceng 5 hours ago 7 replies      
"Uber Saw Tesla as a Huge Competitor

While Uber followed Google's cars closely, it was Tesla and Elon Musk that the duo discussed most frequently.

9/14/2016 Levandowski: Tesla crash in January implies Elon is lying about millions of miles without incident. We should have LDP on Tesla just to catch all the crashes that are going on.

9/22/2016: We've got to start calling Elon on his shit. I'm not on social media but let's start "faketesla" and start give physics lessons about stupid shit Elon says like [saying his cars don't need lidar]"

Does anyone know what they're referencing here? I don't take Elon as a person to lie, his character seems too strong for that - he understands public perception and seems to deeply care about it.

zitterbewegung 5 hours ago 1 reply      
Reading their exchanges is more interesting than just finding stuff about the court case, specifically how these insiders think about their competitors: how they think that Elon is the biggest competitor, and how they wanted to partner with Google.
koolba 5 hours ago 1 reply      
> 7/23/2016 Kalanick: You hungry? .. Can get some Uber Eats steak and eggs.

Travis shows dog fooding at its best.

yohann305 6 hours ago 3 replies      
i don't understand how we, the public, are allowed to read a private conversation. Don't get me wrong, i enjoyed reading it, it felt like snooping, but isn't it a blatant privacy violation?
loceng 5 hours ago 2 replies      
I didn't read much of the transcript, however these guys must be intelligent enough to keep anything they know is likely illegal to private, in-person conversations that leave no record?
Overtonwindow 4 hours ago 2 replies      
I can't shake the feeling that all of this posting of text messages is just trying to shame Levandowski, Kalanick, or both.
mikeevans 5 hours ago 6 replies      
Simply unbelievable how much garbage is on that web page without an ad blocker (I put a box around the actual content that appears without having to scroll): http://i.imgur.com/0S2FIAW.png
Bezos should put his billions in public libraries wired.com
208 points by steven  9 hours ago   212 comments top 46
graphitezepp 8 hours ago 12 replies      
I would much rather billionaire types take on large issues that governments don't want to touch, like space flight or specific diseases in foreign countries, than something already in service like libraries. Maybe I just have a bias from having a library just up the street from where I live, but I have never not had access to an adequate library in my lifetime, and only see room for marginal, diminishing returns in improving them.
Overtonwindow 8 hours ago 1 reply      
Libraries are an excellent place to put some money. They provide learning opportunities, but many also provide resume and job search training, community meeting places, and free internet access, and they are, I think, one of the few "neutral" places in American society. Neutral in that there are few, if any, politics involved, and it's an equal opportunity benefit to a community that most people can get behind. Even if they don't use the library, few, I think, would speak against it.
_jal 8 hours ago 1 reply      
This is poorly thought out and edited, because I have to run, but I thought it worth posting. I've been thinking more about libraries lately. I really think it is time to reinvigorate and expand them. Pretty sure I'm preaching to the converted here about the power of information; I think what some folks miss is just how incredibly valuable libraries are. No, they aren't a panacea, but they are a cheap source of immense social good.

A lot of people see a building full of books and wonder why it can't be replaced by a bank of terminals and Google. I won't get into the relative merits of dead trees vs. electrons, and largely don't care about it. What that line of thought misses is two-fold: the librarians and the community space.

Decent librarians are hugely underrated resources. Great ones can be incredible. Maybe natural language systems will become good enough in my lifetime to handle some of the vague requests librarians routinely manage to match to the right book, but the leaps of association to related topics, the knowledge of the edge cases of information classification to navigate them well, and the general mass of knowledge they accumulate is massively useful to have on hand. And so few people take advantage of it.

Meeting spaces in this context (both formal, sign-up-for-your-group and informal) serve an important role as well. It seems[1] like they're becoming rarer, as government buildings use security as an excuse to close to the public, and private groups with spaces that previously did that sort of thing have been much more reluctant to do so when I've tried to organize things over the last several years.

To personalize this a bit, I grew up in a poor family. One thing that was heavily emphasized to me was the value of learning - I think it was reaction to missed opportunities. Who knows what would have happened, but I do know that my college essays (written referencing library books, building on interests fostered in the math and the American Lit sections) would have been very different without them, and I kinda doubt I would have gotten a free ride to a top-10 school if I had been only drawing on what public school offered.

I'd love to see more experiments with libraries. I know some are playing with becoming more "maker-space"-ey, which is a decent thing to explore. I think finding a way to offer peer-classes in whatever - learn Javascript, fancy knitting techniques - would be an interesting thing to try as well. But I'm bad at seeing opportunities like this. I wonder what people with that super power could come up with.

[1] Anecdata alert!

songzme 7 hours ago 0 replies      
As much as I hate books, I love public libraries. Our local library (Northside branch, Santa Clara) gives a big conference room every Saturday to a team of passionate locals trying to teach themselves programming. My friends and I go there every Saturday to help people who are stuck and give them guidance (what to learn next, how to prepare for interviews, what language is best suited for what they are trying to do, etc).

Talk about diversity, the library is a place where you get to see people from all walks of life outside the silicon valley bubble (different race, age, handicap). It builds a learning community where people have the opportunity to help each other at a more human level.

randyrand 7 hours ago 1 reply      
The modern-day equivalent of a public library (storing information history) is the Internet Archive.

I think donating to the Internet Archive would be a better donation, with a lot more benefit to society than funding physical libraries.

Libraries solve one of the world's most important problems - keeping society's important information history safe. Websites are not immune to this problem. They require maintenance. When a webpage goes down it's gone forever. Without something like the Internet Archive, we would not have a modern-day library equivalent for the web. We are losing a lot of important information. Physical libraries today are much less important than digital ones by comparison.

philipps 2 hours ago 0 replies      
Strengthening public libraries is an excellent idea, with lots of public benefits. Libraries are one of the most trusted public institutions in the US and provide a range of key social services including access to education, internet, health information. They also reach and support a demographic that is currently not well served through online-only programs.

I co-founded Peer 2 Peer University [1], a non-profit that brings people together in learning circles to take online courses. When we switched from online-only to face-to-face meetings in public libraries, we started teaching adults who had fallen out of the education system and who were not benefiting from online courses. And I can't say enough positive things about the librarians we work with.

[1] https://p2pu.org

speedboat 5 hours ago 1 reply      
Libraries are great. They should be funded with taxes by society, not the whims of charity.

Raise taxes on people like Bezos and Gates for the needs of society.

jamesred 6 hours ago 3 replies      
The problem libraries solved, mostly access to information, has largely been monopolized by the internet. Most people, including the impoverished in the third world, have access to the internet. Therefore the necessity of a library has been largely diminished, and inevitably libraries will disappear. Complaining about libraries when people have no sanitation or access to clean tap water sounds like a first world problem, as much as I dislike the term.

Libraries should evolve with the change of technology and move their function from curation and access to information to something that is able to benefit more people. Books occupy volume and removing them would make more room for desks and rooms where people with no access to quiet areas could use to be more productive.

karl11 7 hours ago 1 reply      
I think this is a good idea. I would add YMCAs and similar places.

I think one of the best places for a mega-philanthropist to invest would be in the time and places that kids spend outside of public schools. Many of the biggest disadvantages in opportunities for kids are created when they fall behind before and after school and during summers, relative to kids who are better off socioeconomically. These disadvantages compound and are lasting. Safe places to engage in healthy recreation, productive endeavors, and getting something nutritious to eat that they wouldn't otherwise have access to would go a long way for underprivileged youth and have an impact for the rest of their lives.

saosebastiao 7 hours ago 2 replies      
Here's a better idea...and one that will ultimately benefit libraries as well: start buying out all the evil academic publishers, overhaul their technology, get rid of copyright assignment, and offer free access to anyone.

Then do the same with legal records, although that is more of a legal problem than a money problem.

diffeomorphism 7 hours ago 1 reply      
The article does not mention ebooks, which is a surprising omission considering it involves Amazon.

I don't know how it is in the US, but for instance German libraries offer to loan ebooks: http://www.onleihe.net/

Donating ereaders and rights to ebooks to libraries seems more effective than printed books.

Big caveats here are Amazon's monopoly position, DRM and copyright and loans for ebooks vs. physical books.

dnprock 1 hour ago 0 replies      
My libraries cost $180/year through property tax. Most of the books I want to read are checked out. Most of the movies I want to watch are checked out. It takes a lot of time just to find something. I come to the library to hang out. We then pick up something to read/watch randomly. The system is not very efficient.
orik 7 hours ago 2 replies      
I'm sure thousands of people have ideas of how Bezos should spend his.

Bezos should spend (or not spend) in ways and on things he values, to maximize what he gets out of what he's earned.

(P.S. libraries compete with his book selling business! Why wouldn't he rather sell a library pass on a Kindle for a monthly subscription?)

bitL 7 hours ago 3 replies      
What's the point of libraries these days? We literally have the possibility for everyone to have a "Hitchhiker's Guide to the Galaxy" in their pocket with all books that were ever written.
WalterBright 3 hours ago 0 replies      
By far the biggest barrier to books being available is perpetual copyright.
tostitos1979 8 hours ago 1 reply      
I want to give a shout out to a charity called "Room to read". There is a book about the founder's story .. he was one of us (a tech leader at Microsoft). His book and story touched me deeply.
tanilama 8 hours ago 3 replies      
His company hosts a large part of the biggest library that ever existed: the Internet.
sjg007 7 hours ago 0 replies      
Or in school lunches. How about nice healthy food? Lunch should be a class where you learn how to eat.
kolbe 8 hours ago 3 replies      
Bezos still needs to focus on actually delivering the value that the world has priced into the expectations of his company. He can't just retire and collect income off of an existing machine like Gates.
SJWDisagree111 5 hours ago 2 replies      
I'm guessing most of the people here have not been to a local public library in a long time. They have basically become daytime homeless shelters. That is the reason they are no longer attractive for philanthropists.
hilyen 8 hours ago 0 replies      
Wait, didn't he already replace the need for libraries with kindle and kindle unlimited monthly service? Heh.
stcredzero 3 hours ago 0 replies      
If I were Bezos, I would start a vlogging platform and a competitor to Twitter -- one free of concerns of covert partisan censorship. There would be serious synergies for amazon.com and other Amazon products.
em3rgent0rdr 4 hours ago 1 reply      
As commenters have noted, libraries conflict with Amazon's business. How about instead mass-producing cheap (low-profit or at-cost) Kindles pre-loaded with a large amount of public domain and other free material (including a compressed offline Wikipedia database)? Then Bezos will still make money when some of those users buy paid Kindle books.
payne92 5 hours ago 1 reply      
Libraries are struggling with an identity crisis as printed books become less relevant. It's an issue not solved by the injection of billions.

Carnegie's legacy, the example used in the article, doesn't translate to the present.

If Bezos wanted to democratize information in a comparable way, perhaps he could underwrite universal access to high-speed Internet. Many, many parts of the country still do not have reliable, high-speed, low-latency Internet connections.

jpao79 5 hours ago 0 replies      
Or maybe arm US public libraries with free-to-check-out Kindles that have no WiFi or LTE hardware and simply come cached with the latest videos from Khan Academy and a significant portion of the most-read parts of Wikipedia. The cache would be updated over the air weekly at the library.

If the Kindle ever was jailbroken, well then the kid or whoever just learned about jailbreaking/hacking. Without Wifi or LTE support, likely no one would really bother.

I find this one inspiring: https://www.ted.com/talks/curtis_wall_street_carroll_how_i_l...

oconnor663 7 hours ago 1 reply      
> they're the only noncommercial places other than city squares where people meet across genders and ages


MentallyRetired 5 hours ago 0 replies      
Or, you go earn billions, and then you can decide what to do with them.
pravinva 8 hours ago 1 reply      
Is non-fossil-fuel energy research too hard to be funded? Gates and Bezos etc. have funded just about a billion. Why not 20 billion?
jjtheblunt 7 hours ago 1 reply      
"Should" is a pretty insulting verb: who is to presume the wisdom to tell another what to do?
yohann305 7 hours ago 0 replies      
i love libraries but i never go because i'm not allowed to sip a coffee. i mean, come on, where's the fun factor! Wouldn't you rather take the risk of spilling drinks on books than prevent people from coming?
gondo 6 hours ago 0 replies      
everyone knows what to do with other people's money
denisehilton 7 hours ago 0 replies      
Or maybe donate some of it to the poor? Like Bill Gates?
kazishariar 3 hours ago 0 replies      
Now, that would solve that problem now wouldn't it.
Clubber 7 hours ago 0 replies      
We already have libraries, if you are going to presume to tell someone how to spend their money, at least pitch something we don't already have.
balls187 7 hours ago 1 reply      
Fix traffic and housing in Seattle.
addedlovely 7 hours ago 0 replies      
How about pay some taxes.
melling 8 hours ago 2 replies      
So far, you've concentrated on things that might benefit our distant successors ... space travel, cancer treatments, AI

I would hope we're going to make large strides in these in his lifetime. If we could effectively funnel more into R&D sooner, we'd all see the benefits sooner. Cancer(s), for example, might be cured in say 2060 with our current effort, but if we solved the problem by 2030, hundreds of millions would benefit.

MrZongle2 6 hours ago 0 replies      
I think public libraries would be a wonderful beneficiary of Jeff Bezos' fortune, but I would hope that in the (admittedly unlikely) event that it happens, the bulk of the donated funds are not thrown at shiny "library of the future" initiatives. Not after-school STEM programs, not summer Minecraft redstone programming camps, not 3D printer labs.

Just books, staff and facilities: the three things that libraries always need, won't become obsolete in a few years, and are equally available to all patrons in an area.

Yes, public libraries need to evolve to meet their community's needs as they change. But just as a new coat of paint or solar-powered lighting doesn't strengthen an aging bridge, focusing on the flair rather than the core of what makes a library a library would be foolhardy.

cko 7 hours ago 3 replies      
Whenever a topic like this gets posted, it feels like the majority of commenters feel 'entitled' to other people's money and think they know best how to spend it. Or the notion they have to 'give back.'

I'm not rich in the popular sense of the word (besides having the fortune of being American middle class), but I do have investments by virtue of almost never spending on consumer goods. And having no wife or kids. My coworkers, after years of seeing me drive the same beater, correctly assume I'm in better shape financially, and some have the audacity to jokingly ask me to put them in my will.

Now, I will not deny that I am an extremely fortunate person who is cognitively able, like Bezos or anyone well-connected with material wealth, but what's with the 'he should donate to this cause instead'?

It's his money. He could buy a fleet of yachts, set them on fire, and upload the video footage - why shouldn't he be allowed to do that? At what arbitrary level of wealth does 'his' money become everyone else's money?

the-dude 8 hours ago 1 reply      
HillaryBriss 6 hours ago 0 replies      
what a ludicrous idea. libraries lose thousands of books each year to theft and vandalism. what would happen to all those bundles of cash?

just keep the money in banks. that's what they do.

miguelrochefort 7 hours ago 0 replies      
Although I don't like the government taking my money to subsidize stuff, I'd rather see them make basic Internet "free" than build and maintain libraries.
tambourine_man 7 hours ago 1 reply      
I find these titles very off-putting.

It seems to imply that someone who wasn't competent enough to make billions of their own is somehow more apt to know how to spend them than the one who actually did.

sddfd 8 hours ago 4 replies      
Why would anyone think libraries are important in 2017?

If things keep getting digitized at the current speed, all the knowledge of the world will be accessible online in our lifetime.

Unless you believe that a large percentage of citizens will not be able to afford a device for accessing the internet, libraries are a waste of money.

Oh and since librarians were mentioned, if AI keeps advancing, we will be able to have a conversation with a search engine within 30 years. So who needs a librarian?

tzs 7 hours ago 1 reply      
I'd like to see some billionaire put a lot of money into abortions. The political fight over abortion keeps messing up other things, and a billionaire could fix that.

For example, I don't think many on the right disagree that funding prenatal care is a good thing--but some major prenatal care providers, such as Planned Parenthood, also provide abortion services and so some politicians want to cut all their funding to make sure none of the Federal money goes to abortions. A whole bunch of women's health services get cut in order to make sure there is no chance the money ends up helping abortions.

So I'd like to see some billionaire, or some well-funded charity like the Gates Foundation, build several clinics that provide free abortions around the country in the states with the least restrictions on abortions, and fund a program that provides free travel to and from those clinics for women in the states with restrictive laws that have forced most such clinics to close.

Then organizations like Planned Parenthood can get completely out of the abortion business, taking away the major excuse that is used to cut their funding.

State legislators can stop spending a lot of time coming up with new ways to try to shut down abortion clinics in their states (because shutting down such clinics will no longer stop the abortions), and state attorneys general can stop wasting time defending those attempts in court, and maybe they will finally realize that the best way to reduce abortions is to make it so people don't need them in the first place. Maybe then states like Texas can drop their idiotic "abstinence only" approach to sex education (which has resulted in soaring teen pregnancy rates...) and switch to something actually effective.

Edit: any down voters care to name specific objections? That Planned Parenthood provides a lot of useful women's health services that are not related to abortion should not be controversial. That abortion is the main reason Congress wants to completely defund PP should also not be controversial. That "abstinence only" programs are a massive failure is pretty well documented. That many states keep passing abortion restrictions which then get challenged and often struck down as unconstitutional is not controversial.

       cached 16 August 2017 01:02:01 GMT