hacker news with inline top comments    .. more ..    21 Mar 2017 Best
They Used To Last 50 Years recraigslist.com
1297 points by teslacar  1 day ago   826 comments top 4
mjgoins 14 hours ago 11 replies      
When I went to the first semester of engineering school at the University of Wisconsin - Madison, there was a class called Engineering Professional Development 160: Introduction to Engineering.

One of the early class sessions was a lecture from a guy at a power tool company, making some kind of jigsaw or handheld cutting tool.

He explicitly told us, paraphrasing, "You don't want to make your product too reliable, because people won't buy more of them, and you won't make as much money." I was horrified; I literally looked around at my classmates to see how horrified they would be, and none of them were. I ended up majoring in philosophy.

Of course this site is dedicated to the idea he expressed, so I am not looking for agreement here, just relaying what I consider an interesting historical fact.

mrbill 1 day ago 7 replies      
When we moved into this house 12 years ago, I got the cheapest, simplest, most "mechanical" washer and dryer set that Lowe's offered (I think it's branded GE).

Haven't had a single problem other than the light inside the dryer eventually burned out when a housemate left the door open.

I know that when one of them eventually fails, it will (hopefully) be a cheap and simple mechanical fix.

I have no desire to own a "major appliance" (W/D, dishwasher, fridge) that has LCD screens, Internet connectivity, or any of those features that you don't really need and are just another point of failure. I manage computers all day at work, I don't want to come home and have to apply a firmware update to my washing machine.

crazygringo 8 hours ago 2 replies      
But they're cheaper, by something like 75% (and even more when you take energy efficiency into account). [1]

For people who move every few years, whose requirements change (from solo to 4-person family), and so on, the current situation is pretty ideal.

Spend less money, change models more frequently to fit your changing needs, and often it's even cheaper to buy a new one than to get it fixed -- which for consumers is amazing! (Because repair costs certainly aren't getting cheaper.)

Of course, the negative externalities on the environment are pretty clear and potentially horrific. As well as what it means for cultural values where more and more things are disposable.

But that's the answer, that's why they don't last 50 years -- because consumers actually prefer something that lasts only a few years at 25% of the price.

[1] http://www.aei.org/publication/the-good-old-days-are-now-tod...

prodmerc 1 day ago 15 replies      
I've worked on hundreds of domestic appliances.

Newer ones from any manufacturer are indeed failing more often, and are designed worse.

The only explanation is that this is on purpose - just like cars or laptops or smartphones, they are designed to fail faster so you buy new ones. Planned obsolescence, plain and simple.

The best appliances today, by the way, are made by Bosch/Siemens and Miele. None of the other manufacturers come close, period.

Interestingly, the high-end machines from Bosch/Siemens made in Germany are higher quality than the ones made in Poland, China, Spain or Turkey.

Same design, but it seems they use lower-quality electronics and metals, as the most common failures are in the motors, control boards and bearings.

Immersive Linear Algebra textbook with fully interactive figures (2015) immersivemath.com
986 points by sebg  1 day ago   73 comments top 32
eellpp 1 day ago 3 replies      
Last year, I saw the videos by 3Blue1Brown and, inspired by them, went on to read some of the standard textbooks, like Linear Algebra and Its Applications by Strang. I had seen the Strang videos earlier but somehow did not follow them through. This time, however, my perspective had changed: I was approaching the subject through the lens of intuition and simplicity. Wherever I found something challenging, I waited for the next day to re-do it (because I knew it should not be that complex to understand, or my understanding was wrong). And to my surprise, again and again, the difficulty was in my rigidness of understanding. The next day, or even later during the day, when my mind was fresh again, I could reason through the concept and get the intuition behind it.

Since then I have seen the Strang videos again and again, beginning to end, and read the book chapter by chapter and exercise by exercise. What a delight it has been. Then I jumped into Joe Blitzstein's probability lectures. What a blast! Is there a list of teachers like these, who under the pretext of teaching algebra/probability are in reality wiring up our thinking process in ways immaterial to the subject they are teaching? Many of us don't want the material to be too casual/layman (which hampers self-understanding, as it's not challenging anything within us) nor too rigid (where we cannot break through the challenge).

laretluval 1 day ago 2 replies      
If you're interested in more great animations for a visual understanding of linear algebra, I can't recommend 3Blue1Brown's "Essence of Linear Algebra" series on Youtube highly enough.


ktta 1 day ago 2 replies      
Although the focus of this book is the fully interactive figures, I'm more impressed that ALL of the book is available for free.

I'm constantly seeing more people come out with books for free, or with just a passive donation link. This makes me immensely happy, seeing how they're leveraging freely available resources (LaTeX, CC-BY-SA content, free software for graphics) to make more resources available for free.

Open software is one thing, but a book is much more permanent, in my opinion. A book like this will never go 'stale' the way software does. We only need a handful of good books for every topic, at which point we can basically stop buying books. For many topics, I hardly have to consider buying a book, since I can just use a free book offered by a professor, and watch course lectures.

What I want to say is this: Please write more for free. It doesn't matter if there is not much interest in what you are writing. It will help you too!

Dangeranger 1 day ago 1 reply      
This is very nice.

As a suggestion for improvement, consider allowing the learner to edit the formulas which represent the figures and have the figures update. Additionally editing the figures could update the formula in real time.

This sort of bi-directional instant feedback will aid the understanding and engagement of the learner better than figure manipulation alone.

yellow_postit 1 day ago 1 reply      
One thing missing in comparison to most other textbooks is a set of problems to test the reader's understanding. The rotating and interactive figures are a very nice touch though.

Along the lines of interactivity, maybe having a scratch area like a Jupyter notebook would be a potentially great addition so that I could try problems near the area where I'm reading.

blt 1 day ago 0 replies      
So glad someone made this. I'm a "true believer" in animations and interactivity for learning math. They can make some concepts instantly intuitive that take a while to grasp symbolically.
kgarten 1 day ago 3 replies      
on a tangent, I was just reading https://medium.com/@dominikus/the-end-of-interactive-visuali...

in short (oversimplified, my take): interactive visualizations are dead because nobody interacts with them.

wondering if this holds here as well.

Can't find the reference anymore, but there were also papers in the educational sciences finding that interactive books usually don't increase kids' comprehension (they just play with them instead of deepening their understanding).

edit: sorry, didn't want to sound too critical. The work is awesome (upvote); I was just thinking out loud.

mathgenius 1 day ago 1 reply      
To me this book looks like a whole bunch of equations, with some fancy graphics sprinkled on top. And, far too many equations! Linear algebra is much more elegant (simple) than this. To pick one example, they define the inner product using the cosine of the "smallest angle between the two vectors." Sure if you want to calculate a number (an inner product in two dimensions), and you happened to know the angle in question, this might be helpful. But otherwise it completely obscures everything else about the inner product. How does an interactive graphic help you understand wtf is a cosine doing in this equation? What is a cosine anyway? Where is the graphic for that?

This just seems too backward and over-done to me. But go ahead and test it on some newbies, maybe I'm totally wrong here.
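The two formulas at issue here, the component definition and the cosine definition of the inner product, do agree numerically; a minimal sketch, assuming 2D vectors and numpy (the example vectors are arbitrary):

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([2.0, 1.0])

# Component definition: sum of elementwise products
dot_components = float(np.sum(a * b))

# Geometric definition: |a| |b| cos(theta), with theta measured
# independently as the difference of the vectors' polar angles
theta = abs(np.arctan2(b[1], b[0]) - np.arctan2(a[1], a[0]))
dot_geometric = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta)

print(dot_components, dot_geometric)  # both 10.0 (up to floating point)
```

The cosine version only "calculates a number" once you already know the angle, which is the commenter's complaint; the component formula needs no trigonometry at all.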

sp4ke 17 hours ago 0 replies      
You might be interested by this curated list of explorable explanations [1]. It has a good section on math.

[1] https://github.com/sp4ke/awesome-explorables

ydmitry 1 day ago 0 replies      
Try using web workers to avoid blocking the UI while processing: http://immersivemath.com/ila/ch05_gausselim/ch05.html
mkl 1 day ago 1 reply      
Does anyone know more about how the figures were developed?

I found https://www.lth.se/fileadmin/lth/genombrottet/LUkonf2015/41_... which says "The figures were programmed using JavaScript using a graphics engine that we have developed for the interactive illustrations", but that's all the detail it gives.

msaharia 9 hours ago 0 replies      
Q: How would one go about creating such animations? What's the best workflow? I found the figures slow to load.

Great resource, though!

BooglyWoo 1 day ago 0 replies      
Mandelbrot cites his ability to "think in pictures" as fundamental to his process and insights.

He offers some interesting reflections and anecdotes on this subject in this interview describing his classes préparatoires aux grandes écoles. Apparently his teacher considered him a total wildcard who would either flunk the exams or pass with flying colours, because of his habit of approaching everything through geometric intuition rather than symbolic manipulation.


jarek83 17 hours ago 0 replies      
Great content; it makes it much easier to get through the concepts. But like most math learning resources, it lacks examples of practical application. "The law of cosines is a very useful formula to know" is fine, but why? Most learners can't imagine where they can apply what their teachers say, which reduces it to learning useless-for-them terms, needed just to pass exams. Some basic examples like these would make it much more reasonable to learn: http://study.com/academy/lesson/solving-real-world-problems-...
blinry 1 day ago 0 replies      
If you like this, you might be interested in /r/explorables, a subreddit filled with interactive stuff:


techman9 1 day ago 0 replies      
I feel as if I'm looking at the future of textbooks. This is absolutely incredible.
ErikBjare 1 day ago 1 reply      
I'm a MSc in CS student at the university where the authors (whom I've met a few times) teach.

If you have any questions you think I'd know the answer to, ask away!

darkhorn 1 day ago 0 replies      
The order of the words and suffixes is like in Turkish. One of my English teachers, who had lived in Japan for a time (and she was a Turk), told us that it is very easy to learn Japanese because it follows the same word order, and vice versa. On the other hand, English is very hard because there is zero connection.
markbao 1 day ago 1 reply      
Amazing resource, but is it still being updated? Last update seems to be from back in July last year.
mrkgnao 1 day ago 2 replies      
Excellent work! I've been trying to get my mother (she's a physics teacher) to learn linear algebra properly for a long time. Artin didn't work (ha), Khan Academy moved too slow/bored her, but she seems interested in this.

It's important to appreciate how useful it might be to make math "tangible". Sure, someone who can define a manifold by saying "oh, put charts on it, locally diffeo blah blah" probably has a good set of mental models that help them find analogies and even "tangibilize" (word?) new ideas. Once you learn the way abstraction works, broadly speaking, you can take the training wheels off: but lots of people never get past that stage. On one hand, I see a lot of people on HN talk about how the complicated notation of academic math/CS keeps people out (and there is an understandable amount of resentment at people keeping "outsiders" out with this), and on the other hand, I sort of reflexively bristle (it's gotten less pronounced now) at people integrating the notion of an inner product into a vector space, because it is important not to stumble later when you find out your basic intuition for something is broken[3]. (Of course, intuition can be incrementally "patched": Terence Tao's essay[3] talks about this, from the perspective of someone who is a brilliant educator in addition to being one of the most versatile mathematicians around.)

Maybe presentations of basic mathematics that are

- simple

- rigorous

- free of half-truths

can be made accessible by using such visualizations and interactive techniques to decrease the perceived unfamiliarity of the ideas? I don't think there are many[2] treatments of mathematical topics that satisfy these criteria and yet manage to be approachable: one either skimps on a clean presentation (Khan Academy), or assumes a lot of mathematical maturity (shoutout to Aluffi!) from the reader. "Manipulable resources" might help fill this gap. It's an exciting time!


In the section where they give examples of matrix inverses, to give people a sense of how important multiplication order is, they give an example of RHR'H' (using a prime for inverse, R for a rotation matrix, and H for a shear matrix). One of the most beautiful illustrations[1] in the book follows, with the four corners of a square moving independently in circles, and then the book states that

 "It is quite close, but it is not at all useful."
While I understand the need to clarify the importance of multiplying matrices in the correct order, maybe a short aside on the unreasonable (practical!) effectiveness of commutators[0] would be useful?

[0]: https://en.wikipedia.org/wiki/Commutator

[1]: http://immersivemath.com/ila/ch06_matrices/ch06.html ("Example 6.12: Matrix Product Inverse au Faux")

[2]: Visual Complex Analysis is brilliant, though.

[3]: https://terrytao.wordpress.com/career-advice/there%E2%80%99s...
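The "quite close, but not at all useful" behavior of RHR'H' can be played with numerically; a minimal sketch, assuming a small rotation and a small shear (the angle and shear factor are made-up values) and numpy:

```python
import numpy as np

theta, s = 0.1, 0.1  # small rotation angle (radians) and shear factor

R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation matrix
H = np.array([[1.0, s],
              [0.0, 1.0]])                       # shear matrix

# The product from the book's example: R H R^-1 H^-1 (the commutator)
C = R @ H @ np.linalg.inv(R) @ np.linalg.inv(H)

deviation = np.linalg.norm(C - np.eye(2))
print(deviation)  # small (on the order of theta*s), but not exactly zero
```

For small transformations the deviation from the identity shrinks like the product of the two "sizes", which is exactly why commutators are practically interesting.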

tomrod 1 day ago 0 replies      
This is fantastic, and where I have expected e-textbooks to go for years. Kudos!!!
boramalper 1 day ago 0 replies      
Reminded me of Explorable Explanations: http://explorableexplanations.com
machiaweliczny 1 day ago 0 replies      
This looks great! Could some of you share other nice books and video series covering core CS math basics?
bobajeff 1 day ago 0 replies      
I was just wondering recently why something like this didn't exist when I was trying to learn about matrices via Khan Academy.
kowdermeister 1 day ago 1 reply      
Jaw dropped. This is exactly what I needed to get some progress with my shader learning :)
aarongeisler 1 day ago 0 replies      
I really like this. This would've been handy as a student. I definitely prefer this to a textbook.
NumberCruncher 1 day ago 0 replies      
I wish we had resources like this when I was at university.
Demcox 1 day ago 0 replies      
This will come in handy for my LinAlg exam :3
freakynit 1 day ago 0 replies      
Wow... I so, so, so much wanted a book just like this... Thanks!
digitalshankar 1 day ago 1 reply      
Can anyone hack this and make a PDF book?
aligajani 1 day ago 1 reply      
Wow, this is great.
Distill: a modern machine learning journal distill.pub
659 points by jasikpark  9 hours ago   72 comments top 23
j2kun 7 hours ago 8 replies      
I sure hope this catches on, but we should all be aware of the hurdles:

- Little incentive for researchers to do this beyond their own good will.

- Most ML researchers are bad writers, and it's unlikely that the editing team will do the work needed (which is often a larger reorganization of a paper and ideas) to improve clarity.

- Producing great writing and clear, interactive figures, and managing an ongoing github repo require nontrivial amounts of extra time, and researchers already have strained time budgets.

- It requires you to learn git, front-end web design, and random JavaScript libraries (I for one think d3 is a nuisance), exacerbating the time sunk on tangents to research.

Maybe you could convince researchers to contribute with prizes aligned with their university's goals. Just spitballing here, but maybe for each "top paper" award, get a team together to further clarify the ideas for a public audience, collaborate with the university, their department, and some pop-science writers, and get some serious publicity beyond academic circles. If that doesn't convince a university administration that the work is worth the lower publication count, what will?

In the worst case it'll be the miserable graduate students' jobs to implement all these publication efforts, and they won't be able to spend time learning how to do research.

choxi 7 hours ago 0 replies      
I've been trying to read more primary source information, sort of as my own way of combatting "fake news" but before that term was coined. There's a learning curve to it, but I've found that reading S1 filings and Quarterly Earnings Reports can be more enlightening than reading a news article on any given company. Likewise, reading research papers on biology and deep learning is significantly more valuable than reading articles or educational content on those topics.

As you'd imagine though, it's really hard. Reading a two page research paper is a very different experience from reading a NYTimes or WSJ article. The information density is enormous, the vocabulary is very domain specific, and it can take days or weeks of re-reading and looking up terms to finally understand a paper.

I'm really excited about Distill, there's a lot of value in making research papers more accessible and interesting. I've noticed that the ML/AI field has been very pioneering about research publication process, some papers are now published with source code on GitHub and the authors answering questions on r/machinelearning. This seems like a really great next step, I hope other fields of science will break away from traditional journals and do the same.

TuringNYC 6 hours ago 1 reply      
I don't want to undermine visualizations, they are awesome, but one of the big problems I see with ML research is the lack of re-produceability. I know that Google, Facebook and some others already share associated source repos, but it should almost be mandatory when working with public benchmark datasets. Source + Docker Images would be even better.

I worked in clinical research in a past life and studies would be highly discounted if they couldn't be reproduced. A highly detailed methods section was key. Many ML papers I see tend to have incredibly formalized LaTeX+Greek obsessed methods section, but far short of anything to allow reproduction. Some ML papers, i swear must have run their parameter searches a 1000 times to overfit and magically achieve 99% AUC.

Worse, I actually have tons of spare GPU farm capacity i'd love to devote to re-producing research, tweaking, trying it on adjacent datasets, etc. But the effort to re-produce is too high for most papers.

It is also disappointing to see various input datasets strewn about individuals' personal homepages, and sometimes end up broken. Sometimes the "original" dataset is in a pickled form after having already gone through multiple upstream transformations. I hope Distill can instill some good best practices to the community.

minimaxir 7 hours ago 1 reply      
The announcements and About page indicate an emphasis on visuals and presentation, which I appreciate. But when I think of "modern machine learning," I think of open source and reproducibility (e.g. Jupyter notebooks).

Will the papers published on Distill maintain transparency of the statistical process?

I see in the submission notes that articles are required to be a public GitHub repo, which is a positive indicator. Although the actual code itself does not seem to be a requirement.

Xeoncross 8 hours ago 1 reply      
As a developer with a weaker background in mathematics, I face a language barrier with many modern algorithms. After lots of research I can understand and explain them in code, but I have no idea what your artistic-looking MathML means.

Visualizations or algorithms described using code are much, much easier for me to understand and serve as a great starting point for unpacking the math explanations.
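As an illustration of the point, a formula like y_i = Σ_j A_ij x_j is arguably easier to unpack once it's written as plain loops; a minimal sketch in Python:

```python
def matvec(A, x):
    """Matrix-vector product: y_i = sum_j A[i][j] * x[j], as plain loops."""
    y = [0.0] * len(A)
    for i, row in enumerate(A):
        for j, a_ij in enumerate(row):
            y[i] += a_ij * x[j]
    return y

print(matvec([[1, 2], [3, 4]], [5, 6]))  # [17.0, 39.0]
```

The summation symbol becomes an inner loop and the free index an outer loop, which is the kind of mechanical translation code-first readers tend to find clearer than the notation.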

cing 7 hours ago 2 replies      
Is there any concern about a web-native journal being less "future-proof"? I've come across quite a few interactive learning demonstrations in Flash/Java that no longer work.
dang 7 hours ago 0 replies      
YC Research's (and longtime HNer!) michael_nielsen wrote an announcement here: http://blog.ycombinator.com/distill-an-interactive-visual-jo.... Hopefully he'll participate in the discussion too.
blinry 6 hours ago 0 replies      
Shameless self-plug: If you like interactive explanations, check out http://explorableexplanations.com/ and the explorables subreddit: https://www.reddit.com/r/explorables/
rememberlenny 8 hours ago 1 reply      
I wish there was a way to subscribe to a weekly email related to this.
EternalData 4 hours ago 0 replies      
Looks very good (especially the team behind it!), but I wonder if there's a discrete step beyond data visualizations and clear writing needed to make machine learning materials accessible to the general public. This will certainly be a more interactive experience, but it seems to cater to those who are "in the know" and just want a bit more interactivity/clarity. It'd be nice to discuss the format changes, or the "TL;DR bot" of machine learning, that would make machine learning research truly accessible to the general public.
aabajian 4 hours ago 0 replies      
I already have a nomination. The guy who wrote this blog post:


It's the only way I could get a working model of Caffe while understanding the data preparation steps. I've already retrofitted it to classify tumors.

chairmanwow 5 hours ago 0 replies      
I feel like science publication in general could benefit from disruption of the publishing model. I'm not sure that the toolkit Distill has provided is quite enough to totally change the paradigm, and it is currently restricted to only one field.

I like the idea of research being approachable for the non-scientist, and the more important question of whether there is a more efficient form (in terms of communicating new science between scientists) for research papers to take.

Is there any relevant work along this vector of thought that I should check out? Because I would really love to do some work on this.

fnl 6 hours ago 0 replies      
How does this provide IF ratings? Probably irrelevant for industry, but publishing in academia is all about IF, no matter how bad and corrupt one might think it is.

And what about long-term stability/presence? Most top journals and their publishing houses (NPG, Elsevier, Springer) are likely to hang around for another decade (or two...), while I don't feel so sure about that for a product like GitHub. Maybe Distill is/will be officially backed (financially) by the industry names supporting it?

That being said, I'd love to see this succeed, but there seems to be much to do to get this really "off the ground" beyond being a (much?!) nicer GitXiv.

mysore 2 hours ago 0 replies      
Wow this comes with great timing!

I am a UI-developer who has been wanting to learn ML forever. I started working on

1. fast.ai
2. Think Bayes
3. UW Data Science at Scale on Coursera
4. Udacity self-driving car nanodegree

I'm going to write some articles about what I learn and hopefully move into the ML field as a data engineer in 6 months. I figure I got into my current job with a visual portfolio of nicely designed CSS/JS demos; maybe the same thing will work for AI.

Old_Thrashbarg 7 hours ago 2 replies      
I don't see it written explicitly; can anyone confirm that this journal is fully open-access?
taliesinb 5 hours ago 1 reply      
Great stuff! I'm a fan of what's gone up on distill so far. Question for colah and co if they're still around: When does the first issue of the journal come out (edit: looks like individual articles just get published when they get published, n/m). Also, that "before/after" visualization of the gradient descent convergence is intriguing -- where's it from?
blunte 5 hours ago 0 replies      
I don't know jack about machine learning, but these illustrations are gorgeous - simple, elegant, and aesthetically very pleasing.
wodenokoto 5 hours ago 0 replies      
Looking at the how-to section[1] for creating Distill articles, I couldn't find how to write math, or any notes on how best to reference sections of the document.

Other than that, this looks much, much easier to write than LaTeX.

[1] http://distill.pub/guide/

JorgeGT 6 hours ago 2 replies      
You should definitely assign a DOI to each article.
transcranial 6 hours ago 0 replies      
This is really exciting! Chris et al: have you guys seen Keras.js (https://github.com/transcranial/keras-js)? It could probably be useful for certain interactive visualizations or papers.
good_vibes 4 hours ago 0 replies      
I will definitely submit my first paper to Distill. It draws upon a few different fields but the foundation is definitely machine learning.

What a time to be alive!

The last patent on AC-3 (Dolby Digital) expires at midnight ac3freedomday.org
566 points by robbiet480  1 day ago   214 comments top 16
coin 20 hours ago 7 replies      
We in North America have been paying the Dolby AC-3 tax for the last 20 years. Dolby successfully weaseled AC-3 as the multi-channel audio encoding standard for MPEG-2 DVDs. This is despite the fact that MPEG-2 already has a multi-channel audio encoding (AAC). DVDs in Europe utilize AAC audio.

Thus every DVD and DVD player sold in N. America has to license AC-3 from Dolby.

The best way to steal from people is to do it without their knowledge.

Keverw 1 day ago 13 replies      
20 years just seems way too long for a patent, especially as fast as the tech industry moves nowadays. Even 3 or 5 years would be better if we're going to keep having government-granted monopolies.
Animats 1 day ago 3 replies      
The MPEG-2 patents also ran out recently. Even MPEG-4, as used online, may be out of patent. The newer patents in the MPEG-LA portfolio for MPEG-4 are mostly for things nobody uses online, such as interlace and 5-channel audio. It's about time for someone to take a hard look at the remaining MPEG-LA patents.
emptybits 22 hours ago 1 reply      
> "You have probably paid many AC-3 license fees over the years. AC-3 license fees are part of the cost of TVs, game consoles, and other AV equipment sold in the last 25 years."

I'm curious ... what's the ballpark end-user license cost the public has been paying for the ability to decode AC-3 on their TVs and receivers and consoles and other devices?

CalChris 21 hours ago 4 replies      
When Japan opened up to the West, Korekiyo Takahashi visited the US in 1886. "We said, 'What is it that makes the United States such a great nation?' And we investigated and we found that it was patents, and we will have patents."


You need to reward innovators or you won't have innovation.

LeoPanthera 23 hours ago 1 reply      
This makes Laserdisc a completely open format!
johnhattan 1 day ago 3 replies      
What's the practical upshot of this? Are there some apps that are waiting for this bit to expire so they can finally make things work the way they should?

For example, Audacity for Windows doesn't install with MP3 support; you have to download a plugin from Germany before Audacity will read/write MP3. Which I presume is because of patents.

ksec 21 hours ago 1 reply      
This makes what is possibly one of the most widely used and hardware-compatible codecs patent-free. (All of MP3's patents will expire in Dec 2017.)

And AC-3 offers a lossless mode, which means you now have a free, lossless codec that can be played on a very wide range of media players.

My previous experience (probably more than 10 years ago already) was that AC-3 sounded a lot better than MP3 at high bitrates / 256 kbps.

Not sure how it fares with AAC.

shmerl 23 hours ago 2 replies      
Why would they still use an old codec when there are modern and free ones like Opus?
magila 1 day ago 1 reply      
Is there a similar status page somewhere for DTS/DCA? I think that's the last of the first generation multichannel audio codecs which is still both widely used and patent encumbered.
bubblethink 1 day ago 1 reply      
Good news for Fedora? With native MP3 and AC-3 decoding in 2017, who knows what this new world holds in store for us. /s
kozak 11 hours ago 0 replies      
I have an AVCHD camcorder that is excellent in all respects, except for the fact that it uses AC-3 at something like 256 kbps for its stereo sound: compression artifacts are quite noticeable.
jsnell 22 hours ago 1 reply      
I happened to open this link with about 5 minutes left on the countdown, so I totally watched it tick down all the way to 0. Unfortunately there were no fireworks at the end, just the clock starting to show how long the patents had been expired :)
tehabe 12 hours ago 0 replies      
I think the last MP3 patent will expire by the end of the year. I always thought AC-3 was younger than MP3, but they're almost the same age. MP3 is also an example of prolonged patent validity: it was approved in 1991 and published in 1993, but the last patent expires in 2017.
hoschicz 19 hours ago 0 replies      
Does that mean that VLC on iOS will be able to play AC-3? Right now, one can't play such a movie on an iOS device; you have to transcode it first.
jlebrech 12 hours ago 0 replies      
Does audio still have that much innovation? Isn't it now a good idea for Dolby to look elsewhere for patents?
Scientists sent a rocket to Mars for less than it cost to make The Martian backchannel.com
548 points by leslielemon  3 days ago   155 comments top 22
shas3 3 days ago 4 replies      
If you are interested in Indian women scientists and engineers, there is a nice compilation (a bit tiresome to read, but worth it, IMO) of biographical essays called 'Lilavati's Daughters' https://ia800402.us.archive.org/33/items/LilavatisDaughters-...

My perception, growing up plugged in to the Indian science and engineering community, is this: Indian government organizations (essentially the major employers until the 1990s) were more progressive about women in the workplace than you'd think based on population statistics of women at work, etc. The reason for this imbalance is inequality: millions of women suffering terrible inequality in the lower socio-economic and lower-caste segments of the population offset gains at the higher end.

return0 3 days ago 3 replies      
I'm glad they chose a title that highlights the feat, instead of the fact that they are women. And it's weird that most comments here focus away from the impressive feat.
ChuckMcM 3 days ago 3 replies      
So here is the punch line: can they make a movie of the incredible accomplishment of these women and have its proceeds cover the cost of developing the mission? :-) That would be pretty profound serendipity.

One of the statements in the article that I really liked was this one: "I would stare at the dark and wonder what was beyond it." That is the kind of curiosity you want to nurture in your children.

throwaway6497 3 days ago 0 replies      
I hope this picture becomes a symbol of inspiration to all girls worldwide that they are equals when it comes to excelling at STEM. We all need to actively promote the right kind of imagery and narrative around us to build a world where women are empowered to go after STEM careers, and never ever feel that they won't be able to do better than men.
wooshy 3 days ago 2 replies      
"And they happen to be women." I don't know about you but I count three men in that picture. Not even counting the rest of the people that undoubtedly worked on this project who were a mix of men and women.
nileshtrivedi 3 days ago 1 reply      
Apparently, the methane sensor was defective and the data it sent was unusable: http://www.space.com/34943-india-mars-orbiter-mission-methan...
webaholic 3 days ago 6 replies      
Well, 'The Martian' made back the money spent on it and more. It is up for debate how much the rocket sent to Mars has returned on the investment. By this I mean it's an apples-to-oranges comparison.
JamilD 3 days ago 4 replies      
Even taking into consideration cost of labour, it seems like US manufacturing and infrastructure costs are excessively bloated. Is there any way to reduce it while still maintaining an acceptable level of risk?
valuearb 3 days ago 0 replies      
It looks like their payload was 2,900 lbs to orbit, and then they used engines on the orbiter to spiral up and enter mars injection orbit. Total scientific payload was only 30 lbs.

It's unclear from Wikipedia whether the booster put it into LEO or closer to GTO, but it appears it has the capability to put 3k lbs near GTO.

For contrast, a Falcon 9 can put 18k lbs into GTO for $62M, so maybe a 1,800 lb scientific payload to Mars. But that doesn't include payload and payload-development costs, so probably well over $100M. And building a probe in 18 months has to be really hard.

It will be interesting, if Falcon 9 prices drop because of re-usability (say to $30M), whether that will open up the opportunity to do lots of custom probes to the inner planets and the asteroid belt.

thebiglebrewski 3 days ago 1 reply      
Haha wow, I can't believe the negativity in these comments. Congratulations to this team for a big achievement! I'm in support of anything that gets us closer to a multiplanetary society. Cheaper space technology is one of those things, and it's even better that a lot of women were involved on the job; that's inspiring. Kudos to this team!
danm07 3 days ago 1 reply      
That's not surprising. Hollywood is egregiously wasteful in its spending. If it takes $100M to film a movie, that equates to ~100 Series A rounds. With a ~80% write-off rate, that's ~20 successful companies for roughly the same money (except the money won't be sitting at the bottom of your DVD rack a year later).
spenrose 3 days ago 0 replies      
The phrase "cost to make" is a problem here. Most of the millions in movies are spent to claim a share of a fixed-size amount of our collective attention. You could "make" an identical video artifact for a small fraction of The Martian's budget. It just wouldn't get seen.
spodek 3 days ago 1 reply      
Also, the actual Titanic sank in less time than the Titanic in the movie.
nawitus 3 days ago 0 replies      
"These Scientists Sent a Rocket to Mars"

These and other scientists who worked on the mission to be more precise.

ram_rar 3 days ago 1 reply      
When comparing frugal engineering vs. art, cost does not seem like a valid factor to compare. Also, wages in India are paltry compared to the USA.
vasira 3 days ago 2 replies      
But the Martian movie made back more than the money spent on it.
matthewhall 3 days ago 1 reply      
TheAdamAndChe 3 days ago 3 replies      
Wow, literal quotas? That's just discriminatory.
babyrainbow 3 days ago 4 replies      
bsder 3 days ago 1 reply      
It's a rocket to Mars. That's always cool.

However, there is a big difference in sending a rocket to Mars with 2015 technology vs. 1960's technology in terms of cost, reliability, materials, etc.

jordache 3 days ago 0 replies      
What a horrible title.

Into the Wild was a movie about a homeless traveler. It cost $15 million to make.

nyrulez 3 days ago 1 reply      
No offense to the Martian mission, but I probably got better enjoyment and insight out of the movie so far than out of the actual mission. Not that that is or should be the only metric, but it is the metric that matters to me today.
YC AI ycombinator.com
629 points by craigcannon  1 day ago   213 comments top 31
jph00 1 day ago 18 replies      
If you're the kind of person that's interested in taking up this challenge, but you currently have the coding skills without the deep learning skills, we built something that can equip you with most of the current best practices in deep learning in <2 months: http://course.fast.ai/ . It doesn't assume anything beyond high school math, but it doesn't dumb anything down (key mathematical tools are introduced when required, using a "code first" approach).

We don't charge anything for the course and there are no ads - it's a key part of our mission so we give it to everyone for no charge: http://www.fast.ai/about/

And yes, it does work. We have graduates who are now in the last round of applications for the Google Brain Residency, who are moving into deep learning PhDs, who have got jobs as deep learning practitioners in the bay area, etc: http://course.fast.ai/testimonials.html . Any time you get stuck, there's an extremely active community forum with lots of folks who will do their best to help you out: http://forums.fast.ai/ .

(Sorry for the blatantly self-promotional post, but if you're reading this thread you're probably exactly the kind of person we're trying to help.)

kolbe 1 day ago 6 replies      
As someone who's capable of implementing and understanding many of the most fashionable tools in AI, I don't know what to do in the current economy. I think far too much attention is being paid to pie-in-the-sky research. Wealthy investors believe the tools they don't understand (and I do) can accomplish things they think are possible (and I don't), and the problem is that they want to pay people like me to chase their dream. And while I do love money, I also love the idea of living a meaningful and fulfilling existence by pursuing technologies that will advance mankind.

Can other people who've actually seen real promise in their AI research chime in and help convince me that we are actually on the precipice of something meaningful? Not just more classification problems and leveraging the use of more advanced hardware to do more complicated tasks.

roymurdock 1 day ago 5 replies      
> We want to level the playing field for startups to ensure that innovation doesn't get locked up in large companies like Google or Facebook.

AI and ML are exciting because they promise to help us evolve systems and machines quickly to perform more accurately.

This requires access to a constant flow of large, proprietary datasets.

Providing cheap access to datacenters and compute power is a great first step for leveling the playing field for startups.

I'll be interested to see how YC tackles the (IMO) more important problem of providing access to the data needed to train models.

I think IBM has taken an extremely wise first step by acquiring the data assets of The Weather Company. This will give its Watson IoT portfolio a leg up on other companies that need to rely on a patchwork of public and private data sources when factoring something as integral as the weather into algorithms and logic engines.

Perhaps YC can consider something similar, pooling investors and VCs together to acquire/partner with providers of essential data.

gabrielgoh 1 day ago 3 replies      
I've wanted to break into robotics from machine learning for a long time, but I haven't found a good entry point for the problem. It seems like a rather large vertical cliff I have no way of scaling.

One of the handholds I'd need is a physics engine which models a robotic arm down to the finest levels of motor control and feedback. I am not a mechanical engineer, and I do not know where to begin with a problem like that. I don't imagine this is hard to build on top of existing physics engines, but it wouldn't work out of the box. It requires some deep domain knowledge of things like friction, tensile strength, and so on. A system like this would spur progress in robotics immensely.

petra 1 day ago 8 replies      
>> RFS: Robot Factories.

Of all the places where AI could help, why focus on that field, the place where the most vulnerable employees are found: the hundreds of millions in China, Bangladesh, etc., who have little chance of having a meaningful social safety net?

>> job re-training, which will be a big part of the shift.

I don't believe that this is realistic, even in the West. Why? Because the internet, which is obsessed with this subject and doesn't lack imagination, can't even supply a decent list of what jobs the future will offer that could employ the masses whose jobs will be automated.

personjerry 1 day ago 2 replies      
How exactly does this "democratize" AI? Doesn't this only potentially prop up another AI company with the hope that YC will be backing it (and thus profit from its success)?
itchyjunk 1 day ago 3 replies      
Slightly off topic, but I had 2 questions.

The last part reminded me of Google training a bunch of robot arms [1]. I haven't seen much being done with it; does anyone know if anything is being done with the data?

We don't really know how much of each resource it takes to grow something. How much oxygen does one tomato plant take? I think growing in space would require investigating such questions in more detail. Ever since I came across ML and DL, I've wondered if there was a way to train a deep farmer: something that understands the relations between mineral nutrients and the fruiting body. Sometimes you want to grow produce that has less of certain things but more of others, e.g. low-calorie, high-volume foods, or high-protein food. An "AI" could figure out, for example, how to maximize vitamin C in a tomato for a population that needs more of it, versus making the tomato more water-rich for some other reason. AI is interesting.

[1] https://research.googleblog.com/2016/03/deep-learning-for-ro... (Edit: forgot link)

karmicthreat 23 hours ago 2 replies      
I think this is one of the few RFS that I would drop everything for.

The factory environment has to be one of the most frustrating for a software developer. So much low-hanging fruit, yet you have to build far too much yourself if you don't follow the typical industrial idioms (throw ladder logic at it, keep at it till it barely works, then run it till the wheels fall off).

Even something as simple as reporting on a $30K PLC-driven machine is painful. You can't justify buying a $10K HMI running Wonderware (itself a still-painful piece of software), so you just go without. In my particular case I developed a simple reporting system on an RPi.

These industrial systems are usually not capable of even SQL or MQTT. You have to strap on an extra piece of hardware and a ton more money for licensing.

Even deployment is painful. You can't just push new code because you have no way to mock your industrial systems. Even if you could, you would need to live-edit your code into the PLC or you would wipe its current state of recipes and other user data, because your PLC wasn't able to use standard SWE tools to get that data. God help you if you need to roll back.

So I am applying to this. Everything is broken. Where do you even start? Fix the platform, fix the robots, fix the job setup, fix the people.
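As a sketch of the kind of RPi-side sidecar reporting the parent describes: the table name, machine IDs, and schema below are all made up for illustration, and the actual polling of the PLC (Modbus, OPC-UA, a serial bridge) is left out. Nothing beyond the Python standard library is needed for the store itself.

```python
import sqlite3
import time

# Hypothetical schema for cycle-count readings polled from a PLC.
conn = sqlite3.connect(":memory:")  # use a file path on a real Pi
conn.execute("CREATE TABLE readings (ts REAL, machine TEXT, cycles INTEGER)")

def record(machine: str, cycles: int) -> None:
    # One row per poll; timestamps let you report rates and downtime later.
    conn.execute(
        "INSERT INTO readings VALUES (?, ?, ?)",
        (time.time(), machine, cycles),
    )
    conn.commit()

record("press_01", 1042)
record("press_01", 1043)

rows = conn.execute(
    "SELECT COUNT(*) FROM readings WHERE machine = 'press_01'"
).fetchone()[0]
assert rows == 2
```

A simple store like this is queryable with ordinary SQL, which is exactly what the closed industrial stacks make so hard.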

jbarham 1 day ago 2 replies      
I really hope the bit about "free machine-powered psychologists" is satire, but given YC's unironic techno-utopianism I fear that it is not.

Reading "Computer Power and Human Reason" by Joseph Weizenbaum (published over 40 years ago!) should remind people that attempting to build a "machine-powered psychologist", even if it's something that can be done, is not something that should be done.

emcq 1 day ago 0 replies      
Perhaps it's due to the article starting by discussing how AI might be overhyped, but I'm very much not blown away by this post.

Reinforcement learning for self-improving robots is one of their called-out areas? I've never found companies focused on tech or research problems first to be all that successful. As a research project it's not very interesting or socially beneficial compared to self-driving cars or robotics applications in medicine.

It all leaves me wondering what YC's strategy is here. Maybe it's easier to establish a fund in AI, get smart people to apply, or that their expected future returns are higher?

makmanalp 1 day ago 0 replies      
> Some think the excitement around Artificial Intelligence is overhyped. They might be right. But if they're wrong, we're on the precipice of something really big.

I mean, even if it is overhyped, I think there's a lot to be excited about. Weak AI is still an amazing breakthrough for automation. The trick is to not try to do too much at a time. We do ourselves a disservice by not considering how amazingly efficient humans augmented by ML can be. The research for AI doing everything just isn't there yet, and that's OK.

patkai 12 hours ago 0 replies      
I'm not surprised that the general public is worried about AI, but I would expect many others to worry about data. Exaggerating a bit: mathematics and AI skills are something any talented person can acquire individually, but gathering useful personal data on a large scale requires a huge infrastructure. So if we want to "democratize", then I wonder why not democratize access to data.
Entangled 9 hours ago 0 replies      
> If the experiment works out, we'll expand what we offer to include things like access to proprietary datasets and computing infrastructure.

Datasets are the most important piece in machine learning, and exactly what Google has been collecting for the past decades.

I wanted to start a project about dermatological images, but how would I get that data? Then I decided to start an agricultural project, but again, how do you get a million images to identify a thousand species? Birds? Legal documents? Human faces? Fashion? Speech? Translation? Everything needs huge datasets.

The tools are there, that's the easiest part.

throwawaysbdi 1 day ago 2 replies      
Ahhh this is the moment I've been waiting for. Hype has officially hit stratospheric proportions.

Time to add the words "deep" or "learn" to your startup name and reap in the dough!

urs2102 1 day ago 0 replies      
This seems neat.

Are there any other future verticals which you would consider domain specific perks for?

Also, what exactly constitutes an AI startup? If you utilize a ML library to handle a small feature of your product, are you an AI startup?

johnrob 1 day ago 0 replies      
At some point, making food/housing/healthcare cheaper seems like a more achievable goal than finding work for people (who are in theory competing with AI).
mackan_swe 19 hours ago 1 reply      
My goal is also to democratize AI, in particular AI research. I believe that every developer should have at their disposal the same kind of tooling and, even more importantly, the same ability to intersect their data with the world's data. Engineers at Facebook, Google, Microsoft and so on can test their models, or even enrich them, using the Facebook, Google or Bing datasets. Independent entrepreneurs cannot do the same thing with the same ease. If we want to reach general AI any time soon, indie entrepreneurs must be let in to play.

My strategy is to build a service, free for non-profits to use, that would solve the problem of "if I only had the same data Google engineers had, this product would be perfect". Here is how it would work.

1. Go to my webpage and register a site you want me to index for you. The site URL you enter may already have been registered by another user, but to be sure the data is in my index, register it again. I will now continue to index this site every 24 hours for as long as I live. You need higher frequency indexing? Sure, no problem. You will owe me for the additional cost.

2. Download a client of your choice from the website; we have them in C#, Java, Python, R, etc. The client will let you query your own private data as well as the data in the cloud (the data I'm now generating and refreshing every 24 hours). The query language will also let you join or intersect the two datasets. In fact, due to the nature of RPC, you can use your local data and all of the data I'm generating and refreshing as if it were your data.

3. In the end, I will be indexing such a large part of the internet that there will not be much use for Google anymore, or ads. That's the vision.

I'm not American and can't see how I'm a good fit for the YC program this summer. However, I will be needing funds for cloud machines pretty soon, and so far I've found no one at OpenAI to contact. Is there anyone from OpenAI reading this? This should be right up your alley. Care to speak?

amelius 14 hours ago 0 replies      
Perhaps it's an idea for YC to start a "job agency for AIs".

On the "demand" side, client-companies can offer problems to be solved by AI.

On the "offer" side, startups can provide algorithms solving specific problems.

YC can be a mediator, running the algorithms, and keeping the data of client-companies safe from anybody else (including the AI startups).

Here's an example of such an agency: http://www.aigency.co/about/

aabajian 1 day ago 0 replies      
I'm interested in this per my earlier discussion on machine learning in radiology (see: https://news.ycombinator.com/item?id=13571847). It's disappointing that you have to be in the Bay Area to participate. I'm just starting residency and don't have the time to drop everything and enroll in an incubator. I think I'm one of the few people with a master's degree in computer science and (soon) to have a medical degree. I can handle the technical and medical sides of a radiology informatics / machine learning business, but I'd need someone to manage the business, marketing and sales sides.
eddd 18 hours ago 0 replies      
I see ML and AI rising in the market quite fast. How does this work? How many ML engineers are out there? If, broadly speaking, we have a shortage of software engineers, what is the demand for ML engineers? It is not something you learn overnight as a dev or mathematician, so where are they coming from?
legel 1 day ago 0 replies      
The post is clear that A.I. startups in this vertical will be given special resources, but not clear on whether more startups will be selected specifically for this.
elmar 1 day ago 0 replies      
So how do we correctly "mention this post in your application"? Just insert the URL "https://blog.ycombinator.com/yc-ai/"?
samirparikh 20 hours ago 1 reply      
Seeing as Sam Altman has invested in an AI startup (vicarious.com) which is pursuing robotics applications, why would he/YC want to fund competitors through an RFS?
lowglow 1 day ago 0 replies      
I asked this last time but got no response.

1. Is there a firewall between the information companies applying give you and the rest of the OpenAI effort?

2. What's to prevent a partner from seeing a good thing and passing the info along to a potential competitor already funded inside the program?

Overall it seems that this may be used to give OpenAI a strategic competitive advantage by using ingress application data for market analysis/research/positioning/signaling/etc.

pron 1 day ago 0 replies      
> Some think the excitement around Artificial Intelligence is overhyped. They might be right. But if they're wrong, we're on the precipice of something really big. We can't afford to ignore what might be the biggest technological leap since the Internet.

1. We need to work on big things whether or not they're overhyped and whether or not we're on some precipice.

2. Those who think what marketers call AI is overhyped (myself among them) don't think that it isn't something really big. Even though we have made very little progress since the algorithms that power most modern machine learning were invented fifty years ago, there is no doubt that machine learning has become quite effective in practice in recent years due to advances in hardware and heuristics accumulated over the past decades. It is certainly big; we just don't think it has anything to do with intelligence.

3. Are there any machine learning experts who think we are on the precipice of artificial intelligence? If so, do they think we can overcome our lack of theory or do they think that a workable theory is imminent?

deepnotderp 1 day ago 0 replies      
Would YC be interested in funding a deep learning chip startup?
malux85 1 day ago 0 replies      
Is YC still interested in Solo Founders?
tylermenezes 1 day ago 1 reply      
I'm excited to see what new ways companies will find to call a bunch of if-statements "AI".
partycoder 1 day ago 0 replies      
If you find it hard to sell AI based solutions, just call it automation rather than AI. It is a term that people react to differently.

It's like the term "technology". Spoons, chairs, bricks are technology. The modern usage of the word technology is what people used to call "high technology".

EGreg 17 hours ago 0 replies      
"We think the increased efficiency from AI will net out positive for the world, but we're mindful of fears of job loss. As such we're also looking to fund companies focused on job re-training, which will be a big part of the shift."

I find a lot of wishful thinking, denial and cognitive dissonance in this sentiment, which is found everywhere.

"Computers will eliminate these jobs, but humans will always have more to do."


If computers can learn at an accelerated pace, what makes you think that by the time you learn that next thing, it won't already have been eliminated (or will be shortly afterwards) by a fleet of computers drawing on global knowledge? Do you really think that the Uber-driver-turned-newbie-architect is going to be in demand versus the existing architects with their new AI helpers?

It's not black and white, but the AVERAGE demand for human labor is going down: not because EVERY human job will be eliminated, but because automation allows FEWER people to do the job.

So wages drop. On average.

The only real solutions are either unconditional basic income, or single payer free healthcare / food / recreation.

itcrowd 1 day ago 4 replies      
At the risk of being downvoted to oblivion, and out of very well-meant interest: what if you replaced AI with Chinese traditional medicine in the post above? Why would AI be more viable than that?

(Note: I am very skeptical towards CTM and quackery. But also towards AI and ML in general. Any pointers would be great. Why is this the new industry to look out for, for example?)

The beginning of Git supporting other hash algorithms github.com
420 points by Velox  1 day ago   121 comments top 15
bk2204 1 day ago 5 replies      
I'm the person who's been working on this conversion for some time. This series of commits is actually the sixth, and there will be several more coming. (I just posted the seventh to the list, and I have two more mostly complete.)

The current transition plan is being discussed here: https://public-inbox.org/git/CA+dhYEViN4-boZLN+5QJyE7RtX+q6a...

lvh 1 day ago 0 replies      
From a cryptographer's perspective, everything around SHA-3 is a little weird. We ended up with something that's pretty slow even though we had faster things, for which general consensus was that they were just as strong. Similarly, consensus was that some SHA-3 candidates made it as far as they did because they are drastically different from previous designs. Picking a major standard takes a while, and immediately preceding it we saw scary advances in attacks on traditional Merkle-Damgard hashes like SHA-0, SHA-1. Not SHA-2, but it's pretty similar, so the parallels are obvious.

Now that we have SHA-3, we ended up with a gazillion Keccak variants and Keccak-likes. The authors of Keccak have suggested that Git may instead want to consider e.g. SHAKE128. [0]

[0]: https://public-inbox.org/git/91a34c5b-7844-3db2-cf29-411df5b...

It's a bit unfortunate that this is really a cryptographic choice, and it seems to mostly be made by non-cryptographers. Furthermore, the people making that choice seem to be deeply unhappy about having to make it.

This makes me unhappy, because I wish making cryptographic choices got much easier over time, not harder. While SHA-2 was the most recent SHA, picking the correct hash function was easy: SHA-2. Sure, people built broken constructions (like prefix-MAC or whatever) with SHA-2, but that was just SHA-2 being abused, not SHA-2 being weak.

A lot of those footguns are removed with SHA-3, so I guess safe crypto choices are getting easier to make. On the other hand, the "obvious" choice, being made by the aforementioned unhappy maintainers, is slow in a way that probably matters for some use cases. At the same time, not even the designers think it's an obvious choice, I think most cryptographers don't think it's the best tool we have, and we have a design that we're less sure how to parametrize. There are easy and safe ways to parametrize SHA-3 to e.g. fix flaws like Fossil's artifact confusion, but BLAKE2b's are faster and more obvious. And it's slow. Somehow, I can't be terribly pleased with that.
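For illustration of the parametrization difference being discussed, Python's hashlib exposes both the fixed-output SHA-2/SHA-3 functions and the SHAKE extendable-output functions. This is just a sketch of the API shapes, not a recommendation for Git:

```python
import hashlib

# Fixed-output SHA-2: the digest length is baked into the algorithm name.
sha256 = hashlib.sha256(b"commit data").hexdigest()

# SHA-3 fixed-output variant of the same length.
sha3 = hashlib.sha3_256(b"commit data").hexdigest()

# SHAKE128 is an extendable-output function (XOF): the caller picks the
# digest length, here 32 bytes to match SHA-256's output size.
shake = hashlib.shake_128(b"commit data").hexdigest(32)

# All three are 32-byte (64 hex character) digests, but unrelated values.
assert len(sha256) == len(sha3) == len(shake) == 64
```

With a XOF the output length is a parameter chosen per use, which is exactly the kind of choice the maintainers would rather not be making.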

lvh 1 day ago 1 reply      
FWIW, Fossil released a version with backwards compatibility, configurable graceful upgrades a week ago: https://www.fossil-scm.org/index.html/doc/trunk/www/changes....
pwdisswordfish 1 day ago 1 reply      
struct object_id was introduced in this commit, in 2015:


So this change doesn't do much for now. Good to see, though.

corbet 1 day ago 0 replies      
This work actually began in 2014... https://lwn.net/Articles/715716/
VMG 1 day ago 1 reply      
Is there some explainer on how the support will look like in the end? I'm curious to know how multiple hash algorithms will be supported in parallel.
benhoyt 1 day ago 0 replies      
I immediately looked at the length of this commit's hash to see if it was longer than 40 hex chars -- but no, it's just an SHA-1. It would have been cool if somehow the hash of this commit that added new hashes was a new hash.

Slightly similar: for a while I've wanted to recreate just enough of git's functionality to commit and push to GitHub. My guess is the commit part would be pretty trivial (as git's object and tree model is so simple) but the push/network/remote part a bunch harder.
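The commit half really is small: a git blob hash is just SHA-1 over a short header plus the file contents. A minimal sketch (the empty-blob hash asserted below is the well-known constant you'll see in any repository):

```python
import hashlib

def git_blob_hash(data: bytes) -> str:
    # git hashes a blob as: SHA-1("blob <size in bytes>" + NUL + content)
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# The empty blob has the same well-known hash in every git repository.
assert git_blob_hash(b"") == "e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"
```

The `"blob <size>\0"` header is also why a git object hash differs from a plain SHA-1 of the file's bytes.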

gkya 1 day ago 0 replies      
zoren 1 day ago 5 replies      
Someone please remind me why the hash is not a type definition so the representation would only have to be changed in one place.
ossmaster 1 day ago 1 reply      
This could be my ignorance of the project's details, but where are the tests for this?
btrask 1 day ago 0 replies      
This is the chance to get rid of the object prefixes (i.e. "blob" plus file length) that prevent the generated hashes from being compatible with hashes generated by other software.
kozak 1 day ago 2 replies      
Do they anticipate that one day we'll have to move from SHA-256 to something else again? It's only a matter of time; hash functions have lifecycles. The transition has to be done in a way that will also make the next transition more straightforward.
koolba 1 day ago 2 replies      
Since the majority of us are running x64 machines, will the hash be a truncated SHA-512/256 or will it be SHA-256? The former is significantly faster on x64 machines.
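Both options produce a 32-byte digest; the speed difference comes from SHA-512's 64-bit arithmetic. One caveat worth noting in a quick hashlib sketch: the standardized SHA-512/256 uses different initial values than SHA-512, so it is not the same as naively truncating a SHA-512 digest, which is all this illustration does:

```python
import hashlib

data = b"some tree or blob contents"

# Plain SHA-256: a 32-byte digest built from 32-bit operations.
d256 = hashlib.sha256(data).digest()

# SHA-512 truncated to 32 bytes. SHA-512's 64-bit arithmetic is what
# tends to make it faster per byte on x86-64 than SHA-256.
# NOTE: the standardized SHA-512/256 uses different initial values, so
# it is NOT equal to this naive truncation; where the underlying OpenSSL
# provides it, it is available as hashlib.new("sha512_256", data).
d512t = hashlib.sha512(data).digest()[:32]

assert len(d256) == len(d512t) == 32
assert d256 != d512t
```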
kazinator 1 day ago 1 reply      
What problem does this solve? Are collisions common?
irontoby 1 day ago 3 replies      
U.S. Web Design Standards 1.0 usa.gov
420 points by molecule  2 days ago   106 comments top 22
neves 2 days ago 6 replies      
Great! Even if you disagree, just the existence of a government standard will:

1) Create a minimal common language for all the sites that citizens are forced to use. Now we can all use these patterns in our own sites, knowing that more users will recognize them. Remember that software isn't intuitive, but familiar: http://www.asktog.com/papers/raskinintuit.html

2) Force software developers for public services to put up a minimal decent interface, instead of just a list of features.

The best page is their Design Principles: https://standards.usa.gov/design-principles/

algesten 2 days ago 2 replies      
I'm really surprised all the typography options require downloadable fonts (Source Sans Pro and/or Merriweather).

I'm on a 1mbit throttled connection right now, and it's really noticeable, even on these pages, the font loading takes a while and suddenly the whole page jumps around and re-renders.

Though apart from that It's a great guideline.

I'm really glad they put so many color choices in there and took care to show how to combine typography with color. Many guidelines I've seen have two accent colors, and when you come to implement the site, you straight away must deviate from the guideline and invent new things.

jimbauer_12 2 days ago 0 replies      
Been using this on a govt. site I've been redesigning for 2 years, and I could be doing so for another year or more.

We've gone from one govt. design standard to another, and I'm not sure if the site will ever get redesigned or finished, as it has to get signed off by a hundred govt. VP types. I'm burnt out by the bureaucracy; they need someone to get this thing approved instead of "what do you think, and what do you think, and oh, what do you think", each person having a different opinion and nothing getting done but another revision. What version are we on now? Oh, number 599.

Keeps me gainfully employed thankfully, but redesigning a website should not take 3 years or more.

dangero 2 days ago 1 reply      
This is great at this point because it's short and simple, but I could imagine this becoming terrifying in a few years if it does not remain in good hands. The natural thing to do is point out what could be added, and pretty soon it could become a massive burden to make sure you're compliant with the standards. Long term, I suspect this could increase the weight (cost and time of delivery) of government software projects. If only we could set some predefined limit on how long and complex it can become. My libertarian side is coming out here, but the government is really good at adding cool things, not so good at taking them away, scaling them back, or even maintaining them.
SippinLean 2 days ago 1 reply      
>18F specifically does not recommend using Bootstrap for production work because:

>It is difficult to adapt its opinionated styles to bespoke design work, and

They have a tool on their website to generate a theme (Bourbon and PureCSS do not), and Sass functions and mixins for customizing elements. Bourbon's and PureCSS's components are just as opinionated (and there are fewer of them).

>Its CSS style places semantic layout instructions directly in HTML classes.

Sure, but you can just use the Sass mixins instead, allowing you to use the grid system without adding a single class to HTML.

Seems they knew this about Bourbon:

>Bourbon is a Sass mixin library that has extensions for a robust semantic grid (Neat)

The same is true of Bootstrap, so they recommend against it?

It sounds like the author wasn't familiar with Bootstrap.

0xcde4c3db 1 day ago 0 replies      
I love this:

> The UI components are built on a solid HTML foundation, progressively enhanced to provide core experiences across browsers. All users will have access to the same critical information and experiences regardless of what browser they use, although those experiences will render better in newer browsers. If JavaScript fails, users will still get a robust HTML foundation.

I guess it remains to be seen just how robust this really is, but it's a fantastic goal to see explicitly embraced for modern websites. I skimmed some things in Lynx and w3m (elinks failed with an SSL error; rumor has it that it doesn't support SNI), and it honestly looks better than I remember a lot of sites looking in Lynx ~20 years ago, let alone the average modern site. Sure, part of that is imagemaps and frames going out of style, but modern sites haven't necessarily replaced them with more graceful constructs.

ourmandave 2 days ago 1 reply      
Their wonderful tag line at the bottom of the page footer:

We're from the government, and we're here to help.

sdevoid 2 days ago 2 replies      
> An official website of the United States government Here's how you know...

> This site is also protected by an SSL certificate that's been signed by the U.S. government...

Oh really?

DST Root CA X3- Let's Encrypt Authority X3 - standards.usa.gov

Kudos for using Let's Encrypt though!

hyperhopper 2 days ago 2 replies      
The main page developers will care about:


Some things are great, and show a pulse on the industry, but some aren't. For example, Bower is prohibited, but yarn isn't even mentioned or encouraged, much less required.

Though they do give devs the freedom to choose any framework, with a very nice pros/cons list of the popular ones.


sacheendra 2 days ago 0 replies      
The introduction to the colors section talks about communicating warmth and trustworthiness. How does a mostly blue, grey, and white page communicate those feelings? Personally, I have always associated blue and grey with cold and isolation.
nateabele 2 days ago 4 replies      
I had a manager once who had a prior career in government. One of their many cynical internal catchphrases was "the conveniences which you have requested are now mandatory".

Anyone with common sense and a knowledge of history knows exactly where this is going. As things continue to move online, internet access and associated technology standards begin to be declared a "public necessity" or some other nonsense along similar rhetorical lines.

You don't even need to look very far for examples. UK accessibility laws (not that they're an unqualified evil, simply a legislative 'gateway drug'), references to 'digital haves and have-nots' from a few US election cycles ago, etc.

I'd really hoped to be closer to retirement before this sort of thing started happening.

Edit for clarity: I'm not saying this is immediately going to turn into some draconian thing, but as a founding member of a standards body[0] with a narrowly-defined intent, I've seen how easy it is for something like this to become a de-facto industry standard that non-experts use to judge things against, even when it's not appropriate.

[0] http://www.php-fig.org/

dbg31415 2 days ago 0 replies      
This is really well done. Code is well laid out, well documented. Will make it easy to use this for training new college hires for sure. Didn't expect something like this out of the government. Nice to be surprised here.
burntrelish1273 1 day ago 0 replies      
This is great. Also worth noting in the UK, accessibility features for web pages are required by law (EQA, DDA). In the US, RAA 1998 is the similar legislation. Most govt agencies and private-sector businesses are required by law to provide accessible web pages, or face lawsuits and/or fines.

TL;DR: all sites of reasonable size (govt and otherwise) should follow WCAG 2.0.

WCAG: https://www.w3.org/TR/WCAG20/

UK - Equality Act 2010 (EQA): http://www.legislation.gov.uk/ukpga/2010/15/contents

UK - Disability Discrimination Act 1995 (DDA): http://www.legislation.gov.uk/ukpga/1995/50/contents

US - Rehabilitation Act Amendments of 1998 (RAA): https://www.congress.gov/congressional-report/105th-congress...

tingletech 2 days ago 0 replies      
it says on the site: "This site is also protected by an SSL (Secure Sockets Layer) certificate that's been signed by the U.S. government."

But when you look at the certificate, it is signed by "Issued by: Let's Encrypt Authority X3"

wyldfire 2 days ago 1 reply      
Is this supposed to be used by non-US gov sites? Can any old private site use it? It's public domain in the US, so assuming the answer to that question is yes, would it make sense for a private site to use it?
pfooti 2 days ago 1 reply      
From the installation: Note: Using npm to install the Standards will include jQuery version 2.2.0. Please make sure that you're not including any other version of jQuery on your page.

Isn't that pretty much precisely what peer dependencies are for?
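For context, this is the npm mechanism peer dependencies provide: instead of bundling its own jQuery, a package declares the version range it works with, and npm warns when the host project's copy conflicts. A hypothetical package.json fragment (the real package name and range may differ):

```json
{
  "name": "uswds",
  "version": "1.0.0",
  "peerDependencies": {
    "jquery": "~2.2.0"
  }
}
```

With this, the consuming project installs jQuery itself exactly once, rather than the Standards shipping a second copy onto the page.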

AstroJetson 2 days ago 1 reply      
I have an older Firefox that I'm running and the Web Design Standards page is all borked up. I'm wondering how many people that don't keep the upgrade stream going for their browsers will now have problems.
sacheendra 2 days ago 0 replies      
Also notice the very inclusive picture on their landing page template.
cjhanks 2 days ago 0 replies      
Why does this standard not fall under the purview of N.I.S.T.?
ourmandave 2 days ago 0 replies      
So the General Services Administration (GSA) forked Bootstrap. =)
7ewis 2 days ago 1 reply      
So these are the web design standards the government abides by?
ronilan 2 days ago 0 replies      
And so the Hero Unit becomes a US standard.
Uber president Jeff Jones is quitting recode.net
367 points by fluxic  1 day ago   222 comments top 20
loudin 1 day ago 5 replies      
Serious question - why hasn't the Uber board replaced Kalanick yet? While it is indisputable he successfully brought Uber to where it is now, it doesn't seem like he has the good judgement to be leading Uber at this stage of the company's lifecycle. Wouldn't the best thing for the company at this point be a complete overhaul of leadership?
fluxic 1 day ago 2 replies      
Update 1:

>Travis Kalanick just sent out a company-wide email. It essentially says after Uber said it was naming a COO, Jones decided to leave.

[0] https://twitter.com/MikeIsaac/status/843586902817099777

Update 2: Ex-president Jones makes statement

>"I joined Uber because of its Mission [sic], and the challenge to build global capabilities that would help the company mature and thrive long-term.

"It is now clear, however, that the beliefs and approach to leadership that have guided my career are inconsistent with what I saw and experienced at Uber, and I can no longer continue as president of the ride sharing business.

"There are thousands of amazing people at the company, and I truly wish everyone well."

[1] https://twitter.com/MikeIsaac/status/843620240961368065

israrkhan 1 day ago 4 replies      
Given that he quit in just six months, he probably did not even wait for his RSU grants to vest (1-year cliff for stock grants). The situation might be much worse than it appears.
yalogin 23 hours ago 5 replies      
If you live in the Bay Area you would think Uber is a bad company. But I just met someone from outside the Bay and realized people love Uber. They have a lot of problems and a lot to do, but they are still the market leader.
was_boring 1 day ago 2 replies      
While this is big for the company, I am curious what it means for recruitment of future executives at the company.

My understanding is at this level it's about personal connections and grooming an image. With that in mind, does it become harder to attract great executives on a go-forward basis for Uber?

CydeWeys 1 day ago 4 replies      
Uber doesn't have a prayer of rehabilitation until Kalanick himself goes. The rot goes straight up to him. Jeff Jones was a much more recent acquisition who came from staid corporate America (Target) -- I don't think he was the problem.
nananonymous 1 day ago 3 replies      
Meta: how can you write an article about Uber's president leaving and leave out the massive lawsuit from Google's parent company?
Fricken 1 day ago 2 replies      
I think Uber's biggest problem is the broken relationships it has with so many of its drivers. Their abuse of their labour force is the reason unions exist.
redm 1 day ago 2 replies      
The "echo chamber" works both ways. When there is excitement and buzz, it drives startups to success like nowhere else. Apparently it works in reverse too, and people love to see the big kid on the block take hits.
scottmcleod 1 day ago 0 replies      
Uber's board is failing the company and the public right now.
rexreed 23 hours ago 0 replies      
I don't understand the obsession by the press, VCs, and even consumers for Uber. I know it has changed the dynamic of transportation for many people. I know it has changed the dynamic of self-employment for many people. I know it has changed the dynamic of regulation for many cities. Oh wait. Ok I get it now. All that said, it's remarkable that ride sharing is the pinnacle of startup valuation in this latest wave of startups.
bogomipz 1 day ago 2 replies      
What is the role of "President" of the company? Where does that title fit in to an org chart or relate to the C-level execs? I am not familiar with this title in tech companies.
NicoJuicy 17 hours ago 0 replies      
On the assumption that any news is better than no news: has the bad publicity contributed to a decline or increase in installations / usage for Uber?
dkarapetyan 22 hours ago 0 replies      
Hypergrowth is a lot like cancer. Either they take swift action to excise all the toxic elements or it is going to be a slow and steady decline.
Overtonwindow 23 hours ago 0 replies      
Jones saw the writing on the wall...
thedarkginger 1 day ago 1 reply      
"That was not the reason for Jones' departure, sources said, even though it meant that Kalanick was bringing in a new exec who could outrank him. Instead, these sources said, Jones determined that the situation at the company was more problematic than he realized."
MichaelBurge 1 day ago 1 reply      
I wonder if it's a good idea to apply to Uber. With all the controversy, you could argue for a pretty good salary bump or signing bonus to compensate.

Though, if executives are quitting after 6 months, even before their stock vests, maybe it's more than just the outrage-machine news clamoring for clicks and views.

aphextron 1 day ago 6 replies      
I really hope that Uber serves as a warning to companies who think they can ignore customer complaints. I've never dealt with such an opaque company in my entire life, in terms of getting any actual human support whatsoever regarding anything.
throwaway3332 1 day ago 2 replies      
This Uber story, where scandals just kept dropping, struck me as very suspicious. Like Google just randomly got CCed by a supplier? You want me to believe that?

I was suspicious, but who could be behind it, and why? Who is telling people to air the dirt, which is undoubtedly very real but which they have been sitting on for a long time, at this particular moment?

I recently learned that Kalanick is very close to Trump, and given that this started just after the election that would seem the likeliest explanation.

bsg75 1 day ago 6 replies      
> He is leaving after apparently deciding the current controversies are too much to handle.

Not exactly leadership material.

Edit: Apparently an unpopular opinion. Do those here think otherwise, given that he joined the company less than a year ago (and is thus not part of the original culture)? He could have tried to repair the damage done by the uber-bro culture instead of looking for an easier paycheck.

Bad SSL badssl.com
401 points by aburan28  2 days ago   81 comments top 22
FiloSottile 2 days ago 2 replies      
An incredibly useful resource, well maintained by some of the best people in the PKI space, recently quoted by US-CERT [1] and so quick to use that I try it before starting to use any browser.

A year ago it made me find out that the most popular iOS Tor browser doesn't check certificates at all. [2] (Use OnionBrowser instead.)

[1] https://www.us-cert.gov/ncas/alerts/TA17-075A
[2] https://twitter.com/FiloSottile/status/765230315132559360

For an easter egg, try to click on "Defunct"...

delinka 2 days ago 7 replies      
I have no idea what I'm looking at. Do I need to enter a domain name some place? What domain is this telling me about? I scroll to the bottom of the page, it's telling me what browser and OS I'm on ... ok, maybe this page is showing me how bad my browser is at SSL?

Oh, these things are clickable. "This page contains a lone password field not wrapped in a <form> tag." Um ... yeah? Oh, you're saying that my browser renders that and it probably shouldn't.

dh2048 is green, let's click that. "dh2048.badssl.com uses an unsupported protocol. ERR_SSL_OBSOLETE_CIPHER"

Alright, I give up. I have no idea what I'm looking at.

Edited to add: If this site is reporting issues with my browser, why does it seem to say that Chrome supports dh2048 (this item is green on the page) but then following the link the browser complains that it's unsupported? Either the point of this site is not obvious, or it cannot be trusted to know the right things about my browser.

raverbashing 2 days ago 1 reply      
Good concept, but the usability is bad

If you see the dashboard things are clearer, but there's a mismatch between connected/not connected and what they were supposed to do.

_jal 2 days ago 4 replies      
Really don't understand the people downvoting in this thread. Apparently a certain percentage of the population believes that if they don't understand the function of an artifact in the world, the maker of said artifact has failed.

It isn't quite projection; seems like more of some wildly misguided consumer-is-king impulse. Browsing academic libraries must be hell.

lvh 2 days ago 0 replies      
In case you're wondering how you might use this: BadSSL is really convenient for checking clients. Is that weird version of Curl in that PHP webapp totally busted? Answer: probably yes -- although you can use this to find out how it's busted specifically in the context of TLS.

If you have a modern browser, whatever it does is probably fine. Also, consider using Chrome. (Yes, I know about the battery life issues.)

(Curl-from-PHP can be bad for non-TLS reasons! It's a likely SSRF vector, and it often does things like TFTP and Gopher so it will helpfully let you speak all sorts of protocols.)
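One way to put badssl.com to work, as described above, is to point a client at the known-bad subdomains and confirm it refuses them. Below is a minimal Python sketch; it is not part of badssl's tooling, and `tls_handshake_ok` is a helper name made up for this example. The assertions only check the local context defaults; the network calls against badssl.com are left commented out.

```python
import socket
import ssl

# A properly configured client context: verifies the certificate chain
# against the system trust store AND checks the hostname. A client using
# this context against expired.badssl.com or wrong.host.badssl.com
# should fail the handshake with an SSL error.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True


def tls_handshake_ok(host: str, port: int = 443, timeout: float = 10.0) -> bool:
    """Return True if a fully verified TLS handshake with `host` succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except OSError:  # ssl.SSLError is a subclass of OSError
        return False


# Network checks (uncomment to run against badssl.com):
# assert tls_handshake_ok("badssl.com")
# assert not tls_handshake_ok("expired.badssl.com")
# assert not tls_handshake_ok("self-signed.badssl.com")
# assert not tls_handshake_ok("wrong.host.badssl.com")
```

A client library that returns True for the bad subdomains is "busted" in exactly the sense described above, and the specific subdomain tells you how.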

bigbugbag 1 day ago 0 replies      
I had no idea what the purpose of this website was until I found some explanation in the comments.

Adding insult to injury, it actually shows unhelpful info if you put your cursor on the Badssl title at the top of the page.

Allow me to suggest a little modification here: replace the following:

<div class="title-bar" title="badssl.com - a memorable site for HTTPS misconfiguration"> badssl.com</div>

with:

<div class="title-bar"> badssl.com - manual testing of security UI in web clients</div>

vidyesh 2 days ago 0 replies      
I misclicked and realised how many times this has been posted


dcosson 2 days ago 2 replies      
I'm surprised to see so many negative comments. It's a super straightforward UI, you click on stuff to see how your browser treats that ssl (mis)configuration. This is a great resource, thanks for posting.
siminsayz 2 days ago 0 replies      
Used this in JUnit tests. A callback to a specific host caused the entire platform to hang 5-10 minutes or until restart; different batch jobs, but seemingly translations did it too. Turned out a specific certificate with a strong key would not be picked up by the Java provider but instead fell down to our nCipher hardware box and got stuck. We used HttpClient, but through a SOCKS proxy, and timeouts do not work then. After that, Bouncy Castle was put before nCipher as provider and was tested with all those certs to avoid similar problems.
goblin89 2 days ago 0 replies      
So far it appears that latest Safari on macOS is fully vulnerable to pinning, would only alert about a revoked certificate on second access to the page (?), and doesn't care about insecure password/credit card forms.
wbond 2 days ago 1 reply      
I created a complementary resource, https://badtls.io to allow for automated testing of TLS client libraries. It uses its own self-generated CA root to allow for generating certificates that exhibit different edge case conditions.

A few practical differences are that badtls.io is designed to be easy to run locally and has simple Python scripts to generate new keys and certs.

For my TLS libraries I utilize both badssl.com and badtls.io to provide more diverse coverage.

EliRivers 2 days ago 5 replies      
If this is meant for general technical consumption, it's sorely lacking in usability. After several seconds, I guessed that it might be referring to something about my browser.

Some of the colours seem to indicate badness. Clicking on things provides no additional information, but then makes me wonder if it's meant to be an example of a bad webpage and there's nothing wrong with my browser.

Another failure of minimalism.

jwilk 2 days ago 0 replies      
Somewhat related: https://www.howsmyssl.com/
Walkman 2 days ago 0 replies      
Last week I made a comparison matrix against Badssl.com for different Crypto libs on OS X: https://gist.github.com/kissgyorgy/54601c883891991f28e49ac1b...
buhrmi 2 days ago 1 reply      
this must be really useful when developing your own browser
bigbugbag 1 day ago 0 replies      
Just noticed that this is an unofficial google product, part of chromium github repository.
ikeboy 2 days ago 3 replies      
Chrome on iOS fails revoked and pinning tests.
andygambles 2 days ago 0 replies      
Regularly used to test client configurations and also as a training aid when teaching users about web security.
yuhong 2 days ago 0 replies      
The fun thing is that the old SHA1 roots pulled from browsers also happen to be the SGC roots.
joombaga 2 days ago 1 reply      
No-subject is surprising to me. Why would Chrome allow that? Doesn't that open it up for MITM?
emondi 2 days ago 0 replies      
Shouldn't spoofed favicon link to http instead of https?
De-Location Package: Keep Your Career and Live Beyond the Bay Area zapier.com
319 points by bryanh  3 days ago   184 comments top 17
bryanh 3 days ago 15 replies      
Zapier CTO & co-founder here.

Much like Stripe's "hire a team" experiment - this is an experiment to pay people to "de-locate" from the Bay Area. Don't get us wrong, we absolutely love the Bay Area (I live here) but the cost of living is just outrageous for so many.

We're seeing a lot of candidates talking to Zapier (we're fully remote) about leaving the Bay Area to go "home" (some to start a family, some for other reasons) but want to stay in their tech career.

Happy to answer any questions, and I am sure there are a lot of Zapiens in the thread that could answer questions too.

hapless 3 days ago 5 replies      
It benefits them to have you leave the bay area, because they know it will be much more difficult for you to find your next job. They are paying $10k now to minimize future raises and equity grants.

I can think of less risky ways to squeeze $10k out of a new employer.

exolymph 3 days ago 1 reply      
I got to interview the team about this decision, so here are more details if you're curious: http://www.inc.com/sonya-mann/zapier-remote-work-de-location...
dmode 3 days ago 3 replies      
I am probably in the minority, but I hate remote teams. I have worked with remote teams in various timezones all my life and it has pretty much destroyed any semblance of love of work that I had. Calls in the mornings, calls in the evenings, lack of whiteboarding capabilities, not being able to quickly tap someone on the shoulder and ask a question, not being able to have team outings or impromptu happy hours, etc. has exhausted me. I am now looking for a job with a 100% local team.
djb_hackernews 3 days ago 2 replies      
Obvious question, do they get to keep their Bay Area salary?
amyjess 3 days ago 1 reply      
I wish more companies did this, not just Bay Area companies.

I'm in Texas and looking to relocate to SoCal (specifically Torrance) in the second half of this year, and while I'm hoping my employer will let me either go remote or relocate me to their LA office, I'm terrified that they won't and I'll have to find a new job (I really like my employer, so I'll only leave if I absolutely have to).

pmiller2 3 days ago 2 replies      
Sounds like a neat way for Zapier to take advantage of Bay Area talent without paying Bay Area salaries. This isn't really for me, because I don't like remote work and I like living in the Bay Area, but I think this is a brilliant move. However, I'd want to know what the pay scale was like before making this kind of move.
dsacco 3 days ago 2 replies      
Analyzing this announcement critically, I work through the following:

1. Why would Zapier publicize a new policy for existing employees? Cynically, that appears to be a PR opportunity.

2. The new policy is an "experiment" that pays existing employees $10,000 to cover relocation costs to move outside the Bay Area.

3. I assume that the employees' salaries are then adjusted for COL (this is the crux). This presents an arbitrage between the $10k and salary adjustment.

4. Zapier is facilitating a process that, in effect, acquires employees from one location for a cheaper price than competitors, as long as those employees are willing to move outside of the Bay Area.

I don't have a comment on whether I agree or disagree; I guess it's good to support employees living where they want to. But I'd be interested in knowing 1) if employees do have their salaries adjusted for cost of living when they move and 2) where many employees ultimately end up moving to.

There is a face value lens that I can read this announcement through, but there is another lens related to the price of talent in the Bay Area. If I don't take this at face value, it's interesting to me if this carries any signal about real estate or tech salary outlook.

EDIT - Nevermind, salaries aren't adjusted for cost of living.
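To make the arbitrage in point 3 concrete, here is a toy calculation with made-up figures. Note the edit: Zapier says salaries are not adjusted, so this illustrates the counterfactual case only.

```python
# Hypothetical illustration of the cost-of-living arbitrage in point 3.
# Per the edit above, Zapier says salaries are NOT adjusted, so this is
# the counterfactual scenario; all figures are invented.
relocation_bonus = 10_000   # one-time payment to the employee
bay_area_salary = 160_000   # hypothetical Bay Area salary
adjusted_salary = 130_000   # hypothetical COL-adjusted salary elsewhere

annual_savings = bay_area_salary - adjusted_salary      # 30,000 per year
payback_months = relocation_bonus / (annual_savings / 12)

print(f"Employer recoups the bonus in {payback_months:.0f} months")
# prints: Employer recoups the bonus in 4 months
```

Under these invented numbers, the one-time $10k is recovered in a few months of salary savings, which is why the salary-adjustment question is the crux.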

deanclatworthy 3 days ago 0 replies      
What a refreshing idea. I'd love to hear the results after a year of this experiment.

I think the idea can apply to many cities, even outside the US. Some European cities are in the midst of a housing (availability) crisis forcing rent prices up and leaving young families to live in small apartments to be within a commutable distance to work.

coding123 3 days ago 3 replies      
The Bay Area needs to re-think traffic. Perhaps the hyperloop isn't so much needed between two already congested cities, but between lots of rural areas that can feed into congested cities. Live 5 hours (by car) from the BA and get there by hyperloop in 25 minutes.
sudosteph 3 days ago 2 replies      
You should really expand this to Seattle folks as well. I've decided to stay for now, but I'd be lying if I said I didn't wistfully start looking at the housing prices in Miami after this awful winter...
rajeshp1986 3 days ago 0 replies      
I think this is a great start. More and more companies should start doing this. That way people will be convinced to take these packages and move to smaller cities where they can have a better life.

It doesn't make sense to cram your family into a 700 sq. ft. apartment and still pay $3500 rent for it. I, like other people, was very excited about the Bay Area, but after coming here I often wonder whether it is worth it. Also, your 30s are when you start saving more for the future; giving that money away as rent makes my heart cry.

agibsonccc 2 days ago 0 replies      
I'd just like to commend the zapier folks for promoting a remote first company like this. We're distributed across 6 timezones and have built up like this for years. I always try to hire engineers where they live. This causes some complications but it's been worth it for us. We've found you have to have the right culture for it though. It's harder to "bolt on" remote.
nedwin 3 days ago 2 replies      
Would love to know how fundraising has been impacted by being a remote first company. Did you get resistance? Are there certain investors who are fundamentally for or against?
nunez 3 days ago 0 replies      
Interesting. Does one's salary stay the same with this agreement?
komali2 3 days ago 0 replies      
This is my dream, and this is why I switched industries to become a programmer. One day, I will achieve the apex - beach cafe in Vietnam, tethered net connection, happily tapping away code for my full time job.


beatpanda 3 days ago 0 replies      
More like this, please, from everyone. God bless you for doing this.
Intellectual Humility increases tolerance, improves decision-making duke.edu
299 points by bootload  2 days ago   101 comments top 15
trevyn 2 days ago 4 replies      
The "intellectual humility doesn't get grants" and "loses elections" comments lead me to believe that people aren't understanding the concept of intellectual humility as presented in this article.

It does not mean that you give the outward appearance of being humble in any way.

It is defined in the article as "an awareness that one's beliefs may be wrong" and "intellectually humble people can have strong beliefs, but recognize their fallibility and are willing to be proven wrong on matters large and small."

This was tested with a study in which "participants read essays arguing for and against religion, and were then asked about each author's personality", and evidence that "people who displayed intellectual humility also did a better job evaluating the quality of evidence."

There is a distinction between an individual's internal mental processes (as tested in this study) and the way they present themselves externally. Being outwardly assertive and confident absolutely wins elections and grants, but this is not at odds with an ability to internally re-evaluate one's beliefs.

I particularly enjoy this quote from the article, as it reveals that the authors may have missed a subtlety:

"If you're sitting around a table at a meeting and the boss is very low in intellectual humility, he or she isn't going to listen to other people's suggestions," Leary said. "Yet we know that good leadership requires broadness of perspective and taking as many perspectives into account as possible."

I absolutely agree that good leadership requires broadness of perspective, but this does not imply that every suggestion should get an audience at meetings -- capable leaders often have a much broader range of experience than their reports, and dismiss suggestions not out of arrogance or closed-mindedness, but simply because they have already evaluated and discounted that path, and have elected not to spend their limited time bringing everyone else up to speed. (Which can have its own issues, but that's a digression.)

rosser 2 days ago 3 replies      
Can't this notion pretty much be summed up with the Bertrand Russell quote, "The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt"?
contingencies 1 day ago 0 replies      
There is some related material in the written lore of Buddhism. Interestingly and perhaps tellingly the material is largely phrased on classifying the motivations and appropriate responses to questions in general, rather than the explicit act of decision-making: http://www.accesstoinsight.org/index-subject.html#questions

There are questions that should be answered categorically [straightforwardly yes, no, this, that]. There are questions that should be answered with an analytical (qualified) answer [defining or redefining the terms]. There are questions that should be answered with a counter-question. There are questions that should be put aside. These are the four ways of answering questions.

sova 2 days ago 1 reply      
Oh my god the dawning realization that I have absolutely no intellectual humility absolutely terrorizes and embarrasses me. I can change.
dlwdlw 2 days ago 0 replies      
Heres a model i though of: https://www.ribbonfarm.com/2014/02/20/the-cactus-and-the-wea...

The core is a 2x2 with strength of views on one axis and holding strength on the other.

A fox is someone who knows many things and a hedgehog is someone who knows one big thing.

The degenerate versions are the cactus, which stubbornly knows one thing, and the weasel, whose views are too two-faced.

The weasel does double-think, while the fox can hold contradictory models in mind and create a new one, or use the one that better fits the situation (e.g., wave-particle duality).

The fox has better perspective but, because energy is spread out among too many things, has trouble implementing. The hedgehog's energy is directed and more focused, with more momentum, but it still has the ability to steer, unlike a cactus.

Intellectual humility IMO is about flexibility of mental models. In terms of real world success, hedgehogs have the most while foxes are more armchair philosophers.

The article is telling cactuses to be more like hedgehogs, but the one-dimensional terminology ropes in foxes as well, who, despite also having intellectual humility, don't usually have as much direct impact on the world.

csense 8 hours ago 0 replies      
Didn't Socrates have this figured out ~2500 years ago?
MichaelBurge 2 days ago 0 replies      
> aimed at promoting qualities such as intellectual humility.

Did the paper conclude that changing a person to be more intellectually humble causes the benefits described? If not, any promotion or changing of behavior seems premature.

thetruthseeker1 2 days ago 0 replies      
The one thing I would have liked to see in this article would have been the benefits and downsides of both qualities. Intellectual Egotism, and Intellectual Modesty. That would have been a balanced approach.

There are people who know that they are intellectually modest who read this article, and it may reinforce their belief that being intellectually modest is the one true path! There are people who know that they are intellectually arrogant and may feel that they are destined for hell after reading this article. There are some advantages to being an egotist as well, and disadvantages to being modest.

The other subtle incoherence I wanted to point out is that, at some point, the author claims intellectually modest people exist in both classes of people, conservatives and liberals. But he does say that Republicans are by and large more likely to not accept a person who has "flip-flopped" [which may or may not be for good reasons] than Democrats. So, assuming most conservative people are Republicans and most liberal people are Democrats, there is some intellectual quality that differentiates their bases. This probably needed more analysis, in my opinion.

sandGorgon 2 days ago 0 replies      
This is the very googley concept of psychological safety, right?

Google Rework has some great stuff about it https://rework.withgoogle.com/blog/how-to-foster-psychologic...

moomin 2 days ago 2 replies      
You could also add "Loses Elections"
lutusp 2 days ago 5 replies      
Summary: one unfalsifiable study, not replicated, with no theoretical basis or reference to other theories; a study unable to establish whether the measured trait (intellectual humility) and the measured effect (more effective functioning) are in fact related as cause and effect, or the reverse, or are both related to some third unstudied property.

In a recent ambitious and well-funded replication project, only 39% of published psychology studies could be replicated at all, and of the successful minority, the average measured effect size was half that of the original study, in some cases falling below generally accepted standards for statistical significance. The take-away from the replication project is that, when reading a random psychology study, the reader must remember that the study's probability of having any relation to reality is less than 50%.

More on this topic: http://arachnoid.com/psychology_and_alchemy

ouid 2 days ago 2 replies      
This isn't a study, this is an essay about an activity that some people did.
nabla9 2 days ago 1 reply      
Psychology is rubbish, that's already established, so there is no need to read this particular study before forming an opinion, or to criticize the study based on its merits. /s

On the other hand, published psychological studies are probably better than opinion articles, essays, etc. So maybe there is some value in it.

I myself find these four studies in the paper interesting in the context of ambiguity tolerance and critical thinking. Studies in personality and social psychology like these are usually best understood as some sort of cluster analysis.

The most interesting points are from the discussion chapter, which draws on many other studies:

1. Many of the effect sizes for intellectual humility were relatively small

2. High IH might come together with epistemic curiosity, cognitive ability and ambiguity tolerance.

3. Most personality characteristics display substantial within-person variability across situations. So the studies probably reflected domain specific intellectual humility (IH).

4. Intellectual humility did not correlate with religiosity or political orientation.

5. Indicators for stubbornness, rigidity, narcissism, or defensiveness are not central aspects of low intellectual humility.

pottersbasilisk 2 days ago 2 replies      
Intellectual humility doesnt get grants though.
desireco42 2 days ago 1 reply      
This is one of those articles and studies we've been warned about. It doesn't mean anything, and you will be no better off for reading it. It makes an interesting point, but it can't be taken as a scientific pointer you should apply to your life.

As a newspaper article, I actually like it.

Virtual machine escape fetches $100k at Pwn2Own hacking contest arstechnica.com
401 points by hcs  3 days ago   155 comments top 12
mnarayan01 2 days ago 2 replies      
So if you were using a use-once-and-destroy VM to do some Edge security testing, they could blow through everything and get you anyway. The fact that there are vulnerabilities in each of the three layers they needed to hit is unsurprising, but having one guy find all three is more incredible to me than Barcelona's recent comeback against PSG.
lokedhs 2 days ago 3 replies      
As someone who uses Qubes OS, seeing cracks that allow a virtualised system to break out to the host is scary. After all, the entire security model of Qubes OS is based on the premise that it's not possible to break out of the VMs.

I'm really glad these guys valued their reputation higher than the extra money they could have gotten from selling these exploits on the black market, but this makes me wonder how many such vulnerabilities exist that haven't been published.

pizza 3 days ago 9 replies      
How does someone learn this skill? I get that it obviously takes a ton of work, but how do you even prepare/study for it in the first place? The intuition required seems so different from anything my intuition about computer systems would even let me imagine.
jtchang 2 days ago 3 replies      
A set of exploits like the 3 described could be worth quite a bit on the open market. I'm surprised how low the bounty was for a full vmware escape starting from a browser. Surely there are other exploits like this out in the wild under government lock and key.
tannhaeuser 2 days ago 4 replies      

Is there consensus on security best/bad practices when it comes to VMs and containers? I mean, if even established VMs/hypervisors can't save you, what prospect is there of containers actually containing anything?

Maybe POSIX chroot jails aren't that bad after all? Because, like, they're the simplest thing that can possibly work, they don't come with the complexity of VMs and Linux containers/namespaces, and if they turn out to be vulnerable you'd be able to switch to another kernel (the BSDs)?

mistersys 3 days ago 4 replies      
Title misleading: it's a virtual machine escape from a web browser, all the way down to the host machine.
kbart 16 hours ago 0 replies      
"All started from and only by a controlled website"

That is really scary. And another reason to limit/disable JavaScript (as if somebody still needed more reasons). Has anyone found a more detailed report? I'd love to see this in action.

lima 1 day ago 0 replies      
> You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes.

~Theo de Raadt


simplehuman 2 days ago 7 replies      
This is why we need Rust. Had Windows, Edge, and VMware been written in Rust, these bugs wouldn't have happened. Rust is a game changer with its borrow checker.
hl5 2 days ago 1 reply      
I don't think I'd want to take credit for a hack like this. I imagine a few interesting phone calls are heading the researchers' way.
jankedeen 1 day ago 0 replies      
Perfect storm.
tapirl 2 days ago 1 reply      
> We used a JavaScript engine bug within Microsoft Edge to achieve the code execution inside the Edge sandbox, and we used a Windows 10 kernel bug to escape from it and fully compromise the guest machine," ...

What? The security of VMware relies on the security of the guest OS? That's really a surprise, and a horrible one, for me.

btw, how many people are using the Edge browser? 1%? And how many of them use it in a VM? 0.01%? :-)

How Utah Reduced Chronic Homelessness npr.org
327 points by teslacar  3 days ago   270 comments top 29
mabbo 3 days ago 18 replies      
The 'Housing First' model was really pushed hard by Stephen Harper as Prime Minister of Canada. It was weird because he was such a staunch right-wing kind of guy, yet here he was putting money toward what seems like a very left-wing socialist idea.

But he, like Lloyd Pendleton in this article, apparently figured out that it saves a lot of money, ideology be damned.

Maybe that's a good example of how to get right-wing politicians to agree to some left-wing ideals: prove that it will save money and let them lower taxes.

austenallred 3 days ago 8 replies      
I'm confused by this. My brother has an office in downtown SLC near the Rio Grande building, and it's pretty clear when I visit him that there are hundreds of homeless people living in that neighborhood alone. Pioneer Park is also covered in homeless people.

If that is fewer than 200 people, either I don't understand what "homeless" means or something is... off. How does one define "chronic homelessness"?

I really want it to be true. I have seen firsthand how hard so many people work to help the homeless, through nonprofits, the government, and the LDS Church. But declaring "victory" over homelessness rings pretty hollow if you walk the streets of Salt Lake City.

meritt 3 days ago 5 replies      
Utah didn't solve homelessness. The primary "reduction" came from changing the definition of chronic homelessness, so that fewer people are now being counted.


hackermailman 3 days ago 3 replies      
Vancouver tried the same thing. They made a policy of eradicating homelessness by X date, and the city bought up SROs and other housing options to put people in.

The problem is that once the rest of Canada's homeless heard about this, they came in droves, often given one-way bus tickets by police, so it's impossible to solve chronic homelessness without some kind of national effort. In Utah this strategy works because there isn't a flood of people going there, as there is to San Francisco or LA's Skid Row.

It would be great if governments could have some kind of yearly conference to compare data and strategies to figure out what works/doesn't work at scale.

saosebastiao 3 days ago 1 reply      
I have no doubt that this program works, even if the success has been overstated [0]. What we really need to think critically about is how exportable the concept is. And really, it comes down to one singular factor: housing costs [1]. High housing costs like those in most coastal cities contribute to higher inflows into homelessness and lower outflows out of it, which means they increase both the total number of homeless people and the duration of homelessness. They also increase the capital and administration costs of running the program, often by a factor of two or more.

So in a city like Seattle or San Francisco, you're going to have drastically more people to house, for drastically longer periods of time, and at higher fixed and variable costs. I have no problem believing that this solution if exported to San Francisco, New York, or Seattle, would cost anywhere between 1-2 orders of magnitude more than SLC as a percentage of total population.

IMO, anti-housing-development trends in our most economically important cities truly have become the US' largest source of injustice in the 21st century [2].

[0] https://www.theguardian.com/us-news/2016/apr/27/utah-homeles...

[1] https://www.rentjungle.com/average-rent-in-salt-lake-city-re...

[2] https://medium.com/the-ferenstein-wire/a-26-year-old-mit-gra...

John23832 3 days ago 0 replies      
Imo housing first has always been the best option. Think about how hamstrung people feel trying to conduct their lives out of hotels... now imagine being homeless.
hl5 3 days ago 1 reply      
Homes for the homeless. What an obvious solution! May I suggest medical care for the sick next?

I'm glad some cities are starting to recognize the level of dignity and hope a private residence with private facilities can give to impoverished families. Sadly, it appears these types of programs only get funded when wrapped in some questionable marketing or political gain.

snowpanda 3 days ago 1 reply      
> Utah says it won 'war on homelessness', but shelters tell a different story


suresk 3 days ago 3 replies      
It was interesting to hear about this, then drive down Rio Grande street or around Pioneer Park and see the massive tent cities and tons of homeless people. Things got really bad downtown the last year or so.

I wonder if this is a case of Salt Lake doing such a good job that homeless from other states were attracted here?

wyldfire 3 days ago 1 reply      
> For Joe Ortega, it means ... getting used to being alone.

I would not have figured that moving off the street would be more isolating than living on the street. Don't these housing options mean that someone else is living just in an adjacent room or across the hall?

randyrand 3 days ago 1 reply      
As an aside, this is a perfect illustration of the states-rights approach going well.

States are testing incubators for trying new ideas.

perseusprime11 3 days ago 0 replies      
I wish there were something we could do in New York City. In the last two years, I saw a significant increase in the number of homeless people here. They have no home and take refuge in the subways and the bathrooms at Penn Station.
tomcam 3 days ago 0 replies      
This will sound hostile. It is not meant to.

Didn't we do the "housing first" thing decades ago? The Projects in the Bronx? Cabrini Green in Chicago? Now most of those places are torn down or being gentrified or are miserable black holes of chaos and crime.

It seems to me that it's really important that residents have some skin in the game, which may be the difference in the Utah program. It is also notable that the Mormon church has such deep involvement. Here in Seattle the Union Gospel Mission has been infinitely better at dealing with the homeless than the bureaucracy has.

tfo 3 days ago 0 replies      
tabeth 3 days ago 3 replies      
I wonder if better/more accessible health care would eliminate homelessness as a side effect. It seems much of the homelessness in Boston, anyway, is mental health/health related.

I doubt anyone will say that what Utah did here is bad, but you have to wonder what's preventing a more comprehensive solution (the answer is politics, but ideally you'd get a more specific answer).

Unbeliever69 3 days ago 1 reply      
The situation in Utah is FAR from perfect. I work near a building that was renovated to house the homeless. These places are often rampant with prostitution, drug dealing and other nefarious operations. In effect many now have a roof over their heads, but they now live in what some might consider slums.
cpncrunch 3 days ago 0 replies      
>A similar approach was first tried in Los Angeles in the late 1980s and New York City in the early 1990s.

Does anyone have a reference for this? I did some searching, but didn't find anything. If anything, the articles I found just said that homelessness in LA was a continuing problem throughout the 80s until today.

vondur 3 days ago 0 replies      
Last time I was in Salt Lake, there were a bunch of homeless people camped out all over the place. It was really odd, I noticed some of them were just casually smoking weed. Still not as bad as it is here in the Southern California area.
tracker1 2 days ago 0 replies      
Definitely better than when they were just buying bus tickets for the homeless to go to Phoenix and Albuquerque... (Fixing the homeless problem by making it someone else's problem)
thebosz 3 days ago 0 replies      
Regardless of how effective it is, or whether it's just "changing the definition", at least it is helping.

Unlike here in Portland where it's just ignored and occasionally the tent cities get pushed to a different area.

NewCathargo 3 days ago 0 replies      
Give every homeless person a $20 thing of nicotine each month.

Stop the ponzi scheme of housing costs.

Teach them skills general enough to plug into a variety of jobs and specific enough to survive alongside automation.

Giving a chronically homeless person a "home" doesn't make up for the completely shattered social network they have. And the shattered sense of civil conduct in a society which literally tossed them to the trash.

My melancholy tells me the above will never happen. Homelessness is a shadow industry where nobody gets punished for letting relapses happen. Therefore, nobody "owns" performance in getting the homeless into a housed state. In the industry, talking points are more valuable than actual "performance".

tn13 3 days ago 0 replies      
Homelessness should not be a problem in the USA, where land is plentiful. If homelessness is a problem, it is mostly because cities are not growing horizontally fast enough.
rpmcmurphy 3 days ago 0 replies      
Imagine what the states could do with the > $50 billion we are about to send to our already outsized military.
brilliantcode 3 days ago 0 replies      
So you redefine homelessness, end up counting fewer people, and call it a success.
Mz 3 days ago 0 replies      
Maybe "enlightened self interest" as a governing principle will catch on.
ninju 3 days ago 0 replies      
This an older article...and I think it might have been on HN already
Overtonwindow 3 days ago 0 replies      
*From December 2015
m-j-fox 3 days ago 1 reply      
Lack of Oxford Comma Could Cost Maine Company Millions in Overtime Dispute nytimes.com
315 points by uyoakaoma  3 days ago   235 comments top 32
gnicholas 3 days ago 6 replies      
The title of this HN post, like nearly every headline I've seen for this lawsuit, is misleading. The headlines make it seem as if the company forgot an Oxford comma and is now losing millions as a result of having made a typographical error.

This is not the case. There is a statute that lacks a comma where one might go, and the litigants fought over whether the absence of a comma toward the end of a list ought to be read in light of the Oxford comma convention or not.

And it's actually more complicated, as the last item in the list is (potentially) compound, which makes it difficult to tell whether the "and" is to be attached to the final element alone, or to be a binding of the final element and previous elements.

Not that it's uncommon for articles to have misleading headlines, but I've been surprised at the extent to which nearly every article (save this one [1], by the ever-precise law professors at the Volokh Conspiracy) has misrepresented the case (or misunderstood it?) to make it seem as if a company's typo led to a million-dollar loss.

1: https://www.washingtonpost.com/news/volokh-conspiracy/wp/201...

mbillie1 3 days ago 4 replies      
Alternate headline: workers to receive appropriate compensation following accurate interpretation of state law.
weeksie 3 days ago 3 replies      
I don't think I've ever heard an impassioned argument against the Oxford comma. I mean, I have no problem with it, but there seems to be a belief that this is a less-filling/tastes-great holy war, when really, omitting the serial comma is fairly archaic at this point. At least in my experience. Sometimes lists need one, sometimes they don't. At this point in the evolution of our written language, that should be a fairly unambiguous proposition.
rayiner 3 days ago 7 replies      
The law in question excluded "canning, processing, preserving, freezing, drying, marketing, storing, packing for shipment or distribution of" agricultural products from the requirement for 1.5x overtime pay.

In a sane world where everyone used the Oxford comma, that sentence would be clear and the last item in the sequence would be "packing (for shipment or distribution)" and the drivers would be entitled to their overtime pay. But the Maine Legislature's drafting manual recommends omitting the Oxford comma so the above sentence is ambiguous. And it's really ambiguous because either meaning could really be what the Legislature intended.

burntwater 3 days ago 3 replies      
My question is, what makes the canning industry so unique and special that it's deserving of exemptions from fair pay?
kazinator 3 days ago 0 replies      
It is perfectly clear here that "distribution" is a separate list element, set off by the final disjunction "or":

The canning, processing, preserving, freezing, drying, marketing, storing, packing for shipment or distribution of:

The pattern being: A, B, C, D or E.

This does not change even if we add a spurious comma after D:

A, B, C, D, or E.

D and E are still separately listed items.

Quite simply, D or E cannot be a single list item without being preceded by some conjunction.

If the intended meaning were "packing (for shipment or distribution)" to the exclusion of "distribution", then the conjunction "and" or "or" would be required before that entire phrase:

A, B, C, D or E for X or Y.

Here the entire phrase E for X or Y (such as packing for shipment or distribution) is now the last list item (and, again, that is clear whether or not we add a spurious comma after D).

The "or" in (for shipment or distribution) belongs to this inner phrase and therefore cannot serve as the delimiting conjunction of the last list item; another delimiter is required.

Thus, the insertion of the comma makes no difference. The alternative interpretation is hard to justify, because it is based on the claim that a meaning-altering conjunction is missing, even though the existing text happens to be grammatical.
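The two readings kazinator contrasts can be sketched mechanically. This is a toy Python illustration (the splitting heuristics are mine, purely to make the two parses concrete; the clause text paraphrases the statute):

```python
# The disputed statutory list, flattened into one string.
clause = ("canning, processing, preserving, freezing, drying, "
          "marketing, storing, packing for shipment or distribution")

# Reading favoring the company: the final "or" delimits the last item,
# so "distribution" stands alone as its own exempt activity.
head, _, tail = clause.rpartition(" or ")
company_reading = [item.strip() for item in head.split(",")] + [tail.strip()]

# Reading favoring the drivers (kazinator's analysis): only commas
# delimit items, so "packing for shipment or distribution" is a single
# item and "distribution" by itself is not on the list.
drivers_reading = [item.strip() for item in clause.split(",")]

print(company_reading[-1])   # 'distribution'
print(drivers_reading[-1])   # 'packing for shipment or distribution'
```

The point of the exercise: both splits are locally defensible, which is exactly why the court had to look past the comma.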

seanwilson 3 days ago 5 replies      
In almost every case where I see people arguing for an Oxford comma, I see a sentence that should be broken up or restructured to make it less ambiguous. A sentence where failing to notice a comma can drastically alter its meaning (especially when it's a losing battle to make everyone understand the Oxford comma) is not a good sentence.
stupidcar 3 days ago 1 reply      
"I'd like to thank my parents, Ayn Rand and God."
jeffmk 3 days ago 0 replies      
Avoiding the Oxford comma is the JavaScript automatic semicolon insertion of English grammar.

Each has odd exceptions that have to be thought about, and each is polarizing.

In both cases I really wonder: why the insistence on adding to cognitive load?

bmcusick 3 days ago 0 replies      
There should be an "or" before the word "packing" if "packing for shipment or distribution" is a single phrase (rather than two alternatives). The Oxford Comma debate is a sideshow. It's all about where you find the "or", which indicates the last choice.
kolbe 3 days ago 1 reply      
It's kind of amazing that we even write laws this way anymore. There isn't much to gain from freely allowing legislators to use the English language to dictate our system of rules. I'm sure there are much clearer templates that everyone should follow.
danbruc 3 days ago 0 replies      
Couldn't they just have looked up the intended meaning of the law? I mean, laws are not made by just writing down the final text; there are designs, discussions, and drafts. Wouldn't it be likely that there are documents showing the intended meaning more clearly, for example by listing the exceptions as bullet points? In my opinion that would provide a much better argument for settling the issue one way or the other than appealing to grammar rules and style guides.
madenine 3 days ago 1 reply      
Ignoring the comma issue, it still seems poorly worded.

"The canning, processing, preserving, freezing, drying, marketing, storing, packing for shipment or distribution of:"

Let's group related activities based on what workers might be doing and where they might be doing it.

If the law intended to exclude truck drivers from overtime, our groups could be:

- Canning, Processing, Preserving, Freezing, Drying (processing facility?)

- Marketing (?)

- Storing, Packing for Shipment (Warehouse)

- Distribution (On the road)

Why is one sentence trying to define rules for all those groups?

ape4 3 days ago 1 reply      
Perhaps they should adopt programming-language-like syntax for laws. eg "You will be jailed for [kill, hurt, steal]."
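A sketch of that idea in Python (a hypothetical encoding, paraphrasing the Maine statute from the article): once the list is data rather than prose, item boundaries are explicit and no comma convention can blur them.

```python
# Hypothetical machine-readable version of the statute's exemption list.
# Each string is unambiguously one item; no serial-comma dispute is possible.
overtime_exempt_activities = [
    "canning",
    "processing",
    "preserving",
    "freezing",
    "drying",
    "marketing",
    "storing",
    "packing for shipment or distribution",  # one item as written here;
    # to exempt the truck drivers, the drafter would have to add
    # "distribution" as its own entry.
]

def is_exempt(activity: str) -> bool:
    """Return True if the activity is on the overtime-exemption list."""
    return activity in overtime_exempt_activities

print(is_exempt("packing for shipment or distribution"))  # True
print(is_exempt("distribution"))                          # False
```

Under this encoding the drivers win by construction: "distribution" alone simply isn't a list element unless the legislature adds it.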
bandrami 3 days ago 1 reply      
The case was decided on equity, not grammar. What the comma style did was give the company a specious argument to make.
boltzmannbrain 3 days ago 0 replies      
JFK and Stalin are still the best proponents for using the Oxford comma: http://www.verbicidemagazine.com/wp-content/uploads/2011/09/...
sixhobbits 3 days ago 0 replies      
There are two types of people in this world: those who use Oxford commas, those who don't and those who should.
bob_rad 3 days ago 1 reply      
This should totally be a case study in grammar lessons. This comma could cost you millions, kids. Pay attention!
gtirloni 3 days ago 1 reply      
Instead of discussing a grammar issue, why not go back to the intent of the law and apply it as... intended? What were the lawmakers thinking when they wrote that law? Are they still around? Are there any notes or recorded sessions discussing this?
lotsofpulp 3 days ago 1 reply      
I wonder how Maine is able to exempt workers from overtime; wouldn't that contradict federal overtime laws?


PuffinBlue 3 days ago 2 replies      
The article asks:

> Does the law intend to exempt the distribution of the three categories that follow, or does it mean to exempt packing for the shipping or distribution of them?

I'd say it does both. It exempts workers who package for shipment AND workers who distribute.

Discuss... :-)

Mz 3 days ago 0 replies      
As a freelance writer, this is sort of cool to me.

"The pen is mightier than the sword" and all that. A little comma -- or lack thereof -- moves millions of dollars in one direction or the other.

Food for thought.

hughlang 3 days ago 0 replies      
They could have done so much more with that first sentence. Example: "An appeals court ruling on Monday was a victory for truck drivers, punctuation pedants and nazis"
dsfyu404ed 3 days ago 0 replies      
Meanwhile a complete jerk move did cost Maine university employees thousands of dollars in unpaid wages over the course of a decade.
dredmorbius 3 days ago 0 replies      
Of possible relevance: The New York Times, per its internal style guide, eschews the Oxford Comma.

I smell a possible revolt on the part of Mr. Victor.

timthelion 3 days ago 0 replies      
Why is everyone discussing the grammar and no one questioning why laws have such silly exceptions? Why should there be an exception for the packaging of perishable goods? It's not as if the goods cannot be packaged if the workers are paid extra for working overtime. And it's not as if a well-planned logistics chain should necessarily force workers into overtime. So really, this exception is just a gift to a specific industry.
verbatim 3 days ago 0 replies      
The longstanding question posed by Vampire Weekend has finally been answered: Oakhurst Dairy cares.
d--b 3 days ago 0 replies      
Or: why AI is a long way to understand natural languages.
kaosjester 3 days ago 1 reply      
The sheer amount of misused punctuation in this comments section seems to indicate a systematic problem.
dmckeon 3 days ago 0 replies      
Also posted to HN as:
https://news.ycombinator.com/item?id=13886467
https://news.ycombinator.com/item?id=13879156

For me the lesson here is "use unambiguous language" rather than "{always|never} use an Oxford comma".

The 29-page decision shows that the "Oxford comma" is only part of the court's interpretation of the law, and that the court examined several paths to reach an interpretation.


Railroad Tycoon filfre.net
374 points by smacktoward  3 days ago   148 comments top 29
a_d 3 days ago 3 replies      
RT2 is one of the coolest "business simulations" of all time: the goal was to make profits. The "business" was simplified by setting the game near the dawn of the industrial age, when there was a clearer relation between work and output. The goal was to connect cities/companies so economic activity could take place. Big obstacles appeared at random (e.g. train breakdowns), which added an element of fun. The game also took the player through "macro" settings like recessions and booms.

My deep appreciation of the game also comes from the fact that the settings were largely historically accurate. The big industrial centers (Albany; Denver, for lumber) were accurately depicted. It's an extremely engaging way to learn a lot of early U.S. history, such as how cities came into existence (answer: based on commodities trade). It's also fascinating to learn about tech evolution (engines!).

Customary hyperbole: One of the best "business simulations" ever made! :)

nodesocket 3 days ago 14 replies      
Somewhat of a competitor, but any Sim Tower fans here? I was obsessed with everything Sim* but especially loved Sim Tower. Maxis was an amazing gaming company and actually my first entry into Macs. My friend's dad had a Macintosh II, then a Classic, then an LC, and I would spend hours playing games on them.
DonHopkins 2 days ago 3 replies      
Railroad Tycoon and SimCity had a huge influence on Factorio, the vast scope of which I can't begin to describe, but its trailer does it justice. [1]

About the game: Factorio is a game in which you build and maintain factories.

You will be mining resources, researching technologies, building infrastructure, automating production and fighting enemies. Use your imagination to design your factory, combine simple elements into ingenious structures, apply management skills to keep it working and finally protect it from the creatures who don't really like you.

[1] https://www.factorio.com/

"This game is like crack for programmers." -kentonv [2]

[2] https://news.ycombinator.com/item?id=11266471

adanto6840 2 days ago 3 replies      
(Disclaimer: Developer on the project)

We recently released SimAirport via Steam's Early Access program. The game was initially riddled with bugs (probably released about two weeks too early), but after at least daily patching over the last two weeks it's finally yielding pretty solid gameplay.

If you enjoyed the old Bullfrog games, the Roller Coaster Tycoon series, or Prison Architect in modern times (a huge inspiration), then you'd probably enjoy SimAirport too.

We're in Early Access, so go easy on us! You'll still hit bugs for sure, but we [hopefully] don't have any major game-stoppers at this point. There's no tutorial, so some experience with similar games is helpful, but we've got a lot of players with 20, 30, and upwards of 40 hours in just the short <2 weeks since we initially released.


Spooky23 3 days ago 1 reply      
Railroad Tycoon was amazing because as a kid I had access to higher quality, more timely metrics of my simulated company than many real life companies do.
suresk 3 days ago 2 replies      
Interesting story - love reading more of the stories behind some of the games I loved growing up, especially the people behind them.

Railroad Tycoon II is still one of my favorite games - the economic simulation side of it was a lot of fun, and you could sort of decide how much of it you wanted to bite off. I haven't really found anything quite like it since.

In some ways, it is kind of a shame - tablets/phones would make great platforms for economic sim games, but every single one I've tried has been disappointing for the same reason - the mechanism they use to funnel you into buying things makes the game really un-fun and repetitive very quickly. I wouldn't mind paying $10-$20 for a good economic sim that didn't act like this, but I guess not enough other people would.

1123581321 2 days ago 0 replies      
Soren Johnson's Designer Notes podcast[1] mentioned in the article is excellent and I've learned quite a lot from the long interviews, especially Bruce Shelley's, Louis Castle's and Amy Hennig's. Currently he's releasing his interview with Sid Meier, which looks like it'll end up being 5-6 hours in length and is great so far. I really enjoy when in-depth discussions of niche topics give me ideas about other things and that's happened many times while listening to this show.

Those interested in the market/economic simulations in Railroad Tycoon might enjoy playing Soren's game, Offworld Trading Company[2]. I'll just mention that entire games rarely take more than 30 minutes, they are almost always interesting ones, and it is fun without triggering compulsive play (for me.)

[1] https://www.idlethumbs.net/designernotes

[2] http://www.offworldgame.com

keyle 3 days ago 2 replies      
My favourite Sid's game of all time is still the original Colonization. So much depth, replayability, and overall fun. To me it's always been superior to Civ because it's a clear road to independence.
dmazin 3 days ago 1 reply      
By the way, filfre is probably the highest quality VG and possibly computer history writer I've read. His archives are gold. He mostly focuses on interactive storytelling, but that was most of gaming in the early days of computing, so his histories amount to histories of computing.
clock_tower 3 days ago 3 replies      
If you're looking to learn the basics of the stock market, play Railroad Tycoon 2 -- I can't speak for the original (didn't play it), but the sequel at least can teach you quite a bit.
Spakman 2 days ago 4 replies      
Did anyone else play A-Train (looks like it was actually A-train III)? I loved it but have never found anyone else who has even heard of it.


forgotpwtomain 2 days ago 1 reply      
I'm astounded by the amount of content on this site: http://www.filfre.net/sitemap/
dddw 3 days ago 4 replies      
it's great, and lovingly being recreated: http://www.openttd.org/
flippmoke 2 days ago 0 replies      
If there was ever a game that encouraged me to become a developer during my youth, it was definitely Railroad Tycoon. I remember being so curious about how everything might have worked within the game without knowing a thing about programming, and it was definitely an inspiration to me in so many aspects. Games were my avenue for learning DOS and later for learning to write software. I can't imagine what my life would have been like without so many great early game developers.
hartror 2 days ago 1 reply      
We build planning and scheduling software for railroads and I continually think back to playing this game as a kid. The concerns are unsurprisingly similar but the level of detail you go to in managing a real railroad is obviously far greater.

For example we delivered a piece of software to manage where to park trains during storms so they don't blow over.

http://biarrirail.com/ btw.

llcoolv 2 days ago 0 replies      
A really sweet one is Transport Fever. It is, however, more of an electronic model railway than a game - there is no competition, the scenarios (pretty much a tutorial, actually) are very straightforward, and it is difficult to go under. The beautiful part is modelling the transport network in detail, though I was mostly fascinated by the city growth and development.
fsiefken 2 days ago 1 reply      
A great board game with a train theme is Steam. It's available for iOS and Android, and next week for OSX, Windows and Linux as well. The good thing is that you can play it as a board game as well, competing with others face to face over who can lay the tracks and transport goods most efficiently. http://store.steampowered.com/app/595930/
robohamburger 3 days ago 0 replies      
If you find this interesting, the interview with him at Idle Thumbs is really good: https://www.idlethumbs.net/designernotes/episodes/sid-meier-... (looks like it's one of the sources for this post). I think the portion related to Railroad Tycoon is in the second part, but it's worth listening to all of it.
bbarn 3 days ago 0 replies      
All this love of microprose games and no mention of Silent Service? This was the first "choose your own path" kind of game I ever really got into.
sevensor 3 days ago 0 replies      
Oh, Railroad Tycoon! The first game that really hooked me. I still remember the feeling of utter betrayal when I first tried cutthroat competition and lost a station in a rate war. Then the absolute glee of stealing a valuable station from another line on my next play through. I could go on for hours, but instead I'm going to fire up DosBox now.
frik 2 days ago 2 replies      
In addition to RT2, I have fond memories of Industry Giant 2. Does anyone remember this 2002 game?

What are Bruce Shelley (RT, Age of Empires), Rick Goodman (Age of Empires, Empire Earth), Chris Sawyer (Transport Tycoon, RollerCoaster Tycoon, etc.) and Geoff Crammond (The Sentinel, Grand Prix 1-4) doing lately?

flashmob 2 days ago 0 replies      
Building railways is kind of like visual programming.

Railway-oriented programming comes to mind: http://fsharpforfunandprofit.com/posts/recipe-part2/
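For anyone who hasn't read that link: the "railway" idea is chaining steps that each either stay on the success track or switch to a failure track that skips everything downstream. A minimal sketch (illustrative Python, not the F# from the article):

```python
# A toy "railway-oriented" pipeline: each step returns ("ok", value) or
# ("error", message); bind() short-circuits once we're on the failure track.

def bind(result, step):
    tag, payload = result
    return step(payload) if tag == "ok" else result

def parse_int(s):
    try:
        return ("ok", int(s))
    except ValueError:
        return ("error", "not a number: " + repr(s))

def must_be_positive(n):
    return ("ok", n) if n > 0 else ("error", str(n) + " is not positive")

def run(raw):
    result = ("ok", raw)
    for step in (parse_int, must_be_positive):
        result = bind(result, step)
    return result

print(run("42"))    # ('ok', 42)
print(run("oops"))  # failure track: parse fails, validation never runs
print(run("-3"))
```

The visual-programming analogy holds up: `bind` is the track switch, and each function is a piece of two-track railway you snap together.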

cbanek 3 days ago 0 replies      
Absolutely love this game. I found a dosbox image that runs on mac, and still guiltily play it on occasion. :)
scribu 2 days ago 0 replies      
For anyone interested in a minimalist track laying game, I've had a lot of fun with Mini Metro. It's available on both iOS and Android.

The goal is to transport as many passengers as possible, so it has a little competitiveness in it.

fransporter 2 days ago 1 reply      
Someone please please please make a modern refresh of Transport Tycoon. I still love that game but would love to see it brought up to date in the style of Cities in Motion, and with more depth added.
loydb 2 days ago 0 replies      
My early 20s were heavily Microprose. Silent Service, Red Storm Rising, various flight sims, and Railroad Tycoon consumed huge blocks of time. This article was a great look back.
steven_pack 3 days ago 1 reply      
I don't know what version I had, but this game was my introduction to Jazz.
Unbeliever69 2 days ago 0 replies      
I don't know if it is just the MIDI, but the intro tune reminds me of the intro music to M.U.L.E., one of my favorite games of all time!
based2 3 days ago 0 replies      
Très Grande Vitesse
Two Executives to Leave Uber, Adding to Departures nytimes.com
341 points by flyingramen  22 hours ago   224 comments top 12
alistproducer2 14 hours ago 13 replies      
All the speculation around the turmoil seems to be missing one key component. I think a bigger driving force than the sexual harassment and CEO antics is the Waymo lawsuit. Uber itself has acknowledged that without self-driving capabilities the company will most likely not find profitability. From the reporting I've read, it appears the Waymo lawsuit has a high probability of success.

If the Waymo lawsuit essentially forces Uber to restart its self-driving program from scratch, then the path to self-driving viability has essentially been scrapped. To me, this is a better explanation of the recent spate of high-level exits from the company.

alphonsegaston 20 hours ago 3 replies      
On the story about Jeff Jones, someone commented that Kalanick has a stock arrangement that gives him control of the board. Does anyone know if this is true? I can't see how they can let this hemorrhaging continue when the problem is obviously Kalanick's leadership. But it shows no signs of stopping.

The only other thing I could think of is that the board is complicit in something worse than what's been exposed, and is afraid Kalanick will blow things up if they move on him.

EDIT: BBC is now reporting that two separate internal sources at Uber say Kalanick will step down when a new COO is in place.


jpatokal 15 hours ago 0 replies      
In case the name doesn't ring a bell, Brian was one of the creators of what became Google Earth and spent over 10 years at Google before jumping ship to Uber in 2015.


ziszis 21 hours ago 2 replies      
Now every departure is viewed with suspicion. With a 10,000-person company there are going to be exec departures, but even normal departures will have rumors attached to them (why did they really leave? Isn't the timing odd? Was something about to come out?)

Every senior person leaving Uber now has to manage their image as they leave. And every journalist is digging to see if there is any dirt.

rattray 20 hours ago 2 replies      
Doesn't seem like much of a real story:

> Mr. McClendon is departing amicably from Uber and will be an adviser to the company. ... His exit has been in the works for some time

tyingq 13 hours ago 0 replies      
If you're getting the paywall, as I did, another story on the same topic: http://www.businessinsider.com/uber-vp-of-maps-brian-mcclend...

Includes a list of other recent Uber departures.

umeshunni 21 hours ago 2 replies      
"In a statement, he said he was moving back to Kansas, where he is from, to explore politics. His exit has been in the works for some time, and his last day at Uber is March 28."
omarforgotpwd 20 hours ago 1 reply      
Feels like Uber's next fundraising event may be a down round...
DigitalSea 14 hours ago 1 reply      
Travis is destroying Uber, and his ego is so big he won't step down, which would somewhat save Uber's image. What a trainwreck. I deleted the app, and quite a few of my friends did as well. What a disgusting company; the way it's being run is just gross negligence.
jlebrech 14 hours ago 0 replies      
there should be a ride-sharing app that gets its money from the city's congestion budget and rewards those who pick each other up.
brightball 11 hours ago 0 replies      
It's because they switched to MySQL isn't it?
jbverschoor 18 hours ago 2 replies      
Berkshire Hathaway of the Internet medium.com
420 points by pknerd  1 day ago   111 comments top 21
nrao123 1 day ago 4 replies      
This is what I wrote to a friend in an email on this post:

A few challenges with the trying to replicate the Berkshire model on the internet:

##Lower cost of Capital (Float): One of the unsaid rules of BK is that their cost of capital is cheaper than most operating companies' because of their insurance float (Ajit Jain, General Re, GEICO etc). So, they can take an existing business, assume absolutely no changes to its operations, & still make a better ROI because their capital structure is more efficient. It is a different matter that they take portfolio companies like BNSF & use float to expand their capital projects (this is like making a private investment vs a public investment). From the BK 2015 Letter on how capex is linked to float:

After a poor performance in 2014, our BNSF railroad dramatically improved its service to customers last year. To attain that result, we invested about $5.8 billion during the year in capital expenditures, a sum far and away the record for any American railroad and nearly three times our annual depreciation charge. It was money well spent. BNSF is the largest of our Powerhouse Five, a group that also includes Berkshire Hathaway Energy, Marmon, Lubrizol and IMC. Combined, these companies - our five most profitable non-insurance businesses - earned $13.1 billion in 2015, an increase of $650 million over 2014.

##Bullet proof Revenue (Moat): One of the operating assumptions behind BK's acquisition is that the business does well regardless of management. Therefore, if the revenue assumptions hold (i.e. this is the moat part of the strategy)- then the float kicks in & does magic.

However, the challenge with using a BK model in tech - i.e. acquiring companies & being hands-off - is that without PRODUCT ENERGY the products atrophy & revenue goes. The reason BK hates tech is that moats there are hard to sustain & it is too competitive.

I am looking at Tiny's portfolio: http://www.tiny.website/ & I am hard pressed to think of any product that has moat. I would suspect that once the founder leaves in almost all of these cases, the products will atrophy.
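The float advantage in point 1 above can be made concrete with a toy calculation (numbers entirely hypothetical, for illustration only):

```python
# Toy illustration of why insurance float lowers Berkshire's cost of capital.
# Float is policyholder money held until claims are paid; if underwriting
# breaks even, it is effectively zero-cost leverage on top of equity.

equity = 100.0            # own capital, in $M
insurance_float = 50.0    # premiums held pending claims, in $M
return_on_assets = 0.08   # same 8% earned on every invested dollar

roe_without_float = return_on_assets
roe_with_float = return_on_assets * (equity + insurance_float) / equity

print(f"ROE without float: {roe_without_float:.1%}")      # 8.0%
print(f"ROE with zero-cost float: {roe_with_float:.1%}")  # 12.0%
```

Same operating business, same asset returns; the only difference is that float lets more dollars work per dollar of equity. That is the "magic" a would-be internet Berkshire would need to replicate.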

tptacek 1 day ago 1 reply      
Lots of people here pointing out that BRK's buying strategy isn't what distinguishes their business so much as their access to insurance float. That's a good point, probably the most important one.

Also worth looking at is the relative contribution of these kinds of acquisitions to BRK's overall performance. Simply put, it's not Borsheim's and See's Candy that give BRK.A a $425B market cap. There's an enormous portfolio of high-ticket investments --- many of them in gigantic public companies --- that dwarf these mom-and-pop acquisitions. Even among the wholly owned subsidiaries, and even in the smaller "Manufacturing and Retail" segment, contributions are dominated by very large companies like Clayton Homes.

Noted below in the thread, the "powerhouse five" of Berkshire's portfolio: BNSF Railroads, IMC (one of the largest metalworking companies in the world), Lubrizol, Berkshire Energy, and Marmon (one of the largest home builders in the US). I doubt very much the acquisition of any of these companies followed the casual pattern listed in the post. Some of them weren't even acquired all at once.

Finally: having twice been involved in the sale of a company I was a principal at, the acquisition process was pretty unlike the breakdown this person provided. It was somewhere in between what he reports Buffett's process to be and what he claims the normal process is. The actual decision --- getting to an MOU with a notional price --- was reasonably informal and didn't take up much time. Certainly nobody did a road show. The rest of the process was legal, and I very much doubt Berkshire avoids much of that work either.

tim333 1 day ago 5 replies      
Buffett's own writing on selling him your business is quite interesting. "Appendix B" at the end of the 1990 letter http://www.berkshirehathaway.com/letters/1990.html
crdb 1 day ago 5 replies      
The TL;DR: if you skip the admin and due diligence of a deal, you will close more deals. For example: "I couldn't understand why buyers felt they needed to dig into accounting minutiae".

However, to paraphrase something I read from a sailor, "every regulation in the Navy is written in the blood of your predecessors". If you do 10 deals, one might wipe out any upside from the other 9, even if you got more upside by negotiating a better valuation, and having a faster turnaround and lower cost of doing business. And of course, if a buyer got the reputation of not doing due diligence, it would attract all the problematic companies.

There is a reason behind every "annoying" stage, and those reasons collectively explain why the entire industry is made of buyers that follow those stages.

Since he runs 10 businesses, it is probable that the author has other ways of doing the work. He will keep track of most of the promising names in his industry, he will have a network of insiders to keep him apprised of the real story in those companies (although never officially), and he will know when the timing is ripe. His good relationship with his existing bankers means financing will be quick and standardised already, his back office experienced and streamlined. His experience tells him how you can play with the numbers to present a rosier picture than reality, as well as how much to offer to get a founder to come to a quick decision. Unless he decides to go buy palm oil crushing plants in Indonesia, it might work out quite well.

zbruhnke 1 day ago 1 reply      
Am I the only one here looking at Tiny's site and wondering when they acquired buffer?

I didn't know it was sold and can't seem to find an announcement anywhere?

amix 1 day ago 1 reply      
I doubt Berkshire Hathaway is successful because of their processes. They are successful because Buffett and Munger are geniuses in their respective fields, and to build the internet version of Berkshire Hathaway you would probably need to copy the wisdom and not merely the processes.

There's an interesting video about how Magnus Carlsen makes his chess moves. He makes the next move in seconds and then considers whether it's the right move, ref: https://www.youtube.com/watch?v=PZFS0kewLRQ -- And maybe Buffett does a similar thing: he can look at balance sheets and quickly decide if a business is worth buying or not.

mooreds 1 day ago 2 replies      
Where's the float?

One of the reasons BH has an advantage is their insurance business, which throws off large amounts of investible capital. He talks about how special and important the insurance businesses are in his annual letters. They make it easier for Buffet to be patient or to handle bad deals.

Does Tiny have access to this, or something like it? Maybe existing SaaS products that have very high gross margins?

Animats 1 day ago 0 replies      
Berkshire Hathaway buys successful operating companies, preferably with a long history. They look at GAAP earnings, assets, and debt. Evaluating such companies is straightforward. This simplifies acquisitions.

They rarely buy "growth companies". Those are speculative investments.

jacques_chester 1 day ago 0 replies      
Berkshire Hathaway derives no small part of its advantage from extremely cheap internal financing (insurance float) and a very small leadership organisation (Buffett and Munger).

I don't think this yet qualifies as the Buffett of Baud. But I admire the excellent marketing.

apapli 1 day ago 1 reply      
It sounds great, but I'm not sure the space needs to be "disrupted".

I get it that there are a number of steps needed to buy a business, but this isn't exactly buying a chocolate bar from the supermarket where all we need to check is the "best before" date has not passed.

Proper, thorough, due diligence takes time, and knowing how much you are paying for a business versus how much you are handing over for goodwill (which is far from guaranteed) is hard, let alone being able to understand cultural fit, internal politics, market forces and the myriad of other internal and external factors that all influence a business on a daily basis.

Warren Buffett apparently does it quickly (I didn't check beyond the claim in this article), but he has a talent for it, and I would guess a fairly large research team that is fully prepared and informed before they approach a prospective target.

Comparing Warren Buffett's approach to this service comes off to me like a very well-intentioned but frankly naive initiative.

Of course, if the author is planning to buy businesses at a massive discount to what they are truly worth then he indeed can probably skip some of the diligence - but that's a pretty poor outcome for the seller.

bluetwo 1 day ago 1 reply      
I think every time someone uses a comparison to Warren Buffett or Berkshire Hathaway to market their own product, they should pay WB a commission.

That said, it should be noted that WB doesn't like tech companies and invests in very few. He likes businesses that have steady cash flow, containable costs, and reliable yield/growth. This isn't tech.

markgavalda 1 day ago 0 replies      
This is awesome! I can imagine these kind of buyouts becoming really popular in the future. The usual process (detailed in the post) is awful and has to be disrupted.
phkahler 1 day ago 0 replies      
Finally the reason for the different approach is covered in the P.S. at the end:

>> They are fiduciaries managing money for huge institutional investors.

Most of the due diligence is the intermediary covering their ass. In fact, a huge institutional investor is also an intermediary for a bunch of other people and they like to cover their ass. It's all about minimizing risk of getting sued by someone.

bradgessler 1 day ago 0 replies      
It's brilliant. Andrew wrote the post in a way that makes it feel so obvious, even though it's not. Reading it felt analogous to some of the early 2000s YC writings and coverage.
zekkius 1 day ago 1 reply      
This post wholly ignores why Berkshire Hathaway is successful. The company is not successful because they make selling companies to them easy. They are successful because they buy good, cash rich businesses and find excellent management to run them.
louprado 1 day ago 1 reply      
> he has amassed a collection of over 65 wholly owned companies

Just a side note: the above likely referenced this Wikipedia page[1], but it misses that MiTek, which is listed on that page, itself represents holdings of another 20 companies that aren't on that page. I didn't research whether other BRK companies are also holding companies.

I currently consult for two companies, a PaaS company and a CRM/SaaS company. Both were recently purchased by MiTek and focus on the Builder Industry. They both continue to operate independently as do the other companies held by MiTek. This follows the traditional BRK business approach to keep the existing management and corporate culture in place of the acquired company.

Prior to that acquisition I thought the BRK team was not tech savvy. I can tell you first hand (at least when it comes to the MiTek team) that is not the case.


Exuma 1 day ago 0 replies      
As someone currently selling their internet company for several million... I could definitely appreciate a BH type buyer.
akhatri_aus 1 day ago 0 replies      
I immediately thought of Naspers. Not a single mention of it still.
rimliu 1 day ago 0 replies      
The best thing about Berkshire Hathaway in the context of internet is their website. I've given it as example numerous times.
quanticle 1 day ago 4 replies      
Are the emoji on every list item really necessary? I find that they don't add any meaning to the post, and they're just distracting.
tardo99 1 day ago 0 replies      
Intels first Optane SSD: 375GB that you can also use as RAM arstechnica.com
314 points by xbmcuser  1 day ago   134 comments top 19
jpalomaki 1 day ago 3 replies      
Somebody linked an "Intel to mislead press on Xpoint next week" article from SemiAccurate to another thread on the same topic. Interesting read and adds some context to the announcement. http://semiaccurate.com/2017/03/10/intel-mislead-press-xpoin...
ccleve 1 day ago 19 replies      
This is a big deal.

We've always made a distinction between memory and disk. Much of computer science is about algorithms that recognize the difference between slow, persistent disk and fast, volatile RAM and account for it somehow.

What happens if the distinction goes away? What if all data is persistent? What if we can perform calculations directly on persisted data without pulling it into RAM first?

My guess is that we'll start writing software very differently. It's hard for me to predict how, though.

gjm11 1 day ago 6 replies      
Originally Intel claimed that this new technology would offer 1000x shorter latencies and 1000x better endurance than NAND flash, and 10x better density than DRAM. The figures they're quoting now are more like 10x shorter latencies and 3x better endurance (compared with flash), and 2.5x better density (compared with RAM).

The article linked here says "3D XPoint has about one thousandth the latency of NAND flash" but I don't see any actual evidence for that. The paragraph that says it is followed by a link to actual specs for a "3D XPoint" device, saying: "the Intel flash SSD has a 20-microsecond latency for any read or write operation, whereas the 3D XPoint drive cuts this to below 10 microseconds." which sounds to me more like a 2x latency improvement than a 1000x improvement.

So I ask the following extremely cynical question. Is there any evidence available to us that's inconsistent with the hypothesis that actually there is no genuinely new technology in Optane? In other words, have they demonstrated anything that couldn't be achieved by taking existing flash technology and, say, adding some redundancy and a lot more DRAM cache to it?

[EDITED to add:] I am hoping the answer to my question is yes: I'd love to see genuine technological progress in this area. And it genuinely is a question, not an accusation; I have no sort of inside knowledge here.

sologoub 1 day ago 2 replies      
At a previous employer, we built a system using Druid as the primary store of reporting data. The setup worked amazingly well with the size/cardinality of the data we had, but was constantly bottlenecked on paging segments in and out of RAM. Economically, we just couldn't justify a system with enough RAM to hold the primary dataset. As a result, we had to prioritize data aggressively, focusing on the more recent transactions and locating them on the few servers with very high RAM that we did have. Historic data segments had to go through a lot of paging in/out of RAM. User experience on YTD (year-to-date) or YOY (year-over-year) reports really suffered as a result.

I don't have access to the original planning calculations anymore, but 375GB at $1520 would definitely have been a game changer in terms of performance/$, and I suspect would have been good enough to make the end user feel like the entire dataset was in memory.

ChuckMcM 1 day ago 1 reply      
Yay! Nice to see these things becoming more real. The choice of U.2 is interesting, it might force wider adoption of that form factor.

This is definitely going to change the way you build your computational fabric. Putting that much data that close to the CPU (closeness here measured in nS) makes for some really interesting database and distributed data applications. For example, a common limitation in MMO infrastructure is managing everyone getting loot and changing state (appearance, stats, etc). The faster you can do those updates consistently the more people you can have in a (virtual) room at the same time.

piinbinary 1 day ago 5 replies      
With drives like this, the approach of "throw more hardware at it" continues working for databases to the point where most database loads in the world can be handled on a single machine.
xt00 1 day ago 1 reply      
The smart thing Intel is doing is making stuff that they know big cloud providers like AWS will pay crazy amounts for and buy in huge volumes. The "use it as RAM" part is incredibly valuable - especially for bringing down costs for databases. For example, running a database with an allocated 32GB of RAM is pretty expensive per month. And if somebody like AWS sold a cheaper DB instance that ran from this drive as its memory (or was smartly paged), that could bring down the cost of allocating huge databases to memory, with a performance hit that many people would be willing to take to save the money.
olavgg 1 day ago 1 reply      
I wonder what the performance of these is with PostgreSQL's pg_test_fsync, which is one of the proper tools to benchmark an SSD. I get 4000 iops with my Intel 320 SSD and 9000 iops with an Intel S3700. For comparison, the Intel 600p maxes out around 1500 iops, and the Samsung 750 Evo at 400 iops.
0x0 1 day ago 0 replies      
What's the endurance like if you actually use it as ram? How many times can you do a "label1: inc [rax]; jmp label1" loop having rax pointing to a particular byte address on the SSD? (With GHz CPUs, wouldn't that mean giga-writes per second? Isn't NAND rated for 10k-100k writes total, and if this is rated for 1000x more than that, you'd still hit a 10m-100m total writes in like a second?)
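Back-of-the-envelope numbers for the loop above (endurance figures are the rough public claims, not datasheet values):

```python
write_rate_hz = 1e9                 # ~1 GHz loop hammering one byte address
nand_endurance = 100_000            # optimistic NAND program/erase cycles
xpoint_endurance = nand_endurance * 1000   # the "1000x" marketing claim

nand_lifetime_s = nand_endurance / write_rate_hz      # 1e-4 s
xpoint_lifetime_s = xpoint_endurance / write_rate_hz  # 0.1 s

print(f"NAND cell worn out after   {nand_lifetime_s * 1e6:.0f} microseconds")
print(f"XPoint cell worn out after {xpoint_lifetime_s * 1e3:.0f} milliseconds")
```

Either way the naively-hammered cell dies in well under a second, which is the commenter's point: using this as plain memory-mapped RAM only works if a controller (or cache hierarchy) absorbs and spreads those writes.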
Animats 19 hours ago 2 replies      
The big limitation of this device is wear. "Optane SSDs can safely be written 30 times per day", says the article. That implies a need for wear monitoring and leveling. Although you can modify one byte at a time, the need to monitor wear implies that just memory-mapping the thing isn't going to work.

Wear management could be moved to hardware, though, using a special MMU/wear monitor/remapping device. If you're using this thing as a level of memory below DRAM, viewing DRAM as another level of cache, something like that would be necessary. That's one application.

This device would make a good key/value store. MongoDB in hardware. Then you don't care where the data is physically placed, and it can be moved for wear management.
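The remap-in-hardware idea is the same trick flash translation layers already do in firmware; a toy sketch (hypothetical thresholds and policy, not any real controller):

```python
# Toy wear-leveling remapper: logical addresses map to physical blocks;
# once a block's write count hits a threshold, the mapping moves to the
# least-worn free block (the data migration itself is omitted for brevity).

class WearLeveler:
    def __init__(self, n_physical, threshold=100):
        self.threshold = threshold
        self.writes = [0] * n_physical   # per-block wear counters
        self.mapping = {}                # logical -> physical block
        self.free = set(range(n_physical))

    def write(self, logical):
        if logical not in self.mapping:
            self.mapping[logical] = self._take_least_worn()
        phys = self.mapping[logical]
        self.writes[phys] += 1
        if self.writes[phys] >= self.threshold:
            self.free.add(phys)                        # retire the hot block
            self.mapping[logical] = self._take_least_worn()
        return self.mapping[logical]

    def _take_least_worn(self):
        block = min(self.free, key=lambda p: self.writes[p])
        self.free.remove(block)
        return block

wl = WearLeveler(4, threshold=10)
placements = [wl.write(0) for _ in range(10)]
print(placements[0], "->", placements[-1])  # hot address remapped at threshold
```

A hardware MMU-like device doing this transparently is exactly what would let the OS treat the device as another memory tier while the wear accounting stays invisible.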

SergeAx 1 day ago 0 replies      
I wonder why no one has mentioned applying this technology in mobile phones. The most obvious case: it would be possible to bring the entire system out of hibernation while the user is pulling the phone from their pocket and pressing the wake button. The power and computing cost of hibernating/restoring the system would be only slightly north of zero, which would lead to a dramatic increase in battery life.

Not to mention the form factor and lower power consumption of the chip itself.

floatboth 1 day ago 2 replies      
Meanwhile, the Raspberry Pi is using a garbage microSD controller that corrupts cards; the divide between cheap and high-end stuff is just mind-blowing these days.

Also, normal (NAND) NVMe M.2 SSDs are still TWICE as expensive as good old SATA ones, at least in my country. And they want to push Optane into the consumer space later this year - who even needs that much performance at home?

faragon 1 day ago 1 reply      
So finally we're going to get from Intel what HP promised years ago as "memristors".
aikorevs 21 hours ago 0 replies      
I tried to look up modern memory latency numbers but could not find any, because the "numbers every programmer should know" are about 10 years old, as I understand it.
gbrown_ 1 day ago 1 reply      
It feels like Intel are putting this out in lieu of actual hardware. As always remain skeptical until real silicon appears on the market.
nimos 1 day ago 0 replies      
Seems like this would be ideal in cases where you are waiting for file sync in multiple locations which I assume a lot of banks/corporations do.

Interesting it seems to be marketed as cheaper memory. You'd think at first they'd try and rip super high margins out of banks/corps by selling it as "persistent" memory.

Although I guess if you're waiting for file writes in multiple locations, the network overhead makes the actual write sort of irrelevant.

mrfusion 1 day ago 0 replies      
Is this based on memristors? That's pretty amazing. I thought hp owned that.
dgudkov 1 day ago 1 reply      
Ramdisks are back. This time persistent.
frozenport 1 day ago 1 reply      
I would opt for a more conventional solution, you can get 256GB of ram for under $2k, and then enable write caching.
Building a Hackintosh Pro dancounsell.com
347 points by milen  2 days ago   258 comments top 40
jwr 2 days ago 6 replies      
I used to use a Hackintosh. It was a time-consuming thing, I feared OS updates, and I could never be sure about the reliability (you can get really weird problems sometimes). I then switched over to a Mac Pro (cheese grater) and was very happy with it. Unfortunately, since the trash can Mac Pro I'm left out in the cold, so I'm considering running a Hackintosh again...

Unfortunately, the information is still very fragmented and it takes a lot of time and effort to get one running. Actually building the thing is the easiest part, it's the booting, installation and OS patches/fixes that eat up time.

doctorpangloss 2 days ago 4 replies      
As a Hackintosh user, I'd recommend an NCASE M1 v5 or the Shuttle SZ170R8 for a significantly better chassis.

With the NCASE, you can get a LGA 2011-v3 mini-ITX board and get as fast of a computer as you want. Though truthfully, I don't think there's much of a point.

Likewise, an old NVIDIA card for a Hackintosh? Also not much of a point. The web drivers are so bad. Who knows when Apple will ship another NVIDIA chip in a Mac?

If you need "CUDA stuff," use Linux or Windows. Software like Octane is so buggy and suffers from worse performance on Mac anyway. Final Cut and After Effects both support OpenCL acceleration. Besides, the RX 480 is $189.

If you're doing "VR stuff," well pity on you if you're developing for an Oculus on a Mac. The Vive doesn't, and probably never will, support Mac. Whatever VR solution Apple releases will obviously run great on the video cards they support, so that again strongly points to purchasing an AMD card at this time over your alternatives.

With regards to this specific build, a high DPI display will greatly improve the enjoyment of this computer. The Dell P2715Q is the best deal. Mac OS has such good HiDPI support compared to Windows (and especially Linux). Enjoy the features you pay for!

Truthfully, I'm hard pressed to see the point of a Hackintosh, and I own one.

jrnichols 2 days ago 2 replies      
I'm in the same boat this author is. It's disappointing to see the desktop market lagging so far behind. For all of the 68k/PPC years, we could at least say "different architecture" and Intel hadn't caught up. Now we're all Intel, and it's Apple that isn't keeping up. It's frustrating, and I'm not necessarily a "Pro user" anymore at all. I have a current generation Mac Mini, and it's over 2 years old now. What I can buy from the Apple Store right now for the same price is what I got 2 years ago.

I wish I knew what Apple was thinking. One would have thought that the "iPhone halo effect" was something they would have wanted to give momentum to. Instead people are looking at Windows units again.

ak217 2 days ago 4 replies      
Just to provide a counterpoint, I recently did a new desktop build and installed Windows 10. It's not bad, and very different from the Windows 10 beta that I gave up on in frustration two years ago. With the Ubuntu subsystem I can do useful work right away. After turning off some of the annoyances (via the Services and Group Policy control panels), it really does a decent job of just working. You can download and install with a USB stick (no more stupid DVDs). It still demands a license key, but runs indefinitely without registration with a little watermark.

If you haven't built a desktop in the past few years, the performance boost from PCIe NVMe SSDs is great, and Intel i5-7600K (now retails for $200) can run at 4.5 GHz reliably and stay cool. I'm impressed.

binaryapparatus 2 days ago 2 replies      
This is both informative and interesting from other angle. If Dan Counsell, well known owner of Realmac Software, has to build hackintosh then wtf is Apple doing?
coned88 2 days ago 2 replies      
My concern with these Hackintosh systems is safety. Tools like UniBeast, Clover, and whatever else manipulate the macOS install image. It all seems to work, but then you type your bank or work credentials into a browser, and who knows if the OS is compromised.

Is it safe? That is the question.

nottorp 2 days ago 3 replies      
Problem is, Apple doesn't have anything for power users or developers right now. I'm currently hackintoshing, but with every "dumb it down" decision (like removing the battery life estimate), hardware mishap (LG monitors not working near wifi, come on) or new hardware announcement that still doesn't serve my demographic, I wonder if I shouldn't just switch back to Linux.

I used to say that their laptops are fine and it's only the desktop where I need to hackintosh, but ... a touch-typing-hostile emoji keyboard?

tangue 2 days ago 1 reply      
I started to build a Hackintosh using this kind of setup but I realized that I no longer love the OS (I already own a Macbook) and that there's no reason for such a hassle.
thyselius 2 days ago 1 reply      
I built a crazy fast Hackintosh using the Intel 8-core 5960X CPU. 17000 on Cinebench. However, I sold it a week ago.

Being an iOS developer on it really sucked, as I needed to upgrade OS X for Xcode but the CPU wasn't supported by Sierra for 6 months.

Also, I spent at least 2 weeks of work on it during the year I had it. So not worth it. But there are slower CPUs that are better supported. It was fun on the days it worked, though :)

kilroy123 2 days ago 1 reply      
While this PC looks great visually and spec-wise, isn't this missing the point?

It seems like a company needs to build a _very_ good Linux distro with design-first principles. It needs to work on a number of devices. More importantly, it needs to be a paid OS. It can have an open source distro underneath, but the UI needs to be created by people who are paid well.

elmigranto 2 days ago 4 replies      
I'm wondering whether it would be easier to run macOS in a VM. No more fear of updates thanks to snapshots, and I imagine easier installation and fewer compatibility issues.

Has anyone done it? How's the performance and keyboard "tunneling" (CMD vs CTRL)?

kuon 2 days ago 1 reply      
I have been using a similar (a bit lower specs) hackintosh for 2 years.

About two weeks ago, I decided to stop and look elsewhere.

I started a small series on my experience if you are interested: https://medium.com/the-missing-bit/leaving-macos-part-1-moti...

I am still testing my current setup, but I guess I'll soon publish the last bit, with the setup I found and my conclusions on the switch.

haltandsleep 2 days ago 0 replies      
I bought my Mac the first month they appeared in 1984. I won't bother you with the ensuing history, but I will raise a question related to it:

I remember when Apple could not offer a retail OS, not without cannibalizing hardware sales. If Pro desktops fade, is that still true for that segment? Or could they offer something Xeon only (to keep out commodity laptops) as a legal Hackintosh? Or do certified configurations a la Oculus? If their profit is in mobile, and cloud services, it might help more than hurt.

FWIW I like Debian now, and the non-intrusive UI.

atemerev 2 days ago 0 replies      
I have a Ubuntu desktop (i7 6700K, nVidia GTX 1080, like everybody else's), and a recent MacBook Pro.

Each time I open the latter, I have a mixed feeling of "how beautiful everything is!" and "how slow everything is!"

(I am a Scala software engineer occasionally working with GPU-based machine learning)

poyu 2 days ago 1 reply      
I've been a Hackintosh user for almost 10 years (since 10.6!), though I do have a MacBook Pro with me when I'm out.

As other people have stated, yes, it is very time consuming to get it working. Treat it as a hobby; you'll understand things about the Mac (or computers in general) that other people don't, such as DSDT patches, how drivers are loaded, Mac power management, etc.

If you use Clover and get all the patches right, you can get an almost update-proof setup (except when you go from, say, 10.11 to 10.12). But even in the worst case, people on the Internet will usually figure it out fast enough for you to apply the new patches. Minor updates are really, really easy and fine. I always click Update without batting an eye.

It's a tinkerer's hobby. If you like doing research and are fine with spending time figuring out stuff on the Internet, I'd say go for it and try it out! The process is fun and the result is very rewarding.

bluedino 2 days ago 2 replies      
>> Maybe Apple have been waiting for the recently released Ryzen CPUs from AMD?

You can't even run the stock kernel on AMD chips. How much QA and other work Apple would have to do, I have no idea.

djsumdog 2 days ago 0 replies      
I had a Hackintosh for years. I used the stock/retail CD from my former MacBook, plus some custom kexts and a guide to generate the right plist edit to unlock my NVIDIA card. I never had issues with updates either.

It ran Snow Leopard and I used it for all my development, video editing, photo editing, etc. Eventually I left for Australia and decided to get a real MacBook and unfortunately it had Lion on it.

I hated Lion. Gone was Exposé. Gone were rows of virtual desktops (Mission Control had one row with multiple columns. I hated that shit). There was no way to get the old functionality back. Eventually I started using Linux again in a VM as my primary OS.

Today I'm back on Linux with i3 as my tiling window manager and I don't think I'd ever go back to macos. I think many of their design decisions since Lion and onward have been terrible. I just keep around a Windows laptop or VM for when I need commercial products or to play games.

jlgaddis 1 day ago 2 replies      
I know I could Google and read about some experiences but since this made me think about it... have any of you HN'ers tried to virtualize OS X or run it virtualized (on anything other than an OS X host) on a regular basis?

I've got a (still pretty new) high-end MacBook Pro sitting at the end of my desk but -- after putting together a new, extremely over-built workstation a few months ago -- I haven't even turned it on since I don't know when. I've got KVM/qemu, VMware Workstation, and VirtualBox all installed on my workstation, though, and it might be interesting to try to get OS X running under one of them.

gfiorav 2 days ago 1 reply      
Best Mac I ever had is my Hackintosh. I built a Fusion Drive for it and use Clover instead of Chimera as the boot loader, so I can update across major versions without trouble.

I've never had any issues; I just shop for compatible chipsets. I've had tons of issues with Linux on the desktop in comparison to Hackintosh. I never understood how people can say it's time consuming to do it!

Xeoncross 2 days ago 2 replies      
One day I want the Adobe suite to run on Linux (natively); then I can leave OS X behind, just like Windows. Local media editing is the last reason for Windows/OS X to exist.

Everything else (games, calculations, social networks, model rendering) is better served by a web or mobile app backed by a server running linux.

rallycarre 2 days ago 0 replies      
I custom built my PC before knowing what a hackintosh was, so I didn't purchase my hardware specifically for it. According to the compatibility wiki my config was compatible.

I remember during college wanting to do some iOS development but not being able to afford a Mac. I spent at least 20 hours tweaking kext settings and trying different distros. I finally got it to boot, but it crashed whenever I tried to run the emulator.

I haven't touched hackintosh stuff for several years, but the grief and time wasted make it not worth it. It's a shame Apple is limiting their development tools to macOS. They could learn a thing or two from Microsoft.

mark_elf 2 days ago 0 replies      
There are some comments here as to motivation: why go through all this? I make money on video work, mainly using After Effects. I'm chained to the oars. The choice of intermediate codec (ProRes) is surprisingly important; the other solutions don't cut it for my workflow. And it's not just the codec: lots of things about Windows 10 make it an unprofessional choice. I was planning to build a Windows box anyway, with an i7 6900K, 128GB RAM, ASRock X99 Taichi, GTX 1080, NVMe, all that. When I mapped that onto a hackintosh, it seemed a bit past what is possible, at least from a cursory look at tonymacx86. It's worth a week of work to me, so maybe I'll look harder. Thanks for listening!
mmrr88 2 days ago 0 replies      
I have a Toshiba Satellite Radius 11 L15W-... It's a cheap $300 laptop that runs Windows 10. Changing the OS is not supported and is actually blocked by the manufacturer. I was able to find a few tools to remove some of the countermeasures set in place. It is now an Ubuntu laptop. I would use a Mac for development, but Macs are expensive. I have built 2 PC gaming rigs that were more expensive than the $1500 price tag of a hackintosh. Bottom line is you can get any modern OS working on almost any machine; it just starts to get really hacky, and information can be hard to find.
taksintikk 2 days ago 2 replies      
Wonder why OP didn't use PCIe NVMe drives.

Night and day better than standard ssd on homebrew macs.

redsummer 2 days ago 3 replies      
Apple have made some terrible design decisions since Jobs died.

A 'thin iMac' - whose thin-ness was absolutely pointless.

The MacBook (2 lbs) and MacBook Pro (3 lbs) were designed to hit arbitrary weight targets instead of being designed around features: http://www.apple.com/mac/compare/results/?product1=macbook&p...

And the Mac Pro was possibly the worst shape for upgrades.

Jobs inspired people to come up with great machines, but he also had a pragmatism which seems to have been lost at Apple.

1011_1101 17 hours ago 0 replies      
Unlocked CPU and Z-board but no OC? With that config you should easily be able to run that CPU at 4.6GHz.
flyingfsck 1 day ago 0 replies      
This seems reasonable: http://hackintoshmethod.com
notlisted 2 days ago 0 replies      
I'm not a Hackintosh builder, though I have considered it many times. From the videos I've watched, the big trick, especially if you use Final Cut Pro X with OpenCL, is to avoid NVIDIA and use AMD Radeon video cards. Supposedly this makes the build process a lot easier/less finicky, and even faster. Can anyone confirm?
UpDownLeftRight 2 days ago 0 replies      
Or you can get a Windows 10 machine from a reliable integrator (like Supermicro or ThinkMate) and it will "just work."
lisper 2 days ago 0 replies      
Has anyone here tried running OS X under Parallels Desktop on Linux?


crazy5sheep 1 day ago 1 reply      
A bit off topic: the NVIDIA driver for Mac kind of sucks. The bug that shows a transparent window whenever you try to open an EPUB file with iBooks.app has been there for a very long time, and there's still no fix yet.
adamnemecek 2 days ago 2 replies      
> I've switched off auto-updates in Sierra. While system updates should work just fine, I prefer to hold off until the community over at tonymacx86 have confirmed there are no issues.

Does anyone know why some of the updates can brick the machine? Also how often does this happen? Or like what percentage of updates break things.

softinio 2 days ago 1 reply      
Is the motivation to use a hackintosh over Linux mainly apps you can't get for Linux?
robk 2 days ago 3 replies      
Those cases still look really ugly. They remind me of the look I was going for twenty years ago when I was a teenager, with all the neon. The Macs at least look elegant, like something I'd want in my house.
amelius 2 days ago 3 replies      
Curious: is there a window manager for Linux that behaves much like OSX?
jeffjose 2 days ago 2 replies      
After years of making fun of Linux folks for having to "know everything about the computer" to get things working, Mac folks seem to be embracing the culture these days.
wiradikusuma 2 days ago 0 replies      
Be careful with OS upgrades. I recently upgraded my hackintosh to macOS Sierra and now it can't shut down/restart.
toodlebunions 1 day ago 0 replies      
Eventually you'll likely give in and install windows.
xaduha 2 days ago 0 replies      
If you're not doing VFIO, you're doing it wrong.
finchisko 2 days ago 1 reply      
So you're running pirated Windows too?
The Cult of DD eklitzke.org
267 points by eklitzke  3 days ago   165 comments top 37
cat199 2 days ago 4 replies      
"This is a strange program of obscure provenance that somehow, still manages to survive in the 21st century."

-> links to a Wikipedia page with a direct description of its lineage back to 5th edition Research Unix

"That weird bs=4M argument in the dd version isn't actually doing anything special -- all it's doing is instructing the dd command to use a 4 MB buffer size while copying. But who cares? Why not just let the command figure out the right buffer size automatically?"

Um -

a) it is 'doing the special thing' of changing the block size (not buffer size)

b) Because the command probably doesn't figure out the right size automatically, much like your 'cat' example above which also doesn't

c) And this can mean massive performance differences between invocations

> Another reason to prefer the cat variant is that it lets you actually string together a normal shell pipeline. For instance, if you want progress information with cat you can combine it with the pv command


  dd if=file bs=some-optimal-block-size | rest-of-pipeline

That was hard.

> If you want to create a file of a certain size, you can do so using other standard programs like head. For instance, here are two ways to create a 100 MB file containing all zeroes:

  $ uname -sr
  OpenBSD 6.0
  $ head -c 10MB /dev/zero
  head: unknown option -- c
  usage: head [-count | -n count] [file ...]
well.. guess that wasn't so 'standard' after all.. I must be using some nonstandard version...

  $ man head | sed -ne 47,51p
  HISTORY
       The head utility first appeared in 1BSD.
  AUTHORS
       Bill Joy, August 24, 1977.
  $ sed -ne 4p /usr/src/usr.bin/head/head.c
   * Copyright (c) 1980, 1987 Regents of the University of California.

> So if you find yourself doing that a lot, I won't blame you for reaching for dd. But otherwise, try to stick to more standard Unix tools.

Like 'pv'?

edit: added formatting, sector size note, head manpage/head.c stuffs.. apologies.

viraptor 3 days ago 6 replies      
There's one good (?) reason to use dd with devices: it specifies target in the same command. For devices, writing to them usually requires root privileges, so it's easy to:

 sudo dd .... of=/dev/...
But there's no trivial cat equivalent:

 sudo cat ... > target
Will open target as your current user anyway. You can play around with tee and redirection of course. But that's getting more complicated than the original.
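The tee workaround mentioned above can be sketched as follows (device path hypothetical); sudo applies to tee, which opens the target as root, sidestepping the redirection problem:

```shell
# With a real device you'd write something like:
#   cat image.iso | sudo tee /dev/sdb > /dev/null
# Same shape without sudo, demonstrated on an ordinary file:
printf 'image-data' | tee target.img > /dev/null
cat target.img   # emits "image-data"
```

The `> /dev/null` matters: tee also echoes its input to stdout, which you don't want splattered on your terminal.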

stirner 2 days ago 4 replies      
This article is full of Useless Uses of Cat[1] that could just use redirection operators. For instance,

 cat image.iso | pv >/dev/sdb
could be rewritten as

 pv < image.iso > /dev/sdb
A related mistake is the Useless Use of Echo, since any command of the form

 echo "foo" | bar
can be written using here strings as

 bar <<< "foo"
or even

  bar <<WORD
  foo
  WORD
[1] http://porkmail.org/era/unix/award.html

gens 2 days ago 1 reply      
The Ignorance Of Err Ignorant People

dd is a tool. dd can do a lot more than cat. dd can count, seek, skip (seek/drop input), and do basic-ish data conversion. dd is standard, even more standard than cat (the GNU breed). I even used it to flip a byte in a binary, a couple of times.
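The in-place byte patch mentioned above can be sketched like this (filenames hypothetical); `conv=notrunc` is the key, since without it dd would truncate the rest of the file:

```shell
# Patch a single byte in place: seek= positions the write within the
# output file, conv=notrunc keeps everything after it intact.
printf 'AAAA' > patchme.bin
printf 'B' | dd of=patchme.bin bs=1 seek=2 conv=notrunc 2>/dev/null
cat patchme.bin   # emits "AABA"
```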

New-ish gnu dd even adds a nice progress display option (standard is sending it sigusr1, since dd is made to be scripted where only the exit code matters).

> Actually, using dd is almost never necessary, and due to its highly nonstandard syntax is usually just an easy way to mess things up.

Personally I never messed it up, nor was confused about it. This sentence also sets the tone of the whole article, a rather subjective tone that is.

edit: Some dd usage examples: http://www.linuxquestions.org/questions/linux-newbie-8/learn...

colemannugent 3 days ago 4 replies      
One thing I'll often use dd for is recovering data from a failing drive. Can head ignore read errors? dd can.

As far as I'm concerned, dd is lower-level than most of the other utilities and provides more control over what's happening.

The author does have a point that the syntax is strange though.

hvs 2 days ago 1 reply      
For those of you that are blissfully unaware of what the JCL DD command looks like, here's a example (with only the DD section of the JCL shown):


tambourine_man 3 days ago 1 reply      
> But who cares? Why not just let the command figure out the right buffer size automatically?

Because it can be a lot slower. dd is low level, hence powerful and dangerous.

And, if we are going down that rabbit hole, you don't need cat[1]

> The purpose of cat is to concatenate (or "catenate") files. If it's only one file, concatenating it with nothing at all is a waste of time, and costs you a process.


electrum 2 days ago 1 reply      
Don't cat a file and pipe it into pv. Use "pv file" as a replacement for "cat file" and it will show you the progress as a percentage. When it's in the middle of a pipeline, it doesn't know the total size (unless you tell it with -s), so it can only show the throughput.
gunnihinn 3 days ago 2 replies      
A counterpoint: dd survives not because it's good or makes sense, but explicitly because it doesn't.

You wanna format a usb key? Google this, copy/paste these dd instructions, it works, move on with your life.

You wanna format a usb key using something related to cat you once saw and didn't fully understand? Have fun.

Both approaches have their weak points, but in any OS the answer to "How do I format a usb key" should not start with "Oh boy, let's have a Socratic dialog over 10 years on how to do that."

knz42 3 days ago 1 reply      
What about the `seek` argument, which skips over some blocks at the beginning without actually allocating them (Unix "holes")?
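The seek/holes point above can be shown concretely. A sketch assuming GNU dd (the `1G` suffix and count=0 truncation behavior are GNU-isms), with a hypothetical filename:

```shell
# Create a 1 GiB sparse file: write nothing (count=0), but seek past the
# end, so the file is extended to 1 GiB without allocating data blocks.
dd if=/dev/zero of=sparse.img bs=1 count=0 seek=1G 2>/dev/null
ls -l sparse.img   # apparent size: 1 GiB
du -k sparse.img   # actual disk usage: roughly zero
```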

Also note that there are still unix systems out there which do not support byte-level granularity of access to block devices. On those devices you must actually use a buffer of exactly the size of the blocks on the device. Heck, linux was like this until at least v2.

chrisfosterelli 3 days ago 2 replies      
I think dd is primarily so popular because it is used in mostly dangerous operations. Sure, using cat makes logicial sense, but if we are talking about writing directly to disk devices here I'll trust the command I read from the manual and not explore commands I think would work.

dd's "highly nonstandard syntax" comes from the JCL programming language, but it's really just another tool to read and write files. At the end of the day it's not more complex or incompatible than other unix tools. For example, you can also use tools like `pv` with dd no problem to get progress statements.

donaldihunter 3 days ago 1 reply      
Cult of pv. It looks to have more command-line complexity than dd. https://linux.die.net/man/1/pv
angry_octet 2 days ago 1 reply      
This is a great example of why downvoting submissions should be a thing. Or at least showing the up/down tuple. I would say every upvote represents someone misled and likely to further propagate this nonsense.
merlincorey 3 days ago 0 replies      
The author doesn't even give correct invocations of dd (on BSD, at least, for their last example with head).

I certainly agree the syntax of the arguments is strange, due to its age, but I don't agree that learning it is difficult or a waste of time.

All I've learned is that the author doesn't like dd well enough to learn it.

betaby 3 days ago 2 replies      
The author is wrong: bs IS useful. Try dd'ing one hard drive to another with and without a reasonable bs (1-8M) and you will see the difference.
snickerbockers 3 days ago 1 reply      
OP, your alternatives to DD are more complicated, not less complicated. I shouldn't need to pipeline two commands together just to cut off the first 100MB of a file.
ocschwar 3 days ago 2 replies      
Dude's missing an important point:

If you mess up the syntax on a dd invocation, a nice thing happens: nothing.

Use a shell command and pipes, and your command better be perfect before you hit return.

sndean 3 days ago 1 reply      
Somewhat related short story: Earlier this week my friend said that he dd'd away just over 50 bitcoins, back when they were worth ~$3 each.

"One of the biggest regrets of my life."

AdamJacobMuller 3 days ago 0 replies      
I'll point out that dd also allows you to control lots of other filesystem and OS-related things that other tools do not. See: fsync/fdatasync. I'm not aware of any shell tools that allow you to write data like that.
kev009 2 days ago 0 replies      
Ignorance on the blocksize arg.

Also, I only need to remember one progress command for my entire operating system: control+t. I also get a kernel wait channel from that which is phenomenally pertinent to rapidly understanding and diagnosing what the heck a command is doing or why it is stuck.

I hate what Linux has done to systems software culture.

gravypod 3 days ago 5 replies      
An even easier solution: don't make people fall into the command line to format a USB reliably.

The command line should be reserved for times where you need the fine-grained control to do something that dd is meant to do. A GUI should implement everything else in a reliable way that doesn't break half the time or crash on unexpected input.

gabrielblack 2 days ago 0 replies      
I think this article is full of "alternative computer science" and reminds me of another article, published here as well, about the obsolescence of Unix. The only good thing is this discussion thread.
emmelaich 3 days ago 1 reply      
Specifying a large block size used to help a LOT with performance. From memory, shell redirection used a tiny block size. On Solaris at least.

And if you use dd then you probably should specify a bigger block size than the default of 512 bytes.

But yeah, most usage is obsolete.

ori_b 3 days ago 0 replies      
To be fair, dd was mostly a tongue-in-cheek reference to the overly baroque JCL command for IBM mainframes.
jsd1982 3 days ago 1 reply      
Interesting assertion. Can you show me a shell invocation without using dd that cuts off the first 16 bytes of a binary file, for example? This is a common reason I use dd.
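Both sides of this question can be sketched in a few lines (filename hypothetical); with dd, `skip=` counts input blocks, so bs=16 skip=1 drops exactly one 16-byte block, while GNU tail's `-c +N` starts output at byte N:

```shell
# Drop the first 16 bytes of a file, two ways:
printf '0123456789abcdefPAYLOAD' > blob.bin
dd if=blob.bin bs=16 skip=1 2>/dev/null   # emits "PAYLOAD"
tail -c +17 blob.bin                      # emits "PAYLOAD" (starts at byte 17)
```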
tardo99 2 days ago 0 replies      
One of the charms of dd is its hilarious syntax. And, used properly, it's a bit of a swiss army knife for a few different disk operations.
kazinator 2 days ago 0 replies      
dd precisely controls the sizes of read, write and lseek system calls. This doesn't matter on buffered block devices; there is no "reblocking" benefit.

Some kinds of devices are structured such that each write produces a discrete block, with a maximum size (such that any bytes in excess are discarded) and each read reads only from one block, advancing to the next one (such that any unread bytes in the current block due to the buffer being too small are discarded). This is very reminiscent of datagram sockets in the IPC/networking arena. dd was developed as an invaluable tool for "reblocking" data for these kinds of devices.

One point that the blog author doesn't realize (or neglects to comment upon) is that "head -c 100MB" relies on an extension, whereas "dd if=/dev/zero of=image.iso bs=4MB count=25" is ... almost POSIX: there is no MB suffix documented by POSIX, only "b" and "k" (lower case). The operator "x" is in POSIX: bs=4x1024x1024.

Here is a non-useless use of dd to request exactly one byte of input from a TTY in raw mode:
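The snippet itself was lost in extraction; as a sketch of the idea (not necessarily the original code), the trick is that bs=1 count=1 makes dd issue a single one-byte read(2):

```shell
# On a TTY in raw mode, a single one-byte read(2) returns as soon as any
# key is pressed -- no newline needed, unlike `read` in canonical mode.
get_one_byte() {
  dd bs=1 count=1 2>/dev/null
}

# Demonstrated on a pipe here; on a terminal you'd bracket the call with
# `stty raw` / `stty -raw`.
printf 'hello' | get_one_byte   # emits "h"
```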


Wrote that myself, back in 1996; was surprised years later to find it in the Bash distribution.

noir_lord 3 days ago 1 reply      
Not sure status=progress is that obscure an option; it was also added relatively recently (in terms of dd's history).
paulddraper 2 days ago 0 replies      
My most common use of dd is warming up AWS EBS volumes. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initi...

Though fio is better because it can work in parallel.

dom0 2 days ago 0 replies      
dd is for handling blocked data, while cat, redirection, and pipelines are completely useless for that, since they are meant to manipulate streams, not blocks of data. They do not compare (apart from really simple cases where either will do, like copying a file into some other file); this blog post mainly highlights that neither the author nor many tutorial writers know the difference.
rurban 2 days ago 0 replies      
Instead of

 cat image.iso | pv >/dev/sdb
just do

 pv image.iso >/dev/sdb

diegorbaquero 3 days ago 0 replies      
Question: Will cat do a bit-for-bit copy between disks?
nwah1 3 days ago 2 replies      
Someone should write a wiki bot to crawl through the wikis for Arch, Debian, and so forth to help rewrite all these bad instructions.
gbin 2 days ago 0 replies      
Instead of `cat file | pv > dev` why not `pv file > dev` ?
jeffdavis 2 days ago 0 replies      
What about writing a block into the middle of a file?
number6 2 days ago 0 replies      
This is cat abuse
badatusernames 3 days ago 0 replies      
TLDR This has nothing to do with dunkin donuts
Google Glass is getting a second life in the manufacturing industry npr.org
300 points by happy-go-lucky  2 days ago   134 comments top 26
4258HzG 2 days ago 5 replies      
It's kind of clear that their quoted expert Tsai didn't interview anyone while they were using Google Glass.

"With Google Glass, it may look like you're listening to the person in front of you, but you could actually be watching a movie or looking up sports stats."

Unfortunately the problem is the opposite, and more offensive. It's very obvious from their eye movements when someone is using Glass while looking at you during a conversation, and it makes the user look really weird, ignoring any fashion issues. I found the experience quite a bit more offensive than having someone read email on their laptop while you talked to them. Unlike the laptop case, with Glass you get a very clear, direct view of their eyes as they scan whatever Glass is showing them, under the obvious false pretense of giving you their full attention.

I think one of the big problems with glass was that they picked the wrong sort of people to be early public users that then set the tone for the product. A process that ensured only super-enthusiastic users would bother applying is also the sort that would select for the least willing to notice how other people might find certain uses of it rude and annoying.

jerryr 1 day ago 4 replies      
Using Glass in a factory environment surprises me. Factories are notoriously loud, so unless Glass is really good at filtering out background noise, I'd imagine that voice commands wouldn't work very well. My experience has mostly been consumer electronics factories in Asia, but I also see the following problems that may or may not apply to domestic factories:

* Equipment with clear resale value has a tendency to "walk away". We worked hard to avoid using consumer electronics such as PCs or phones on the line. When we did need to use them, we had to establish strict policies and secure storage for when the equipment was not in use. Glass seems like it'd fall in this category.

* Even dedicated, laser-based, handheld barcode scanners could be finicky with part labels. Camera-based scanners were unusable due to poor accuracy and latency.

* Internet connectivity is poor or non-existent. WiFi coverage is usually terrible due to physical and electrical interference from factory equipment.

AR work instructions would be a dream--especially if the technology could flag errors. The environment just seems especially hostile for consumer-oriented technology such as Glass. I don't have firsthand experience with Glass, but I'm really surprised this company is reporting success with it.

ansgri 2 days ago 1 reply      
I wonder, how does one approach Google for such business tech? Or do they only find customers themselves?

I couldn't find the landing page for prospective enterprise Glass customers, though there's Glass at Work partner list with companies which do have real contact address.


Apocryphon 2 days ago 2 replies      
There's something ironic and lovely about the much-maligned Google Glass becoming a blue-collar tool.
benjaminjackman 2 days ago 1 reply      
This reminds me of the Manna story by Marshall Brain: http://marshallbrain.com/manna1.htm
captainmuon 1 day ago 0 replies      
I'm a bit disappointed that Google didn't continue Glass as a niche product. If I had money, I'd maybe buy one :-)

Does anybody know if there are any "inconspicuous" AR glasses (meaning that they look like e.g. sunglasses and not like the Borg)? Preferably full visual field?

And are there any glasses that work on holographic / light field principles? (AFAIK HoloLens doesn't, despite the name!) A few years ago at university, I looked into computer-generated holograms. We generated holograms of simple objects on the CPU, printed them out, and shone a laser through them. Viewing them at a slight angle (so you wouldn't be blinded), you could see the object floating in space, as if the hologram was a window. Back then, we thought maybe you could use a GPU to generate the holographic pattern and an LCD matrix to display it, but the technology was not there yet. Now that you have shader GPUs and high-DPI LCDs, I feel it's almost just a matter of combining the pieces and you'd have immersive glasses, or a holographic screen.

std_throwaway 1 day ago 0 replies      
Was Google Glass ever actually meant to be a big success in the short term?

It looks like a typical chicken-and-egg problem where you first have to build it in order for them to come (slowly). Then you have to wait it out until the actually useful applications arise from the dust.

While the smartphone has established its place in everyday life (after years and years of trying and mostly failing with similar approaches), Glass will probably be a tool for a thousand niche applications, mostly in the corporate world, where buyers need a finished solution before they start acting.

glitcher 2 days ago 0 replies      
Seems like a great opportunity to use video capture from all employees' GG cameras in order to train their robot replacements.
sand500 2 days ago 7 replies      
Google Glass is nothing compared to real AR like Microsoft HoloLens. And for enterprise solutions, $2k for these glasses vs $3k for a HoloLens dev kit is nothing.
mondoshawan 2 days ago 2 replies      
Glass didn't die because of privacy -- it died because the teams walked out after being threatened with their jobs.
kovacs 2 days ago 0 replies      
I'll be expecting my RSUs from Google for telling them this exact strategy in my PM interview 2 years ago, when I was asked "What would you do with Google Glass?". Then again, enterprise is a fairly obvious answer even before they tried the consumer angle. But if they offer Gmail for people that colonize the moon (another interview question I got), then I'm definitely going to need some compensation for telling them how to do that too :-P
MarkMc 1 day ago 1 reply      
The only problem with Google Glass was that it was ahead of its time. I fully expect Apple to release a 'revolutionary new product' in 10 years time which will essentially be Google Glass with more advanced tech and better styling.
fencepost 1 day ago 0 replies      
I could be misremembering this, but haven't head-mounted displays of one sort or another been used in industrial settings for something like 20 years? I don't remember when it was, but I seem to recall something with monochrome (and lower-resolution) displays like this being used in things like airplane repair[1].

[1] After a little searching, I may be remembering reading about the 1996 Boeing conference: https://www.media.mit.edu/wearables/lizzy/timeline.html, or it could be any of the other late-90s systems listed here: https://en.wikipedia.org/wiki/Optical_head-mounted_display that got press at the time.

Edit: Looking at another page, I may actually be remembering something from computer magazines back while I was in college or shortly after - the Private Eye from Reflection Technology (1989) (https://glassdevelopment.wordpress.com/2014/04/17/hmd-histor...) - which would fit, because that plus the Twiddler chorded keyboard became part of the MIT Wearable Computer stuff and I remember wanting one of those keyboards.....

goatsi 2 days ago 2 replies      
Another area where Google Glass has shown to be very useful is medicine. You can put a pair on and have a specialist walk you through an interview or examination. The Poison Review toxicology podcast had an interview with a doctor who had significant success using Glass in this manner.


Obviously it needed to be de-Googlified to make it HIPAA compliant.

nojvek 2 days ago 1 reply      
Is there an affordable Google Glass-like floating screen? With the number of people walking around looking at their screens, it makes sense to just have a floating screen in a nicely designed pair of spectacles. No camera, just a UI to see your phone screen and something to track finger movements and taps.

We already have people with Bluetooth speakers; no reason not to also have a tiny screen near the eyes.

NamTaf 2 days ago 0 replies      
This is where it always should've been. The ability to look at engineering drawings, etc. as you work on something, or being able to look through work instructions without having to down tools and clean off your hands is invaluable. This is always the dream product I had for something like Glass.
saycheese 2 days ago 2 replies      
It's beyond me why Google simply did not add a red LED on the consumer edition to show when the camera was recording, in an attempt to address the privacy concerns many people had.
digi_owl 2 days ago 2 replies      
No surprises there, Vuzix have been selling a less stylish version of it in the same market for years.

Frankly, I see little use for the likes of AR and VR in civilian life. Hell, even the smartphone of today is something of a bleak shadow of the business tool it once was, thanks to every OEM trying to cater to consumers who do little more than message and access social media.

gkanai 1 day ago 0 replies      
Same thing with the Segway. Remember that device which was going to revolutionize how we travel? Now it's for park police and airport staff and whatnot. Segway is now owned and made in China, IIRC.
delbel 2 days ago 0 replies      
A similar idea is romanticized in the 2008 Mexican sci-fi movie Sleep Dealer, only there the concept was exploiting cheap labor through full-immersion control of remote robots.
xbmcuser 1 day ago 0 replies      

I am not surprised; Google Glass was a product way ahead of its time. It can still be useful in closed environments like a factory floor. To be really successful in the consumer market it needed an AI capable of identifying every object it sees, which we haven't reached yet. I am personally of the opinion that augmented reality is the future of computers, rather than virtual reality. People keep pointing to privacy as the reason it was not successful, which I feel is wrong. They just don't have the tech yet to make it truly useful in the consumer market.

dannylandau 1 day ago 0 replies      
I wonder how this compares to Lumus -- https://lumusvision.com/?
taurath 2 days ago 0 replies      
Whatever happened to their big cargo ship docked in the bay?
toodlebunions 1 day ago 0 replies      
Medicine still seems like the best use case to me
qrbLPHiKpiux 2 days ago 1 reply      
Do we have to worry about the wifi antenna being right next to one's head for a full shift?
hkmurakami 2 days ago 0 replies      
And medical offices.
How Braintree destroyed a successful taxi startup from Serbia facebook.com
300 points by bressian  1 day ago   136 comments top 24
jimnotgym 1 day ago 1 reply      
We put a new ecommerce site live on Magento 2. We were short on time, so to make everything go more smoothly we chose to move our payments to Braintree, as they have a pre-integrated checkout with Magento 2. Of all the things I thought I would be sitting up all night sorting out, I really didn't expect it to be Braintree, but it was. I'm sorry if this sounds harsh to Americans, but the fact that it was Thanksgiving did not interest me in the slightest, as we are at work that day, and so are our banks. If you want to sell your service in the UK you should be available 9-5 UK time. Braintree's 11am opening time was the most infuriating thing when I could see customers failing at the checkout. Other US companies I speak to have operators on from 9am our time, even if it is 3am over there. We already have a high-volume PayPal account, and I had to call everyone I could to pull strings to get them to answer at Braintree. It turns out the account I set up with them some weeks before had not been pushed live, except they told me it had, so we only had the introductory level of payments.

Their integration with Magento sucks too... set up all of the anti-fraud tools and you will barely get a payment to go through.

zitterbewegung 1 day ago 4 replies      
The post rambles for a long time and slings some mud at Braintree. Their claim is that they couldn't do a wire transfer from their merchant account to their business account. Looking at Braintree's website, they don't appear to support Serbia: https://www.braintreepayments.com/en-si/country-selection?re...
justin66 1 day ago 3 replies      
As they're a subsidiary of PayPal, it really would require extraordinary evidence to convince me that Braintree doesn't suck. Sadly, no such evidence was offered in the linked post.
foxylad 1 day ago 1 reply      
I think developers come at payment services with a mindset that it's such a simple thing (it's really just decrementing a number in one account and incrementing a number in another account) that they leave it to last to implement. Like security, it should be part of every decision you make, from day one. Particularly if you live outside the US.

CarGo's experience is a great example. For an app that lives and dies on efficient payment processing, they would have been wise to have two payment processors coded up and ready to go. They should have been completely honest with their payment processors, built up a relationship, and sought assurances that their operation was acceptable.

The payment processor's biggest problem by far is fraud. That is why they have to charge significant margins, why you have to send all your great-grandparents' birth certificates to them, and why they will suspend your account at the faintest whiff of suspicion. That's what makes the decrement-increment hard - really hard.

Treat the payments problem with the respect it deserves - as CarGo has found out, it can kill your business dead overnight.

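foxylad's "two payment processors coded up and ready to go" can be sketched as a thin gateway abstraction with failover, so one suspended account doesn't halt the whole business. Everything below (class names, the fake processors) is hypothetical illustration, not any real provider's API:

```python
class PaymentError(Exception):
    pass

class Gateway:
    """Minimal interface every processor adapter must implement."""
    def charge(self, amount_cents, token):
        raise NotImplementedError

class FailoverGateway(Gateway):
    """Tries each configured processor in order; raises only if all fail."""
    def __init__(self, processors):
        self.processors = processors

    def charge(self, amount_cents, token):
        errors = []
        for proc in self.processors:
            try:
                return proc.charge(amount_cents, token)
            except PaymentError as exc:
                errors.append(exc)  # record it and fall through to the next one
        raise PaymentError("all processors failed: %r" % errors)

# Stand-ins for two real providers, one of which just got suspended.
class SuspendedProcessor(Gateway):
    def charge(self, amount_cents, token):
        raise PaymentError("account suspended")

class WorkingProcessor(Gateway):
    def charge(self, amount_cents, token):
        return {"status": "ok", "amount": amount_cents}

gateway = FailoverGateway([SuspendedProcessor(), WorkingProcessor()])
result = gateway.charge(1500, "tok_demo")  # succeeds via the second processor
```

The point is the seam, not the fakes: if every call site talks to `FailoverGateway`, swapping or adding a processor is configuration, not a rewrite under fire.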
flurdy 1 day ago 0 replies      
Braintree is now a subsidiary of Paypal which makes me slightly suspicious of them from past Paypal experiences. Though up till now I had only heard good things about Braintree.

Without more exact details it is hard to cast an objective judgment, for me at least. Though the dude seems pretty pissed.

everydaypanos 1 day ago 1 reply      
In a world where Stripe is available in very few countries, Braintree seemed like a great alternative (it was just about the time it was being bought by PayPal).

Six months into our "cooperation" they sent an email breaking the contract with us (an e-commerce retailer) because their "bank" was protesting too many of our sales.

The truth of the matter is that Braintree was literally accepting all credit card transactions and had almost zero fraud protection (for example, we had orders being shipped to a Greek city, ordered by Iraqi-named citizens using US-based credit cards... they all went through fine).

So they just breached the cooperation and kept a large chunk of the owed money for something like 4 months, until their "bank" verified some thing or other.

In my personal experience you cannot rely on Braintree to block fraudulent transactions for you - you have to do it yourself and pray you don't let too many frauds pass through to Braintree. Makes you wonder what use all these "credit card gateways" have if you still have to bear all the cost of doing the fraud prevention yourself.

whalesalad 1 day ago 2 replies      
Sounds like you need to stop complaining and pick a new payment provider. Get a local loan, start giving away free rides, continue to pay your drivers and start a countdown clock to doomsday. That's how long you have to implement a new solution and recover from this.

Or you can whine about it on Facebook and a few weeks from now you'll be forgotten.

no_wizard 1 day ago 1 reply      
After reading the comments and seeing what others have gone through (and generally speaking, I'm not the biggest fan of PayPal), I gotta say: +1 to Stripe! https://stripe.com/

They are fantastic to work with, and it's super easy to get set up with them. Hats off!

mdekkers 5 hours ago 0 replies      
I have been looking to move away from Braintree for a while; they are eyewateringly expensive, and are now PayPal, which I am extremely uncomfortable with. I was previously based in a country with no Stripe support, but that is changing, so I'm looking at alternatives. Can't wait to move away...
retreatguru 1 day ago 2 replies      
It's hard to gather from this post what exactly happened. Does anyone have details?
ManuAloha 1 day ago 1 reply      
We actually had the exact same situation here in Hong Kong with Braintree. They did not transfer the funds to our account and always blamed "banking partners" for the problems. The worst customer service I've dealt with and switching to Stripe as soon as it became available in Hong Kong was a no brainer.
wav-part 1 day ago 3 replies      
Payment processing/networks are the least innovative thing, IMO. And yet there is no non-profit leader. Why?
bythckr 13 hours ago 0 replies      
Car:GO is 100% at fault for relying on a provider with no presence in their country for a service as vital as payments. Plus, a startup like Car:GO should pair up with a startup like itself, where they would be the main customer.

We need to have multiple options for vital services like payment.

penetrarthur 1 day ago 0 replies      
I wonder if Stripe could now cover their operational costs in the form of a loan and process payments from now on. It would be a great PR move.
brentm 1 day ago 1 reply      
Unrelated to the payment subject, but it always blows my mind when companies not only copy a business model (which is fine and expected) but also more or less copy the identity along the way.
Buge 1 day ago 1 reply      
The comparison to soldiers killing people is pretty ridiculous.
kumarski 18 hours ago 0 replies      
This happened so often that I assembled a group of 100+ fintech entrepreneurs on Facebook just to counter situations like this. Urgh. I feel bad for him.


I know this is awful, but the general reputation/expectation is that a foreign company can't handle payments infrastructure in Serbia; I never thought it would be the backwards version of this.

whazor 10 hours ago 0 replies      
If you are searching for a payment provider, you can also look at Adyen. While their integrations and APIs are a bit rough, they support many payment methods and have important international customers.
dorianm 16 hours ago 0 replies      
Instead of complaining there are many solutions:

- Let the drivers accept cash in the meantime?

- Move to another payment processor? (e.g. Stripe?)

- Accept bitcoin?

- Use some direct payment between passengers and drivers? (e.g. Venmo, bitcoin, etc.)

(I messaged him that)

zod50 1 day ago 2 replies      
If using Braintree can lead to problems like the ones mentioned by the OP, what are the alternatives, in the US and elsewhere? Square Cash?
albertico 1 day ago 2 replies      
This isn't new; big companies love to screw small ones. They are assholes at Braintree, that is well known. I recommend the guys at CarGo switch to Stripe; those people are entrepreneurs like you... hope the best for ya, mate!
nodesocket 1 day ago 1 reply      
This guy gets absolutely zero sympathy from me when he starts his rant complaining in a condescending tone.

"It sure is a weekend. It is probably nice and cozy for most of you over there in Chicago, enjoying a ball game, visiting friends and family, grilling that steak, feeling good about yourself."

It is funny how people like this want to blame and paint American corporations as evil, yet they build their business on startups and companies that could have only been founded and successful here in the US. They hope and dream of making it to Silicon Valley, raising money from US investors. It is total hypocrisy. Don't like it, why don't you use only Serbian companies, banks, and investors to power your startup? Oh that's right...

Windows 10 is bringing ads to File Explorer: how to turn them off thenextweb.com
257 points by doener  2 days ago   321 comments top 42
grawlinson 2 days ago 13 replies      
Here's a better idea - show Microsoft that you don't accept this type of behaviour by switching products/services.

I'm aware that Windows 10 is a "free" OS, but trying to ram this type of behaviour down my throat is despicable, unethical behaviour and I don't condone it. It is why I've switched to Linux for my daily computing needs.

csdreamer7 2 days ago 6 replies      
> how to turn them off

Switch to Linux.

Windows 10 Home is now $119 and tracks you. Microsoft is trying to sell you a product and monetize your behavior. They are pursuing a way to maintain their cash cow and have an avenue to push Google out.

Keverw 2 days ago 4 replies      
Are the ads only for the storage service? At first I was thinking we'd start seeing banner ads like we do on blogs. If it's only Microsoft pushing their own integrated services I don't really have much of a problem, but if software I paid for shows third-party ads when I never had any expectation of such a thing, then I do agree it is very horrible.

The "Not now" button does seem like an anti-pattern, instead of a simple no. Reminds me of how iOS apps bug you over and over again to rate them. I assume that in X amount of time it would show the banner again? I think it should be "Learn More" or "Dismiss", and have Dismiss never show it again on that device (unless you did a reinstall of the OS). I'm sure it's useful to know it's available, especially for non-tech people, but you shouldn't have to dig into a long list of settings to stop it.

Imagine if some day your car dashboard updates and starts showing banner ads for deals at nearby places when the car never did that in the first place. That'd be kinda creepy too.

kabdib 1 day ago 1 reply      
We're doing some qualification of Windows Server 2016, which we use to run some business logic [insert history here]. Imagine my surprise when I found some Cortana-related processes running on one of the test boxes.

Cortana. On an OS that's supposed to be a server. I can tell that I'm going to have a ton of fun locking this stupid thing down. Also, seeing a process named "NetworkSpy" got fun, fast; whoever named that thing sure knew how to get my attention.

I like the core Windows OS; it's a nice kernel. But there's a lot of crap on top of it, and their testing is getting worse. Ask me about Remote Desktop regressions... well, don't. It's the weekend and I want to forget.

frik 1 day ago 3 replies      
When will Microsoft stop with this shit and do a 180-degree U-turn? What will it take?

It's unbelievable how MSFT turned Windows from the great Win7 into the worst spyware operating system in human history, aka Win10, in just 5 years. Is it just greed? Is it that they failed with mobile (WinPhone is dead and has 0.1% worldwide market share) and that Android and iOS have a bigger market share than Windows 7 (Win10 has a far smaller market share)? On the desktop Windows 7 has something like 50% market share, whereas Win10 is around 20%. Is it that they failed with the Xbox One, and now Sony and Nintendo eat MSFT's former console market share for breakfast?

I bought two high-end notebooks, one a MacBook Pro, the other with Win7 Pro preinstalled. They will last for the next five years. I'll stay with Office 2010 and have additionally installed LibreOffice. I'm skipping Win8 and Win10, and I am running Linux in VMs.

What's really disgusting is that in the future many smaller operations like law firms and doctors' offices will leak your private, confidential data to Microsoft - Windows 10 collects audio from connected microphones, collects keystrokes from connected keyboards, indexes all files on local drives, and sends a lot of data home to many different domains (some are owned by Microsoft; some have very dubious names and owners).

rebootthesystem 2 days ago 4 replies      
This sucks, of course. Yet all the comments on this thread saying the solution is to simply switch platforms ignore something very fundamental.

I like to say "Nobody goes to Home Depot to buy a drill bit; what they want to buy is a hole."

I don't use Windows because I want to use Windows. I use it because there are things I need to do that can only be done with it.

The "just switch to Linux" reaction is an understandable engineering reaction. And, frankly, if your world is limited to web development and some forms of embedded development it might make a ton of sense. Yet, from a business perspective it makes no sense outside of scenarios that fit Linux well.

For others it isn't that simple. What would the Linux proponents suggest we do with hundreds of thousands of dollars invested in software not available on any other platform? Solidworks, Siemens NX, Altium Designer, FEA and other simulation tools, CAM and other manufacturing tools, and a myriad of other applications ranging from business to medicine to engineering that depend on this platform.

The goal of a business is to deliver goods and services, not to play with operating systems. The operating system is irrelevant. It's all about the software that runs on it and the utility it provides to users. As an aside, this, I believe, is exactly where Apple has gone wrong. By insisting on having totalitarian control over everything on their platforms they have effectively limited the utility of their devices. At the same time, this is what makes them so good for a certain range of utilization scenarios.

We use Windows because it is the only way to buy a hole. We could not care less what OS is under the hood. All we care about is the hole we need to drill.

And, frankly, Microsoft has been making a better product with time. I've been using MS products since DOS on the original IBM PC (I bought one when they came out, for $3,200). Save for missteps here and there, MS software has always improved with time. What they need is our feedback in order to make it even better. The criticism is fair, of course.

NelsonMinar 2 days ago 1 reply      
Note there are ads in lots of other parts of Windows 10 too: the Start menu tiles, the preloaded programs, and the Notifications area. There are different ways to turn each of these off. They are multiplying.

Fuck everything about this. My computer is my tool and my work environment.

funkyy 1 day ago 1 reply      
Funny that Satya Nadella is praised on HN for his business sense and revolutionary thinking, yet when something like this happens, no one seems to remember him. He is the CEO; it's his signature that allows this. Think about all the Windows 10 spying, the stupid update policy, etc. next time you praise him.
AsyncAwait 1 day ago 0 replies      
I again see a lot of outdated comments about Linux here. Not saying there aren't problems, but as somebody who uses macOS every day, it's nothing I haven't experienced on other OSes as well. Give it a shot yourself and make up your own mind.

If you're thinking about switching, I'd recommend looking past stock Ubuntu; it is no longer the premier distro in the community. Instead, consider:

First off, pick supported hardware - you'd do that for Windows and macOS as well. Use known hardware, like Intel; use a Dell XPS or System76 machine. It will save you a lot of frustration.

Ubuntu MATE - https://ubuntu-mate.org - If you need the Ubuntu ecosystem, UM offers a more polished experience, if not as much eye candy.

Arch Linux - https://www.archlinux.org - Before you skip this: no, it's not unstable at all; I clock one of my systems at 3+ years without an issue. The rolling release is great and the AUR makes any software just a command away. No fuss, just works.

The ArchWiki is the best resource on all things Linux; it's great.

Fedora - https://getfedora.org - If you want stock GNOME.

KDE - https://neon.kde.org - If you want upstream KDE.

aphextron 2 days ago 5 replies      
I'm so divided on Microsoft these days. On the one hand they are doing all the right things in terms of supporting FOSS and making Windows into a truly great OS. But then they pull stuff like this and you remember it's Microsoft.

I think this is just the price we are going to pay for "evergreen" operating systems. As a consumer, you now only buy Windows once. Microsoft has to make up for these lost sales so it makes sense. I don't agree with it, but having to toggle off advertising is not going to keep me from using what I consider to be the best operating system for my needs.

askvictor 1 day ago 0 replies      
'Sync provider notifications' sounds like a useful API for cloud file services such as Dropbox or Drive to communicate with the user. If that's what it's actually for, and some other part of MS thought it might be a good place to push OneDrive, it will probably have the unfortunate effect of people turning off an otherwise useful part of Windows.
alistproducer2 2 days ago 4 replies      
Serious question: does MS sell a version of Windows 10 outright that doesn't include all of the snooping? I've been a Windows user my whole life but I will NEVER use an OS that collects data at the OS level.
owebmaster 2 days ago 2 replies      
> how to turn them off

If you feel the urge to turn it off, switch to GNU/Linux. Turning annoying things off is not a solution, it is an excuse.

ksk 1 day ago 0 replies      
With everyone salivating at shoving ads in peoples faces at every opportunity, not doing so is probably 'leaving money on the table' for MS. It sucks that digital advertising is a moneymaker. IMHO it would be beneficial for the tech world if Google/FB become unprofitable.
mankash666 1 day ago 0 replies      
To be honest, I don't mind a plug or two for relevant Microsoft products within the OS. However, tracking usage and serving customized ads at the OS layer is unpleasant. So I'm OK with how things are for now: I'm not disabling anything, and I'm giving MS the benefit of the doubt that they won't turn my OS into an advertising and tracking channel.
cadecairos 2 days ago 1 reply      
I ditched File Explorer for Directory Opus a while back, no regrets, especially now that there are ads baked in.
lutusp 2 days ago 2 replies      
There's an opportunity here for a young, ambitious programmer -- create a PowerShell script that automatically goes through the Windows registry, creating/resetting options to disable all advertising (Start menu, File Explorer, OneDrive, Edge's annoying gripes when you launch a competing browser, and others). I suggest this because as time passes the manual approach to disabling the many sources of Windows advertising is becoming somewhat baroque. Such a script would be a great public service.

I won't personally benefit -- I've run Linux exclusively for almost 20 years -- but when I visit Windows in connection with my work, I feel compassion for those who put up with it. I think, "Wait ... people pay to be abused this way?"

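As a small start on lutusp's idea, here is a sketch that emits a .reg file for one of the settings this thread is about: the File Explorer "sync provider notifications" banner (the OneDrive upsell). Only that one documented value is shown; a real script would need a researched, per-build list of ad-related keys, which I'm not claiming to provide here:

```python
# Sketch: generate a .reg file (Regedit import format) that disables the
# File Explorer "sync provider notifications" ad banner. The dict maps
# registry key paths to DWORD values; only one known setting is included.
SETTINGS = {
    r"HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced": {
        "ShowSyncProviderNotifications": 0,  # 0 = hide the Explorer ad banner
    },
}

def build_reg_file(settings):
    # .reg files start with this header line, then [key] sections.
    lines = ["Windows Registry Editor Version 5.00", ""]
    for key_path, values in settings.items():
        lines.append("[%s]" % key_path)
        for name, dword in values.items():
            lines.append('"%s"=dword:%08x' % (name, dword))
        lines.append("")
    return "\r\n".join(lines)

reg_text = build_reg_file(SETTINGS)
print(reg_text)
```

Generating a .reg file rather than writing the registry directly keeps the script cross-platform to author and lets you review exactly what will change before double-clicking it on the Windows box.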
unholiness 2 days ago 5 replies      
In browsers, ad blockers are being used more and more commonly, even by non-technical folks.

In native mobile apps, there are some okay ad-blocking solutions.

But when the ads are baked into the operating system itself... I have a hard time seeing Microsoft or Apple losing the ad blocking battle.

So, instead of getting ads somewhat sandboxed inside a browser, in the future we may be getting ads inside the start menu, inside Spotlight, on the toolbar... and instead of semi-directly funding content providers, the ad space will only ever funnel money to these same huge companies to spend as they see fit. Bleeech.

ap46 1 day ago 0 replies      
All the more reason to create a lightweight VM on top of which Windows 7 + macOS can run. With better hardware we could abstract the hardware underneath the OS for legacy apps and switch to *nix at a click/shortcut.
motyar 2 days ago 0 replies      
Switch to Linux. It's easier than you think.

or Switch to Mac if you can.

stevebmark 2 days ago 0 replies      
Feeling better every day about having ignored the constant 7 > 10 upgrade spam.
smcl 1 day ago 0 replies      
Is this on Pro as well? I was tempted to get a dedicated windows laptop but if even Windows 10 Pro contains this nonsense then I may have to reconsider.

Also - I know an OS is different from a webpage, but slamming a company for having "shitty" ads when you've got some obnoxious full-screen ads of your own is pretty funny: http://imgur.com/a/3eJ4H

xkxx 2 days ago 1 reply      
It was my first thought, and after I checked the comments it seems to be the general consensus on HN that the best way to turn them off is to switch to some other OS.
PleaseHelpMe 1 day ago 0 replies      
When articles bashing Edge or some other Windows feature appear on Hacker News, there is always someone in the comments stating "I work for Microsoft" or "Microsoft employee here"... and then defending the product. I am dying to see one that defends this.
alephu5 1 day ago 0 replies      
When I get a new computer with Windows preinstalled, the first thing I do is split the hard disk and put a nice Linux distro on. On the odd occasion I need to use Windows it's really no big deal having to navigate the ocean of bloat and whatever else. I don't even mind being tracked on these disparate occasions.
bubblethink 2 days ago 1 reply      
Is it possible to run Windows Server on desktop/laptop in order to avoid all the windows 10 nonsense (cortana, telemetry, ads etc.) ?
good_vibes 2 days ago 1 reply      
I guess I have to stay with Mac and make it a Linux machine. Glad I saw this; I had a ZenBook in my Amazon 'save for later' cart.
jaxn 2 days ago 1 reply      
This is very disheartening.

As a direct result, I am spending some time tonight researching running Linux on my Surface Book.

I have been impressed with what seems like a new Microsoft. I even made the switch from OSX to a Surface Book as my primary laptop. I used OSX for a decade and ran Linux as my primary OS before that.

mherrmann 1 day ago 1 reply      
If you're fed up with Explorer, you might also look into alternative file managers. I'm developing one: https://fman.io
youdontknowtho 2 days ago 1 reply      
I actually like windows 10, but this is really lame.

* Yes, I'm being pedantic, but the ad for OneDrive is in File Explorer, not the file system. It doesn't appear in the command line.

besselheim 2 days ago 3 replies      
Doesn't seem that big a deal; it's an upsell for Microsoft's own file synchronization services, not some third-party ad. And you can easily disable it.
randiantech 2 days ago 0 replies      
The very same info was published just a couple of days ago.
eveningcoffee 2 days ago 0 replies      
This is not acceptable
ForFreedom 1 day ago 0 replies      
But then would they be reading any content on the computer itself if the adverts are targeted?
AdeptusAquinas 2 days ago 0 replies      
Some irony in finding it hard to read the article through all its ad popups on the mobile version
Pica_soO 2 days ago 0 replies      
A funny game of whack-the-registry and group policy. Best of all, it's free.
taf2 2 days ago 0 replies      
Sucks to be a windows user
shmerl 1 day ago 0 replies      
Switch to Linux. Problem solved ;)
asadlionpk 1 day ago 1 reply      
Everyone panicking should know that macOS has this too.
Sir_Cmpwn 1 day ago 1 reply      
No, we're talking about an OS shipping with ads (that you can turn off) and an OS shipping with a theme that some people don't like (that you can change). One of these is a gross breach of trust on the part of the OS. You know the difference, cut the crap.
amaks 2 days ago 2 replies      
Interesting. Chromebooks are cheaper and ChromeOS doesn't shove ads to users' faces. Who's scroogled now?
perfectstorm 1 day ago 2 replies      
Not sure why this is a big deal. You can turn it off.

HBO subscribers are shown ads (for other HBO shows) at the beginning of playback. I believe that happens with Amazon Prime Video as well.

Google shows ads for Chrome browser when you google from a different browser.

How is this any different?

What makes gambling wrong but insurance right? bbc.com
271 points by sohkamyung  22 hours ago   234 comments top 28
MR4D 22 hours ago 15 replies      
The article gives good background but doesn't clearly answer its own question. The simple answer (although not followed as closely as it should be in regulation):

Gambling involves the creation of risk where none previously existed, while insurance is solely about the transfer of risk from one party to another (or more than one).

sverige 19 hours ago 14 replies      
After working in or for the insurance industry for 15 years, I came to the conclusion that it is not right, and that it is the moral equivalent of gambling.

I understand that the vast majority of people rationalize insurance as a good thing because the cost is relatively low when compared to catastrophic loss, and even have a couple of personal anecdotes to add to those of the advocates of insurance that demonstrate the wondrous utility of insurance, but I am convinced that it is wrong.

I quit the insurance business and moved to a completely different career. I only buy the insurance that is required by law (what a ridiculous concept! Does the government have to mandate that you also buy food?) and have paid the tax penalty for foregoing Obamacare.

I know this will likely not be well received here, but I thought I'd share my position for any who read this and wonder if they're the only ones who understand it this way.

gpm 20 hours ago 0 replies      
Money doesn't have a linear utility function. The more money you have, the less the next $X are worth.

This means that even for "zero-sum" games, where there isn't a house taking a cut, gambling will typically produce an average net utility loss. You make as much money as you lose, but the money is worth more when you lose it than when you make it (past the first $epsilon).

Insurance on the other hand will typically have a net utility gain despite being a net dollar loss. When you "win", you would otherwise have very little money, so the money is worth a lot of utility. When you lose, you have lots of money so it only costs you a small amount of utility.

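gpm's argument can be checked numerically with a toy concave utility function (log wealth). The numbers below are made up for illustration: wealth 100, a 1% chance of a 90-unit loss, and an actuarially unfair premium of 1.5 (the expected loss is only 0.9, so the insurer keeps the difference). Insurance still wins on expected utility:

```python
import math

wealth, loss, p_loss = 100.0, 90.0, 0.01
premium = 1.5  # more than the 0.9 expected loss: the insurer takes a cut

def u(w):
    return math.log(w)  # concave utility: each extra dollar matters less

# Uninsured: keep the premium, but bear the rare loss yourself.
ev_uninsured = (1 - p_loss) * wealth + p_loss * (wealth - loss)
eu_uninsured = (1 - p_loss) * u(wealth) + p_loss * u(wealth - loss)

# Insured: pay the premium with certainty, never bear the loss.
ev_insured = wealth - premium
eu_insured = u(wealth - premium)

print(ev_insured < ev_uninsured)  # insurance is a net dollar loss on average
print(eu_insured > eu_uninsured)  # ...yet still a net expected-utility gain
```

Flip the sign of the wager (an uncorrelated bet instead of a hedge) and the same concavity makes a fair gamble an expected-utility loss, which is gpm's symmetric point about gambling.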
jknoepfler 9 hours ago 2 replies      
I honestly don't understand why insurance is seen as a private affair. Everyone who is not very wealthy requires risk management, because otherwise they run a constant, non-trivial risk of complete personal ruin (not just financial ruin, but the inability to pay for medical care). Humans are not capable of correctly computing the acceptability of small amounts of risk of ruin. The straightforward solution is to pool risk so that everyone pays a fraction of each catastrophe, which should just be a federal tax.

At a minimum, if you have children, lack of insurance should not be an option.

Then again, I'm also in favor of decreasing U.S. freeway/highway speed limits and enforcing them strictly (like Japan) for risk-reduction reasons... I suspect that my antipathy to American libertarian machismo is far from universal ;)

iambateman 20 hours ago 0 replies      
The answer to the headline question is in the book "How to Lose a Million Dollars", which I recommend.

The author describes four classes of financial risk-taking:

1. Investing
2. Speculating
3. Market making (I think)
4. Gambling

Investing is risking capital with a strong probability of maintaining the principal while receiving a reasonable return.

Gambling is when you have a negative-sum chance at keeping your principal.

As a result, one person with limited information and portfolio diversity could "gamble" on apple stock while another person with more diversity is "investing."

Fundamentally, they're more about information and risk than anything else.

zhengiszen 18 hours ago 1 reply      

The purpose of this system is not profits, but to uphold the principle of "bear ye one another's burden". The principles of takaful are as follows:

- Policyholders cooperate among themselves for their common good.
- Policyholders' contributions are considered as donations to the fund (pool).
- Every policyholder pays his subscription to help those who need assistance.
- Losses are divided and liabilities spread according to the community pooling system.
- Uncertainty is eliminated concerning subscription and compensation.
- It does not derive advantage at the cost of others.

Marazan 17 hours ago 2 replies      
Gambling is an adversarial relationship. Insurance is co-operative.

When I insure my house against fire neither I nor the insurance agent want my house to burn down.

When I gamble on Man Utd to win 2-0 with Pogba to score the bookie doesn't want that to happen and I do.

Anything that is co-operative is insurance, anything that is adversarial is gambling.

ucaetano 20 hours ago 0 replies      
You can look at gambling and insurance from a portfolio theory perspective. You want to hold the portfolio that maximizes return for the level of risk (variability) you're willing to take.

Both insurance and gambling lower your returns, but the change in variability for insurance has a -1 correlation with a risk you currently have (canceling it out), therefore significantly reducing the overall variability of your portfolio.

Gambling isn't related to any other risk you already have, so it introduces more variability into your portfolio.

So insurance is just a form of gambling where the payout is correlated with a risk you currently carry, instead of being "random".
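A small simulation (with invented numbers) illustrates the correlation point: because the insurance payout moves exactly opposite to the loss, the combined outcome's variance collapses to zero, while the mean drops by the premium loading:

```python
import random
import statistics

random.seed(0)
p_fire = 0.01
house = 300_000
premium = 3_600  # loaded above the $3,000 expected loss

fires = [random.random() < p_fire for _ in range(100_000)]

# Uninsured: occasional catastrophic loss, high variance.
uninsured = [-house if fire else 0.0 for fire in fires]

# Insured: the payout is perfectly anticorrelated with the loss
# (it pays exactly when the fire happens), so every year's net
# outcome is just the premium.
insured = [-premium for _ in fires]

assert statistics.pstdev(uninsured) > 0
assert statistics.pstdev(insured) == 0
# The price of that variance reduction: a lower mean outcome.
assert statistics.mean(insured) < statistics.mean(uninsured)
```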

Spooky23 13 hours ago 0 replies      
It's pretty obvious. Gambling by its nature creates a situation where the "game" is there to induce a biological response where you're either rewarded or punished based on your performance.

Insurance is different -- it's there to pool and quantify risk. That's why we don't have people hopelessly addicted to buying car insurance. It's not there to give you a rush.

tabeth 21 hours ago 4 replies      
This raises an interesting question: is there a way to perfectly insure yourself? Or rather, a company that you pay to insure you for everything?

My understanding of insurance is that if there's say 1/100 chance of some event costing you 100 dollars, then you might pay $5 at some rate for some period of time to protect you in the case that the event occurs, costing you 100 bucks.

However, if you knew what the odds of certain somewhat expected events in your life were, how could you capitalize on that? You could save, but generally people don't make enough for that to be worthwhile.
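With the hypothetical numbers above, the gap between the actuarially fair premium and the quoted premium is easy to make explicit:

```python
def fair_premium(p_event, payout, loading=0.0):
    """Actuarially fair premium plus a proportional loading."""
    return p_event * payout * (1 + loading)

# The example above: a 1/100 chance of a $100 loss.
assert fair_premium(0.01, 100) == 1.0

# A $5 premium for that risk implies a 400% loading over fair value:
assert fair_premium(0.01, 100, loading=4.0) == 5.0
```

Anything above the fair premium is the insurer's margin plus the price of variance reduction; knowing the odds tells you how much of the quote is loading.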

PaulRobinson 17 hours ago 0 replies      
I spent a year back in the 2000s living as a professional gambler. I know professional poker players. I write and operate "bots" that gamble on betting exchanges automatically (effectively the sports/games version of algotrading) as my main hobby - 20+ hours/week, sometimes - and my accounts are net positive to my advantage (a feat accomplished by less than 0.5% of gamblers).

Whilst I enjoy a good old punt on a horse like many a Brit or Irish racing fan, and enjoy the odd bet on a football game or cricket match, I believe that I - and fellow gamblers I know who make a profit - do not actually gamble the way most people do.

We treat it more like insurers see the issuance of insurance. First, we'll look at data. Second, we realise money management - and liability exposure vs. income - is critical to success. We look for situations where there is an edge in our favour which we call "value". When we see an edge we can quantify, we'll exploit that using Kelly or Maximal Exponential Growth strategies.

Genuinely, I look at some sports events using techniques and strategies that would not look alien to a statistician who has trained to become an actuary. They have more data with stronger statistical significance, and their thinking may be more rigorous, but they are my inspiration: with the right numbers, you can model risk, and identify what odds you're prepared to accept and what odds you're not.
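The Kelly strategy mentioned above has a simple closed form for a single binary bet. This sketch assumes the standard formulation f* = (bp - q)/b, where b is the net odds, p the estimated win probability, and q = 1 - p; the odds and probabilities are invented:

```python
def kelly_fraction(p, b):
    """Kelly stake fraction for win probability p and net odds b
    (e.g. b = 2.0 for a bet at decimal odds of 3.0)."""
    q = 1 - p
    return (b * p - q) / b

# An edge: you estimate a 40% win chance at decimal odds of 3.0 (b = 2).
f = kelly_fraction(0.40, 2.0)
assert abs(f - 0.10) < 1e-9   # stake 10% of bankroll

# No edge: fair odds imply a zero stake.
assert abs(kelly_fraction(1/3, 2.0)) < 1e-9

# Negative expected value: Kelly says don't bet at all.
assert kelly_fraction(0.30, 2.0) < 0
```

The formula stakes nothing without an edge, which is exactly the "only bet value" discipline described above; in practice many bettors stake a fraction of full Kelly to reduce variance from estimation error.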

I will play in casino games for recreational fun, not for profit. Same with slots/fruit machines, lotteries, and so on. Tiny amounts of money. It is impossible to beat these beyond the medium term without some form of luck. In the long run, games with house advantage will only ever be won by the house who has the advantage.

Sport though? Something where I can do some data analysis and start finding informational arbitrage over others in the market? That's potentially investment. And so far, it's done me well.

And so the honest answer is, nothing much is different, it's just that most gamblers and most people who buy insurance are people who do so without thinking and without understanding mathematics and liability, etc.

A few of us though, a small number on Betfair or perhaps inside the sacred walls of Lloyds of London, see the World a little differently and play a game with maths at our side that few others are interested in playing.

analog31 11 hours ago 1 reply      
I wonder if an alternative to moral and economic analysis would be empirical: Look at how gambling and insurance affect people and communities. How many lives are ruined by gambling? How many lives are ruined by insurance? There must already be known issues with both gambling and insurance, because both are heavily regulated.
kemonocode 12 hours ago 0 replies      
Unlike with betting though, it is seen as acceptable for insurance companies to weasel out of their obligations when paying out insurance.
peternilson 17 hours ago 0 replies      
My opinion is that insurance is vastly misunderstood, just as gambling is vastly misunderstood. In insurance, it's the low premiums on high-valued assets that obscure people's thinking, and in gambling it's often small wagers offered in return for potentially large payoffs. I know of people who could afford to replace a phone if it was stolen, but still take insurance on it because the small premium on the insurance seems like a good deal. There is just no way that these types of insurance contracts have a positive expected value to the individual. However, if you're a single Mom making ends meet, who needs to drop her kids at school every morning, please insure your car. The utility of losing the car and not being able to get to work etc. is just too negative.
kirykl 20 hours ago 2 replies      
I think the right/wrong is in the apparatus. Gambling is usually repetitive and involves operant conditioning, which can be addictive.

Insurance is too but to a much lesser degree. If I buy fire insurance and my house burns down, I get rewarded (as in operant conditioning). But only punished when my house lasts until I die or sell it.

kriro 19 hours ago 0 replies      
"Inn-sewer-ants," repeated Rincewind. "That's a funny word. Wossit mean?"

"Well, suppose you have a ship loaded with, say, gold bars. It might run into storms or, or be taken by pirates. You don't want that to happen, so you take out an inn-sewer-ants-polly-sea. I work out the odds against the cargo being lost, based on weather reports and piracy records for the last twenty years, then I add a bit, then you pay me some money based on those odds..."

The Colour of Magic, Terry Pratchett, page 45

That's the passage I found googling. It actually goes on and compares insurance and gambling. Highly recommended book (as is almost everything by the author of course)

Nomentatus 19 hours ago 0 replies      
Not mentioned is that life insurance began as a form of gambling during the Civil War; but the original form was the reverse of the modern idea of insurance: soldiers formed a pool and the last man alive (long after the war, presumably) would inherit it all.
pmoriarty 20 hours ago 2 replies      
Could "betting" with insurance be made as fun as gambling?
milkers 19 hours ago 0 replies      
ASAP (as simple as possible): for the former, you pay money expecting that you are lucky; for the latter, you pay money expecting that you are unlucky.
k__ 13 hours ago 0 replies      
Insurance is betting on your bad luck and gambling is betting on the bad luck of others.
colinmegill 21 hours ago 2 replies      
Uh... the downside isn't inevitable?
SomeStupidPoint 21 hours ago 0 replies      
Insurance guarantees that you pay the expected value of the cost (plus some premium and buffer), dropping variance in exchange for a fee.

Gambling does the reverse, turning a fixed amount of money into bursty payouts in exchange for a premium, adding variance.

One is basically the opposite of the other.

You can even use roulette as an event randomizer to make your trips to Vegas more exciting. (Games with probabilistic loot are more "fun" than predictable loot.)

eveningcoffee 19 hours ago 0 replies      
Difference is in the intent.
jlebrech 13 hours ago 0 replies      
yes it's gambling, and so is a pension.
brilliantcode 10 hours ago 0 replies      
because the world is corrupt and created after men who seemingly look to exploit and take, like their forefathers did.
lucisferre 21 hours ago 1 reply      
In a word. Math.
id122015 20 hours ago 0 replies      
Sometimes dealing with HR departments is like gambling. The house always wins and you lose.

With insurance you might win exactly when you need it.

kazinator 8 hours ago 0 replies      
> What makes gambling wrong but insurance right?

Is this serious?

In insurance, I cannot say, "I want to quintuple-down on having a car accident this year; let me pay a $5000 premium instead of $1000 for a 5X payout!"

Insurance typically covers specified perils. When they occur, the payout is in proportion to the actual damages, not to the rarity of the odds.

If insurance were like gambling: "Oh man, I won the insurance jackpot! My house was leveled by a rare meteor---500:1 payout---not just your everyday 4:1 fire."

Also, there is the obvious general observation: insurance compensates for losses. You don't win; you lose stuff and are compensated. You don't get compensated unless you lose first. People don't always feel adequately compensated by money. Money won't bring back the memorabilia that burned up in the fire. It won't grow back a severed arm, or replace people who died.

In gambling you don't lose anything to win; just your time and the bets, which loosely correspond to insurance premiums. The win is a pure win, not a compensation for loss.

`Nuff said.

Three Months of Go, from a Haskellers perspective (2016) barrucadu.co.uk
355 points by slikts  1 day ago   348 comments top 24
willsewell 1 day ago 0 replies      
I worked with Michael on the same project, after also working with Haskell previously. On the whole I agree with the pros/cons stated in the article. Having said that, my conclusion would be a bit different: I would err on the side of Go for the majority of commercial projects.

The article mentions the impressive worst case pause times of Go's GC. Since then we have performed some additional benchmarking. The conclusion was: it is impressive, but there are still a couple of issues that break the sub 1ms claims. We blogged about this here: https://making.pusher.com/golangs-real-time-gc-in-theory-and.... It's hard to guarantee low latency in all cases...

Michael also mentions that there is nothing like ThreadScope, or at least nothing that's easy to find. The latter is true. There is an impressive runtime system event visualiser which can be opened with `go tool trace` https://golang.org/cmd/trace/. You can see a screenshot of this in the GC blog post I linked to above. Unfortunately the only documentation is this Google Doc: https://docs.google.com/document/d/1FP5apqzBgr7ahCCgFO-yoVhk... which is tricky to find, and could be more in-depth.

I'm in the middle of writing a blog post on how to use this too. Watch out for it on our engineering blog in the next month or two. https://making.pusher.com/

jwdunne 1 day ago 10 replies      
It is, I think, going to be very difficult to enjoy writing code in a less powerful language when you are exposed to languages that hold awesome power.

In fact, this has been the basis for much writing on Lisp too. Paul Graham has written entire essays along the same lines.

If you work in a job that forces the use of a less powerful language than what you've been exposed to, you can, I think, go through a sort of depression. You simply long to use the tools that you know hold much more power yet must resign yourself to the tools you have.

You can overcome the problem by being present in your work. The language might be ugly. It might be totally underpowered with limited means of abstraction and a type system that smacks you and, sometimes, your client over the head. What you can do is, despite that, commit to writing great software with the tools you have. Commit to improving your work. Perhaps expand the ecosystem with tools borne of insights from your adventures with high-powered tools.

You will have a much more enjoyable time in most languages this way. Perhaps except MUMPS but there might be hope.

jonnybgood 1 day ago 4 replies      
> Go is just too different to how I think: when I approach a programming problem, I first think about the types and abstractions that will be useful; I think about statically enforcing behaviour

I see statements like this a lot from Haskellers and I think it's overstated. Anecdotally, after going from Python to spending 3-4 years in Haskell then going back to a dynamic language (Elixir), I've come to the conclusion that how you think when programming is very much a learned trait that works for that language. It's neither good nor bad, but it's educational nonetheless. Haskell and other languages like it force you to have a very unique mindset that can overpower previously learned languages not like it. And it's in no way permanent.

After I stopped using Haskell I was like "ugh, I need types! wth!". I wanted to go back to Haskell only out of familiarity, but as I continued after a short while I wasn't even thinking like that anymore. The thought rarely occurred. I stopped thinking in Haskell and started thinking in Elixir.

gothrowaway 1 day ago 4 replies      
When I look at a programming language, I look at the community and how it gets stuff done and projects that are noteworthy.

Something about Haskell strikes me as different. Despite the buzz about it, I don't see many projects for it other than shellcheck, pandoc and xmonad, and for two of those, there are better solutions around (sphinx, awesome/i3).

The other thing is the general pattern I've seen with Haskell programmers: many really tie their identity to it. They see programming as a crossword puzzle to solve in the short term, not as something other programmers have to read later on. They're not very empathetic to the idea that the runway's dwindling and you have to ship sooner rather than later.

In addition, I found that the Scala / Haskell developers I knew took golang to be quite the nuisance. They find gophers pesky. I think the reason why is years of Haskell teaches them to overengineer and complicate things needlessly. It's frustrating because they're not aware of it themselves and take offense, even blame you when you point it out to them.

Maybe I've just been unlucky. In 10 years, I've never seen people consistently fail to ship, or be as mean and arrogant, as Scala/Haskell programmers. They take the slightest criticism as an assault on their identity.

jrobn 1 day ago 2 replies      
I started out liking Go. It looked like a fairly pragmatic language. As I got deeper into my evaluation project (simple api stuff) it felt more and more like cutting wood with a dull saw.

I started out liking Haskell too! But as I moved along with my small evaluation api project it felt more and more like I was trying to cut wood with a gyroscopic laser saw. It worked, but it was a lot of fanfare for sawing some wood.

Picked up erlang/elixir. Looked pretty decent. Felt like cutting wood with a Japanese pull saw, so I had to use the vise clamps that came with it. It cut some fucking wood.

~ Ron Swanson out.

demarq 1 day ago 6 replies      
I hate to sound like the rust evangelist strike force... I really do. But your complaints are exactly what it would solve... Sigh I hate to say this I really do. But here goes...

So have you checked out rust?

twblalock 1 day ago 0 replies      
This article's comments on code generation and generics are a good exposition of what I don't like about Go: it seems internally inconsistent, as though there is one set of features for the Go development team, and a lesser set for everyone else.

The nice thing about Haskell and the Lisps is that they are consistently the same thing all the way down through every layer of abstraction, even at the bottom. There is no point where you reach the "magic" or "forbidden" layer where the paradigm changes and it turns into a different language.

The problem with Go is that the code we programmers get to write feels like a DSL on top of the "real" Go, which uses constructs and "magic" we aren't allowed to take advantage of.

joaodlf 1 day ago 3 replies      
I feel like some of the criticism is unwarranted, specifically:

> The tooling is bad

I feel like the tooling is really impressive, considering the age of the language. Remember that Haskell is something like 25+ years old. Go has done quite a bit in a short time - I can only hope it will get better too.

> Zero values are almost never what you want

I've always felt the defaults to be spot on. Anyway, it's my responsibility to initialise these values, even on struct fields.

> A culture of backwards compatibility at all costs

I agree the current (package management) landscape is dire, but this should change, hopefully this year, with godep. As it is now, I have found success using glide for dependency management, so that's what I would recommend for now.

Apart from that, I can agree on a lot of things - I'm specifically annoyed by the whole generics thing, the way I interpret the people involved with the project is "it would be nice to have, but the implementation is awkward, so we won't admit to it being nice to have".

PaulRobinson 1 day ago 6 replies      
TL;DR: "Go isn't like Haskell, and that means it's not as good"

I get that we all have favourite languages, but it is not amazingly helpful to try and compare them like this, for me. I'm sure if you're a Haskeller and you're eyeing up Go, being forewarned might be helpful, but here's another idea:

Don't compare. Just use. Take it at face value. Figure out what becomes easy, what becomes hard.

I came at Go from 10+ years of professional Ruby development, and it was a tough punch to the stomach at times. I still think in Ruby more often than I think in Go. But I know the two aren't really comparable for most things.

bogomipz 1 day ago 3 replies      
The author states:

>"Strict evaluation is typically better for performance than lazy evaluation (thunks cause allocation, so you're gambling that the computation saved offsets the memory cost), but it does make things less composable."

Can anyone tell me what a "thunk" is in this context and also why it causes performance problems? The article linked to in the sentence results in a 404.
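For what it's worth: a thunk is a suspended computation. Under lazy evaluation, GHC allocates a small closure holding the pending work instead of computing the value immediately, and that allocation is the cost the quote refers to. A rough Python analogy, using a zero-argument closure as a stand-in for the thunk (this only mimics the mechanism, not GHC's actual representation):

```python
def make_thunk(f):
    """Wrap a computation in a 'thunk': allocate now, compute on demand,
    and memoise the result so it is evaluated at most once."""
    result = []
    def force():
        if not result:
            result.append(f())
        return result[0]
    return force

calls = []
def expensive():
    calls.append(1)
    return 2 + 3

t = make_thunk(expensive)  # allocation happens here, no computation
assert calls == []         # nothing evaluated yet
assert t() == 5            # forcing the thunk runs the computation
t()
assert len(calls) == 1     # a second force reuses the cached value
```

If the value is eventually needed anyway, the thunk was pure overhead (an extra allocation plus indirection), which is why strictness can win; if the value is never demanded, laziness saved the whole computation.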

krylon 1 day ago 0 replies      
I liked this conclusion: "Go is just too different to how I think"

It is in a way a mirror image of my experience with Go: It is not so much that Go is a great language, it has its fair share of flaws, but it is quite compatible with the way I think.

amelius 1 day ago 3 replies      
> GHC's garbage collector is designed for throughput, not latency. It is a generational copying collector, which means that pause times are proportional to the amount of live data in the heap. To make matters worse, it's also stop-the-world.

This is pretty much unacceptable in today's world of low-latency (web) apps.

How active is GHC's development?

Would it be possible to efficiently run Haskell using Go's runtime environment, i.e. by changing the compiler backend?

bboreham 1 day ago 1 reply      
> There is no way to specify a version [of an imported package]

You can do this in your version-control system (e.g. git), via a process called "vendoring". It's ugly but, using one of the popular tools, quite workable.

SkyMarshal 1 day ago 0 replies      
Good writeup, but not an unexpected conclusion. I'd be very interested to see a similar writeup by a Haskeller on Rust, since the two are closer in some aspects (static enforcement of behavior, strong typing, well designed concurrency), but different in other key ones (strict vs lazy, GC vs manual, etc).
solidsnack9000 1 day ago 1 reply      
Pulling no punches:

Other than that, I will probably never choose to use Go for anything ever again, unless I'm being paid for it. Go is just too different to how I think: when I approach a programming problem, I first think about the types and abstractions that will be useful; I think about statically enforcing behaviour; and I don't worry about the cost of intermediary data structures, because that price is almost never paid in full.


daxfohl 1 day ago 1 reply      
Haskell always makes me sad. In real-world apps you always need hacks. I do anyway. In an imperative language I can be proud when my code only has a couple hacks in it. With Haskell I just end up feeling nasty about my code if there's even one hack there, and it makes me like coding less. (For toy apps with no deadline Haskell is great; the above regards Haskell-at-work, as a freelancer).
shadowmint 1 day ago 1 reply      
I don't think anything in the 'The Bad' is unfair, and they're certainly not written from a position of ignorance.

Those are things that suck about go; it's not specifically that they suck about go when compared to haskell; they just generally suck (particularly the type system stuff).

However, I don't think that the situation is so bad I would go as far as to say, "Other than that, I will probably never choose to use Go for anything ever again..."

I maintain that while go might not give smart programmers the flexibility to express their FP dreams, it has a grungy practicality both for developing and maintaining code that is quite effective.

Introducing go and building applications with it in a team is easy because its quick to pick up for the whole team, regardless of background, its simple to use and its (relatively) difficult to shoot yourself in the foot in terms of distributing binaries or developing applications that run at modest scale.

When gogland (the IDE by jetbrains) comes out of EAP, it'll even have a modestly good professional IDE with integrated debugger (no atom, you don't count when your debugger never works on any platform).

...but hey, if you don't like it, don't use it.

Haskell is pretty great too.

Thaxll 1 day ago 0 replies      
Well, the fact that pretty much no one uses Haskell anywhere should give you some hints about the language itself.
j2kun 1 day ago 2 replies      
> But Go does have generics, for the built-in types. Arrays, channels, maps, and slices all have generic type parameters.

Doesn't this mean you could implement a generic tree type if you fix the underlying data structure to be an array/map? (Not a go programmer yet, but honestly curious)

dorfsmay 1 day ago 0 replies      
I'll argue that Python PEP 8 was probably the precursor to gofmt.
pmarreck 1 day ago 0 replies      

> I will probably never choose to use Go for anything ever again

luigi23 1 day ago 1 reply      
Offtop: what kind of static website generator did he use? Looks clean, looking for something similar for my usage.
fpoling 1 day ago 3 replies      
The author mentioned that code generation "introduces additional, non-standard syntax". One can say exactly the same about generics.

Most useful type systems with generics are Turing-complete. Essentially they introduce their own language for types, with often very weird rules and syntax that one has to master on top of the base language. With code generation I can program my types using the same language I use for code, with less to learn.

blacksoil 1 day ago 3 replies      
I think comparing Go and Haskell is like comparing two incomparable species -- a fish vs. a cat.

Why? Because Haskell is an interpreted language while Go is a compiled one. An interpreted language doesn't care much about performance as it isn't designed for that purpose, while on the other hand, a compiled language does. As a result, interpreted languages tend to be more 'elegant' and have lots of convenient features at the cost of performance. A concrete example is when you talk about preventing unacceptable data types in Haskell. They could make it so in Go, but the performance cost would be undesirable.

IIRC, I read that they designed Go to be practical instead of 'elegant', the reason is so that people can learn it easily, making it a good alternative for other compiled languages like C++ whose learning curve is hugeeee and ugly!

I will not log in to your website scottaaronson.com
295 points by seycombi  1 day ago   119 comments top 26
lucb1e 1 day ago 5 replies      
I do not recognize the problem the author talks about, but it seems weird. From the article:

> Prof. Aaronson, given your expertise, we'd be incredibly grateful for your feedback on a paper / report / grant proposal about quantum computing. To access the document in question, ...

It seems odd to want feedback and then ask someone to go and register somewhere, probably requiring to accept a bunch of legalese in the privacy policy and terms of service... Just attach the document you want feedback on, right?

At least if I'd email someone (out of the blue or an acquaintance) for feedback due to his expertise, I'd be grateful for the time taken and try to make it as easy as possible to do.

Edit: it has been made clear to me that it's not about individuals contacting the author, it's some big corporation that probably sends this out, probably in an automated manner. I still don't understand why anyone would bother with this when "peer reviews" can happen between "peers" (i.e. sending each other documents for review, rather than going through the middleman that everyone seems to hate such as Elsevier, if blog posts linked on HN are to be believed).

always_good 1 day ago 2 replies      
Recently, in my customer support tickets, more and more of my users have given me email addresses "protected" by http://boxbe.com.

When I write my reply to them and submit, I get an ACTION-REQUIRED from boxbe.com telling me to register + captcha so that I can get on the receiver's whitelist.

It's so invasive that I don't bother. They'll have to check their spam folder for my email.

mrspeaker 1 day ago 1 reply      
Ha ha, I'm at the same "get off my lawn!" moment in my internet life too. The barrier of "first, create an account and log in..." is one that very, very few products can tempt me over.

I realized recently that I have space in my life for three log-in websites (HN, a gamedev site, one subreddit), three web apps (gmail, github, slack), and three non-built-in phone applications (instapaper, ride sharing app, twitter). If there's something new in town - it needs to be more valuable than these to knock someone else out of rotation!

hyperpape 1 day ago 3 replies      
This is, in principle, no different than the fact that you have to log in to Github to create issues or add comments.

What makes it different is that as a profession, we have decided that Github is nice, good, and ubiquitous. Unfortunately, the portals that he's describing are crappy, bad, and balkanized.

droithomme 1 day ago 0 replies      
On the topic of "the humans failed to engage them through the intermediary of their bureaucratic process", we long ago stopped accepting any purchases for under $20,000 if the customer insists we apply to their organization, sign contracts, and fill in paperwork to obtain a vendor account with their organization.
mnm1 1 day ago 0 replies      
Yup. In addition to not creating accounts, I've stopped filling in Captchas, especially Google's notoriously horrible re-captcha (I've already clicked all the storefronts about a million times and still it's not good enough), turning on JS for sites that don't present content without it, using sites that don't work with ad blockers, etc. except when I have no choice (banks, work). To me, all these sites are broken. If they want content, they need to fix themselves, and present something useful and secure. Most won't due to their business model.
bostik 1 day ago 0 replies      
I applaud this attitude, not least because it reminds me of the UX design story on allowing guest checkouts: https://articles.uie.com/three_hund_million_button/

Add a hurdle, any hurdle, to your potential users' workflow and you are doing yourself a massive disservice.

65827 1 day ago 0 replies      
The worst are the companies who seem to get completely new systems every few years, and if you didn't log in recently you effectively have to create an entire new sign in, and guess what you can't reuse that email and you have some silly new password restriction because SAP or whoever says so. Just awful experiences.
TuringNYC 1 day ago 0 replies      
From the article: Oh, Skype no longer lets me log in either.

Funny, I've had the same issue. Between legacy Skype passwords, Microsoft accounts, and what not, for a period of time it became almost impossible to log into Skype. It has improved, but the reset process was designed almost as a maze to help shed all but the most determined. I was not determined enough and eventually gave up and forced Skype contacts to reach out to me via WhatsApp/GChat/Signal/Duo/Allo/FBMessenger. Anything but Skype.

Side note: Same thing happened to Wunderlist after they too got purchased by Microsoft.

bgrohman 1 day ago 0 replies      
"Whenever my deepest beliefs and my desire to get out of work both point in the same direction, from here till the grave theres not a force in the world that can turn me the opposite way."

Words to live by.

cknight 1 day ago 0 replies      
I'm no scientist, but I was involved in a couple of projects with a research group to develop web apps that others could use to run biophysics simulations.

When submitting them for peer review, there was an absolute requirement from the journals in question that the sites did not require a login to use, and not even an email address to be entered to alert the user to results/completion. Result pages and download links were to be provided at a hidden URL which was linked to from the submission page after the form was submitted. So while we did this, we also ended up maintaining emails for job alerts, but optionally so. Most users have since used their emails to run jobs as it is more convenient for them.

But for the reviewers, their requirements made sense. We were submitting to journals which had entire dedicated editions for online scientific apps. Hundreds of them, all of which required peer review by scientists who were being very generous with their time. For a free service, such requirements don't seem at all unreasonable.

Animats 1 day ago 0 replies      
Google now hosts web pages on Google Drive you can't even read without a Google account. Please don't use or link to those.
a_bonobo 1 day ago 0 replies      
The Journal Of Open Source Software does the review process rather nicely: You send them a PR with your software/description, and then a reviewer will publicly go through the review process (described here: http://joss.theoj.org/about#reviewer_guidelines )

Of course, that doesn't work for all of science. You don't always want open peer review -- usually because several people are working in various stages on similar or related things, or you don't want to publicly criticize the reviewed party, or you don't want to make the reviewer look bad when the reviewer doesn't know what s/he is talking about.

InquilineKea 10 hours ago 0 replies      
Lol, this reminds me of the Demolition Ship Captain incident at AoKH, where DSC hacked into Angel THS's account (and then banned half the active users on Age of Kings Heaven) just by creating a website that got Angel THS to enter the same password he used everywhere (HUNTER)
glangdale 1 day ago 0 replies      
On a related, but more trivial level, I note that a lot of places that used to have a nice little punch card or whatever for a loyalty program now have accounts you can log into. So I can, if I want, choose to have to remember a burrito password. Awesome.
kerouanton 1 day ago 1 reply      
This is a global issue. We all experienced friends or people asking for joining them on their social network, or their IM app, which unfortunately you're not on. Same for vendors and partners and such that ask you to create accounts on their websites for sometimes just a single event or document to sync. XKCD recently published a fun illustration of it (1810).

On the other hand, most of us want to split between work and friends, between private and public. So we have different accounts for this purpose. I don't use Twitter and Linkedin the same way, and I don't have the same circle of relations connected by those means. So it may be "convenient" to have separate accounts, but at the same time this becomes a burden to maintain and check every of these (not counting data breaches and so).

My current practice is the following:

- an email address for my close friends & family
- some public accounts for infosec usage (linkedin, twitter...)
- some undisclosed accounts for my professional usage
- some undisclosed accounts for my private usage (ecommerce etc.)
- all the rest (a vast majority) uses throw-away emails (I own a domain, enabling me to generate unique email addresses per website) and random passwords, so I don't care to monitor them or if they are breached. If I know I won't use the site frequently I don't even remember the password, I just do a "recover password" if I need it in the future.

My rules:

1. Never reuse the same email twice for websites. That also helps me monitor breaches and/or spam and/or db resellers.
2. Never reuse the same password twice. Obviously.
3. Never use third-party authentication such as "Login with FB, Twitter or Gmail", as it breaches the first rule.
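
Rule 1 is easy to automate with a catch-all domain. A sketch of one way to derive a unique, deterministic address per website (the domain and salt here are hypothetical):

```python
import hashlib

def site_email(site: str, salt: str = "my-private-salt",
               domain: str = "example.com") -> str:
    # A short hash keeps the address unique per site but hard for
    # spammers to guess from addresses leaked by other sites
    tag = hashlib.sha256(f"{salt}:{site}".encode()).hexdigest()[:8]
    return f"{site.replace('.', '-')}-{tag}@{domain}"
```

A breached address then identifies the leaking site immediately, which is the monitoring benefit mentioned in rule 1.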

It takes some work to maintain all of this, but I've been doing it for probably over a decade now, and it's a habit I can't quit, considering the benefits.

So, back to the paper, I'd tend not to follow this guideline, even if I'm tempted to do so.

joshuaheard 1 day ago 0 replies      
The author was me before I started using LastPass password manager and form filler. I can input my name, address, credit card number in seconds, and it will automatically track all my logins. There are many other such apps out there besides LastPass, so this is not a particular endorsement of that product. And, of course, LastPass or other password manager will not fix all the bad websites out there.
ibgib 1 day ago 0 replies      
> Why didn't I call myself? Mostly, because I hate making unsolicited calls of any kind, a phobia that I admit isn't entirely rational and that often causes inconvenience.

Interesting. I hadnt thought of a reservation as being unsolicited. What about online reservations that are more pubsub-like?

kapauldo 1 day ago 7 replies      
Unreadable for the over 40 crowd on mobile.
jasonkostempski 1 day ago 0 replies      
The unnecessary accounts I hate the most are the ones that are needed to send feedback. An account should not be required for that if you want feedback from outside your happy users bubble.
lutusp 1 day ago 1 reply      
This describes an increasingly common practice among online businesses -- aggressively monetize visitors, turn them into clients and corporate assets.

When you visit a typical modern website, within 15 seconds an overlay appears encouraging you to sign up, give away your email address, and become part of what's really happening.

In a hypothetical parallel universe where telling the truth is mandatory, you would visit a website and ask, "So, what are you selling, what is your product?" The website will be forced to reply, "You."

All this apart from the present state of the scientific-technical publishing business (also discussed in the linked article), which uses different methods to obtain the same result: monetize people's wish to communicate with each other.

dredmorbius 1 day ago 0 replies      
I agree with all the advice and sentiments given. Moreover, the proliferation of user accounts, the stickiness that implies for registration email addresses, the general failures of password-based security systems, and the unconscionably high level of tracking implied by individually registered, client-side tattling interfaces are all rapidly reaching a crisis point.

Some months back, another HN user mentioned as an aside in comments that he had over seven hundred site authentication credentials. That is inflated compared to ordinary users, but not tremendously so -- the typical citizen will have at least a score of accounts -- social media, email, various vendors -- and quite easily 100 or more.

There's also the problem of multiple worlds colliding, as YouTube's founder famously noted when faced with a "Please create a G+ account" prompt a few years back. After being reasonably assured that G+ and YouTube activity were separate, I've just learnt that they are not, with the results that 1) I'd inadvertently changed my G+ identity and 2) I've yet again blown away a YouTube profile I really don't care for.

I'm not sure what we're going to replace this system with, but extending the current path ain't gonna work.

As for the haircuts, a $25 set of electric clippers addresses that need. Or a blade. A 35 year old man is old enough to learn to cut (or shave) his own hair.

EGreg 1 day ago 0 replies      
And that is why we have implemented this in our platform:



hujouo 1 day ago 1 reply      
Narcissistic proclamation in a blog that brings no discussion or interesting thoughts.
paulcole 1 day ago 8 replies      
If you ever wondered what people meant by out of touch "ivory tower" academics, just read this post.

Just imagine telling a client, "sorry I don't open Google Docs on principle. Life is too short and too precious."

Chuck Berry has died bbc.com
267 points by Perados  2 days ago   77 comments top 19
jbuzbee 2 days ago 2 replies      
A brilliant showman. If you can make your way through the corny "Rock! Rock! Rock!" [1] movie of 1957 you'll see some pretty awful performances from artists of the day. But then on comes Berry. [2] His segment was head and shoulders above the rest and is still enjoyable today.

[1] https://www.youtube.com/watch?v=RCt4_Dwt-Lk

[2] https://www.youtube.com/watch?v=9jKrHzps0XM

bamboozled 2 days ago 0 replies      
It's hard to imagine how much of a positive impact Chuck's music has had on the world over the last several decades.

I think it's worth mentioning that his music was selected to represent Earth on "Music from Earth" [1], which is a pretty great honour in my opinion. Whether or not any extra-terrestrials get to groove out to some Chuck one time is another thing, but I sure hope they do.

Rest in peace Chuck Berry, thanks for all the great music!

[1] http://voyager.jpl.nasa.gov/spacecraft/music.html

Brendinooo 2 days ago 1 reply      
I always found it amazing that the guy who could be credited for the birth of rock'n'roll more than anyone else was still alive. Longevity isn't as much of a thing in rock circles, and the genre has changed so much over time.

Also, major props for putting out a new studio album almost 40 years after his previous one.[1]

[1]: https://en.wikipedia.org/wiki/Chuck_(Chuck_Berry_album)

rmason 2 days ago 1 reply      
Chuck Berry was so much a part of my childhood. As a kid in Detroit I was a fan of WXYZ disk jockey Lee "The Horn" Allen. He was good friends with Berry and often told the story that Chuck wrote several of his hits in the back seat of his Caddy as they went from bar to bar.

As I went through school his music was never far behind. Even in his eighties he could still rock the joint. Hoping I could see him do it in his nineties but I guess not.

acheron 2 days ago 1 reply      
Will never forget that time Marty McFly taught him how to play rock and roll.
cyberferret 2 days ago 2 replies      
Chuck would have been (in one way or another) the inspiration for almost everyone who plays guitar today. Aside from "Stairway to Heaven", the opening riff to "Johnny B Goode" is one lick that almost every guitarist learns to play. I've been playing that lick for 40 years, but still can't get it to sound as cool as Chuck did... RIP.
coldcode 2 days ago 0 replies      
Many people don't realize that the only #1 hit Chuck ever had was the novelty song "My Ding-a-Ling". It still cracks me up.
mgkimsal 2 days ago 0 replies      
"the great 28" was part of my musical "coming of age" in the 80s, and I definitely annoyed the heck out of everyone else in the house practicing along to that double album.

this is one of those people whose influence and impact will be felt for decades to come (like les paul, hendrix and many other icons)

bengg 1 day ago 1 reply      
One of the first celebs to create a sex tape, Berry famously coined the phrase "I like to do that" after farting in the face of a hooker. The video was recovered in the late 1980's after a drug raid on Berry's house by Police. A great man has left the world.


bboreham 1 day ago 0 replies      
One day, I thought "I should go see Chuck Berry, because he's getting old and I'd like to see him before he dies."

So I did. That was in 1987.

Unlike standard rock concerts, Chuck Berry came on stage at exactly the time printed on the tickets. He played exactly one hour, then he went off again. No encores, no banter.

It was great, though. RIP.

brudgers 2 days ago 2 replies      
"Send more Chuck Berry"
exabrial 1 day ago 0 replies      
A bit of interesting history about the ES-345 Marty used to play Chuck Berry's "Johnny B. Goode" in Back to the Future: http://bit.ly/2mHQzFG

Spoiler: The guitar also traveled through time

maus42 1 day ago 0 replies      
I think it's remarkable that a recording of Chuck Berry's Johnny B. Goode [1] printed on gold is on its way towards Gliese 445 [2].

[1] http://voyager.jpl.nasa.gov/spacecraft/music.html?linkId=356...

[2] https://en.wikipedia.org/wiki/Voyager_Golden_Record

grabcocque 2 days ago 0 replies      
This was one of those TIL xxx was still alive moments.
vasira 1 day ago 0 replies      
RIP! Very sad to hear this. His guitar style was unique and popular. We will all miss you.
slg 1 day ago 4 replies      
Reading this obituary reminds me of Michael Jackson's death. Someone dies and suddenly all the bad things in their life are expunged. Berry was a very influential and talented musician, but he spent time in prison related to being caught with a 14-year-old suspected prostitute. He spent time in prison for robbery. He spent time in prison for tax evasion. He settled a lawsuit for videotaping 50+ women, including underage girls, in the bathroom of his restaurant. I get it, no one is perfect and a person's death is a time to celebrate their life, but these crimes are part of his legacy. Not even giving them a passing mention in his obituary sends a horrible message that if you are talented enough you can get away with anything.

EDIT: This is currently the top comment, which likely isn't fair either. These things are a part of the story of his life but they also probably shouldn't be the top line. My point is simply we should be talking about his entire life and this is a poor obituary for not even mentioning it. The New York Times has a better obituary available at https://www.nytimes.com/2017/03/18/arts/chuck-berry-dead.htm...

alistproducer2 2 days ago 4 replies      
Sort of relevant. I was watching an old movie that featured the original writer and performer of "I Put a Spell on You." His name is Screamin' Jay Hawkins. I actually thought he was doing a cover. Apparently he is the real father of shock rock. I consider myself to be a music aficionado and I was completely unaware of this guy's contributions. I'm sure there are many who don't know how much Chuck contributed to the creation of rock. Rest in peace man.
jumper2444 2 days ago 1 reply      
Pro_bity 2 days ago 3 replies      
I am curious, when HN became reddit? This seems like a total social shit post not in keeping with HN.
Show HN: How to write a recursive descent parser craftinginterpreters.com
281 points by munificent  12 hours ago   132 comments top 22
Drup 11 hours ago 8 replies      
I don't understand why so many people glorify hand-written parsers. Maybe because they never used good parser generators (not unlike type systems and Java) ?

Personal opinion: writing parsers is the least interesting part of an interpreter/compiler, and your grammar should be boring if you want your syntax to be easy to understand by humans.

Boring, in this case, means LL(*), LR(1), or another well-known class. Just pick a damn parser generator and get the job done quickly, so you can spend time on the really difficult tasks. The grammar in this article is LR(1) and is trivial to implement in yacc-like generators, location tracking and error messages included.

Bonus point: since your grammar stays in a well known class, it's much easier for other people to re-implement it. You can't introduce bullshit ambiguous extensions to your grammar (C typedefs, anyone ?). This article gives a good explanation of this: http://blog.reverberate.org/2013/09/ll-and-lr-in-context-why...

panic 9 hours ago 4 replies      
Here's a recursive descent trick worth mentioning: instead of decomposing expressions into levels called things like "term" and "factor" by the precedence of the operators involved, you can do it all using a while loop:

 function parseExpressionAtPrecedence(currentPrecedence) {
   expr = parseExpressionAtom()
   while op = parseOperator() && op.precedence < currentPrecedence {
     if op.rightAssociative {
       b = parseExpressionAtPrecedence(op.precedence)
     } else {
       b = parseExpressionAtPrecedence(op.precedence + 1)
     }
     expr = OperatorExpression(op, expr, b)
   }
   return expr
 }
The parseExpressionAtom function handles literals, expressions in parentheses, and so on. The idea is to keep pushing more operators on to the end of an expression until a higher-precedence operator appears and you can't any more. This technique (called precedence climbing) makes parsing these sorts of arithmetic expressions a lot less painful.
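
Here is a runnable sketch of precedence climbing in Python. Note that it uses the convention that a higher number binds tighter, so the loop condition is ">=" rather than the "<" in the pseudocode above; the operator table and tokenizer are illustrative:

```python
import re

PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2, "^": 3}
RIGHT_ASSOC = {"^"}

def tokenize(src):
    return re.findall(r"\d+|[+\-*/^()]", src)

def parse(src):
    expr, rest = parse_expr(tokenize(src), 1)
    assert not rest, f"unconsumed input: {rest}"
    return expr

def parse_expr(tokens, min_prec):
    expr, tokens = parse_atom(tokens)
    # Keep absorbing operators that bind at least as tightly as min_prec
    while tokens and tokens[0] in PRECEDENCE and PRECEDENCE[tokens[0]] >= min_prec:
        op, tokens = tokens[0], tokens[1:]
        # Right-associative: allow the same precedence on the right;
        # left-associative: require strictly tighter
        next_min = PRECEDENCE[op] if op in RIGHT_ASSOC else PRECEDENCE[op] + 1
        rhs, tokens = parse_expr(tokens, next_min)
        expr = (op, expr, rhs)
    return expr, tokens

def parse_atom(tokens):
    tok, tokens = tokens[0], tokens[1:]
    if tok == "(":
        expr, tokens = parse_expr(tokens, 1)
        return expr, tokens[1:]  # drop the closing ")"
    return int(tok), tokens
```

For example, `parse("1+2*3")` yields `('+', 1, ('*', 2, 3))`, and `parse("2^3^2")` nests to the right.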

CJefferson 11 hours ago 3 replies      
One piece of advice I have for people writing a new language.

Consider designing your language so that you can parse it with one character of lookahead, with something like the shunting-yard algorithm to handle precedence.

I work on a system called gap ( www.gap-system.org ), and when parsing we only ever need to look one character ahead to know what we are parsing. This makes the error messages easy, and amazingly good -- we can say "At this character we expected A,B or C, but we found foo". It also makes the language easy to extend, as long as we fight hard against anyone who wants to introduce ambiguity.

If your language is ambiguous, and you have to try parsing statements multiple times to find out what they are, then your error handling is always going to be incredibly hard, as you won't know where the problem arose.

Of course, if you are parsing a language given to you, you just have to do the best you can.
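
The "expected A, B or C, but we found foo" style of error falls out almost for free when a single lookahead token always determines what to do next. A minimal sketch (names are illustrative, not from GAP):

```python
def expect(tokens, pos, allowed):
    """Check the single lookahead token against the set of tokens
    that could legally appear here, and report precisely if it can't."""
    tok = tokens[pos] if pos < len(tokens) else "<end of input>"
    if tok not in allowed:
        expected = ", ".join(sorted(allowed))
        raise SyntaxError(f"at position {pos}: expected one of {expected}, but found {tok!r}")
    return tok, pos + 1
```

Because the parser never backtracks, `pos` is exactly where the problem is, so the message always points at the right place.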

w23j 10 hours ago 1 reply      
Oh my god, a dream come true!

I had always hoped Bob Nystrom would write a book about interpreters/compilers.

Back when I tried to learn how to write a recursive descent parser, the examples I found either ignored correct expression parsing or wrote an additional parse method for each precedence level. Writing a parser by hand seemed just too much work. Along comes this great article about pratt parsers http://journal.stuffwithstuff.com/2011/03/19/pratt-parsers-e... and all can be done with two simple functions, a loop and a map. :) Saved my enthusiasm right there.

Another great example is this article about garbage collection: http://journal.stuffwithstuff.com/2013/12/08/babys-first-gar... Instead of making things more complicated than they are, Bob Nystrom simplifies daunting topics and makes them accessible.

Thanks so much for your work! Really looking forward to this one. Will there be code generation too? :)

tbrock 11 hours ago 4 replies      
Love the confidence of this author:

> Writing a real parser -- one with decent error-handling, a coherent internal structure, and the ability to robustly chew through a sophisticated syntax -- is considered a rare, impressive skill. In this chapter, you will attain it.

mishoo 11 hours ago 3 replies      
I wrote some time back a tutorial on this: http://lisperator.net/pltut/ but the implementation language is JS, and the language we implement is quite trivial (no objects, inheritance etc.; but does have lexical scope, first-class functions and continuations).

I think Java is ugly, but putting this knowledge in an accessible book is great. Typography is also very nice.

WhitneyLand 10 hours ago 2 replies      
Why is it called recursive descent, isn't that redundant?

I normally think of any recursion as some kind of descent into deeper levels. Maybe I'm being biased due to awareness of stackframes being a common way to implement recursion.

Or maybe it's a known phrase made popular by a paper or researcher, like "embarrassingly parallel". There are other ways to say it, but comp sci people know by convention what an embarrassing problem is and that it's usually a good thing.

If everyone started saying only "recursive parser" would there really be any confusion?

rzimmerman 8 hours ago 0 replies      
If anyone would be helped by an example, I worked on this recursive descent-based compiler a few years ago with a focus on clarity and readability:


The actual parsing is here: https://github.com/rzimmerman/kal/blob/master/source/grammar...

It's somewhere between a toy language and something professional, so it could be a helpful reference if you're doing this for the first time.

Beware, the project is no longer maintained and probably doesn't work with modern node.js runtimes.

grabcocque 11 hours ago 9 replies      
It's a shame parsers are such a PITA to write. So many problems could be trivially solved if writing a grammar and generating a parser for it were in any way a pleasant process.
megous 10 hours ago 1 reply      
Another fun way to write a recursive descent parser is to abstract and parametrize the recursive descent algorithm.

Best done in dynamic languages. I wrote an abstract recursive descent parser in JS which accepts an array of terminal (regexp) and non-terminal definitions (array of terminal/non-terminal names + action callback), and returns the function that will parse the text.

The parser "generator" has 125 lines of code and doesn't really generate anything. It's an extremely lightweight solution to quickly produce purpose made languages in the browser without need for any tooling.

Together with backtick template strings to write your custom-made language code in, it makes for a lot of fun in JS. :D

e19293001 11 hours ago 0 replies      
I learned about recursive descent parsers by reading Anthony Dos Reis' book, Compiler Construction Using Java, JavaCC, and Yacc [0]. I'm a bit lazy {tired} now, so I'll just refer to my previous comment. It has been my favorite book. Trust me, you'll learn about compiler technology with this wonderful book.


[0] - https://www.amazon.com/Compiler-Construction-Using-Java-Java...

asrp 8 hours ago 0 replies      
I've found parsing expression grammars (PEGs) [1] to be a good solution for avoiding the ambiguity stated at the very beginning. This is done by making all choices (|) into ordered choices.

I've recently used PEGs to write a Python parser (parsing all of Python, except for possible bugs) in ~500 lines of Python [2]. It's entirely interpreted. No parser is generated, only trees.

I'll also add that Floyd's operator precedence grammar [3] includes an algorithm which can deduce precedences from a grammar.

[1] https://en.wikipedia.org/wiki/Parsing_expression_grammar

[2] https://github.com/asrp/pymetaterp

[3] https://en.wikipedia.org/wiki/Operator-precedence_grammar
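
The ordered-choice operator is simple to sketch with parser combinators: each alternative is tried in order and the first success wins, so the alternatives never compete ambiguously (a toy illustration, not code from pymetaterp):

```python
def literal(s):
    """Match an exact string at the current position."""
    def parse(text, pos):
        if text.startswith(s, pos):
            return s, pos + len(s)
        return None
    return parse

def ordered_choice(*parsers):
    """PEG '/' operator: commit to the first alternative that matches."""
    def parse(text, pos):
        for p in parsers:
            result = p(text, pos)
            if result is not None:
                return result
        return None
    return parse

# "<=" is listed before "<", so "<=" is never mis-read as "<" then "="
op = ordered_choice(literal("<="), literal("<"))
```

Note the usual PEG caveat applies: since the first match wins, the longer alternative must come first.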

zem 8 hours ago 0 replies      
how do recursive descent parsers compare to parser combinators? so far i've tended to use a combinator library when one of my small projects needs a parser, but now i'm wondering if i should just get good at doing recursive descent parsers instead.
wideem 7 hours ago 1 reply      
Haha, I had an assignment to build a recursive descent parser for a simple Ada-like language just 3 days ago. It was a fun task, but if this guide had been posted earlier, I could have used it.
coldcode 10 hours ago 0 replies      
My first project ever as a professional programmer was writing a recursive descent parser - in 1981, in Fortran, of Jovial language code. Of course there were no other choices. Thankfully today you can avoid writing your own, though sometimes it still pays to write it by hand.
JustSomeNobody 11 hours ago 2 replies      
I didn't see mention of first/follow sets. Might be handy to have that in there?
fjfaase 6 hours ago 0 replies      
Ever thought about using an interpreting parser, which takes a grammar and parses a string/file according to it? (To parse the grammar, you use the interpreting parser itself with a hard-coded version of the grammar of the grammar.) Have a look at: https://github.com/FransFaase/IParse
lhorie 9 hours ago 0 replies      
I'm writing a Javascript parser right now so this is actually super useful. Thank you!
rch 11 hours ago 0 replies      
I enjoyed experimenting with this project, back when it was active...


-- A recursive descent parser for Python

-- Grammars are written directly as Python code, using a syntax similar to BNF

-- New matchers can be simple functions

amorphid 11 hours ago 1 reply      
I worked on a recursive descent JSON parser. That was a valuable learning experience.
prions 10 hours ago 0 replies      
If you're serious about writing a parser, why not go for a bottom up parser?

It eliminates a lot of the headaches with top down/recursive descent parsers like left recursion and backtracking.

hasbot 10 hours ago 0 replies      
Been there. Done that. Yuck! I so much prefer bison or yacc.
The Logic Behind Japanese Sentence Structure 8020japanese.com
307 points by tav  1 day ago   148 comments top 25
kazinator 1 day ago 3 replies      
Hey all, on a topic related to this: here is another way to get some feeling for the different sentence structure.

I recently finished making English subs for a 45 minute Japanese rock concert video from the 1980's.


Here I introduce a concept in subtitling whereby a subtitle template with dashed ("------") blanks appears for an entire English sentence, and the blanks convert to words and phrases as the corresponding concepts appear in the Japanese audio, in that order.

The viewer has a better idea of what is being sung at the moment it is sung, and which words are receiving the emotional emphasis in the song. Also, the revelation of meaning is delayed for the English viewer in the same way. The "kicker" phrase at the end of a verse or a meaning-altering particle (such as an entire sentence negation) isn't prematurely revealed in the translation.

whym 1 day ago 6 replies      
This article does a great job at presenting a gist of the Japanese sentence structure. Nevertheless, it makes me want to point out that it's not the whole story. If you take into account topics such as modality and conjugation, some of the information you add to a verb is placed after the verb and cannot be freely reordered.

Japanese verbs are "greater" than English verbs in the sense that you conjugate/suffixate a verb to express negation, conjunctions, conditional forms etc, making it longer and longer: https://en.wikipedia.org/wiki/Japanese_verb_conjugation

In contrast, English has a relatively simple set of inflections of verbs. Many of those Japanese verb forms and suffixated long verbs are translated into multi-word phrases. Compare:

Anata wa kyou nemuru. (You sleep today.) -- verb is in normal form ("nemuru")

Anata wa kinou nemurenakatta. (You were not able to sleep yesterday.) -- verb is in continuative form ("nemuru" → "nemu") + possibility suffix ("reru" → "re") + negation suffix ("nai" → "naka") + past suffix ("ta" → "tta")
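
The suffix chain in the second example can be sketched as function composition over the stem, each suffix wrapping the previous result (the stem and suffix forms are simplified, following the decomposition above):

```python
# Each suffix attaches to the stem produced so far (forms simplified):
def potential(stem):  # possibility: "reru" surfaces as "re" before more suffixes
    return stem + "re"

def negative(stem):   # negation: "nai" surfaces as "nakat" before the past suffix
    return stem + "nakat"

def past(stem):       # past tense suffix "ta"
    return stem + "ta"

# "nemu" -> "nemure" -> "nemurenakat" -> "nemurenakatta"
print(past(negative(potential("nemu"))))
```

The composition order mirrors the fixed order in which the suffixes must appear on the verb.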

iamnotlarry 1 day ago 7 replies      
I learned Japanese very much the same way I learn programming languages and found it to be very easy to learn spoken Japanese.

As far as languages go, Japanese is structured a lot like a programming language. If you learn five or six "bunpo" or grammar rules, you can go a very long ways. Then, to improve, just add rules to your mastery.

When I first learn any programming language I start with basics: variable binding/assignment, types, conditionals, looping, etc. Japanese fits very nicely into the same learning method.

Does a language have if/then? is it 'if (<expression>) { expression }'? Or 'if <expression> then <expression> end if'? Is there an 'unless' form? What about 'else'?

For Japanese, it's <expression> naraba <expression>. That's it. Unless? <expression> nakeriba <expression>.

How about while? <expression> nagara <expression>

For people who can learn the gist of a programming language in a week, you could learn the gist of Japanese in a week or two. That doesn't mean you would be fluent. You'd still need to learn thousands of vocabulary words. But the basic mechanics can be mastered in days or weeks. More mechanics can be layered as needed.

dbshapco 1 day ago 2 replies      
Anyone actually recommend the book from which the article is taken? I also tried to read the wa v. ga blog post on the site to get a further sense of the author's approach, but the server returns an out of memory error (from a blog post?!).

I've been in Tokyo now 18 months, took private lessons twice costing about $2,000, and feel I learned 10 words. That's $200/word. I joke with people I stopped taking lessons because learning Kanji would bankrupt me. Japanese just doesn't stick in my older and very Western brain. It doesn't help that my office does business in English and one can get by in Tokyo with minimal Japanese and a lot of pointing and gesturing. The glacial progress becomes discouraging.

I tried Rosetta Stone. It takes the same phrasebook approach as the first textbook I was given, Nihongo Fun & Easy, which was neither. The textbook at least had short sidebar discussions of grammar and somewhat useful phrases. I had no idea where I'd get to use the phrase "The children are swimming," that Rosetta offers.

The 8020 article was the first discussion of particles that actually made sense. When I'd asked teachers about particles before the answer was usually something like "Don't worry about that yet, just memorize the phrases." If the remainder of the book is in the same vein I'd pay twice the asking price. I flipped through parts of Nihongo Fun & Easy after reading this article and it suddenly made much more sense. I wasn't staring at a list of phrases I was supposed to memorize and slowly reverse engineer the language, but could deconstruct the basic sentences.

It's much easier for me to learn construction, and use the break down of other sentences to construct my own, even if the rules fail sometimes and lead me to construct sentences no native speaker would utter. That's the other 80% of language idiosyncrasies that takes time.

I don't expect to be fluent in Japanese any time soon, however moving past "sumimasen, kore onegaishimasu" while pointing at a menu item would be awesome.

myrandomcomment 1 day ago 2 replies      
So I am sitting in Ebisu in Tokyo right now. I spend about 3-4 months a year here and have for about 8 years. My understanding of spoken Japanese is pretty decent. However it is by pure memorization over time. This just sorted a whole bunch of things out in my head as to the why. Very good stuff. Thank you.
ThinkingGuy 1 day ago 0 replies      
One minor quibble with the author's example sentences: They use "watashi wa hito desu" to mean "I am a person."

I'm not a native Japanese speaker, but I'm pretty sure that "hito" is only used to refer to other people, never to oneself (source: the excellent "Nihongo Notes" series by the Mizutanis).

Maybe watashi ha ningen desu (I am a human) would be a better example that still illustrates the grammar pattern.

tempodox 1 day ago 3 replies      
I love how precise and detailed this article is written. If only more documentation were like this.

It was always my assumption that grammar is the most important thing to learn about a language. Vocabulary accumulates almost automatically over time, with practice (and a dictionary). Interesting to see how that holds in this case.

stephengillie 1 day ago 1 reply      
Using particles in this way almost sounds like using flags to specify parameters when calling a function. This allows them to be placed in any order. A Powershell-like example:

 Construct-Sentence -subject Taro -object Noriko -verb to_see -time Past
 > "Taro saw Noriko."

 Construct-Sentence -object Noriko -time Past -verb to_see -subject Taro
 > "Taro saw Noriko."
The original sentence is "Tarō wa Noriko wo mimashita." The root verb "to see" is "miru"; its stem "mi" takes the polite past ending "-mashita".

 Construct-Sentence -wa Taro -wo Noriko -verb miru -time Past
 > "Tarō wa Noriko wo mimashita."
Something more Bash-like:

 csent wa:Taro wo:Noriko miru -past_affirmative # "Tarō wa Noriko wo mimashita."
Meanwhile, subject-object-verb (SOV) and similar positional patterns depend on the order of inputs:

 Construct-Sentence Taro Noriko to_see Past
 > "Taro saw Noriko"

 Construct-Sentence Noriko Taro to_see Past
 > "Noriko saw Taro"
This allows for invalid outputs:

 Construct-Sentence Taro to_see Noriko Past
 > "Taro Noriko'ed see"
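
In Python, the same particle-like behavior falls out of keyword-only arguments: each phrase is labeled, so the call order is irrelevant and invalid role assignments can't happen silently (the function and parameter names here are made up for illustration):

```python
def construct_sentence(*, subject, obj, verb_past):
    # Keyword-only parameters act like particles: the label, not the
    # position, says which role each word plays in the sentence
    return f"{subject} {verb_past} {obj}."

# The same sentence, with the labeled arguments in two different orders:
a = construct_sentence(subject="Taro", obj="Noriko", verb_past="saw")
b = construct_sentence(obj="Noriko", verb_past="saw", subject="Taro")
print(a, b)  # both are "Taro saw Noriko."
```

Passing the words positionally raises a TypeError, just as dropping the particles makes the Japanese sentence ambiguous.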

a_c 1 day ago 0 replies      
This article resembles two different ways of designing protocols. In TCP [0], information is encoded by position, e.g. the first 16 bits are for the source port and the next 16 bits are for the destination port. While in, say, FIX [1], information is encoded with delimiters and position doesn't matter.

[0] https://en.wikipedia.org/wiki/Transmission_Control_Protocol

[1] https://en.wikipedia.org/wiki/Financial_Information_eXchange
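
The contrast is easy to see in code: positional fields must be packed and unpacked in one fixed order, while tag=value fields can arrive in any order. A sketch (tags 35 and 49 are real FIX tags, MsgType and SenderCompID; the values are made up):

```python
import struct

# TCP-style: position carries meaning -- the first 16 bits are the
# source port and the next 16 bits are the destination port, by definition
header = struct.pack("!HH", 443, 51234)
src, dst = struct.unpack("!HH", header)

# FIX-style: delimiter-separated tag=value pairs; field order is irrelevant
msg = "49=ACME\x0135=D\x01"          # SOH (\x01) is FIX's field delimiter
fields = dict(f.split("=", 1) for f in msg.rstrip("\x01").split("\x01"))
print(src, dst, fields)
```

Swapping the two fields in `msg` changes nothing, while swapping the two values in `struct.pack` swaps the ports.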

ehsquared 1 day ago 0 replies      
Filipino is really similar in terms of the use of particles/markers! For example, to say: "The cat is eating the fish", we say: "Kumakain ng isda ang pusa". The verb (is eating, kumakain) always comes first. The subject (cat, pusa) is identified by the "ang" marker, while the object (fish, isda) is identified by the "ng" marker. We could also say "Kumakain ang pusa ng isda", although that's rarely used.

The "-um-" affix in "kumakain" makes the verb active ("is eating"). If we instead used the "-in-" affix (as in "kinakain"), it would make the verb passive ("is being eaten by"). So we could alternatively say: "Kinakain ng pusa ang isda" to mean: "The fish is being eaten by the cat".

creamyhorror 1 day ago 0 replies      
I quite like the headlining diagram. It's a simplified view that shows the schematic approach of the languages - Japanese relies on case particles rather than ordering (unlike English). Of course, there's a lot of complexity that goes on under the hood when you start to figure out the appropriate verb conjugations to use (which aren't shown in the figure).

Small side comment, if anyone's learning Japanese and wants to ask or answer questions about it, you're welcome to join a little Discord chat group (including native speakers and advanced learners) at https://discord.gg/6sjr3UY

scarygliders 1 day ago 0 replies      
I'm wondering if the main factor is one's ability to learn a new language - itself affected by many factors such as age, for example?

I've been married to a native Japanese for going on 19 years now.

I have tried to learn the language. I have lived in Japan for 6 years, hoping that full immersion would help. I even embarked on the Kumon Japanese course whilst in Japan, from beginner level to more advanced. I have piles and piles of the workbooks cluttering my home.

I ended up being able to read katakana, hiragana, and learned some 250 Kanji.

What I didn't end up managing was being able to have decent conversation in Japanese. Sure, I could ask for a beer, directions, talk about the weather, but that was about it. I had reached some plateau and could go no further.

In the end I gave up. It was basically something I couldn't do. I tried many different ways of learning, and found none that could prevent my sheer frustration at not being able to take the knowledge in.

Are some people simply 'wired' to learn language more than others? Is there an age limit, for example? Was it my low tolerance for frustration? Was it my perfectionist tendencies? Probably a 'yes' to most of those.

But I stopped after more than a decade of trying.

Grue3 1 day ago 0 replies      
Yeah it's "logical", except in casual language the rules are broken all the time. A lot of things can follow the verb.

The article doesn't mention subclauses at all, but that's where things become hairy. There's the particle "ga", which is similar to "wa" except that it marks the subject of a subclause, except sometimes it means "but". There are dozens of ways to incorporate subclauses into the main sentence, using different particles. It's very common for the entire sentence to be a subclause ([something] no/n desu).

migueloller 1 day ago 1 reply      
The article says:

> What this means is that the sentences, This is a car, and, This is the car, would both be, これは車です. There is no differentiation.

This is not always true. The latter could be これが車です. The は and が particles are very similar but are still different. Fully grasping this small difference is one of the biggest problems Japanese learners encounter when studying grammar.

Closer to the beginning, the article also mentions:

> The topic of a Japanese sentence is very similar to what other languages refer to as the subject. The subject of a sentence is the person or thing that does the action described by the main verb in the sentence. These are, in fact, slightly different concepts, but for now, we will treat them as being the same so as to keep things simple.

It turns out that は marks the topic and が marks the subject. I feel that much of the confusion between は and が among Japanese learners happens because the learning material tries to make this simplification in the beginning. When it's time to learn が, it's hard to retrain the brain.

randomgyatwork 7 hours ago 0 replies      
Learning Japanese, it's been hard to realize that the most important part of a sentence is always at the end.
akssri 1 day ago 2 replies      
Interestingly, many Indic [2] languages follow similar verb-centric grammars. In the canonical Vyakarana tradition (of Panini), sentences are seen as revolving around the verb [1].

The "noun-cases" or (karaka) are generally equivalent to the "particles" in Japanese. The genitive (eqv. ) is not a karaka, since it has no relation to the verb. Of course, since there is technically no syntactic difference between adjectives and nouns in Sanskrit, the semantics of the genitive in particular can be very undeterministic. This is not the case in others though.

I wish there were more studies on how Indic traditions affected East/SE Asia [3]. Sadly, most academics/people here don't believe there exists a world outside N. America & W.Europe (often no India either!).

[1] There is a competing tradition of semantics called "Nyaya" where sentences are seen to be Noun-centric. These discourses are generally not easily accessible.

[2] Dividing the languages based on presence/absence of noun inflections would appear not to have much discriminative power to claim anything about historical origins. Historical Linguistics, I believe, is mostly a politicized pseudoscience.

[3] This documentary highlights the kind of things I mean.


It is also fascinating to look at the Thai/Khmer scripts and realize these are related to current day Telugu/Kannada scripts.

RayVR 1 day ago 0 replies      
Anyone that would benefit from this style of learning (rapid, focused on structure and rules) may actually be hurt by the rush to cover many topics without treating any precisely. I'm by no means an expert but here are some issues in just the first section.

* The example that glosses over the difference between a topic and a subject is frustrating because, in fact, the similarity is fairly superficial.

* There is no "a", "an", or "the" in Japanese; however, to specify "this is the car" (implying that it is in answer to some question about which car) one would say これが車です, using the particle が instead of は.

I'm always on the lookout for useful resources. So far, Tae Kim's guide [1] has been the best I've found. Kim doesn't assume much about the reader's pre-existing knowledge yet he is able to remain succinct.

[1] http://www.guidetojapanese.org/learn/grammar

iamnotlarry 1 day ago 5 replies      
Remember those English diagramming classes everyone hated? I'm not very familiar with the education system in Japan, but I doubt they have diagramming classes. In Japanese, the diagramming is built into the language. You tag the subject, the direct object, the indirect object, etc. Everything gets markup.

Which part of the sentence is the direct object? Uh... the part with the direct object tag hanging off it? Correct!
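That "built-in markup" idea is concrete enough to sketch. A toy Python version, with a simplified role table and whitespace tokenization (real Japanese has no spaces, so this is purely illustrative):

```python
# Each particle tags the role of the word that precedes it, so the
# "diagram" of the sentence can be read off directly from the tags.
PARTICLE_ROLES = {"wa": "topic", "wo": "direct object", "ni": "indirect object"}

def diagram(tokens):
    roles = {}
    for i, tok in enumerate(tokens):
        if tok in PARTICLE_ROLES and i > 0:
            roles[PARTICLE_ROLES[tok]] = tokens[i - 1]
    return roles

print(diagram("Taro wa Noriko wo mimashita".split()))
# {'topic': 'Taro', 'direct object': 'Noriko'}
```

No parsing ambiguity to resolve: the direct object is simply whatever sits in front of the "wo" tag, exactly as the comment describes.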

Have you ever heard a programming language described as "designed for teaching"? Japanese is a language designed to be as simple as possible to learn.

Coming from English, the idea that a natural language could actually be designed was a shock to me. I thought they just evolved sloppily and haphazardly. Well, Japanese is proof that it doesn't have to be that way. Clear rules and not too many of them. No exceptions. Rigidly consistent. It's like a language created in a lab that never got dirtied up by real world usage. Except, oh wait, it's a real language used by millions of people every day.

glandium 1 day ago 2 replies      
Something I like about the whole "verb at the end of the sentence" thing is that you can totally flip over the meaning of what you're saying, right at the end. In English, you can achieve the same effect with awkward forms (like "not" at the end of the sentence), but in Japanese, it's just the natural form.

Try to imagine the kind of snark you could pull off if you could put things like "I don't reckon" on hold until the end of the sentence.

Sadly (ironically?), that tends not to be the kind of language subtlety/humor the Japanese go for.

unscaled 1 day ago 1 reply      
I have to enter a caveat here though. Spoken Japanese is a little bit different, and in some cases, arguments will follow the verb. It's pretty rare, but I did hear things like "Dou sureba ii ore" or "nani yatteru omae"?

This is VERY rough and informal though, and, Japan being very polite, it's not something that I'd hear every day. Maybe on TV or from really close friends, and even then I'm not sure if everyone would say that.

But it just goes to show that natural languages are very complex creatures, and even the tidiest rules have exceptions sometimes.

panorama 1 day ago 5 replies      
Beginner question: In casual, spoken Japanese, I've been taught that I can drop the particles (including pronouns). Hence "watashi wa tabemasu" can be colloquially shortened to "tabemasu".

Thanks to this article, I've come to understand particles much better and why they're important, but does it change in casual spoken Japanese? Are some particles okay to drop whereas others are kept? Thanks in advance.

dasfasf 1 day ago 2 replies      
An interesting property of Japanese is that a sentence can also act as a subordinate clause. For example:

Tarou wa Noriko wo toshokan de mimashita. (Tarou saw Noriko at the library.)

Tarou wa Noriko wo mimashita. (Tarou saw Noriko.)

Tarou wa Noriko wo mimashita toshokan (The library where Tarou saw Noriko)

Generally "<sentence> <noun>" means "the <noun> such that <noun> <particle> <sentence> is true for some choice of <particle>".

rootsudo 23 hours ago 0 replies      
I feel like this is being shilled too much. I see it everywhere: on Facebook, in Reddit's Japan topics, and elsewhere.

Of course, generally speaking I am learning Japanese.

lisper 1 day ago 1 reply      
Japanese structure is very reminiscent of Forth.
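For anyone who hasn't used Forth: it's a stack language where the operands come first and the word (the "verb") comes last, which is the parallel being drawn with Japanese word order. A toy postfix evaluator in Python (not real Forth, just the shape of it):

```python
# Evaluate a postfix expression: arguments are pushed first,
# then the trailing word consumes them, verb-last.
def forth_eval(program):
    stack = []
    words = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    for tok in program.split():
        if tok in words:
            b, a = stack.pop(), stack.pop()
            stack.append(words[tok](a, b))
        else:
            stack.append(int(tok))
    return stack[-1]

print(forth_eval("3 4 +"))      # 7
print(forth_eval("2 3 4 * +"))  # 14
```

As in a Japanese sentence, you hold the arguments in your head until the final word arrives and tells you what to do with them.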
       cached 21 March 2017 02:11:01 GMT