hacker news with inline top comments    .. more ..    27 Jul 2016 Best
Show HN: Web Design in 4 minutes jgthms.com
1176 points by bbx  9 hours ago   119 comments top 48
primigenus 8 hours ago 10 replies      
This is a pretty nice demo of the process of turning a basic page into a "design" (in the sense that applying positioning, spacing, contrast, and things like typography is visual design - I might call it layout instead).

However, if you run Chrome's Accessibility Audit (https://chrome.google.com/webstore/detail/accessibility-deve...) on this page, you get warnings about low contrast for 100+ elements and a link to https://github.com/GoogleChrome/accessibility-developer-tool....

So although you claim black text is harsh on the eyes and gray is more comfortable, it in fact is not - it just makes it harder to read. The very first time you load the page and see black Times New Roman on a white background is actually a better user experience for a larger number of people, purely from the point of view of legibility.

Try having someone with less than stellar eyesight look at this page. Or someone who's trying to read it on a smartphone outside in sunlight or with the brightness of their screen set at less than maximum. Design isn't about what looks nice, it's about what works well - pages that a portion of your audience cannot read don't work well.
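The contrast warnings primigenus mentions come from the WCAG 2.0 contrast-ratio formula, which is easy to compute yourself. A minimal sketch (the grey value below is illustrative, not taken from the page):

```javascript
// WCAG 2.0 contrast ratio between two colors given as [r, g, b] in 0-255.
// This is the same check tools like the Chrome Accessibility Audit apply.
function luminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map(c => {
    const s = c / 255;
    // Linearize each sRGB channel before weighting.
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

function contrastRatio(a, b) {
  // Ratio of the lighter luminance to the darker, offset by 0.05.
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}
```

Black on white scores the maximum 21:1, while a mid-grey like #777 on white lands just under the 4.5:1 AA threshold for normal body text, which is exactly the kind of failure the audit flags.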

kylemathews 6 hours ago 3 replies      
I've been working on a related project Typography.js that vastly simplifies web design. It ships with 30 pre-built designs and I'm working on tools to make it really simple to create custom typography themes in-browser.

CSS is a very low-level language for expressing design intent. It's great if you want to set the background color but if you say: "I'd like to add white space to my typography" it could take dozens of recalculations + css changes to test your idea.

Typography.js's goal is to create the most elegant/powerful API possible for defining your site's typography and remove a lot of the tedium/difficulty around experimenting with your design.

Would love feedback / help!
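To make the "white space" point concrete, here is a toy sketch (not Typography.js's actual API) of the idea behind such tools: derive every spacing value from a single line-height rhythm unit, so one knob adjusts all white space at once.

```javascript
// Toy vertical-rhythm helper: all spacing is a multiple of one "rhythm"
// unit (the base line-height in pixels), so changing the base font size
// or line height recomputes every margin consistently.
function rhythm(lines, baseFontSize = 16, baseLineHeight = 1.5) {
  return lines * baseFontSize * baseLineHeight; // pixels
}

function blockSpacing(lines) {
  return `margin-bottom: ${rhythm(lines)}px;`;
}
```

With a setup like this, "add more white space to my typography" becomes one parameter change instead of dozens of individual CSS edits.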


K0nserv 8 hours ago 3 replies      
Lovely! Definitely like the style here and the focus on content first. Like other people have said it's similar to Motherfucking Website[1], and a Better Motherfucking Website[2]. I'll need to start including this when I link to those two.

For my website[3] I'm really trying to keep weight on the wire down too, so I'm opting to skip the custom font and header image. It's quite nice to have 10-15kb pages in the age of the web obesity crisis[4]. A header image and a custom font do have a big effect on how personal the content ends up feeling, though.

[1] - http://motherfuckingwebsite.com/

[2] - http://bettermotherfuckingwebsite.com/

[3] - https://hugotunius.se

[4] - http://idlewords.com/talks/website_obesity.htm

virtualized 8 hours ago 1 reply      
> Black text on a white background can be harsh on the eyes. Opting for a softer shade of black for body text makes the page more comfortable to read.

No, it doesn't. The low contrast text is definitely harder to read in direct comparison with black. This is the point where I suspected that the page might be a parody of modern web design. Unfortunately it seems to be serious.

The syntax highlighting is similarly awful and the grey background makes it even worse.

Cbeck527 8 hours ago 2 replies      
Reminds me of a Motherfucking Website[1], and a Better Motherfucking Website[2]

[1] - http://motherfuckingwebsite.com/

[2] - http://bettermotherfuckingwebsite.com/

tracker1 7 hours ago 1 reply      
Please don't specify "Arial" or "Helvetica" in your font-family; just use "sans-serif" as the main fallback after your specific web font. That resolves to the browser default (often Arial on Windows, Helvetica on OS X), which is usually the best-looking Helvetica-like font available, or the user's own preference.

Yes, sometimes it's a different font, but it's usually a better-looking default. Helvetica looks hideous on Windows, which is why you often see "Helvetica Neue", Arial, sans-serif... that said, just use sans-serif unless you want a specific font (with a web-font option).


-- edit to be less inflammatory.
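In CSS terms, the advice above amounts to this (the web-font name is a placeholder):

```css
/* Let the browser/user pick the default sans-serif: */
body { font-family: sans-serif; }

/* If you do ship a web font, fall back straight to the generic family
   rather than naming Arial or Helvetica: */
body { font-family: "Your Web Font", sans-serif; }
```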

qznc 4 hours ago 0 replies      
My litmus test for web design typography is mixing fonts - for coding-related things especially, mixing fixed-width with normal fonts. This page clearly fails [0]. Not even the baseline aligns, due to 2px of bottom padding. Usually only the x-height mismatches, which means "<code>x</code>" (fixed-width) does not have the same height as "x" in the normal font (serif or sans-serif).

Here is a good example from my website [1]. Note how "use" and "mu" have the same height. It is not perfect, but matching cap height as well is maybe impossible without buying fonts.

[0] https://imgur.com/a/Qs6dJ

[1] https://imgur.com/a/ij1uI
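The usual fixes for the mismatch qznc describes are along these lines (the 0.9em factor is illustrative; the right value depends on the two fonts' actual x-heights):

```css
code {
  font-size: 0.9em;    /* scale the monospace font so x-heights roughly match */
  padding: 0 0.2em;    /* horizontal padding only; vertical padding (like the
                          page's 2px bottom padding) shifts the baseline */
}
```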

drchiu 8 hours ago 0 replies      
Sort of reminds me of what Bob Ross from the Joy of Painting does in a couple of strokes... :)
achairapart 8 hours ago 0 replies      
Good work!

This is when you design in the browser with the medium in mind.

Problem is when someone gives you a PSD made by someone who badly ripped off someone else's website with no idea of what the box model actually is, asking you to make a 1:1 replica in the browser.

Oh, and of course it should work on mobile too!

webscalist 7 hours ago 3 replies      
Can't go back. Usability points: -1

Why not make it into multiple web pages?

int_handler 4 hours ago 0 replies      
> Long lines of text can be hard to parse, and thus hard to read. Setting a limit of characters per line greatly enhances the readability and appeal of a wall of text.

I agree with this 100%.

What I don't understand is why many developers argue that this principle does not apply to code and that we shouldn't have line length limits "because it's not 1970 and we have large monitors." If long lines of prose are difficult to parse, then long lines of code are even more cumbersome to parse, especially when you are either 1) slowed down by having to scroll horizontally all the time or 2) distracted by the awkward naive line wrapping done by the editor.
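For prose, the line-length limit the quoted article endorses is a one-liner in CSS (70ch is an illustrative value; typographers usually quote 45-75 characters per line):

```css
/* The ch unit is the width of the font's "0" glyph, so max-width in ch
   approximates a character count per line. */
article {
  max-width: 70ch;
  margin: 0 auto;   /* center the column in wider viewports */
}
```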

Procrastes 6 hours ago 0 replies      
Interestingly, I can't reach the site because our corporate security software (Sophos) categorizes this site as "Weapons." I suppose, in a sense, good design can be a secret weapon... :/
mattherman 8 hours ago 0 replies      
Love it. One typo I noticed in the image section, "Graphics and icons can be used either as ornaments to support your content, or take actively part in the message you want to convey".
cel1ne 6 hours ago 0 replies      
A good framework to help you build responsive, accessible design with proven typography: http://tachyons.io/
mknocker 9 hours ago 0 replies      
Clear, quick and simple. It would be interesting to have such an elegant introduction for other topics. Well done!
SmellTheGlove 6 hours ago 0 replies      
For a web noob like me, I thought this was pretty awesome. I'm trying to build myself a website now, in 2016, when the last time I legitimately did it was 1999. This actually helped quite a bit with the way I think about it. Sure I'm using a Bootstrap template, but this made me think about why things are the way they are by default, and why I might want to consider some changes.
TheAceOfHearts 6 hours ago 2 replies      
I've wondered this for a while: does anyone actually use those share buttons at the end? I don't think I've ever seen anyone using 'em.

The way I see it... You can just share the URL. And if you're the type of person that frequently shares content, presumably you'd use a browser plugin of some sort, just so you can get a consistent experience across sites.

abglassman 5 hours ago 0 replies      
His longer tutorial http://marksheet.io is great, and his CSS framework, http://bulma.io, is also really handy.
aban 9 hours ago 0 replies      
Beautiful. Personally, I would've probably stopped at the "Custom font" step (I don't really like header images and the share buttons) to keep it simple, but hey that's just me.

Nice work!

willemwijnans 9 hours ago 0 replies      
Like the simplicity of this, also nice work on Marksheet.io!
baliex 5 hours ago 0 replies      
"What is the first thing you need to work on?" Apparently the first thing is to enable Javascript... not quite what I had in mind.
qwertyuiop924 6 hours ago 0 replies      
MFW and BMFW did this better. Why? Because most of the stuff this adds over BMFW is actively user-hostile. Besides, I just want my freaking content to look good, with minimal page load.

Don't add any unnecessary weight. If your content (not SPA, not "web 2.0," CONTENT, the stuff that actually makes up most of the web) takes longer to load than HN, you're doing it wrong. Thankfully, this site seems to understand that.

As for colors, just leave them the user's default, set black on white, or use Solarized or another high-contrast theme.

johnm1019 8 hours ago 0 replies      
Love this! Step by step explanation of how to do something which literally shows rather than tells.
jyotiska 7 hours ago 1 reply      
I have created a static site generator, "minni", that powers my blog at http://jyotiska.github.io/blog/. It is super simple, has fewer than 10 CSS declarations and no JavaScript. I don't really expect anyone else to use it since it was created according to my own design preferences, but here is the link in case anyone wants to take a peek: https://github.com/jyotiska/minni
rralian 3 hours ago 0 replies      
This is really well done, kudos! It actually feels a little like I'm learning by doing here as the design firms up in place as I proceed. Very cool idea!
daveheq 6 hours ago 3 replies      
"Times" isn't "unstyled"; it's styled after a newspaper, which I've found to be easy to read in a newspaper and hard to read on a website. I don't know why, but it has nothing to do with one being "styled" and another not; in fact, serif is more "styled" because of the serifs.
noisem4ker 7 hours ago 0 replies      
>Providing space [...] around [...] your content can increase the appeal of your page.

Just like black bars when the aspect ratio of a movie doesn't match that of my monitor increase the appeal of said movie. I find margins on mobile pages a complete waste of horizontal space. My device screen is only 6cm wide and already surrounded by plenty of space in my field of vision. Why make it even narrower?

pitchups 8 hours ago 0 replies      
Wonderful! For a moment at the start, though, I wondered how such a poorly designed page could teach anything about web design. Then I clicked the first link... :)
Frank2312 8 hours ago 1 reply      
FYI, in IE 11, only the header gets centered and not the whole body.

I'm not sure if this was intended or just a bug with IE though.

Nice intro to web design. I love the simple presentation.

TheMagicHorsey 8 hours ago 0 replies      
That was beautiful. Thanks for sharing.

I wish there was a similar simple demo showing how to style mobile apps.

konschubert 8 hours ago 1 reply      
The back button is broken. Also, not sure if changing the fragment identifier is the right abstraction.
reimertz 9 hours ago 0 replies      
Ha, such a smart way of introducing web design for a novice. Nice work homie!
asp_net 4 hours ago 0 replies      
Not perfect in every detail, but absolutely lovely. Well done!
chejazi 8 hours ago 0 replies      
Love it, except that when I came back here to upvote, the navigation was totally broken!
muhammadusman 7 hours ago 0 replies      
This is great! Thank you for sharing!
Capira 4 hours ago 0 replies      
Awesome Job!
gabrielcsapo 7 hours ago 0 replies      
Very cool, nice job Jeremy!
dwenzek 9 hours ago 0 replies      
Thanks. Nice, simple and useful!
vegabook 5 hours ago 0 replies      
Very nice, but I have to disagree on two points:

First, using grey blocks for code, while widespread, is in my opinion a visual interruption to reading flow. Indentation is preferable (along with a courier/monospaced font).

Second, sans-serif. I understand the history: serifs are delicate and look clunky on <200 dpi screens. But as we move to widespread retina displays, isn't the old lesson of typography, namely that serifs help legibility and convey an image of "seriousness", about to make a comeback? Some will argue the legibility point, but even from a styling standpoint, aren't we getting just a little bit tired of Helvetica and its brethren everywhere?

JustSomeNobody 8 hours ago 0 replies      

But, wait! Where's all the javascript? Don't you need React?

chm 6 hours ago 0 replies      
Hi OP! These were the best spent 4 minutes of my day :) At first I was confused but you really presented it well.
ajroas 8 hours ago 0 replies      
Awesome... nicely done.
rsyntax 8 hours ago 0 replies      
Very well done! kudos
SonicSoul 7 hours ago 0 replies      
um, nice, but the title is misleading. It took me only 1 minute: 40 seconds of Times New Roman related profanities followed by 20 seconds of clicks :)
cloudjacker 7 hours ago 0 replies      
Web design in 1 minute:

Bootstrap template #581




Create React Apps with No Configuration facebook.github.io
766 points by vjeux  4 days ago   238 comments top 53
thereact 4 days ago 2 replies      
This is great since it provides an OFFICIAL opinionated set of tools for building React apps, which is typically the largest barrier to entry for new developers looking to experiment with this technology.

However, it is missing a lot of core features that typically come standard with Webpack/React boilerplates. Directly from their Github:

Some features are currently not supported:

 * Server rendering
 * Testing
 * Some experimental syntax extensions (e.g. decorators)
 * CSS Modules
 * LESS or Sass
 * Hot reloading of components

So this is a great first set of features for a simple React starter project. For those of you looking to expand the development toolkit beyond this currently limited configuration, check out the following link to search React boilerplate projects on GitHub by a number of criteria, such as the features included (CSS Modules, Hot Module Replacement, etc.).


For those looking to learn more about the ecosystem, the following resource lists might be useful.

More React resources: https://github.com/enaqx/awesome-react

React/Redux resource links: https://github.com/markerikson/react-redux-links

orf 4 days ago 8 replies      
> Having just attended EmberCamp a week ago, I was excited about Ember CLI. Ember users have a great getting started experience thanks to a curated set of tools united under a single command-line interface.

This is one of the best things about Ember. `ember new`, `ember serve`, `ember generate component my-component`, `ember build`, `ember deploy`, `ember install`. It's opinionated, but it lets you get productive right off the bat. I tried React, but after a couple of days I just couldn't get it working - waaay too many options. So I switched to Ember and haven't looked back.

seangrogg 4 days ago 2 replies      
I think one of the best things I've done to date is actually distance myself from the React community. While I love some of the tooling that has come out of it (Redux, React-Router) I think the community (as an amorphous entity) over-emphasizes the need/desire for transpiling, linting, testing, etc.

Since then, I've "reverted" to building things in ES5, working in multiple files without bundling, etc. and I have to say the enjoyment I get out of using React has cranked up considerably.

I am happy to see they are converging on some standards - that will definitely make building new apps much easier from a common starting point. I just hope they can walk the fine line between "opinionated" and "bloated".

jfdk 4 days ago 1 reply      
This is actually pretty huge. The #1 complaint/barrier with getting started with React is all the tooling needed to do it "the right way".

Kudos to React team for bringing a superior pattern and making it actually practical to use.

firasd 4 days ago 0 replies      
This is great. React has this weird dual nature in that on one hand, you can drop it in as a <script> tag and it 'just works'. On the other hand, if you want to build using it, you're going to end up needing things like webpack, babel, etc, not to mention other common libraries, to the extent that it takes over your front-end stack (also because it takes over any rendered document nodes, and if you want to build a SPA you eventually use it to render everything inside <body> if not the whole document). So it's good to resolve this conflict by providing quicker ways to get started with common tools.
andrewstuart 4 days ago 1 reply      
This is the most exciting thing to come out of the ReactJS project since it started. The very best investment a technology platform can make is onboarding new developers. ReactJS is recognising that and it's great news that they are making the hardest bit easier.

I whined a while back on exactly this topic.

"Babel 6 - useless by default - a lesson in how NOT to design software. "


The last line of the above griping blog post says: "The right amount of configuration is none."

So it is awesome to see someone who DOES know how to design software.

Dan Abramov's blog post says: "Zero Configuration. It is worth repeating: there are no configuration files or complicated folder structures. "

Babel gets it precisely wrong; this new ReactJS tool aims to correct Babel's complexity error.

msoad 4 days ago 3 replies      
This is lacking tons of features that other boilerplates already have but I think this was a great move because we needed a source of truth for doing app structure in React.

On a different note, I think if you write it yourself from scratch you'll have more control and knowledge down the road when it comes to nasty bugs, but I won't blame you for choosing this over spending weeks setting up a React app.

amavisca 4 days ago 1 reply      
Under the hood this is Webpack + Babel + ESLint with sane initial configuration. Love it.
tlrobinson 4 days ago 1 reply      
I think the best part of this is the "eject" feature. It's great to be able to spin something up quickly but migrate to a custom solution if you outgrow it.

However, it would be nice to be able to tweak some of the configurations (Babel, ESLint, Webpack) without completely "ejecting".

vlunkr 4 days ago 1 reply      
My team has wasted so much time configuring webpack. This is a big win for React IMO.
vicapow 4 days ago 0 replies      
I'm a sucker for self-descriptive "boring" names like "create-react-app".

Also, sweet project!!

hoodoof 4 days ago 0 replies      
This is a great idea and sorely needed. Too many frameworks rigidly avoid integration with the ecosystem because they do not want to be seen to bless any given third party technology.

In the case of reactjs however it is extremely important because the ecosystem is absolutely necessary and absolutely damn complicated.

This is precisely what needs to be done to help people get started. Well done.

thegayngler 3 days ago 0 replies      
So I was looking through the modules in the react-scripts module and I noticed PostCSS, and that HMR is activated (I tested this myself). I also installed react-router with ease and it appears to work just fine. So far so good from where I'm sitting. I was able to start coding right away while half-assedly looking through the modules to see what is actually in there, just playing while I talked with my roommates. It took me all of 10 minutes.
stoikerty 4 days ago 0 replies      
I'm fairly close to finishing the conversion of my `dev-toolkit` into an npm-module. It is almost no-config, has scss, server-side-rendering, hot-reload and more. I'm a one man band but will get there. It's all on https://github.com/stoikerty/universal-dev-toolkit

The npm-version sits in a feature branch, just look for the corresponding PR if you're keen.

mohsinr 4 days ago 1 reply      
Loving it! I always wanted to get started with ReactJs but looks like time has come! I tried the module and I am loving the "Welcome to React" page on my localhost! Thank you!!!

PS. Already in /src/App.js, and wow, live reloading without gulp or browsersync - it is so simple to get started! Thank you!

venuzr 4 days ago 1 reply      
As someone new to React, I wonder

a) How is this different from getting a custom starter kit/generator from Yeoman. Searching in yeoman, I see several for "React" with the top one having over 9.5k stars http://yeoman.io/generators/

b) Is Facebook planning to maintain and keep this generator current? Why don't they just contribute to/recommend an existing generator?

kcorbitt 4 days ago 3 replies      
Sane defaults and pieces made to go together are critical to lowering the adoption barrier and building a community, so huge props for that. But no ability to configure anything at all? I think that for most people, at some point there will be some small change to the default configuration their environment will require[1], and that means they'll need to jettison the entire project. It's nice that this is easy to do, but it would be better if it weren't necessary.

[1]: For example, I run my app from within a Vagrant Virtualbox machine that doesn't forward filesystem notifications correctly, so I have to configure Webpack's hot reloader to poll for changes instead of listening for fs events.

dustinfarris 3 days ago 0 replies      
Incredible turnaround! I remember seeing Dan Abramov's tweet [1] a while back saying React could learn from Ember's CLI. Two weeks later, here it is! Impressive!

[1]: https://twitter.com/dan_abramov/status/752863664290553856

griffinmichl 4 days ago 0 replies      
After spending hours yesterday teaching a colleague about webpack, babel, configuration, etc, this is exactly what the React community needs. Finally some fucking sanity in the ecosystem.
fdim 4 days ago 0 replies      
Finally something that may convince me to switch from https://github.com/thardy/generator-ngbp - all I want is to focus on writing components not figuring out how to link gazillion dependencies
marknadal 4 days ago 0 replies      
So the day has come when seeing "compiled successfully" in the command line is the necessary "easy" starting point for frontend web devs.
koistya 1 day ago 0 replies      
There is an alternative solution that supports CSS Modules, PostCSS and HMR with React Hot Loader. Give it a try! Create a new folder for your project, then run:

  npm install -g react-app-tools
  react-app new
  react-app start

pests 3 days ago 0 replies      
Do not forget React is not equivalent to an SPA.

Almost all SPAs give the entire body over to React, but it's also possible to choose a smaller DOM node and add React progressively to any existing website view that would benefit from the React paradigm. In this setup (at least) server-side rendering is no longer needed, which simplifies setting up the build process.

So it's not all or nothing; you can pick and choose where to use React based on your needs and requirements.

joemaller1 4 days ago 1 reply      
This is great and I will be moving my React projects in this direction. At the very least, this project represents a de facto standard and guidance about how to work with React.

However, I do wish the React team would pick between ES6 classes and `React.createClass`. I think I remember the main React tutorial was rewritten in ES6 at one point, but then switched back. I've read arguments both ways, but I suspect ES6 is still too much of a barrier to entry.

People who aren't up to speed with ES6 will still be shaving a lot of yaks before actually jumping into React.

hex13 4 days ago 1 reply      
it seems like a solution to Vjeux's challenge: http://blog.vjeux.com/2015/javascript/challenge-best-javascr...

(If we don't count sharable requirement).

Bahamut 4 days ago 1 reply      
This is great! Toolchain pain really sucks, and makes getting started on a project harder for many, when all you want to do is get a setup running and start creating app code. Having an opinionated CLI stand up a scaffold is great - one can also peel it apart whenever one has to modify the build chain for specific purposes (for example using Rollup to optimize bundled code, adding new build steps, etc.).

The only thing I disagree with here is not allowing it to be pluggable - IMO it should be flexible and allow users to tweak the setup as desired. Of course, it should focus on getting the core experience right, but in the long term I absolutely think it would be better to have a pluggable CLI.

matthoiland 4 days ago 1 reply      
> Some features, such as testing, are currently missing. This is an intentional limitation, and we recognize it might not work for everybody.

With Ember CLI you get a great testing setup with Qunit. While I prefer Mocha over Qunit, I'm at least glad that testing is a first class citizen in the CLI.

bruth 3 days ago 0 replies      
This is a great start. I too got frustrated with the overwhelming complexity of the "boilerplate" or "starter kits" that have all the bells and whistles. Having something very simple that you can exit if necessary is nice. The approach I took was just to document each tool or plugin that I may want to add to my project since it is generally very quick to do so: https://github.com/bruth/modern-javascript-starter
ola 4 days ago 1 reply      
I created something similar a month ago


Doesn't seem like this project differs that much, although this looks to have the backing of core React developers.

silasb 4 days ago 0 replies      
Very very awesome. This is very much needed. I work with a lot of older Java people and showing them the ins/outs of webpack/eslint/React is killing productivity. Thanks FB.
dack 4 days ago 1 reply      
This is really great! However, I think this speaks to the need for a better API in general for this sort of stuff.

At the moment it's "all or nothing" in that you can decide to let everything be configured, or nothing be configured ("ejecting"). This makes perfect sense, but I think a more ideal solution would be having layers of configurability that let you more gracefully set your preferences without completely abandoning this tool's utility.

I'm not saying that's easy, but it's a direction I'd be excited to see.

robertfw 4 days ago 0 replies      
I've solved my webpack config woes by using HJS-Webpack[0] which describes itself as "Helpers/presets for setting up webpack with hotloading react and ES6(2015) using Babel."

It provides you with a base configuration object, which has been setup with any loaders that it has detected in your node_modules. You can then extend and customize as needed.

[0] https://github.com/HenrikJoreteg/hjs-webpack

thegayngler 4 days ago 0 replies      
This was one of the downsides to us using React at work. I did a presentation on Webpack and React and my manager who is the VPE said having to figure out and choose tooling was a concern for him. I responded we should know what the tooling is doing and introduce pieces into our stack rather than go all the way in. This allows us more freedom on how and when to upgrade or change different tools in our front end stack.
mfrye0 4 days ago 0 replies      
This is awesome. Learning the whole modern build ecosystem was such a headache. It's great to see best practices rolled up for new users to experience.
codenamekt 4 days ago 0 replies      
This is great. One of the biggest hurdles is getting started, which is why there are so many React boilerplates. It would be awesome to see projects like this grow so that they could auto-configure based on the libraries you would like to use. Want to use Redux? Just run `create-react-app -m redux hello-world` and you would get everything with the addition of Redux and its configuration.
brooklyndude 3 days ago 1 reply      
Is it just me, or is Angular just taking over? It's kind of a Google vs Facebook thing. I just think Google won this one.
platonichvn 4 days ago 1 reply      
Definitely a great way to lower the barrier to entry. The eject feature is sweet since it removes the risk of lock in. Looking forward to integrated unit testing libraries in a future release. While you're at it let's add redux. :)
ralusek 3 days ago 0 replies      
For CLI generation utilities, I haven't found anything that comes close to this guy:


JoeCortopassi 4 days ago 6 replies      
This is great for someone who wants to get started to learn React, but is missing a ton that is needed for a real world production app.

 * No isomorphic rendering
 * No hot module replacement
 * No generators
 * No dockerization
 * No Sass support
 * No test environment setup
 * No code splitting
It would be cool to have a production ready tool from Facebook, but I'll stick with gluestick for now https://github.com/TrueCar/gluestick/blob/develop/README.md#...

lucaspottersky 4 days ago 2 replies      
Expectation: "Hey, look, this can SOLVE ALL THE PROBLEMS"

Reality: "Hey, look, this actually BRINGS IN A WHOLE LOT OF OTHER PROBLEMS too!"


arianvanp 4 days ago 0 replies      
This is really neat. especially the fact that I can 'eject' at any time when I need more power. Love it!
deepsun 4 days ago 1 reply      
Is there something like this for React Native? I'm interested in recommended directory structure.
kjhughes 4 days ago 1 reply      
Does this help with React Native too?
uptownhr 4 days ago 1 reply      
Wanted to share http://github.com/uptownhr/kube. I also wanted to tackle this problem, while handling SSR as well.
crudbug 4 days ago 2 replies      
Having a consistent API with ember-cli will make this more useful.

$ react-cli <>

wrong_variable 4 days ago 2 replies      
Just wanted to know: am I the only person who is unhappy with React?
smrtinsert 4 days ago 0 replies      
This is not a react problem, this is a nodejs problem.
rhinoceraptor 4 days ago 3 replies      
What the heck is that terminal font?
Exuma 4 days ago 0 replies      
Looks cool
mcs_ 4 days ago 0 replies      
Thanks!!!
mderazon 4 days ago 1 reply      
In the spirit of zero configuration, it would be nice if it included Standard JS: https://github.com/feross/standard
Don't add your 2 cents sivers.org
699 points by dhruvkar  1 day ago   235 comments top 65
haasn 1 day ago 14 replies      
I can't agree with this article at all. From my experience on contributing to FOSS projects, I feel much better when somebody senior makes adjustments to my code rather than leaving it as-is.

Not only does it tell me that they actually read my code and spot errors (the added bug safety net makes it much less stressful for me to write new code), but it also makes me feel like I'm learning something new that I wouldn't otherwise have. Finally, it inadvertently means that the rest of my code passed their high standards for quality, which is gratifying - especially for large commits in which I only need to change little.

I guess the key difference between my experience and this article, though, is that the article seems to be mostly focused on a non-technical boss commenting on benign/arbitrary opinions (like shades of color), rather than a technically skilled superior commenting on his area of expertise. That might explain why I have such a 180-degree reversal from this article's stance.

JacobAldridge 1 day ago 2 replies      
I think it's good to note, as Derek does, the distinction between "2 cents' worth" and larger changes that do require senior input - otherwise you're just being the manager the team creates ducks for [1].

This is where coaching skills as a manager can prove useful. If you feel there are some minor changes that could be an improvement, but don't want to impose your will/opinion, coaching ('ask') can be a better response than managing ('tell').

For example, you might ask "If you had to improve anything, what would you change?" It's an open-ended question that will encourage your team member to think. They can reply "Nothing" if they're confident in the final solution, or they may propose some tweaks they weren't fully happy with - "I'm not sure if that's the right shade of blue" or "I think that's the right call to action, but maybe we could get another opinion". If those are reasonable improvements, empower them to implement the additional change; if you disagree with the extras they raise, tell them you consider the version they proposed to be superior, which empowers their original decision.

Just don't be the manager who expects a detailed response and change every time ... then you're right back to where you started.

[1] See point 5 https://blog.codinghorror.com/new-programming-jargon/

tobtoh 1 day ago 5 replies      
As a manager, I tend to frame my feedback/opinions as 'Have you considered <something>?' or 'Can you explain your thinking behind this <feature/function/design>?'

Doing it this way, I get an understanding of their rationale, and if I still think my idea is good, I can debate its worthiness against my employee's reasoning. I feel this approach fosters 'best idea wins' rather than 'manager opinion trumps all'.

I agree with Derek's implied point that 'manager opinion trumps all' is bad, but I think it's a discredit to his staff if he doesn't challenge their ideas when he thinks he has something more worthwhile.

ojosilva 1 day ago 10 replies      
I find the advice highly condescending.

> Because of that small change, that person no longer feels full ownership of their project.

What kind of person 1) thinks the ownership is 100% theirs when working in a team, 2) can't handle a little nitpicking, 3) feels it's less their work just because of a little change, and 4) can't defend their work and resist those 2c?

This is advice for managing 2 year olds. As a manager, just be your reasonable self. The truth is key for a functioning team. Giving people feedback and letting them know where they stand helps build trust.

> It's perfect. Great work! Let's ship it.


timv 1 day ago 2 replies      
I think the suggested comment "It's perfect. Great work! Let's ship it." has its own set of issues.

Firstly, while the conversation started with "I'm looking for input", the manager has suddenly moved it into a push for delivery.

If the design was ready to ship, that won't be an issue, but if all you're looking at is a mockup, or a slapped-together stylesheet, etc., then what was an attempt at encouragement has just lumped more pressure on.

Also, the comment assumes that the designer thinks it's "done". The request for input could mean "this is the direction I'm going in, does that look right". Telling them that you think it's "ready to ship" still takes ownership from them. You've just moved from being the boss who provides 2 cents on everything to the boss that wants everything to be done right now without taking the time to do it right.

Much better to say "I think it's fantastic. Great work! Is it ready to ship, or do you have more to do on it?"

jasonkester 1 day ago 2 replies      
I like the way Joel Spolsky describes managers taking this even further at Microsoft back in the day.

They wanted to make sure the engineers knew that they were the ones designing the software, to the point where they would refuse to even step in and resolve a conflict between two engineers about the design. Even when those two engineers came up and asked for help resolving said conflict.

Now you've got three people in the room: a designer, a developer, and a manager. Who's the person who knows least about the problem?

Solve it yourself, guys. Perfect.


Gustomaximus 1 day ago 2 replies      
Something a very smart person advised me: "Tell people what you want, not what to do."

It sounds so simple yet is surprisingly hard to practice. It really puts the onus on you to think carefully about outcomes you desire and explain it clearly.

ctur 1 day ago 2 replies      
What this article misses is that genuine feedback helps us grow, and being open to it is as important as being able to deliver it in a way that doesn't take something away from the recipient. Getting others' input and adapting to it (or learning when to accept but not heed it) is crucial for getting better at whatever endeavor one is engaged in.

If you have a suitable level of trust and respect between you and the person requesting approval or feedback, then your input can be valuable without it being undermining of their ownership of their creation. In fact, the opposite; by soliciting feedback (preferably early, not just at the end of a project), you can help build a sense of ownership from the person giving feedback.

fixermark 1 day ago 0 replies      
Related: Parkinson's Law of Triviality, and the Queen's pet duck in Battle Chess (when developers become aware of management's need to add an unnecessary "finishing touch" to all work, they begin intentionally making slightly inefficient choices to give the work a "shear point" where management can feel like they're contributing by removing something obviously incorrect).


Similar processes have been used for decades by movie and television creatives to move the Overton window on media censorship: early cuts of a project will have something obviously grotesque and culturally repugnant, so the censors lock onto that and miss the risqué thing the creator wanted to get to their audience.

exolymph 1 day ago 0 replies      
This seems a bit condescending to me. I can take suggestions and feedback without losing sight of my own accomplishments. Because I'm an adult.
shocks 1 day ago 0 replies      
I am reminded of the story about the duck!


agentgt 1 day ago 0 replies      
I think the article/blog post is missing a key point in that the employee came to the manager asking what he thought.

This is a critical role that the manager plays, yet the article comes up with an unreferenced social-psychology manipulation solution when there is a greater problem at hand: the employee is nervous about shipping the product and wants approval.

The reason is that a great manager is supposed to protect and shield employees from the outside so that they can feel at ease making decisions and working without fear that some mistake will cost them their job (unlike the article, I can cite five or so Harvard Business Review articles written by experts showing this is often why employees come to ask questions like that... yes, I'm being snide, but I think "What Got You Here Won't Get You There" is basically an overrated Dale Carnegie rewrite).

Not getting any input sends a message of "I don't really care about your work". And if you really wanted to coach, and you really believe this armchair psychology, then why not send a link to the article to the employee asking for advice and say "I would like to give some input, but I want to assure you that I think you own this project... etc."?

Education is a powerful thing... manipulation is not.

samscully 1 day ago 0 replies      
Some factors in motivation at work are the level of autonomy, mastery and purpose in your job.

A coworker giving minor feedback is only contributing to your mastery. A boss giving you the same minor feedback is cutting into your autonomy. The exceptions are when the purpose is great and sweating every detail is necessary, or when the boss is a recognised master of your craft whose feedback is almost always correct and regularly helps you improve.

An example of the first case might be engineering at SpaceX and the second could be Steve Jobs giving engineers and designers product feedback. What I think a lot of people are missing in this thread is that in most situations the purpose is relatively uninspiring and the boss is significantly less skilled than the person she is giving advice to.

guelo 1 day ago 0 replies      
I sometimes convince my boss that he is wrong. I think it's a sign of a healthy team.
euphoria83 1 day ago 0 replies      
Love the suggestion. So many times have minor suggestions from managers killed the enthusiasm for a project, because it feels like the manager can't think about or appreciate the bigger picture. In fact, it looks like he is only trying to own the success of the project by picking on unimportant stuff.
ryanbrunner 1 day ago 0 replies      
The way I've always tried to approach this is by pointing out problems, rather than offering solutions, especially where I'm in a position that I'm giving feedback to someone who is more of an expert in the activity than I am. Expressing things as problems automatically eliminates a lot of the minutiae about wording, color, etc. (since those are just subjective opinions and not reflective of a problem), and it lets people still feel like they're owning the work and not making changes they disagree with because they're forced to.


Providing solutions: "Move the 'widgets' menu to the top. And make it bold"

Expressing a problem: "So, when I'm using the app, one of the first things I usually want to do is look at my widgets. It took me a few minutes to find out how to do that."

The solution to this problem might be looking into whether accessing widgets is a common use case, or finding different ways of educating users about how to find widgets, or yes, even moving it to the top. But no matter which solution is chosen, everyone is going to come out of it with more information than if they blindly implemented the manager's uninformed opinion.

antoineMoPa 1 day ago 0 replies      
My experience with work is that everything can be slightly improved all the time. You have to stop at a certain point and I think the author has found the nicest place to stop, at least for employee happiness.

On the other side, if you have been working on something alone, I think it is a clever idea to accept the feedback of your boss just to have another perspective.

torrent-of-ions 1 day ago 4 replies      
Ugh. This reeks of "safe space" nonsense.

Part of working in a team is receiving comments and criticism from others. If you take these negatively, as attacks on you rather than collective construction towards the final goal, then you have a problem and need to consider changing jobs.

I find that when I design something I become accustomed to early design choices and eventually become blind to them. I need someone to come along with a fresh pair of eyes, see the whole thing and nitpick it. It's absurd to suggest that it's either perfect or needs to be scrapped entirely.

epa 1 day ago 1 reply      
Tread carefully between being fake and being sincere. People will stop asking you if you give a fake answer like the one in the article.
bmmayer1 1 day ago 3 replies      
A better way to approach this situation: "That's great! Love it! Out of curiosity, what inspired you to choose those colors and fonts?" Then, they still have ownership, but they also are given the chance to justify their choice and it starts a conversation that could lead to improvements, if necessary.
baddox 1 day ago 3 replies      
Is it not possible to make it clear that your 2 cents is just a suggestion? This just seems like bad communication, and regardless of whose fault that is, the boss might as well try to solve it.
johnwheeler 1 day ago 0 replies      
Dale Carnegie Rule #1: Never condemn, criticize or complain. In general, we're all not actually looking for input so much as support.

Human nature is such that even when we readily acknowledge someone better at something, we quietly indulge and seek out advantages we have in other areas.

We engineers like to think we're more rational and accepting of input. Working as a coder and manager for the last 20 years has shown me there's nothing further from the truth.

quadrangle 1 day ago 2 replies      
Oh, how I wish I had a downvote ability on this post. The first thing wrong with it is that the post is nothing more than the author's 2 cents. The author doesn't know what the hey they're talking about and is just pontificating.

As someone who does a lot of creative work, I hate it when people just give useless positive encouragement and withhold actual constructive feedback, small or large. Only hypersensitive people feel worried about their loss of ownership because they accepted someone's suggestion.

A situation where a boss having a color preference means that a designer feels unable to reject the suggestion is a dysfunctional workplace. When the manager has that type of feedback, they are not being a manager. A good manager makes it clear that if they have color feedback, that's just their suggestions and not them acting in capacity as a manager.

collyw 1 day ago 0 replies      
abalone 1 day ago 2 replies      
This doesn't necessarily invalidate the advice, but Steve Jobs clearly did not abide by it (he would critique icons at the pixel level, etc.), so it is demonstrably not universal advice for building successful companies.
dahart 1 day ago 0 replies      
Honest question - is this idea of individual ownership conducive to team morale, enough to protect ownership like this? I've seen a lot of examples of how "ownership" backfires when people are protective of their turf or disregard others' valid input. Ownership seems to be commonly used to get people to take personal responsibility as a proxy for motivation, it does help some people set better examples, but does it motivate a team and make it more cohesive on the whole?

I have seen 2 cents backfire a lot as well, but I think it's most often strong personal opinion not backed by good reasons, like evidence or unseen constraints or dependencies, etc.

This article started with the boss being asked for "non-obvious advice", and then gave an example of advice that was pure opinion without any reason, stated as a veiled command rather than offered as an alternative option. It can be important to share actually non-obvious insights, even if it's just 2 cents' worth, so I won't be asking my team to avoid sharing their 2 cents as a blanket rule; I will ask them to share any important insights they may have, and encourage them to have a good reason.

darkerside 1 day ago 0 replies      
This is a total straw man. There's something in between bikeshedding and blindly approving work you feel is less than perfect. What if the boss responded with, "Why did you choose this color of blue?" This indicates respect for an intentional choice, lets the employee provide a rationale and be heard, and still moves towards a better final product. There's a false dichotomy presented in the article, and it's crap.
dclowd9901 1 day ago 1 reply      
As an independent contributor I don't want my manager weighing in on my choices. I see them as out of the loop on the more technical aspects of my job and they should leave those decisions to me.

If I come to a more technically senior member of the team who is more knowledgeable, it is to _precisely_ ask for their opinion.

So in my mind: if you're a manager, don't bother; if you're a more senior IC, do, with the explanation of why your approach is better. You're more of a mentor at that point than a manager.

Oh, and if your approach isn't really better, just different, keep it to yourself.

SudoNhim 1 day ago 0 replies      
Huh... approaching this from the opposite side, when my boss has suggestions I always take it as an opportunity to let them feel some ownership of the work, even on occasions where the suggestions don't turn out to be that useful in practice ("I implemented your suggestion of X, which led me to come up with Y").

Trying to game professional relationships goes both ways I guess :)

paulsutter 1 day ago 2 replies      
This is an excellent mini-rule for habitual micromanagers. Like me :)
visakanv 1 day ago 0 replies      
It's tough to be prescriptive about this sort of thing in a general way. Every situation is different. It really depends on who you're working with, what sort of context you're working in, what sort of expectations you're working with, etc.

I would say, "make sure you set expectations in advance about how feedback is to be interpreted and acted upon". My boss gives me his 2 cents all the time, and I enjoy it. And vice versa. Sometimes I preface my suggestion with "you don't have to change anything, but FYI...", and sometimes I say "I feel quite strongly about this: XYZ", and even then we have an understanding about whether or not something should be changed. It works the other way too. I've shipped things without incorporating feedback, and all was well.

So I think it boils down to culture. Everyone's understanding of what the norms are, what is expected, etc. (Just for fun: Can you imagine telling Elon Musk or Steve Jobs not to add their 2 cents?)

giis 1 day ago 0 replies      
> because it's not just one person's opinion anymore, it's a command!

Spot on. Exactly the reason I quit my last job. My boss (manager) would come and add his 'suggestion' to every task, even though I tried to stick to my own way of going about it. He kept insisting that I give his view a trial run first. After a week or so, when things went wrong, I'd go back to the original method and finish the task.

Later, he would complain that I was slow to respond to tasks. When I pointed out the unnecessary time wasted on his suggestion, he would backtrack and frame it as 'I was only giving suggestions, it was your baby anyway'. It happened 3 or 4 times, and I'd had enough.

Funny thing: on my last day, I took this issue to the CEO. To my surprise, he said, 'Yeah, employees have to take my suggestions, since I'm their boss'!

[To those bosses, if you are reading this:] I don't have any issue with trying out your ideas - but when things go wrong, take responsibility for your _stupid_ idea.

donkeyd 1 day ago 0 replies      
As somebody who is working his way up the food chain and has a tendency to give his two cents, this is some real food for thought. Thanks for sharing!
pbreit 1 day ago 0 replies      
This would need to happen 10 times before I would think I don't own the project. If I can't take input, I should not have a job.
apatters 1 day ago 0 replies      
While I'm normally a fan of Derek's musings, this one is too pithy for me. He is painting with too broad of a brush. There are all kinds of reasons why you might tell an employee to make small changes: maybe one small change will have a big impact; maybe many small changes in aggregate will have a big impact; maybe the employee doesn't want to take ownership; maybe the employee is too junior to take ownership; maybe the employee wants to learn the nuances of the profession in greater detail; maybe the employee is an underperformer; I could go on like this indefinitely as could many other experienced managers.

Now maybe if you're employing a bunch of independent-thinking artistic, creative and intelligent types this generalization makes sense. But I'm sure the global workforce has substantially more than 1 billion members who don't fit this definition. In that light it seems a little irresponsible to put this thought out there like it's a zen koan.

55555 1 day ago 1 reply      
I have a similar tip to provide:

When a designer or developer shows you a version that is not nearly done, don't provide any _specific_ feedback. If you mention specifically that a button should be red instead of blue, for example, then you are communicating that they are almost done, and they simply need to make the minor changes you mention for you to be happy. If the work isn't nearly finished, it's better to instead say, "Great work so far. I don't want to rush you. I think you should spend some more time refining the UX. Do a few trial runs as a user and do the best job you can to make the most perfect UX possible."

As soon as you mention specific things, they mentally move on.

fmavituna 1 day ago 1 reply      
If your team cannot take your feedback the same way they take feedback from their colleagues, cannot argue with you or easily veto your idea with a legitimate response, and take everything you say as a "command", then you have failed as a manager anyway.

I assume Derek's advice makes sense for Korean culture where manager and team dynamics are different.

kahrkunne 1 day ago 2 replies      
Maybe rather than avoiding giving helpful feedback, they should work on fostering better employer-employee relationships.
odabaxok 1 day ago 0 replies      
This reminds me of this story: https://rachelbythebay.com/w/2013/06/05/duck/ (Project managers, ducks, and dogs marking territory)
hellofunk 1 day ago 1 reply      
Sorry, but this is just lame. A manager has a job to do, too, and whether or not they do it well, it is within their prerogative to add comments or suggestions, even if small, if they think it will improve a product.

Put yourself in the manager's position (which ironically this article seems to attempt to do). Go further, suppose it is your company, not just your department. Are you going to let a product ship when something minor could be improved to make it even better? A sloppy manager might. Or one who doesn't care. This article almost seems to suggest that managers should care less.

Attention to detail is what often separates good from excellent.

codingdave 1 day ago 1 reply      
Your boss' opinion might not be better... but it often does have more authority.

In my case, I work directly for the president of the company. He owns the place, he founded it, he built it, it is his. Whether or not his opinion is better, it does hold complete authority, and it is his right to have his company run his way.

Now, if you are a low/mid-level manager, the advice from the article may be more applicable to your situation. But your own corporate structure and culture will have an impact on the validity of that advice.

Kluny 1 day ago 0 replies      
As a follow up to this article, I need about 40 examples that illustrate the difference between manager opinions that are worth two cents, and useful manager feedback. Or about 10 years of experience, but I'm hopeful that someone will offer examples.
darkrabbi 1 day ago 0 replies      
Feeling ownership doesn't have to mean managers can't contribute at all, that's absurd.

Creative labor is unique. Depending on the employee, I know I have X changes I can suggest/propose per project before they start to get annoyed with me; for some it's more than others. Unsurprisingly, when it's promo time, the guys who are easier to work with get brought up (even if the difficult divas' work is marginally better). The big thing is trust and respect - earning that early on is key, and once you do, things are much easier going forward.

andremendes 1 day ago 0 replies      
This reminds me of that story about the bikeshed[1] colour: "This is a metaphor indicating that you need not argue about every little feature just because you know enough to do so. Some people have commented that the amount of noise generated by a change is inversely proportional to the complexity of the change."

[1]: http://bikeshed.com/

DavidWanjiru 1 day ago 0 replies      
My two cents: rather than tell someone to change a word here, a colour shade there, what you should do is give your reasons for WHY. Why is this shade of blue better than the one I chose? Why is a given call to action better than another? That way, your opinion becomes, if not data driven, at least reason or anecdote driven. And that shouldn't demoralize anyone, I should think.
virtualized 1 day ago 1 reply      
Wow, what a bad example. That is called constructive criticism and is a very valuable tool in a company's toolbox of culture.

What I've observed to be actually harmful is criticism of how an employee works: tools, practices and habits. If your boss tells you that the build tool of your choice sucks without naming a better alternative or offering any constructive advice, it feels really bad and destroys motivation for the job.

majkinetor 1 day ago 0 replies      
If your manager asks you to change a color or font or whatever that has 0 relevance to the actual task, you know then that your manager is an idiot and you should probably start searching for another job or a way to move up over him.

I have had a number of such situations. You are almost always better off without that in your life.

jhbadger 1 day ago 0 replies      
Emperor Joseph II from Amadeus: "My dear, young man, don't take it too hard. Your work is ingenious. It's quality work. And there are simply too many notes, that's all. Cut a few and it will be perfect!"
patwalls 1 day ago 1 reply      
I like to get feedback from as many people as possible, and take action on MOST of the [actionable] feedback. In my experience, this always leads to a better final product.
meerita 1 day ago 0 replies      
The problem is the initial question, not the manager's input. There's a significant difference between delivering something backed by facts and delivering something while asking for approval.
daveheq 1 day ago 0 replies      
Yes, bosses should always say "Perfect work!" so the developer feels full ownership of the project.
freyir 1 day ago 0 replies      
The correct management approach is to test 41 different shades of blue, collect $219M.
werber 1 day ago 0 replies      
I think this is more so a critique of toxic manager-developer relations, but that's just my 2 cents.
CiPHPerCoder 1 day ago 0 replies      
> It's perfect. Great work!

I cringed. It's not perfect, it's at best excellent.

cursivedgreat 1 day ago 0 replies      
I always feel I shouldn't add those extra two cents when someone else is responsible for something. You took the words right out of my mouth. Thanks
fidz 1 day ago 0 replies      
So as a leader, she/he doesn't need to get into the details and should simply ignore insignificant ones?
jagermo 1 day ago 1 reply      
So if I have an honest opinion about something my coworker or employee asks me, I should not tell him, so that I won't hurt his feelings?

Is this kindergarten or grown-up life?

Especially in creative areas it pays to let other people look at your stuff and get feedback. It is simply too easy to just get stuck on a path.

mesozoic 1 day ago 0 replies      
Great advice. Now how to subtly get the boss to read this...
tn13 1 day ago 0 replies      
Man! I wish my boss had read it 5 years ago. I could never understand why I wasn't motivated in my job at all, even though my boss was really brilliant. It basically reduced to this: no matter what I did, the boss always had 2 cents that had little impact on anything but made me less interested in doing the work. My next boss was much better; instead of saying change this and change that, he would often ask why I made certain choices and what inspired me. He would then say "ship it", but the questions he raised made me wonder how I could make things better.

But the advice is something everyone must learn.

bambax 1 day ago 0 replies      
Excellent. So simple advice, so obvious, so true.

Works with kids, too.

johanneskanybal 1 day ago 0 replies      
This makes so much sense yet didn't cross my mind.
Mz 1 day ago 0 replies      
Maybe "Don't add your 2 cents" is not the best way to frame this, but he is correct that being the boss means your opinion is dangerous. More accurately, casually tossing out half-baked ideas is dangerous for anyone with real power or even social influence. When your words carry enormous weight merely because you said them, you need to be more careful about the things you say because it will have consequences. If it really is just a casual opinion, and not something you have really thought about, it is better to err on the side of not expressing it in such a situation.
jsprogrammer 1 day ago 0 replies      
The boss should not say "it's perfect!" if the boss feels it is not... which is literally what this article is advocating.

In the hypothetical situation described, I get the impression that the boss didn't even look closely at the two weeks worth of work. To then get immediate flattery feels very disingenuous to me.

unabst 1 day ago 2 replies      
Huge Sivers fan! With that said, I'd take this advice way further.

> The boss's opinion is no better than anyone else's.

All opinions are no better. It does not matter whose. Throw out opinions altogether. In America we are obsessed with "our right to our opinion" and have somehow conflated this with our individuality, our exceptionalism, and our success. We're unique to begin with, exceptional is only a hard-earned reputation, and success is just a feeling. Case in point: unique is effortless, because it takes effort to be identical to other people; no one has ever been exceptional without doing exceptional work; define success conveniently, and we're all successful.

> your opinion is dangerous

All opinions are dangerous. A doctor doesn't operate based on opinion. Engineers don't build rockets based on opinion. Programmers don't program based on opinion. Reality is fact based, not opinion based.

Don't add your 2 cents. Add something that's actually worth something: faster, lighter, stronger, cheaper, smoother, more efficient, more succinct, more obvious, more fun... Better is measurable. If you need progress, you need facts. And if you start comparing "opinions" and find one is better, you're already seeking facts. Opinions about opinions are demonstrably far worse.

Of course, this is all professionally speaking. When consequences don't matter, we're free to indulge in our opinions because we all have them. They're automatic. But just because you thought something, it doesn't mean squat. If anything, opinions are funny. Off the clock, do and say whatever you want. But whenever you need to be real, share what you know, not what you think. The more you know the better. Never confuse this with the more you talk or the louder you voice your opinion. Bosses that authoritatively enforce their opinion are the worst.

And most importantly, know when you know. Because only then can you or anyone go gather facts before making that important decision. Talking and thinking is not gathering facts! Googling is.

If you're still wondering why Donald Trump is doing so well, it's because so many of us still live in an opinion based reality. He is the feel good candidate for his supporters. Hasan Minhaj just did an awesome piece on the Daily Show [0]. His supporters don't care what he says, and their opinions are hilarious. Not to mention they are all wonderful people. If not for politics, we'd all be holding hands in a circle.

For better or worse, democracy treats facts and opinions equally. But a good boss won't. They are not equal, and only one leads to true progress.


[0] http://www.cc.com/video-clips/ukn1y5/the-daily-show-with-tre...

jecyll 1 day ago 0 replies      
Great advice, short but precise!
Apple says Pokémon Go is the most downloaded app in its first week ever techcrunch.com
419 points by doppp  4 days ago   210 comments top 16
jandrese 4 days ago 15 replies      
Shows you just how much pent up demand there was for Nintendo to release games on mobile.

Getting a huge first week download count is a lot easier when you have literally decades of brand recognition. Being a free download certainly didn't hurt either.

It remains to be seen what the customer retention numbers look like. I saw some absolutely insane projections earlier this week about how Apple and Nintendo were going to make billions off of Pokemon Go. I don't see how they're going to sustain the current game as it gets fairly grindy and there isn't much to do once you've caught them all. Maybe some compelling new features will be added to keep players from getting bored? Direct peer to peer battles and possibly trading for example.

MattyRad 4 days ago 3 replies      
When someone told me that Pokemon Go was exploding, I looked into it, and got really excited about its concept. People getting outside, interacting through a long-loved game, using real landmarks to denote checkpoints, playing a localized "king of the hill" type minigame. The architecture behind it is impressive, and it really feels like it's using bleeding-edge VR to push us into a more social and fun world.

That said, I also feel like it's equally the biggest missed opportunity to date. Usually, I just see players walking, heads down, not talking. It was downright eerie when I was downtown one Tuesday night at midnight, and it was dead quiet despite ~60 Pokemon players meandering about. They should have introduced PvP earlier (hopefully it's around the corner!), and better yet, make it so you get more exp for battling people you haven't battled before. Spur people into social interaction!

chipperyman573 4 days ago 6 replies      
I'm confused by the title. Is PoGo the first app to reach x downloads in the first week of release, or is it the most downloaded app of all time, just one week after release? Slow internet won't let me view the article.
curiousgal 4 days ago 9 replies      
This game has become a victim of its own success. Niantic has been strangely silent about bugs and server outages. I foresee a massive drop in interest soon.
hogwash 4 days ago 0 replies      
Funny retrospective on the last 25 years of AR:


kin 4 days ago 2 replies      
The numbers will absolutely drop. I mean, there's definitely a ton of content that can be added like earning gym badges, Gen 2-6 Pokemon (which people don't really care about), trading, PvP, etc. But, at the end of the day I doubt Niantic has the time/resources for that. The execution has been rather poor.

Still, though, there's demand for Nintendo software on mobile. They just need to really execute. They're really lucky we're tolerating these huge bugs (nearby Pokemon tracking and the frozen Pokeball after a catch are still outstanding).

Osiris 4 days ago 0 replies      
75% of the time I launch the game, I'm confronted with an error saying I couldn't be logged in.

Maybe the game is so popular because it feels like a rare resource. It's so hard to get into the game that when you do you have to play it as long as possible until the servers go down again.

meerita 3 days ago 0 replies      
A game can't have everything on the first day of launch. They released this to test it against the market. Now that it is a success, changes will come to increase retention and purchases. The next big events will also be promoted with Pokemon things to do at the venue, like "get this rare Pokemon at the Vegas electronics event." The game is a real success, and the mechanics described by Richard Bartle proved that people love to collect; it's Diogenes syndrome, but in mobile form.
smaili 4 days ago 4 replies      
Would love to know which app previously held the record.
blhack 4 days ago 2 replies      
I think that the biggest feature that Pokemon Go will add, that will hopefully come soon, is the ability to broadcast your position.

This is something that I wish Ingress had done. The game is a multiplayer game, there is no doubt about that. I'd love to be able to open the map, see that some of my friends are over playing at $foo location, and then go meet them there.

kevindong 4 days ago 0 replies      
The dropoff in interest has already started. I pretty much stopped playing last week. I got to level 14 and the amount of grinding required was just ridiculous (the amount of XP you earn per action does not increase as you level up, while the XP required to level up goes up exponentially). The bugginess of the game really did not help.

The dropoff in interest has already, objectively speaking, started[0]. It's currently (as of July 22) 66% of its peak (per Google Trends). In my personal experience, interest on my college campus has already subsided. It's not completely dead, mind you, but the hype is over.

[0]: https://www.google.com/trends/explore#q=pokemon%20go&date=to...
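The grind the parent describes - flat XP per action against exponentially growing level requirements - can be sketched numerically. The curve below is purely hypothetical (the comment doesn't give the real game's XP table), just to show why per-level effort balloons:

```python
# Why flat rewards + exponential requirements feel grindy.
# All numbers here are invented for illustration.

XP_PER_CATCH = 100   # assumed constant XP per action
BASE_XP = 1000       # assumed XP needed to reach level 2
GROWTH = 1.5         # assumed per-level growth factor

def catches_needed(level):
    """Actions required to advance from `level` to `level + 1`."""
    xp_required = BASE_XP * GROWTH ** (level - 1)
    return xp_required / XP_PER_CATCH

# Effort per level grows geometrically, even though each
# individual action pays out exactly the same.
effort = {lvl: round(catches_needed(lvl)) for lvl in (1, 5, 10, 14)}
```

Under these assumed numbers, advancing from level 14 takes nearly 200 times as many catches as advancing from level 1, which is the "XP per action doesn't scale" complaint in a nutshell.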

Bonsailinse 3 days ago 0 replies      
These numbers are probably the reason why Niantic was totally caught by surprise and has these massive server issues. I really don't like to see the app offline every time I go for a longer walk, but hey, I take it easy and wait for either Niantic upgrading their resources or the userbase to shrink, which shouldn't take too long IMO.
TheMagicHorsey 4 days ago 4 replies      
Have I missed something, or is this game just about walking around collecting Pokemon and hatching eggs? Is there anything else to it that I missed? The interface isn't illuminating.
melling 4 days ago 1 reply      
So, do we have an entirely new class of games/apps that are about to appear? Like Pokémon Go, but for ...
mp3geek 3 days ago 0 replies      
How do the numbers stack up between downloads on Google Play vs. Apple?
xlayn 3 days ago 0 replies      
As time passes there are more iDevices, so the raw number can be misleading. Maybe a percentage, as downloads/devices, would be more meaningful?
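The normalization suggested above is simple to state precisely; a minimal sketch with invented figures (neither the download counts nor the installed-base numbers are real):

```python
# First-week downloads normalized by the active installed base,
# so records from different eras become comparable.
# All figures below are invented for illustration.

def penetration(downloads, active_devices):
    """First-week downloads as a fraction of devices that could install."""
    return downloads / active_devices

# A newer app can beat an older record in raw downloads while
# actually reaching a smaller share of the installed base.
older_record = penetration(10_000_000, 500_000_000)     # 0.02 (2%)
newer_record = penetration(15_000_000, 1_000_000_000)   # 0.015 (1.5%)
```

The design choice is just which denominator to use: total devices ever sold overstates the base, so an "active devices" estimate is the fairer (if harder to obtain) figure.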
The Apple Goes Mushy Part I: OS X's Interface Decline nicholaswindsorhoward.com
485 points by helb  18 hours ago   395 comments top 89
56k 15 hours ago 23 replies      
This article doesn't make any sense.

The use of metaphors worked when people didn't know what a computer was, and the only way to make its UI make sense was to mimic real-world objects. Now, this is no longer necessary. It's been 40 years.

While I agree that new macOS icons aren't great (see the Game Center icon), the old ones were silly. I'm 35 and I have probably seen an actual contact book only once when I was little. It doesn't make sense to have a skeuomorphic contact book as an icon for Contacts. The old icon for Pages? I don't even know what that is, I'm not into calligraphy.

The only thing I'd agree with is hiding UI controls. Apple has been making its apps less usable to make them look pretty, hiding important elements in an attempt to declutter the interface. I hate how Safari hides the full address in the address bar, for instance. Or, how they removed scrollbars and force people to actually scroll every piece of the interface to check if there's something more to see, while before you could tell just by looking at scrollbars. Of course, there are settings to go back to the old behavior for both my examples, so power users are fine, but I fail to see how these moves improve things for regular users.

I also disagree that Steve Jobs's death was detrimental to macOS's UI. He was the one who kept Apple looking outdated with his obsession with skeuomorphism; I'm glad they went for a flatter look right after his death.

Of course, everyone's taste is different, but I still think this is a bad article.

arrrg 15 hours ago 9 replies      
It's not that important a point, but legal pads are a very US phenomenon. I'm German and I have never ever seen a yellow pad like that in person. The current Notes icon (with white paper) is much more in line with the note pads you would encounter in Germany. Paper people write on to take notes is typically just white. Maybe bound with a spiral on the left or top (and perforated paper to tear off), maybe glued together at the top.

Maybe internationalization was a consideration here? Yellow paper doesn't read as anything recognizable internationally. (Yellow sticky notes are probably internationally known, though.)

My overall point would also be that taste colors opinions in this case. Or taste at least leaks into them. I think it's important to be very careful with that and to try to avoid letting taste color too much of what you think. (My taste is very different from that of the author, and as such I think many of his points are just plain wrongheaded. There certainly are some good points in there, but taste plays too much of a role.)

petilon 9 hours ago 1 reply      
EVERYTHING beautiful is skeuomorphic. The page turn in iOS 6 iBooks, page curl in Maps, Cover Flow, the shred animation in older versions of Passbook, the date picker in iOS 6, the rotating settings gear (when updating iOS 6), the Time Machine interface in older versions of OS X, photo borders and shadows in iWork documents, etc.

This is not surprising, because our sense of beauty comes from the physical world.

So what is the problem with skeuomorphism?

Tech enthusiasts would like their phones to look like something from the future, not something from the past. But ordinary everyday people prefer for it to look like things they are already familiar with, or can relate to.

Tech enthusiasts worry that the skeuomorphism was getting totally out of hand, particularly where the UI metaphor started limiting functionality (e.g. an address database that's limited to what a Rolodex can do, rather than exploiting what is possible with a computer). But this is not really true. For example, iBooks has instant search, something only possible with a computer.

Some people point out that many skeuomorphic elements reference things that a large part of Apple's audience hasn't used in a long time, if ever. True, but here's the thing: It doesn't matter whether the user has ever seen a reel-to-reel tape. What matters is whether the visuals depict a physical object that the user can model in his mind. If it is too abstract (that's the opposite of physical) then non-tech-enthusiast users will find it hard to intuit.

Some people say skeuomorphism looks tacky. This is partly true. Skeuomorphism is hard to do. When done poorly it does look tacky. But when done well it looks very beautiful.

By removing all skeuomorphism Apple is throwing the baby out with the bathwater.

jpalomaki 11 hours ago 4 replies      
The camera icon is not just a picture of the physical world object. It is pretty universal symbol for taking pictures. When the same symbol is used in so many places, even people who have never used the physical object resembling the icon, will recognize its meaning.

In my view the reason behind many of these user interface changes is not really improved usability. The simple reason is that no matter how good an interface you design, after some years it just starts to feel old and boring. Old and boring is hard to sell. Fresh and exciting sells better. Therefore we keep on changing stuff, even though from a pure usability perspective it would be better to stick with the old, boring, but familiar.

Easy-to-use systems make happy customers, but they don't necessarily win the customer's heart at the point when the purchase decision is made. Maybe this is one issue for Apple? Maybe the "old Apple" was happy to give up xx% of their sales for ideological reasons, but the new one needs to find growth where it can. One can see hints of this in the product lineup. I would say back in the day it was pretty opinionated; now there are 4 different iPad models (and countless variations).

flohofwoe 12 hours ago 3 replies      
The article misses the mark IMHO by focusing on skeuomorphic icons.

Yes, usability has degraded during the recent 'flat design' craze, but not so much because skeuomorphism was tossed out as because of the many little visual design changes that kill discoverability.

The mobile operating systems started this trend, where a lot of advanced functionality was hidden behind 'magic' touch and swipe gestures that go way beyond the simple and intuitive tap, zoom, and rotate gestures: 2-, 3- and 4-finger swipes, long and short touches, and so on. Important features cannot be visually discovered (how do I close an application again, on iOS, Android, and Windows 8? How do I flip between applications? How do I take a screenshot?).

It's the many small things that kill usability for the sake of visual design:

- the famous shift-key on iOS, what the hell were they thinking?

- buttons are often indistinguishable from non-interactive labels, leading to idiotic trial-and-error clicking to find out which UI elements do something

- scroll bars that are hidden by default, losing the information of how far I am into a document (OS X)

- changing and moving things around just for the sake of confusing existing users, not making anything more intuitive (especially Windows is guilty of this)

And so on and on... the icon design is the least of the problems (and every OS worth its salt should allow to replace the icon theme anyway).

One important reason I'm going back to the command line more and more is because UIs have become so unusable for anything that goes beyond browsing an image collection. Change itself is only good if it results in improvements, but in the area of UI design, things that have been working just fine for 20 years have been broken for superficial visual effects.

It's like 90's web designers took over and are building operating system UIs now (and maybe there's a bit of truth in that).

It's not like the past was perfect, of course, I mean... Alt-F4, Alt-Tab, ... but that was on Windows, which was always laughed at for its poor usability (at least from the view of AmigaOS and MacOS users).

terda12 14 hours ago 4 replies      
OS X looks better than ever. Yeah, I agree that the Notepad and Photos icons don't look good, but everything else is perfect. I'm typing this on OS X right now and it just looks great.

Worst part of the article by far was

> OS X packaging, once very elegant and eccentric (and printed on a physical box), has become thoroughly unremarkable.

This is 2016; no one uses CDs anymore. And that leopard-print box design looks like packaging for some kinky underwear.

0942v8653 14 hours ago 7 replies      
Note: This is my personal opinion!

I recently had to use a computer with OS X Mavericks on it (10.9 I think). I was struck by how beautiful the interface was. I'm having trouble finding a good screenshot illustrating this, but compare



Everything just looks better on Mavericks. The gradients may be over-the-top but they're at least consistent. Transparency on El Capitan is pointless and ugly. Maybe I like the system font a little better.

Firefox is also a really good example; it looked great on Mavericks but has not been able to fit in since.

Usually, I prefer simple UIs: i3, terminals, etc. But the look they have gone for in Yosemite/El Capitan just doesn't work.

fredsted 15 hours ago 4 replies      
I like the minimalistic UI of El Capitan. It gets out of the way, and puts in focus what I really want to look at: the content, Web pages, my code, my photos. My computer is a tool, it's not an artwork I turn on to look at.

Buttons still look pushable, input fields still look editable. The Dock didn't lose any functionality whatsoever by having the 3D effect removed.

In my opinion, the El Cap UI requires just as much talent as the overdesigned (but very pretty) icons and graphics from the previous era. I don't miss the brushed metal and pinstripes, though.

oneeyedpigeon 15 hours ago 0 replies      
It seems as if the web has influenced the latest round of GUI designs, especially the 'flat' design trend which has clearly harmed usability when it comes to things like buttons. This is handled better by Android's material design guidelines, but it's still a regression.

The problem with this approach is that the web has no guidelines whatsoever, beyond user-agent defaults. So each and every site does their own thing (whether 'good' or 'bad') and Apple (+ Google, etc.) decides to cherry-pick what is 'popular' or thought to 'look good', seemingly without thinking through the impact on usability. Or, possibly worse, they have considered the usability impact but deem the tradeoff worthwhile.

Shengbo 15 hours ago 2 replies      
I have to disagree with the author. I understand his frustration with some of the icon choices (Photos, Game Center, etc.), but most of the things he's grieving for are just tacky. The leopard pattern on the OS X box and the overly cluttered illustration with the galaxy background, glassy-surfaced "X", icons, and lens flare would look especially stupid in 2016.

I'm glad they removed all the silly shadows, 3d effects and animations and defined more strict UI guidelines.

I don't need my OS to look like a Christmas tree.

ksec 34 minutes ago 0 replies      
If skeuomorphic means trying to mimic things in the real world, then I guess everyone has a different skeuomorphic perspective, especially across different age groups.

Today's kids don't even know what a matchstick is. And phones will likely replace 90% of all consumer cameras in the next decade, leaving DSLRs for professionals. There are no cassettes, Betamax, or tape anymore. And kids born today likely won't know what a floppy or even an optical disc is when they're 20.

So it really is a little bit of forward thinking from Apple. Given how 5-to-7-year-olds are now using iPads, Apple has a different group of future loyal users to cater to.

Side note - tech is good, and it is everywhere. But what happened to the good old days when kids would go out to do stupid things and have fun in the park, instead of staring at an iPad screen?

zelos 15 hours ago 1 reply      
The removal of colour from the icons in the Finder sidebar is the change that feels the most clearly anti-usability to me. It clearly makes the items harder to distinguish and has no benefits that I can see apart from fitting in with the flat design concept.
mark-r 3 hours ago 0 replies      
I think people miss the purpose of icons. To be effective, they need to be distinguishable and memorable so that you can find them when needed. If I need to find something that I'm not familiar with, I'm more likely to use a text label to figure it out than to try to guess by the picture. An ideal icon would have a lot in common with an ideal logo.

The current trends are somewhat troubling. Removing color makes an icon less distinguishable, and changing icons yearly makes them less memorable. The reason the floppy disk icon is still useful so many years after it stopped being relevant is because it hasn't changed.

I do miss the richness of imagery that skeuomorphism provided; I find it more visually appealing than the abstract flat look. I realize this is a matter of opinion, though. How ironic that as enhanced displays allow for more realistic renditions than ever, the trend is to move away from realism.

pankajdoharey 14 hours ago 2 replies      
I totally agree with the author; in fact, much of the hidden functionality that used to exist in apps is also gone. For instance, in earlier versions of Preview you could join multiple PDFs into one single PDF by dragging the PDFs into the thumbnail preview panel. Now that functionality simply doesn't exist. It seems like they (Apple) have done a rewrite of so many apps that they missed out on smaller details.
CPLX 15 hours ago 5 replies      
I'm not sure how to tell this guy, but I'm over 40 years old and I have been using Apple computers literally since elementary school. At a certain point, the reference for intuitive design can be one's own past, if that past has become sufficiently ingrained and intuitive to users.

I thought the lament about the photo app dropping the icon that looks like a camera was particularly odd. He seems uninterested in even acknowledging the point that most cameras don't look like that anymore, and there are many (most?) full fledged adults who have never used a camera with a large attached lens.

I'd even wonder at this point if there are more people in the world familiar with Apple products than with actual apples that grow on trees, but I digress.

krylon 15 hours ago 5 replies      
One thing I really dislike about 10.10 is that "maximizing" a window will - with a few exceptions - switch it to fullscreen. To maximize a window I need to press Alt while clicking the maximize-button. And there is not even an option to switch this behavior.

On a small laptop screen this behaviour might be preferable, but on a FullHD display, I find it rather annoying.

dchest 15 hours ago 2 replies      
> Above, on the left, you can see the creative, dazzling, H.G.-Wells-spirited Time Machine interface and icon of yesteryear, receding into radiant oblivion (complete with animated stars that drift toward you). Well-crafted, they stirred the right mood. On the right, observe what Apple bulldozed the old Time Machine for: a low-effort cartoony icon in place of the hatch to hyperspace, and a blurred desktop background with flat grey controls in place of a fantastic portal to the past. To me, this "update" to Time Machine stands as one among many sad and uncaring obliterations of the heart Apple used to have.

My head was spinning (literally) every time I used old Time Machine, so I'm glad they removed this silly animation.

tcfunk 11 hours ago 2 replies      
For me, the biggest issue with the past 2 or 3 OS X updates hasn't been the interface at all. I can get used to a new interface, that's not a problem.

The real problem (imo) is the lack of meaningful updates to the OS. EVERYTHING is an aesthetic change, or some new Siri or iPhone integration. Does anyone actually start an email on their phone and finish it on their desktop? Anyone?

Where are updates like better window management? How is it 2016 and I still don't have window tiling on a 4K (5K?) iMac? Apple is busy repainting their bedroom in varying shades of grey while Windows puts out integrated Linux and Bash, improved window snapping, OpenSSH integration, etc.

adamlett 14 hours ago 2 replies      
There is an old Macintosh print ad in the article stating something like: "A computer that everyone can use will get used by everyone." It immediately made me think of iOS. No matter how good you think the UI paradigm of the Mac was at its peak, it doesn't compete with iOS in user-friendliness. We've all heard stories of, or witnessed, 2-year-olds who can navigate an iPad. That was never the case for a Mac.

I wonder if some of the changes made to macOS (née OS X) were made because iOS has freed the Mac from having to serve complete novices and very casual consumers, letting it instead focus on serving a segment of professionals and serious content producers whose needs are different from those of casual consumers, and may be better served by a more subtle, muted interface.

rsync 10 hours ago 1 reply      
It's worth pointing out that circa July 2016, it is still manageable and reasonable to continue using Snow Leopard.

I use it daily on my primary system. Other than the cool "draw my signature on a PDF" feature in Preview that showed up circa Mavericks, there is nothing - not one thing - that I miss by running that older OS.

VMware Fusion works great. Current Chrome + uBlock Origin. Great multi-screen support. I don't run any services and keep a strict firewall (and also uBlock Origin), so the lack of recent security updates (combined with the gradual loss of interest in SL from exploit writers) isn't a problem.


kstrauser 8 hours ago 0 replies      
Skeuomorphic icons are probably OK, but skeuomorphic behavior is horrific. Consider the awfulness of the Mountain Lion-era address book and calendar that tried to emulate the behavior of physical objects by imitating their limitations. That, for me, was the nadir of OS X usability. It might have been great for someone who'd literally never seen a computer before, but anyone who wanted to navigate a calendar without flipping through months one page at a time had a real fight on their hands.

If you want an icon that looks like a physical object, OK. I can probably live with that. But the moment you want to extend that to making the app act like the object its icon represents, you've lost me. Perhaps Ive et al decided that having a realistic Notes icon attached to a non-notebook-like app was more confusing than keeping the old one? That certainly seems justifiable.

Also: who cares what the Safari compass looks like? To me, "compass" means "compass". It's never represented a safari to me (not that "safari-the-adventure" ever represented "Safari-the-browser"). What would that skeuomorphic icon be anyway - a rifle?

This is what we used to have: http://venturebeat.com/2012/10/30/skeuomorphic-design-or-one... . I couldn't have been happier when Mavericks ditched all that. It wasn't perfect, to be sure, but it went a long way to restoring artificially-lost usability.

The article feels to me as though the author is caught up in nostalgia. That's fine, but don't mistake "I liked it that way" for "that way was objectively better".

onion2k 15 hours ago 1 reply      
I agree with the author that OS X has lost a lot of its personality, but I don't agree with the notion that this is a bad thing. I don't want my OS to have much of a personality. It's a tool for getting things done. The less I see of the OS the better.
tim333 15 hours ago 2 replies      
I think the greatness of OS X's past UIs is a bit overhyped. I changed from Windows 7 to Mavericks a couple of years ago and still find 7 more intuitive. You can see at a glance which programs are running from the taskbar in 7, unlike in OS X; I still have not figured out the Finder very well, and so on. On the other hand, OS X seems better engineered in many ways: more stable, faster to respond, and so on.
codeulike 15 hours ago 4 replies      
Guidelines from 30 years ago don't necessarily get to remain guidelines, 'cos stuff changes. We've been using GUIs for 20 or 30 years now. We don't need to pretend they have shadows or include a realistic depiction of some related artifact in the icon. People just get it now, without all that clutter.
coldtea 14 hours ago 0 replies      
>Buttons across the system now look much less like real buttons. Almost no life-imitating textures survive. OS X, in large part prior to Yosemite, used to crawl with visual metaphors; why has Apple banished so many of the analogies that helped people feel comfortable with the Macintosh in the first place?

Because thousands of idiotic designers and tons of media pundits lamented their "anthropomorphic" interfaces and swooned over the abstract UIs of competitors, to the point that it sounded like a real problem...

return0 13 hours ago 1 reply      
iOS is just as bad; I still need about a minute to figure out how to add a reminder if I can't use Siri. Icons still don't make sense (serious travesty: my compass icon is actually Safari, while my compass app has a cross icon). And the sheer stupidity of flatness everywhere makes the interface slow to read.
stupidcar 15 hours ago 0 replies      
I see a lot of opinion here, and not a lot of hard evidence. Maybe the changes in OS X, and the move away from skeuomorphism in particular, have hurt its usability, but the way to prove that isn't through emotional op-eds.

Get a group of non-Mac users, randomly split them into two groups. Set them a number of basic tasks: writing and sending an email, editing a photo, opening a particular website, etc. Then have one group do it on an older version of OS X, and the other on a new version. Then record how long it takes them, what things they struggle with, etc. Ask them to report their level of frustration and enjoyment.

It wouldn't be a perfect experiment, but it would at least produce some concrete data to discuss.
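Analyzing the data from such an experiment is also straightforward. A minimal sketch using only the Python standard library, with invented task-completion times, comparing the two groups via Welch's t-statistic (a real study would also need sample-size planning and a proper p-value):

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

# Invented task-completion times in seconds for the same task set,
# one group on an older OS X release and one on a newer one.
old_release = [42, 51, 38, 47, 44, 40]
new_release = [55, 61, 49, 58, 63, 52]

t = welch_t(old_release, new_release)
# A large |t| on real data would suggest a genuine usability difference;
# the sign tells you which release was faster.
```

Welch's variant is the safer default here because the two groups' variances need not be equal; `statistics.variance` is the sample variance, which is what the formula expects.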

sbuk 14 hours ago 4 replies      
This essay is nothing more than the author's opinion (which there is nothing wrong with) presented as fact (which I completely take umbrage with). Looking at the author's website, the current design of Apple's UI elements is simply not to their taste. An example would be this: http://www.nicholaswindsorhoward.com/blog-directory/2016/1/2... The odd idea is interesting, but it's massively regressive.
x0 3 hours ago 0 replies      
I agree with this so much. Two other things that annoy me about the new OS X design:

- Helvetica everywhere. Helvetica, whilst not bad, has somewhat poor readability (not legibility) in body copy. I find it much harder to read than the old Lucida Grande, and it annoys me that I can't change it back easily. In fact I jailbroke my iPad just to change the font (to Iowan Old Style, an absolutely beautiful serif)

- No favicons in Safari. I can't even understand why they would do that, and there's absolutely no way to turn them on, for those of us who like to switch tabs without spending 5 seconds reading them all.

abrbhat 15 hours ago 1 reply      
The change appears to be a part of the broader trend towards flat design from realistic design in the UI community. A balanced point of view: http://www.webdesignerdepot.com/2013/12/infographic-flat-des...
xaduha 14 hours ago 2 replies      
Windows Themes were a step in the right direction, why can't we have that in OS X? Give me that Snow Leopard skin for El Capitan or whatever.
typpytyper 13 hours ago 0 replies      
I'm 100% in agreement with the article. I haven't upgraded from Mavericks for the very reasons he outlined - stark white UIs with no feedback. (Well, that and the Wi-Fi issues in Yosemite.)

As for skeuomorphism, look no further than the phone icon in iOS - it's a handset from a traditional 80s phone. If Apple's designers were to take their flat minimalist mantra to the next level, the icon would be a picture of a black rectangle representing an iPhone.

rekshaw 14 hours ago 3 replies      
Currently, on the same front page on HN there is an article: Humans once opposed coffee and refrigeration: why we often hate new stuff.

This applies perfectly to you, dear self-righteous blog author.

Animats 5 hours ago 2 replies      
If you give up skeuomorphism, you have no model for icons. You could have every icon be a smartphone, but that won't help. Without some real-world basis, icons are just abstract shapes. Text boxes might be better. At least you don't need an icon dictionary.

Everything now has to be mobile-friendly, which means 1) fat fingers, and 2) you can't see the thing you're touching. "Mouse-over" for more info is not meaningful for touchscreens. So icons can't have explanations.

Whatever happened to Google's "material design"? Did anything ever use that? Even Google's own websites didn't use it, although Google had a React.js implementation.

Incidentally, don't use a compass icon for anything other than a compass on a device that actually has compass hardware.

Maybe the future of icons is corporate logos. That's what favicons are.

intoverflow2 14 hours ago 0 replies      
To understand the Photos icon you have to be familiar with iOS 6 and remember that its Photos icon was a sunflower, because the new one is an abstract representation of that flower.

Honestly couldn't believe it when they shipped it.

noir-york 15 hours ago 0 replies      
Agreed with the sentiments of the author. While I don't expect an OS to be "beautiful" - attractive would perhaps be the better word - the general grayification and over-simplification have made OS X harder to navigate.

Compare it to Atom, which I, and I am sure many others here, use every day: syntax highlighting, coloured icons, etc. all make navigating code on screen faster.

ruffrey 12 hours ago 0 replies      
In my opinion, this article focuses on all the wrong things. Things that matter in OS X to me - and why I continue to choose it as my most basic tool for work - are:

- exceptional workspaces and swipe navigation

- the safety of Time Machine backups

- for the most part, things are very fast and the OS gets out of the way of what I am trying to do

- I have not wasted any time on hardware/OS compatibility issues

- I only reboot every few months, if that

- battery life is excellent (the OS is not a hog)

Quite honestly, I couldn't give a shit what the icons look like as long as the OS is a reliable and fast tool to do my work. Time Machine saves your ass; who cares that the icon doesn't meet your tastes.

bitL 12 hours ago 0 replies      
I think the main problem with UI is the movement towards scalable graphics fitting all types of displays, which unfortunately can't be as complex as raster graphics under the same size constraints. Hence flatness and overly abstract, "simple-gradiented" shapes everywhere. I think once people realize that 8K will be the final resolution of displays, raster will come back, as the number of display formats that need to be supported will be limited and graphics can be rescaled on the fly. For now we have to suffer through the "modernism" phase of UI design which, like in art, leaves only around 5% of people inspired and the rest underwhelmed; or stick to the last bearable OSes like Win 7, Mavericks, iOS 6...
franze 13 hours ago 5 replies      
I'm writing this post from a late 2015 MacBook Retina (not Air, not Pro). The MacBook stands on a book, with a small fan pointed at its back so that I can write this. Without this the machine would get too hot, which results in slower user interaction (think: type a letter, wait for it, wait for it, wait for it, letter appears on the screen). On the left hand side a huge adapter (99 EUR) is plugged in. It enables me to plug in a USB device and the charger. I do a lot of presentations, so basically the adapter is part of the machine. The color of the keyboard is slowly beginning to fade. Especially the "S" is now more grey than black. The MacBook is advertised to have a battery lifetime of 8 to 10 hours; I normally get up to 4, sometimes less.

It's garbage!

HoppedUpMenace 7 hours ago 0 replies      
The older aesthetics reflected an OS that could get everyone excited about its potential, kind of like back in the day when people would look at the graphics depicted on records, VHS covers, video game boxes, etc. Now nobody draws quite the same connection with that sort of art or media, so you have something with no soul that conveys the idea that people are just happy with whatever you give them, because they couldn't care less about making a connection - just that it works and does, at a minimum, what it's supposed to do.

A bit off topic... The same idea could be applied to what happened in video games in the past, like the Super Mario RPG style transforming into Paper Mario, or the Ocarina of Time/Majora's Mask style being dropped for Wind Waker's cartoon-like aesthetic - both examples devoid of the human connection built up in previous iterations, but it gets the job done as it's still the same kind of game.

rubyfan 14 hours ago 0 replies      
Great points here. The iconography is an important element of navigating a system and recent iterations of both macOS and iOS have suffered at the hands of trendy flat design.
Unbeliever69 8 hours ago 0 replies      
The abandonment of skeuomorphism was part rebellion and part caving to the ebbs and flows of popular design. When you move away from tactility and recognizability (familiarity), there is a greater burden on the user to interpret the intent of the designer. This is one of the reasons there has been a big surge in the use of animations in UI; because flat designs aren't as effective at communication as recognizable, tactile ones. Younger users of unique UIs and gestures have proven that you can adapt to new visual conventions and forms of interaction. The problem is that this is not Apple's ONLY audience. The Mac has always been attractive to people who want a more approachable, simple, user-friendly, less virus prone computer. This audience is typically less tech savvy and probably not a millennial. I further argue that the visual style of Steve Jobs was more in alignment with the innate desires of this archetype.
acr25 13 hours ago 1 reply      
WYSIWYG has NOTHING to do with UI design. It refers to being able to print a document (fonts, sizes, lines, pictures) in such a way that it precisely resembles what you see in the application's document window.
makecheck 8 hours ago 0 replies      
The thing that bothers me most about the direction of icons is the sameness of all their shapes, ignoring the rest of their appearance.

This is a problem in both the Dock (where it seems everything is a circle nowadays) and in toolbars (where everything has an ugly white rounded-rectangle behind it, and is really tiny).

There used to be explicit mention of the importance of varying shapes in toolbars, as part of Apple's own interface guidelines. At a glance, it's far easier to find things when there's a triangle-like icon next to a round icon next to a square icon next to a home-plate-shaped icon, etc. If every icon has the same shape, and has been shrunk into that shape to leave even less space for meaningful details, it's almost not worth having an icon at all.

BlakePetersen 8 hours ago 0 replies      
The irony here is this design critique takes place on one of the ugliest sites on the internet.

Also, everything he points out is to the benefit of a society that's making its first steps into computing using GUIs. Once you've accomplished that first introduction, you optimize for efficiency. Do you really need to sit there and parse an icon's details every time before you actually click it and move on to the task you intended to do?

I'm sure Apple considers a lot before making design changes, from UX efficiency to cross-platform aesthetic consistency to iPhone battery life (does displaying white icons encourage darker backgrounds which ultimately have more pixels using lower energy to light up or some such nuance?), which this article ignores entirely.

zeveb 10 hours ago 0 replies      
I think this is just the blah and boring exterior of modern Apple hardware being carried over to the software inside.

I remember the beauty of the original iMac (compare the G3 to the current model at https://en.wikipedia.org/wiki/IMac), or the beautiful clamshell iBook G3 (https://en.wikipedia.org/wiki/IBook). Likewise, Mac OS 9 was beautiful, and the original OS X was even more so: it looked like candy, translucent and shiny.

Now, everyone praises Ive for turning out yet another rectangle with some circles on it. Someone really should buy him a French curve to augment his straightedge and compass.

The new macOS looks much the same: flat, boring, staid, plain.

Now, I personally don't really mind that (my own WM is dark, muted and mostly invisible), but it seems rather a betrayal of what Apple used to stand for: actual beauty rather than simply the lack of ugliness.

greenimpala 15 hours ago 0 replies      
I really agree with this - I thought it was just me getting slower, but every time I open the Finder or the Dock I have to spend that split second of extra effort to discern between the "Applications" folder and "HD" icons, etc.
ldom66 12 hours ago 0 replies      
From a usability perspective I totally agree with this article. From a design standpoint though, people generally prefer minimalism over realism in UI design. I also much prefer the design of Macs today over PCs, even though I prefer Windows for usability. Apple these days is more looks over function, while Steve Jobs thought the other way around was the way to go, in my opinion. So there has been a decline in what Apple stood for since Jobs.
swiftisthebest 12 hours ago 0 replies      
I agree with literally nothing this dude said. The old Mac interface was needlessly complicated and distracting. Simple is better! I don't want to spend time deciphering what some thing is supposed to be. It is a waste of brain cycles. Just give me a memorable glyph that is different enough from the other ones to be easy to spot, and let me focus on what's important. My work!
kartickv 11 hours ago 0 replies      
Since the article spends quite a bit of time talking about icons, let me focus on that, too.

I think the old icons had too much detail in them to be visible at typical icon sizes. They ended up looking cluttered and over-designed. The new icons are simpler visually, making them clean and fresh. And they come in brighter, happier colors.

I much prefer the look of El Capitan over older OSs.

S_A_P 10 hours ago 0 replies      
I think I would switch to Windows if I had to look at a vintage OS X version. It's great for nostalgia to look back at the evolution of UI, and not every change is an improvement, but I find the latest versions of Windows/OS X to be perfectly usable and nice to look at. I don't need such literal translations of my icons for them to make sense. This argument is kind of bogus anyway - how many times of looking at an icon and clicking it does it take to remember what goes where? I use the application Reason quite often, but that icon offers no clue as to what type of application it is. Flat layouts are easier on the eyes; all the shading, gradients, and other elements required for a "realistic" UI are tiring. Completely disagree with this article.
dahart 10 hours ago 1 reply      
All UI has gone less Skeuomorphic and flatter. Google's doing it and giving "material design" talks at all the major conferences. Just look at their design https://en.m.wikipedia.org/wiki/Material_Design Windows embraces flat design way more than Apple does, maybe the author needs to check out an Xbox or use a Surface for a few minutes? Websites everywhere are minimalistic and flat compared to five years ago. This is the current design thinking, the author is stuck in the past. On top of that, this article shows the opposite of what it says it does - it's demonstrating how Apple is doing all these "bad" things less than everyone else.
fredfoobar42 12 hours ago 0 replies      
There are some good points in the article, in particular the draining of color from the Finder sidebar and a few other glaring usability issues that haven't been addressed.

But, man, seriously, the original Time Machine UI was garbage. The Yosemite version is so, so, so, so much better. It's not great, but it's a lot better than what it used to be.

WayneBro 11 hours ago 0 replies      
Here's a much better article on why OS X is an exercise in bad UI design - http://aaronhildebrandt.com/archive/osx-an-exercise-in-bad-u...
hollander 15 hours ago 0 replies      
Many of the icons, like Safari and iTunes, haven't changed that much; they have just progressed with current designs. Take a look at the Mozilla icons, or whatever app you're using, like Evernote or LibreOffice. Do they remind me of a web browser or an Office app? Not really. It's just that they are significantly different from other apps. After one or two uses, I remember what an icon looks like, and that's all you need.
athenot 12 hours ago 0 replies      
While a streamlining of UI elements is not necessarily a bad thing (especially as the meaning of these metaphors sinks into the general public's consciousness), I have noticed that it takes me longer to find and converge on the right app icons in the Dock and app switcher.

It seems the general trend regarding logos is "blobs of color in some abstract pattern". And those that still depict some meaning end up blending in, by adopting the same approach of using multiple saturated primary colors. Maybe it's just me, but visually they all register the same in my brain and I need to take a second look to tell them apart. And if I need to take a second look, the whole advantage of an icon vs. a keyword is lost.

petilon 10 hours ago 0 replies      
Here's another blog that documents the decline of Apple's user interface design: http://uxcritique.tumblr.com/
dingo_bat 13 hours ago 0 replies      
While I personally think the new icons and UI looks better, I agree that everything is too white. I hate white. It wastes power on phone and laptops, and it hurts my eyes. Let's make everything black. A good example is Holo-era Android.
sdkjfwiluf 14 hours ago 0 replies      
article misses the point that things change. it's a different world now and the metaphors need to change with it. the desktop is now the computer, so desktop metaphors make no sense. flat design looks clean and quiet, I don't miss the days of skeuomorphism.
pipio21 14 hours ago 1 reply      
I disagree. I use a Mac all day and the icons don't pose a problem to me or to the young people in my family.

Really frustrating for me are the changes in the Spotlight launcher. It used to be instantaneous search and launch. Now it is painfully slow on all my computers.

ethanpil 13 hours ago 0 replies      
Very funny to see Apple playing catch up on design to Google in the area of flat style design. Google are the ones who pushed flat design into UI and everyone else is following their lead.
rahoulb 14 hours ago 0 replies      
Personally I feel Mac OS's Platinum was the pinnacle of "desktop" user-interface design and OS X/macOS is generally on a path of returning to that look (although not that feel)
rcarmo 11 hours ago 0 replies      
I've been using Macs since, well... Forever, and writing about them for over a decade over at http://taoofmac.com

I'd say this piece is too opinionated by design - not only does it ignore the almost relentless iterative approach Apple has taken with aesthetics, it reads a lot like linkbait.

But hey, that's just my opinion. :)

nickgrosvenor 5 hours ago 0 replies      
I don't even fucking understand how to use the music app on my iPhone anymore. So needlessly complicated.
mrcwinn 12 hours ago 0 replies      
I agree with the general sentiment that Apple isn't very good at software, but it's worth remembering they are solving not just for the desktop, but for many platforms. I cannot imagine that ugly 2008-era iPhoto icon on my big flat screen. The thought of that rips into the very fiber of my being and makes me question all that I know about life.

Put another way: relax. Hope things get better!

everydaypanos 13 hours ago 0 replies      
I mainly disagree w/ the points he makes, but the "vampiric" screenshot of the sidebar is very hard to ignore. There is something magical that goes away when you take away all the color and replace all the icons w/ glyphs. Especially in places like the sidebar, where you interact w/ it so often and 99% of the time you are scanning/looking for stuff.
brudgers 12 hours ago 0 replies      
To me, if there is a weakness in Apple's interface due to evolution that weakness is in evolving too slowly. The premises for "easy to use" in 1984 are radically different from those of today. The "for who?" and the "for what?" and the "what artifact am I using?" have changed.

The last matters. It ain't so much that skeuomorphism ain't relevant no more, it's that computers are ubiquitous - to the point that the computers we carry in our pockets are taken for granted as computers.

Interface designs that draw on folders and writing pads and cameras are headed to the dustbin. The best skeuomorph for X is the thing closest to X itself. I'm not saying that hamburger menus haven't irritated me, only that I know what one means when I see it and how it ought to behave, because it skeuomorphs to its own computer interface - and there it draws on a long self-referential skeuomorphic tradition, like the floppy disk "save" icon.

To me it seems the better we make skeuomorphs, the further we get from icons and the closer we get to words. Siri, Alexa, and Cortana lead the way back from hieroglyphs for middle managers afraid of catching keyboard cooties. I don't want to say that everyone knows how to use a computer, but it's not 1984, and so many people around the world are comfortable typing that many Africans have mobile payments over SMS.

Mobile payments over SMS show the efficient skeuomorphism of text. Our phones are/have cameras. "Use camera" pretty much tells the story.

dschuetz 14 hours ago 0 replies      
I suppose Apple truly intends its MacOS to become more mature and streamlined. I also loved the beauty of the Leopard and Lion versions. I don't think that making all icons flat and less colorful makes Apple's products more mature. The decision might have been made when Apple decided to move from glossy devices to matte ones; after that, all the icon designs also moved from glossy to flat. It took almost 10 years, btw.
pbhjpbhj 14 hours ago 1 reply      
Can anyone link me to Apple's research papers on usability that underpin their design ideals? Or is it just fashion in the face of usability?
gastrointestine 11 hours ago 0 replies      
Yeah, they had some amazing, meticulous artistry that they used to put into everything. I remember one of my old bosses (who used to work at Apple) talking about a special guy that Steve had who used to create all the icons. He'd even make all the imagery of the iPhones and such on the home and product pages of the website by hand, all vector... But the newer stuff is less busy and way more navigable. Yes, there's less color, and some concepts are too abstract. But it's all still better for the user, in my opinion... Great, thought-provoking article, though.
johnwheeler 10 hours ago 0 replies      
I like the new look and feel and where the OS has been going in general.
shanacarp 4 hours ago 0 replies      
Do you miss color, or do you miss skeuomorphism?

I'm trying to decide

Sean1708 14 hours ago 0 replies      
For reference, here are screenshots of just the stock El Capitan Apps:


swingbridge 11 hours ago 0 replies      
Meh... Fair point on the icon for the Photos app, but other than that the article is grasping at straws.
rahkiin 15 hours ago 0 replies      
The only thing I miss from previous versions, is the stars effect in Time Machine. It was crazy awesome to move through the stars.
brandonmenc 10 hours ago 0 replies      
It would help if they just started drawing borders around things again.
squozzer 10 hours ago 0 replies      
To me, flat design just looks cheap. Maybe texture and shading for icons makes as much sense as tail fins on cars, but even the most strident minimalist designer still wants to convey some sense of quality with the details they choose to keep.
lacion 13 hours ago 0 replies      
and all of this from a site that looks like it's still hosted on GeoCities
read_it 14 hours ago 0 replies      
This 'flat design' is created by designers who merely know how to create things, and it is easier to create things with one color and flat objects. I don't know why it is so popular.
skrowl 14 hours ago 2 replies      
If you think this is a steep decline, wait until they report earnings after the market closes this evening. Another quarter of iPhone sales drops, and they're expected to announce the iWatch down by over 50% from last year.
joe_momma 9 hours ago 0 replies      
Nail on the head.
jacobmorse 11 hours ago 0 replies      
A post about garish design on a garishly designed blog. Appropriate.
EGreg 9 hours ago 0 replies      
This would not be the case if Steve Jobs still ran Apple:


api 10 hours ago 0 replies      
This is really just a pro-skeuomorphism rant. I'm personally not a fan of skeuomorphism, especially when taken to extremes like the silly leather calendar in previous OS X versions.

The real problem I have with Apple is the abysmal developer service their app stores offer and the technical stagnation of some of their products. They've got plenty of money and should be really really pushing the envelope. Where's my octacore laptop? Where's my really good issue tracker in the app store with 60 minute turnaround?

ebbv 14 hours ago 0 replies      
Wow, this guy really misses the skeuomorphic interface and really doesn't like minimalist design. Sheesh.

Personally I think OS X has been improving as it's gone minimal. I don't like skeuomorphic apps. They seem at first glance like they are easier to use, but the analogy often makes the interface more misleading for novice users when they can't do everything they expect they should be able to.

The Finder side bar is a perfect example, for me, on how minimizing an interface can make it more clear. The old Finder side bar looks cluttered and noisy and it can be hard to easily understand everything that's going on there. The more minimal one is easier to read. Could it be improved at this point by bringing some color back? Probably. But sometimes you have to go "too far" minimal to then find where you can bring things back in a way that really improves things.

The OS X interface was in dire need of a reset like this. If you ask me the main problem with the OS X move towards minimalism is that it's been too slow and didn't go far enough in some areas. I would have preferred that OS X went all the way right off the bat and then we could have already been on the path to bringing some more color and shading back in that will probably happen over the next several years.

At least, though, I agree with the author that the Game Center icon is awful.

chadlavi 11 hours ago 0 replies      
Old Man Yells At Cloud
ommunist 12 hours ago 0 replies      
I miss Glass.
jccalhoun 11 hours ago 0 replies      
I have a hard time taking design advice from a web site whose design consists of a narrow band of text and 2/3-3/4 empty space.
misterdata 12 hours ago 0 replies      
As people start getting used to computers and digital, virtual concepts (such as scrolling, windowing, buttons), the need for them to reflect tangible, real-world objects diminishes - I think abolishing skeuomorphism is the right way to go.
With Launch of AU Passport, Africa Is Now Borderless venturesafrica.com
412 points by juanplusjuan  2 days ago   107 comments top 13
jpatokal 2 days ago 4 replies      
Title is absurdly overhyped. As noted, this is currently limited to diplomats and heads of state (!). Talk is cheap in the AU, and even in the unlikely event that they do manage to "create the conditions for member states to issue the passport to their citizens", it's likely to end up a boondoggle like the APEC Business Travel Card:


Which also grants visa-free travel to APEC economies, but only if you can fulfill a huge list of mostly arbitrary conditions that de facto make it impossible to apply for unless you're sitting in the C-suite of a listed company, have someone to do the paperwork for you and travel a lot.

the_duke 2 days ago 9 replies      

"Although, the passport is currently exclusive to government heads and diplomats, it is here to stay, even though it will take a while before it circulates among non-dignitaries."

So we'll see if it ever goes beyond diplomats.


Since the western media almost never reports anything on Africa, does anybody know how the AU is progressing?

Are they pushing for an EU-like model? What are its goals and principles, and are they actually making progress? (Links welcome)

zingar 2 days ago 1 reply      
The author clearly has no idea of Nkosazana Dlamini Zuma's history. She's a potentially powerful figure in South Africa's ruling ANC who is in a cushy job that is really political exile.

As such, she's free to talk up ideas that sound great in The Economist (conservative macro-economics, zealous climate change action, consumer protection, gay rights, health laws) but never have to convince a single member of the electorate about their merits. Issues are somewhere between "don't care" and "when hell freezes over" in the electorate's mind.

This particular issue is on the extreme end of frozen hell. Poor South Africans perceive foreigners as criminals and/or a direct threat to their livelihoods, and the tension regularly boils over into violence. There's plenty of room for nuance: the violence is extreme but it is a fact that we have an appalling history of (and in some cases ongoing) exploitation, including hiring seasonal foreign workers (illegally) for lower wages than locals.

Regardless, nothing like the EU freedom of movement is ever going to happen.

Republicans in the US are more likely to support Obama immigration reforms for Mexicans than South Africans would vote for millions of Congolese, Zimbabweans, Nigerians, Malawians, Sudanese migrants/refugees to be allowed to hold jobs here.

neximo64 2 days ago 0 replies      
Well, it's not borderless; it's just that you don't need a visa for a fair number of days (1 week to 6 months), and it does not confer the right to work or study.

Currently very few African countries offer visa on arrival or electronic visas for other African countries' citizens.

While it's not the same as the EU's version of a borderless union, it's a great step forward.

zo1 2 days ago 1 reply      
Could we perhaps change the click-baity title? Africa is far from borderless. This is just a PR promotion article, with a corresponding title.
bogomipz 2 days ago 1 reply      
From the AU Summit in Kigali, Rwanda:

"One of the primary goals of the agenda is to guarantee integration and political unity in Africa and this passport will aid the body achieve that goal."

Yet Paul Kagame has been President of Rwanda now for 16 years!!!

Oh and this borderless AU is only available to the "ruling class." So African politics as usual. Nobody believes this nonsense. This is pure spin.

If they really wanted to address jobs they would need to address the fact that their countries are increasingly selling their natural resources and labor to the Chinese. I was shocked when I saw Chinese laborers in coolie hats building roads in Ethiopia and a foreman barking at them in Mandarin. This is not an uncommon sight in Kenya or the DRC either.

nn3 2 days ago 0 replies      
The BBC coverage is much better on this (including a FAQ on common questions):


buyx 1 day ago 0 replies      
Slightly OT, but the article alludes to xenophobic violence in South Africa. South Africa already has a de facto open-borders policy, and its local black population has been squeezed out of many business opportunities by people from the rest of Africa, as well as Bangladesh, Pakistan, India and China, and there are spasmodic eruptions of anti-foreigner violence. The similarities with Brexit and Trump are striking, and show something more complex is going on than simple bigotry.
p1mrx 2 days ago 0 replies      
But the AU country code is already taken by Australia. They should've gone with FU.
mrb 2 days ago 0 replies      
The AU passport is part of Agenda 2063 which establishes many other ambitious goals: http://agenda2063.au.int/en/sites/default/files/agenda2063_p...

So I want to create a reminder for Jan 1st, 2063 in my Google Calendar to "check if Africa has reached these goals". Unfortunately, Calendar won't let me save events past the year 2050 :-(

goatsi 2 days ago 0 replies      
A good article talking about the current issues traveling between African countries: http://qz.com/641025/the-trials-restrictions-and-costs-of-tr...
DavidWanjiru 1 day ago 0 replies      
According to the bulk of the response I saw on African Twitter when this was announced, the simpler (and much cheaper) thing to do would be to remove visa restrictions for anyone with an African passport.
known 1 day ago 0 replies      
"The greater the diversity in a community, the fewer people vote and the less they volunteer, the less they give to charity and work on community projects; In the most diverse communities, neighbors trust one another about half as much as they do in the most homogeneous settings."http://www.boston.com/news/globe/ideas/articles/2007/08/05/t...
Why Uber Engineering Switched from Postgres to MySQL uber.com
552 points by myhrvold  10 hours ago   221 comments top 42
jedberg 7 hours ago 8 replies      
> MySQL supports multiple different replication modes:

> Statement-based replication replicates logical SQL statements (e.g., it would literally replicate statements such as: UPDATE users SET birth_year=770 WHERE id = 4)

Postgres has that too (using a 3rd party tool, but it's an officially supported tool). We were using it on reddit 10 years ago. It caused a lot of problems. I wouldn't call that an advantage for Mysql.

Honestly, reading this it seems like the summary is: "We don't follow great engineering practices so we need a database more forgiving". Which is fine if that's how you want to run your business, but isn't really the death knell for Postgres.

A specific example:

> This problem might not be apparent to application developers writing code that obscures where transactions start and end. For instance, say a developer has some code that has to email a receipt to a user. Depending on how it's written, the code may implicitly have a database transaction that's held open until after the email finishes sending. While it's always bad form to let your code hold open database transactions while performing unrelated blocking I/O, the reality is that most engineers are not database experts and may not always understand this problem, especially when using an ORM that obscures low-level details like open transactions.

Your developer should understand database transactions. But you should make it easier for them by abstracting it so that they don't have to. And in this particular case, I'd say they shouldn't be using the database to do locking around sending a receipt. It should be put into a queue and that queue should be processed separately, which avoids the transaction problem altogether.
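A minimal sketch of that queue pattern (hypothetical names, with sqlite3 standing in for the real database): the transaction commits before any slow I/O starts.

```python
import queue
import sqlite3

email_queue = queue.Queue()

def place_order(db, user_id, amount):
    # Keep the transaction short: write the order and commit immediately.
    # No network I/O happens while the transaction (and its locks) are open.
    with db:  # a sqlite3 connection used as a context manager commits on success
        db.execute("INSERT INTO orders (user_id, amount) VALUES (?, ?)",
                   (user_id, amount))
    # Hand the slow work (sending the receipt email) to a separate worker;
    # the database transaction is already committed at this point.
    email_queue.put({"user_id": user_id, "amount": amount})

def drain_emails(send):
    # Runs outside any database transaction, e.g. in a worker process.
    while not email_queue.empty():
        send(email_queue.get())

def demo():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (user_id INTEGER, amount REAL)")
    place_order(db, 4, 9.99)
    sent = []
    drain_emails(sent.append)
    rows = db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    return rows, sent
```

The in-memory queue is just an illustration; in practice the hand-off would go through something durable like a job table or a message broker.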

ledjon 8 hours ago 7 replies      
I would argue that most of these Postgres "flaws" are actually advantages over MySQL when you look at them holistically rather than the very specific Uber use-case.

Postgres's MVCC is superior (can rollback DDL, can add indexes online, can have open read transactions for a VERY long time without impacting other parts of the system)

Postgres supports many types of indexes, not just b-tree. One thing it doesn't have is clustered b-tree indexes... which is really what MySQL does that makes it somewhat "better." I wonder how Uber adds an index to a table that already has 1B+ rows in it with mysql?

Postgres's WAL-level replication is a better guarantee of actually replicating the data correctly. I cannot tell you how many times I've had to tell my boss that the "MySQL replicas might be slightly out of sync with the master" because of various replication issues. The way it handles triggers and scheduled events alone is garbage and can very easily break replication and/or silently cause inconsistency.
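To illustrate the drift (a toy, pure-Python simulation, not MySQL code; each server's own RNG state stands in for nondeterministic expressions like NOW() or UUID() inside a statement-based-replicated statement):

```python
import random

def apply_statement(table, rng):
    # Statement-based replication: each server re-executes the logical
    # statement, so nondeterministic expressions are re-evaluated per server.
    table["token"] = rng.randint(0, 10**6)

def apply_row_image(table, row_image):
    # Row-based (or WAL-level) replication: the primary ships the resulting
    # row values, so every replica applies exactly the same data.
    table.update(row_image)

def demo():
    primary, replica_a, replica_b = {}, {}, {}
    # Re-executing the statement on primary and replica, each with its own
    # RNG state, can produce different results on each server.
    apply_statement(primary, random.Random(1))
    apply_statement(replica_a, random.Random(2))
    # Shipping the primary's actual row keeps the replica identical.
    apply_row_image(replica_b, dict(primary))
    return primary["token"] != replica_a["token"], replica_b == primary
```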

As for data corruption, if there is a bug that causes corruption, then there is a bug. I don't think that is a fundamental design flaw as implied in this article. You shouldn't rely on a half-assed replication design to accidentally save you from a data corruption bug. There are many downsides to the design MySQL has that are simply not listed here.

I have been both a professional MySQL administrator as well as Postgresql (as well as SQL Server and many NoSQL engines). Many of these Postgres issues are only issues at crazy huge scale, and I would say at that point you probably want to move away from relational anyway. MySQL has its own very large set of problems at scale as well.

It sounds like Uber is using MySQL as just a data bucket with primary keys ("Schemaless") which is good -- because you can't alter tables to save your life with MySQL.

At the end of the day, each developer/business needs to use what works for them, but I would really shy away from pointing to this article as a linchpin in the "MySQL vs. Postgres" war (if there even is such a thing.)

sam_pointer 9 hours ago 1 reply      
We did something very similar at EA Playfish, at least one alumni of which is part of the Uber engineering team.

We used a 2 column InnoDB-backed table for all of our data storage, massively sharded, and run in a 3-host master-slave-slave configuration.

At that time EC2 would routinely kill hosts without the courtesy of a poke via ACPI and as such we became very good at quickly recovering shards. In a nutshell this mechanism was to have the new host contact a backup slave, perform an lvm snap, pipe the compressed snap over a TCP connection, unroll it and carry on, letting replication take up the delta.

That enabled us to not only manage the 10 million or so daily active users of that title, but was also the platform under the 12 or so additional titles that studio had.

We had lots and lots of very simple things and failures were contained.

I think at the time we were the 3rd-largest consumer of EC2 after Netflix and "another" outfit I never learned the name of. EA being what it was, however, we were never permitted to open source a lot of the cool stuff Netflix and ourselves seemed to develop in parallel.

fusiongyro 8 hours ago 0 replies      
The article could be summed up as "Postgres is not a distributed database." MySQL isn't either, although it certainly has more friendly replication technology. I think it's a lot more likely that what's really happening here is that they've designed their "schemaless" schema or its supporting software to handle the kind of soft errors that MySQL is permitting and Postgres was not.

We have MySQL replication across the country where I work and I certainly wouldn't characterize it as robust; it fails every 3-6 months. MySQL replication is certainly a lot older and easier to use than Postgres's, but SQL databases are fundamentally CP systems. When you say "This design means that replicas can routinely lag seconds behind master, and therefore it is easy to write code that results in killed transactions" it sounds like you're blaming the way replication was implemented for a physical problem. There is no way to design a replication system such that two highly-consistent databases can achieve perfect availability in the face of real-world networks. A worse protocol can exacerbate the problem, but a better one can't make it go away.

I have never seen corruption with Postgres (unlike MySQL), but I have never tried cross-datacenter replication with it. Apart from that, Postgres generally seems to do much better with consistency than MySQL does, where DDL statements are not transactional, etc. So I am not surprised to hear that their system trips harder on Postgres's more aggressive consistency.

In short, I suspect a more robust solution to their problem is a NoSQL database. On the other hand, it sounds like they want a combination of availability and consistency that will be difficult to get off-the-shelf. I'm glad they found a way to make it work. I wouldn't generally choose Postgres for a scalable system with an aggressive availability constraint--but then again, I wouldn't choose MySQL either, and I generally avoid problems that demand highly scalable, highly available solutions.

drob 6 hours ago 0 replies      
We've hit a lot of the same fundamental limits scaling PostgreSQL at Heap. Ultimately, I think a lot of the cases cited here in which PostgreSQL is "slower" are actually cases in which it does the Right Thing to protect your data and MySQL takes a shortcut.

Our solution has been to build a distribution layer that makes our product performant at scale, rather than sacrificing data quality. We use CitusDB for the reads and an in-house system for the writes and distributed systems operations. We have never had a problem with data corruption in PostgreSQL, aside from one or two cases early on in which we made operational mistakes.

With proper tuning and some amount of durability-via-replication, we've been able to get great results, and that's supporting ad hoc analytical reads. (For example, you can blunt a lot of the WAL headaches listed here with asynchronous commit.)
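For reference, the asynchronous-commit knob mentioned here is a per-session/per-transaction Postgres setting. A minimal sketch of how it might be applied (a psycopg2-style cursor is assumed; the table name is hypothetical):

```python
# synchronous_commit=off lets COMMIT return before the WAL record is
# flushed to disk, trading a small window of possible transaction loss
# (not corruption) for much lower commit latency -- one way to blunt
# the WAL headaches described above.

def write_event_with_relaxed_durability(cur, payload):
    # SET LOCAL scopes the setting to the current transaction only,
    # so durability-critical transactions elsewhere are unaffected.
    cur.execute("SET LOCAL synchronous_commit = off")
    cur.execute("INSERT INTO events (payload) VALUES (%s)", (payload,))
```

The point of `SET LOCAL` (rather than `SET`) is that the relaxed durability applies only to the transaction that opted in.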

pella 8 hours ago 1 reply      
Roadmaps ( PostgreSQL ) 2016-2017-...

* Postgres Professional roadmap ( Pluggable storages, Multimaster cluster with sharding, Effective partitioning, Adaptive query planning, Page-level data compression, Connection pooling, Native querying for jsonb with indexing support, ....) https://wiki.postgresql.org/wiki/Postgres_Professional_roadm...

* EnterpriseDB database server roadmap ( Parallelism, Replication, Vertical Scalability, Performance ) https://wiki.postgresql.org/wiki/EnterpriseDB_database_serve...


And "Scalable PostgreSQL for real-time workloads https://www.citusdata.com " --> https://github.com/citusdata/citus

thinkingkong 9 hours ago 3 replies      
Posts like this are important.

We too often rely on a buzz-word heuristic, and that's how you end up with dozens of random technologies that are harder to maintain and don't necessarily solve any of your problems. This method is good because it shows that when you understand the problem the right way, you can find the right solution, even if by popularity it looks like a "step backwards".

Massive Kudos.

NhanH 9 hours ago 1 reply      
Well, this is heresy. Does that mean we are now officially boycotting Uber?

Jokes aside, one thing I've been trying to figure out for a while is the limit at which certain components/systems break down. Basically, something along the lines of "given X records, this operation would take Y time, or would cause Z A B C problems". I've actually got developer friends asking me how fast a simple "SELECT * FROM X WHERE index=?" would be on a million-row table, since they were surprised that some NoSQL DB could do a query on a hundred million rows in a few seconds.

I guess that's part of why you only learned how to scale after having done it once.

forgotpwtomain 8 hours ago 1 reply      
Great write up. A couple points -

I'm not sure this post is illustrative of any generally applicable considerations (re: the title) in the choice of Postgresql vs MySQL, since Uber seems to no longer be using a relational model for most of their data and is using MySQL effectively as a key-value store.

> say a developer has some code that has to email a receipt to a user. Depending on how it's written, the code may implicitly have a database transaction that's held open until after the email finishes sending. While it's always bad form to let your code hold open database transactions while performing unrelated blocking I/O, the reality is that most engineers are not database experts and may not always understand this problem, especially when using an ORM that obscures low-level details like open transactions.

I have to very seriously disagree here, ORMs make a lot of things easy - and you can get away with building stuff for a while without understanding the underlying databases or SQL but only to a certain scale (I'd say more like medium-scale, definitely not large or Uber level). If you have engineers writing code that interacts with a database without understanding transactional semantics, the engineer in question not the database is the problem.
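The receipt-email scenario quoted above is easy to make concrete. A minimal sketch (the table, helper, and DB-API-style connection semantics are all hypothetical):

```python
# The anti-pattern and the fix. The connection object follows the common
# DB-API convention where a `with conn:` block spans one transaction.

def fetch_receipt(conn, order_id):
    cur = conn.cursor()
    cur.execute("SELECT * FROM receipts WHERE order_id = %s", (order_id,))
    return cur.fetchone()

def email_receipt_bad(conn, send_email, order_id):
    # BAD: the transaction (its snapshot, its locks, its "idle in
    # transaction" slot) stays open while we block on unrelated SMTP I/O.
    with conn:
        row = fetch_receipt(conn, order_id)
        send_email(row)              # slow network call inside the txn

def email_receipt_good(conn, send_email, order_id):
    # GOOD: finish the database work first, then do the slow I/O.
    with conn:
        row = fetch_receipt(conn, order_id)
    send_email(row)                  # transaction already committed
```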

> We started out with Postgres 9.1 and successfully completed the upgrade process to move to Postgres 9.2. However, the process took so many hours that we couldn't afford to do the process again.

There seem to be ways [0][1] to do online upgrades with Postgres (before logical decoding in 9.4), although I haven't personally used them. Not sure if they explored these options at Uber or not?

[0] https://github.com/markokr/skytools

[1] http://slony.info/

BadassFractal 9 hours ago 2 replies      
I've heard from technical leaders at multiple now well-established unicorns how they'd never use Postgres, or switched from Postgres, simply because MySQL has a lot more tooling built in and many more people have been exposed to its shortcomings at "web scale", so it's very well known where and when things will break.

Disclaimer, I'm a hardcore Postgres user myself, but I also keep tabs on the other tools.

Illniyar 8 hours ago 1 reply      
So the major issue detailed here is that postgres basically uses immutable rows, which creates performance issues with writes.

Just read about their new schemaless db in their blog, and the first paragraph contains this:

"The basic entity of data is called a cell. It is immutable, and once written, it cannot be overwritten. (In special cases, we can delete old records.) A cell is referenced by a row key, column name, and ref key. A cell's contents are updated by writing a new version with a higher ref key but same row key and column name."

So, mmm..., I'm not saying that postgres didn't pose a problem for them, but I think postgres' db model fits their new db better than mysql's does. They probably had to work really hard to get mysql to work like postgres.
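The quoted cell model is easy to sketch in a few lines (the field names come from the quote; the toy in-memory implementation and example keys are mine):

```python
# Append-only "cell" store: a cell is addressed by (row_key, column_name,
# ref_key) and is never overwritten; reads see the highest ref_key.

class CellStore:
    def __init__(self):
        self._cells = {}   # (row_key, column) -> {ref_key: body}

    def put(self, row_key, column, ref_key, body):
        versions = self._cells.setdefault((row_key, column), {})
        if ref_key in versions:
            raise ValueError("cells are immutable; write a higher ref_key")
        versions[ref_key] = body

    def get(self, row_key, column):
        versions = self._cells[(row_key, column)]
        return versions[max(versions)]   # latest version wins

store = CellStore()
store.put("trip:1", "status", 1, {"state": "requested"})
store.put("trip:1", "status", 2, {"state": "completed"})
print(store.get("trip:1", "status"))   # {'state': 'completed'}
```

Which is exactly an immutable-row model, hence the parent's point that Postgres's MVCC design is a closer conceptual fit than InnoDB's update-in-place.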

That issue aside, it looks like two things needed to be done with postgres that would have solved their problems: have secondary indexes point to the primary id, and do logical replication (which they say a plugin solved in 9.4).

Is this a case of "I got burned by something so I won't use it again"?

0xmohit 9 hours ago 5 replies      
Facebook maintains its own fork [0] of MySQL. A couple of interesting talks are also available: MySQL at Facebook, Current and Future [1] and Massively Distributed Backup at Facebook Scale [2].

[0] https://github.com/facebook/mysql-5.6

[1] https://www.youtube.com/watch?v=jqwegP9xwVE

[2] https://www.youtube.com/watch?v=UBHcmP2TSvk

pritambaral 9 hours ago 1 reply      
I wonder what the design decisions are behind (or what it would take) to make Postgres store secondary indexes on disk like InnoDB does. Sure, the extra index lookup through the primary index is a cost, but write amplification can surely be a greater concern. Ultimately, it would be nice if Postgres gave the DBA a choice here, if not moved outright to secondary-index indirection through the primary index like InnoDB does.
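The two index layouts being compared can be sketched with plain dictionaries (purely illustrative structures, not either engine's actual on-disk format):

```python
# Heap of rows addressed by a physical location (ctid-like tuple).
heap = {(0, 1): {"id": 7, "email": "a@x"}}

# Postgres-style: secondary index maps key -> physical location directly.
pg_email_idx = {"a@x": (0, 1)}

# InnoDB-style: secondary index maps key -> primary key; the clustered
# primary index then maps primary key -> row.
inno_email_idx = {"a@x": 7}
inno_primary = {7: {"id": 7, "email": "a@x"}}

def pg_lookup(email):
    return heap[pg_email_idx[email]]            # one index traversal

def inno_lookup(email):
    return inno_primary[inno_email_idx[email]]  # two index traversals

# The trade-off: when a Postgres tuple moves (any non-HOT update writes a
# new tuple at a new location), every secondary index entry must be
# rewritten; in the InnoDB layout only the clustered index changes unless
# the primary key itself does.
```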
scotty79 5 hours ago 0 replies      
Funny how the article is just:

- we used X in a fashion that suited us best

- it caused us problems Y because of some technicalities of X

- so we switched to Z and we could avoid Y thanks to how Z handles the technicalities differently than X

and the top rated HN comments are:

- you used the X wrong

- all the technicalities of X that caused you problems Y are actually superior features of X

appleflaxen 9 hours ago 3 replies      
This was a great overview and write-up.

Anyone know why they are using MySQL over MariaDB[1]?

1. https://mariadb.org/

markpapadakis 5 hours ago 1 reply      
Great write-up. A few observations:

1. The encoding and translation schemes of Postgres and MySQL/InnoDB are well described in the blog post, and I would also agree that InnoDB's design is, all things considered, better for all the reasons outlined in the post.

2. I don't understand why anyone still uses lseek() followed by read()/write() and not the pread()/pwrite() syscalls. It's trivial to replace the pair of calls with one. Aerospike is another datastore that resorts to pairs of seek/read-write instead of pread/pwrite calls.

3. The process-per-connection model makes no real sense nowadays - although, to be fair, there is today practically no difference in terms of footprint between OS threads and OS processes (other than memory and FD sharing semantics, they are practically the same). It's still more appropriate to use threads (although I'd argue maintaining a pool of threads for processing requests and one/a few threads for multiplexing network I/O is the better choice).

4. ALTER TABLE is obviously a pain point with MySQL, although I am not really sure many users with large datasets care; they probably figured out long ago it was going to be an issue and designed and expanded accordingly. It's also a relatively rare operation. That said, other than using MySQL (or any other RDBMS) to build the data plane for an elaborate, distributed KV store, one should consider Salesforce's approach too. Their tables have some 50 or so columns, and the column names are generic (e.g. column_0, column_1, ...). They have a registry that assigns column indices (e.g. column_0) to a specific high-level entity type (e.g. customer title, or price), and whenever they need to query, they just translate from the high-level entity to the actual column names and it works. They also, IIRC, use other tables to index those columns (e.g. such an index table can have just 3 columns: table id, column index, value) and they consult that index when needed (FriendFeed did something similar).

5. Cassandra should have no problem supporting the operations and semantics of Schemaless as described in their blog posts. However, given they already operate it in production, they probably considered it and decided against it.
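Point 2 is easy to demonstrate: pread() takes an explicit offset, so it collapses the lseek()+read() pair into one syscall and also leaves the shared file offset untouched (which matters when threads share a file descriptor). A quick sketch using Python's thin wrappers over the same syscalls:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")

# Two-syscall pattern: move the (shared) file offset, then read.
os.lseek(fd, 6, os.SEEK_SET)
two_calls = os.read(fd, 5)

# One-syscall pattern: explicit offset, shared offset unaffected.
one_call = os.pread(fd, 5, 6)

assert two_calls == one_call == b"world"
os.close(fd)
os.remove(path)
```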

0xmohit 9 hours ago 1 reply      
Worth quoting from the article:

 "Accordingly, using pgbouncer to do connection pooling with Postgres has been generally successful for us. However, we have had occasional application bugs in our backend services that caused them to open more active connections (usually "idle in transaction" connections) than the services ought to be using, and these bugs have caused extended downtimes for us."
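For anyone wanting to spot this failure mode, pg_stat_activity exposes it directly. A sketch (the pg_stat_activity columns are standard; the 5-minute threshold and wrapper function are my own illustration):

```python
# Find connections that have been sitting "idle in transaction" for a
# suspiciously long time -- the exact bug class described above.
IDLE_IN_TXN_SQL = """
SELECT pid, now() - state_change AS idle_for, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND now() - state_change > interval '5 minutes'
"""

def find_stuck_connections(cur):
    cur.execute(IDLE_IN_TXN_SQL)
    return cur.fetchall()

# Postgres 9.6+ can also enforce this server-side:
#   SET idle_in_transaction_session_timeout = '5min'
```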

pella 4 hours ago 0 replies      


nierman 1 hour ago 0 replies      
with respect to "Difficulty upgrading to newer releases":

pg_upgrade has a --link option which uses hard links in the new cluster to reference files from the old cluster. This can be a very fast way to do upgrades, even for large databases (most of the data will look the same between major versions; perhaps only some mucking with system catalogs is required in the new cluster). Furthermore, you can use rsync with --hard-links to very quickly upgrade your standby instances (creating hard links on the remote server rather than transferring the full data).

that is all referenced in the current documentation: https://www.postgresql.org/docs/current/static/pgupgrade.htm...

jasode 9 hours ago 0 replies      
Fyi... a related (not duplicate) discussion of a previous Uber story: https://news.ycombinator.com/item?id=10923848
vanviegen 4 hours ago 0 replies      
One major advantage of MySQL's clustered indexes that the article doesn't mention is that, although secondary-key reads may be a little slower, primary-key reads will be faster. The row data lives in the primary key index, so there is no need to reference an additional database page (possibly causing random I/O).

This is especially relevant when doing range queries over the primary key. Imagine a table containing billions of chat messages, from which you want to retrieve a single conversation history. With a clustered primary key on (conversation id, message id), MySQL would need to process just a couple of database pages. Postgres, on the other hand, would need to reference a semi-random page for each of the messages.

Now imagine a 10k message chat conversation, a table too large to fit into RAM, and storage by means of spinning rust (yeah, yeah, I know what year it is :-)). The difference would be somewhere between 2 and 3 orders of magnitude.
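Back-of-the-envelope numbers for this example (page size and rows-per-page are illustrative assumptions):

```python
messages = 10_000        # messages in one conversation
rows_per_page = 100      # e.g. ~16 KB page / small rows

# Clustered on (conversation_id, message_id): the conversation is
# physically contiguous, so the scan touches consecutive pages.
clustered_pages = messages // rows_per_page   # sequential page reads

# Heap table: each message tuple can live on an unrelated page, so in
# the worst (cold-cache) case every row costs one random page read.
heap_pages = messages                         # random page reads

print(clustered_pages, heap_pages)   # 100 10000
```

At roughly 10 ms per random seek on spinning disks, 10,000 random reads is on the order of 100 seconds, versus a fraction of a second for 100 mostly-sequential reads, which is in the ballpark of the parent's 2-3 orders of magnitude.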

viraptor 2 hours ago 0 replies      
I'd like to see their migration strategy as well. I mean, they say moving from pgsql 9.2 to a higher version (which would then allow online upgrades) is too much work. Yet they'll have to migrate to mysql, which will take much more engineering effort. For anything close to realtime, they'll need to copy the old data while at the same time forking new writes into both the pgsql slaves and the new mysql servers. And they can't use the WAL for that without some advanced processing.

I hope this follows in the next blog post.

ismdubey 6 hours ago 0 replies      
The fact that Uber scaled to so many users with Postgres gives me such relief. For now, I am good!
trequartista 9 hours ago 0 replies      
Wow, this is such a detailed analysis. Having used Postgres and suffered issues with data replication as well as database crashes, this post was really helpful.
cdelsolar 8 hours ago 0 replies      
So basically, if you don't intend to use it as a relational database, and you have enough scale to run cross-data-center (and across-the-world) master-master replication, then you should maybe switch from PostgreSQL to MySQL?
polskibus 9 hours ago 1 reply      
Does anyone know if citusdb or enterprisedb improve on the postgresql issues mentioned in the post vs last postgresql version?
brandur 8 hours ago 1 reply      
Interesting post! While I suspect that a MySQL installation is just as likely to have its own problems in the long run, I'm not smart enough to provide any kind of compelling point-by-point refutation. However, a number of the points made strike me as having possible trade-offs that were not really addressed in-depth.

My summary of the arguments against Postgres and some basic thoughts on each:

1. Writes are more expensive because all secondary indexes must be updated with a new physical location.

This may be true, but the MySQL model of looking up rows via the primary key from secondary indexes means that reads are inherently more expensive. They even mention this:

> This design means that InnoDB is at a slight disadvantage to Postgres when doing a secondary key lookup, since two indexes must be searched with InnoDB compared to just one for Postgres.

So it seems like a classic read vs. write trade-off.

I'm also a little skeptical of any performance claims that don't include any numbers. It's possible that efficient coding in Postgres makes this much more of a wash in terms of performance than claimed here.

2. Replication is less efficient because it's sending a lot of physical information out along the stream.

This is quite true, but IMO unlikely to be a major issue for most users unless they're dealing with a huge amount of data and streaming it over a slow connection (i.e. across the continent, like Uber's disaster recovery center).

3. Data corruption from a bug found in 9.2.

Certainly a bad situation, but IMO not really a valid claim against Postgres in general. 9.2 is way behind at this point, and there's not much to say they wouldn't have encountered a similar bug or something worse in MySQL in all that time, especially operating at scale.

To give a counter-anecdote, I operated Postgres at scale for a long time across many versions starting at 9.1 and was lucky enough to have never once encountered a bug with data corruption.

4. Postgres' MVCC model makes it easy for replicas to accidentally fall behind their master.

This one is valid (and annoying), but there are very good reasons for it, and you have some switches to control the behavior depending on whether you value transactions finishing on followers or prompt replication more highly.

5. Upgrades are difficult because the WAL stream works at a physical level and is not compatible between database versions.

Again, this is valid, but statement-based replication is a scary idea. Row-level replication is more interesting and probably something that Postgres should have, though.

Some good news is that Postgres is getting closer to logical WAL streaming, which should make in-place upgrades possible.

frik 9 hours ago 0 replies      
Also check out highscalability.com for more stories that value MySQL and its great InnoDB engine: http://highscalability.com/blog/category/mysql
vbezhenar 8 hours ago 4 replies      
Why would anyone run hundreds of connections? A server can only process number_of_processor_cores connections at once. Sure, a few connections might be waiting for I/O, but not hundreds, unless the database is very untypical.
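The standard pool-sizing heuristic behind this point is connections ~ cores * (1 + wait_time / compute_time). A sketch (the numbers are illustrative):

```python
def pool_size(cores, wait_ms, compute_ms):
    # Each core can keep one extra connection busy for every unit of
    # time a query spends waiting on I/O rather than computing.
    return int(cores * (1 + wait_ms / compute_ms))

# CPU-bound queries: hundreds of connections buy nothing.
print(pool_size(16, 0, 5))    # 16

# I/O-heavy queries spending 9 ms waiting per 1 ms computing:
print(pool_size(16, 9, 1))    # 160
```

So "hundreds of connections" only makes sense when queries are dominated by wait time, which is exactly the "very untypical" case the parent allows for.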
snarfy 7 hours ago 1 reply      
I wonder how much of this could have been solved by using a different file system. There is all of this talk about the physical layer but no mention of the file system used.

> Typically, write amplification refers to a problem with writing data to SSD disks: a small logical update (say, writing a few bytes) becomes a much larger, costlier update when translated to the physical layer.

This is exactly the type of problem solved by the file system layer.

manigandham 7 hours ago 1 reply      
They should use SQL Server (which has great replication abilities, although horizontal scale out is still difficult) or MemSQL (which is distributed, scalable, and can do everything they need).

Or use Cassandra which is a perfect fit (or ScyllaDB which is a better version of it).

This all sounds like an aversion to just paying for or using better products when the problem is easily solved.

mspradley 8 hours ago 1 reply      
Why did they not consider Oracle or MS SQL Server? They can afford the licensing and both have numerous replication technologies to choose from.
hyperion2010 9 hours ago 1 reply      
This is a fantastic read. I hope the pg folks can turn as many of the issues brought up here into bug reports as possible (I think many of the issues, especially re: replication, are known), this kind of feedback is invaluable.
distantsounds 8 hours ago 0 replies      
Perhaps their engineers can design a web page that allows the scroll wheel to work.
raisyer 9 hours ago 1 reply      
while Postgres might be better if you use it 'as-is'....the community around MySQL is much bigger and the tools available are more mature...just goes to prove that even if something is not-that-good..it still might be successful, scalable and popular if there is a strong community behind it..
xaprb 8 hours ago 0 replies      
Why not PostgreSQL? (Sorry, someone had to say it.)
zeeshanm 9 hours ago 3 replies      
I like the sample data they have used:

  id  first      last          birth_year
  1   Blaise     Pascal        1623
  2   Gottfried  Leibniz       1646
  3   Emmy       Noether       1882
  4   Muhammad   al-Khwārizmī  780
  5   Alan       Turing        1912
  6   Srinivasa  Ramanujan     1887
  7   Ada        Lovelace      1815
  8   Henri      Poincaré      1854

kev009 4 hours ago 0 replies      
I like this transparency, I know I never want to work at uber.
slantedview 9 hours ago 4 replies      
The connection handling section was surprising to me - reading that Postgres uses a process per connection! This is pretty shocking to me, in a bad way.
cnfjdnx 5 hours ago 0 replies      
Uber """"""""""engineering""""""""""
swasheck 9 hours ago 6 replies      
this reads like a laundry list of buzzwords designed to justify not throwing any effort into postgresql and just going with a new shiny toy (not that mysql is new. yes, i know it's been around for a while).

it happens everywhere.

Wire open-sourced github.com
484 points by arunc  3 days ago   132 comments top 28
gfosco 3 days ago 3 replies      
A link to a GitHub organization isn't great.. I'd say this is better: https://medium.com/@wireapp/you-can-now-build-your-own-wire-... but even that doesn't clearly explain what Wire is. Visit https://wire.com to find out it's an encrypted video and group chat app.
grizzles 3 days ago 1 reply      
A bold move by Wire. Open source is still a very disruptive play, and the world needs something like this. If they manage this well and triple down on developer engagement, it could work out quite nicely for them. EDIT: Thread title is slightly misleading. It looks like they did a Telegram. There is no server here.
deltaprotocol 3 days ago 6 replies      
I must say that my first impression is beyond positive.

One to one and group chats, group video and audio calls, GIF search built-in, doodles, the best implementation of photos in the message stream that I've seen, poking and playable Spotify and Soundcloud music by just sharing links? All with end-to-end encryption?

I have that "too good to be true" feeling but, still impressed. Just waiting for possible audits and more feedback from the security community.

Edit: It's also Switzerland-based, already supports Win10, MacOS, Web, Android and iOS, and to top it off has the cleanest design I've seen in a messaging app.

laksjd 3 days ago 0 replies      
They offer a password reset function. How does that work? Do they hold my private key in escrow? I'd certainly hope not! Or does the password reset work by creating a new keypair? If so, does this at least generate WhatsApp style security warnings for people chatting with me?

With some digging I've found a way to verify key fingerprints so that's nice, but it's manual, not QR assisted :(

saghul 3 days ago 0 replies      
Lots of good stuff in there, thanks Wire! I just wish they had gone with something other than GPLv3 for the libraries, like LGPL. Looks like they changed them in December, from MPL 2.0 to GPLv3.

At any rate, there are lots of us who can use the code with that license :-)

melle 3 days ago 1 reply      
I believe all their good intentions and I do hope they succeed. But for me it's too early to tell whether their business model will hold. If they build up a sufficiently large user base but fail to monetize it and sell the company to e.g. Microsoft or Facebook, then I doubt much of their original privacy/openness will remain.

Another thing that I wonder about: Does being Swiss-based give them a privacy advantage?

nanch 3 days ago 1 reply      
See https://wire.com for more information since the linked repos provide no context. "Crystal clear voice, video and group chats. No advertising. Your data, always encrypted."
jacek 3 days ago 0 replies      
I am a user. I switched myself and my family from Skype a few months ago and it has been great so far. Quality of video and audio is great, Android app works very well (better than web based desktop versions). And it also works in a browser, which is great for me (Linux user).
prayerslayer 3 days ago 1 reply      
Not sure if these are for realsies, but there are some API keys in the webapp repository:



mei0Iesh 3 days ago 0 replies      
Thank you! Wire is the best, with multiple device support, clean mobile app, and a desktop client. It'd be nice if it were a standard open protocol so everyone could implement it, and find a way to allow federation. I'd pay to help support.
mahyarm 3 days ago 3 replies      
Now all this needs is a few good third party audits, verifiable builds and it's the holy grail of encrypted communications!
mtgx 3 days ago 1 reply      
I've been asking for three things from Signal for the past almost two years:

1) desktop app

2) video call support

3) self-deleting messages

Signal finally (sort of) delivered a desktop app, but it still doesn't have the other two. Wire has the first two, but it's still lacking the last one. I hope one of them will have all three of these features soon.

jalami 3 days ago 1 reply      
Side note, but it's kind of strange that images on their site require cookies enabled to view. I didn't dig into a reason, I just white-list the sites I want to use cookies and found it odd that there were big white spaces before doing so.
20andup 3 days ago 1 reply      
I wonder what the business model is?
_bojan 3 days ago 1 reply      
Didn't see that coming. I think Wire is struggling to get new users and this move could put them on the map.
pedalpete 3 days ago 3 replies      
I don't get how they can make statements like this "Only Wire offers fully encrypted calls, video and group chats available on all your devices". Webrtc is encrypted by default.
happyslobro 3 days ago 2 replies      
I found a file that is available as either MIT or GPL. Or is it only available under a union of the terms of both licenses? An intersection? Who knows, IANAL. https://github.com/wireapp/wire-webapp/blob/0cf9bf4/aws/main...

Why do people copy the license all over the place like that?

redthrow 3 days ago 1 reply      
Why does this Android app require a phone number to sign up?

At least Hangouts lets me use the app without a phone number.

sanjeetsuhag 3 days ago 2 replies      
Can anyone explain to me why they use an UpsideDownTableViewController ?
stemuk 3 days ago 3 replies      
I wonder how they encrypted their chat on the web client. Since the Signal protocol is kind of the gold standard right now, their solution might in the end be the better one.
iamleppert 2 days ago 0 replies      
I wish they would have preserved the commit history. Future note to those open sourcing projects:

Preserve the commit history! It's very useful! Even if it takes more effort to review the history and remove stuff that you're not allowed to show or whatever.

mrmondo 2 days ago 0 replies      
Sorry if I've missed it somewhere but I'm looking for some independent, transparent reports on its security implementation. I was wondering if anyone could help me with finding this - or if perhaps they haven't been done I guess that would answer my question?
maxpert 3 days ago 0 replies      
Good to see people using Rust in production :)
aleken 3 days ago 1 reply      
Otto is my new best friend. I cannot see any information about a bot API on their site though...
yetii 22 hours ago 1 reply      
Android client uses Scala - might be a game changer for Scala on Android
arthurk 3 days ago 3 replies      
Is there a way to download the OSX app without the Mac App Store?
vasili111 3 days ago 2 replies      
Where is Windows client source code?
07 2 days ago 0 replies      
Hmm, seems interesting.
Yahoo sold to US telecoms giant Verizon bbc.co.uk
415 points by kartikkumar  2 days ago   350 comments top 42
TheMagicHorsey 2 days ago 8 replies      
I did a little work as a developer for Verizon's ecosystem back in 2007. Let me just say, from what I could see, they were a huge, bureaucratic company without a single redeeming cultural trait. The managers seemed like a bunch of frat boys who had been raised up into positions of authority through some inscrutable lottery, and none of them seemed to possess an iota of analytical capability or human management talent.

I left that position and later worked for a bunch of tech startups and larger companies that, while not perfect, at least had enough good people in them to redeem my view on the human race.

I cannot imagine why anyone would actually work as a mid-level worker in Verizon unless you had absolutely no other options in life.

chollida1 1 day ago 5 replies      
Yahoo got all cash, which is nice as it gives them more flexibility; the last thing they'd want after the Alibaba spin-off fiasco is to have to try to sell $4.83 billion in Verizon stock.

Bloomberg just put up a headline saying that Yahoo will return all the cash, minus operating costs, to the shareholders. If anyone has any guess as to how much "operating costs" will be, please email :)

So I guess cue Alibaba and SoftBank now to come in and divvy up the rest of the company?

From Matt Levine:

> "Marissa Mayer, Yahoo's chief executive, is not expected to join Verizon, but she is due to receive a severance payout worth about $57 million," bringing her total compensation for about four years of work at Yahoo to $218 million.

Wow! So I guess the now decade-old valley trick of spending a "few" years at Google to start your career and leveraging the Google name to get another job really is the way to go :)

tehwebguy 2 days ago 10 replies      
Verizon shouldn't be allowed to own any web properties. They inject a unique subscriber identifier into your HTTP requests unless you turn it off.


cs702 2 days ago 4 replies      
Less than the $5.7 billion Yahoo! paid in 1999 for Mark Cuban's Broadcast.com: http://money.cnn.com/1999/04/01/deals/yahoo/

Times -- and fortunes -- change.

hkmurakami 2 days ago 1 reply      
>Shortly afterwards, Verizon announced it would start combining data about its mobile network subscribers - which is tied to their handsets - with the tracking information already gathered by AOL's sites.

I was talking to a friend who is in the Telecom industry in Japan, and apparently this sort of arrangement is not legal there. EU is generally wary of such arrangements as well. So this is a merger whose product synergies would not have been possible in other jurisdictions.

In recent years I recall advertisers being skeptical about the quality of eyeballs on Yahoo!'s platform. The pitch to the same advertisers already seems more compelling, though the premise does make me feel uneasy.

And I imagine Mayer will be getting her full 9 figure severance package. So much for rewarding success and having interests aligned.

STRML 2 days ago 1 reply      
So this means Mozilla's insanely bad clause (for Yahoo) [1] kicks in?

1. http://www.recode.net/2016/7/7/12116296/marissa-mayer-deal-m...

ubersync 2 days ago 3 replies      
Why isn't this mentioned/discussed anywhere? In 2008 Microsoft offered $45 billion to acquire Yahoo. Then-Yahoo CEO Jerry Yang rejected the offer, saying that the bid "substantially undervalues Yahoo." Microsoft raised the bid to $50 billion, and it was yet again rejected. After that MS withdrew its bid. 8 years later, it sells at 10% of the original offer!
cocktailpeanuts 2 days ago 3 replies      
To the guys on this thread who talk like they're mini-pundits who know it all, saying this is what Yahoo gets for walking away from larger valuation offers a decade ago: no one thinks you're intelligent for pointing out something years after the fact. Tech companies come and go, and I would bet that a lot of the hottest tech companies right now will meet the end just like Yahoo did, in a decade (or less).

Imagine if Google becomes irrelevant in 10 years, and end up selling itself to whichever hottest tech company that will be around then. Will you say "Told ya! Google should have sold to Yahoo when Yahoo was going to acquire them for $3 billion!"

unchocked 2 days ago 3 replies      
I remember being shocked when Yahoo spurned MSFT's $44+ billion offer in 2008. Goes to show, when someone offers you 11 figures for a failing company, sell because the offers aren't going to get better.
veeragoni 1 day ago 1 reply      
2 guys who spent 20 years at Yahoo started WhatsApp and sold it to Facebook for $19B, and this huge company sold for 1/4th of that price. Management vs. product visionaries.
anc84 2 days ago 0 replies      
Keep your eye on http://tracker.archiveteam.org/ to contribute to archiving certain Yahoo assets for future generations in the Internet Archive.
shmerl 2 days ago 3 replies      
Yeah, they have billions to spend on buying companies like Yahoo, but they can't upgrade all their rotting copper to fiber optics. That's Verizon for you.
andy_ppp 2 days ago 6 replies      
What the hell has Marissa Mayer been paid for?

Everything about this smells like Yahoo! being run by idiot MBAs with spreadsheets somewhere, totally misunderstanding that technology can empower people - including those working within Yahoo! - to do fantastic things. Instead, it's been hamstrung by each property not being held effectively accountable against its competitors.

I would have started competitors (startups) internally for all of Yahoo!'s key products (buy Y.com and test them under that) and told the current product owners that if they couldn't improve their products faster than these startups could build them, they'd be replaced.

The decision to sell search because they were not able to match the investment Google and Microsoft were putting in is another example; if you can't beat someone financially you need to be better than them. To have just given up based on "only" having a few billion to invest is absurd.

bogomipz 2 days ago 2 replies      
"The US telecoms giant is expected to merge Yahoo with AOL, to create a digital group capable of taking on the likes of Google and Facebook."

Can someone explain how combining two "past their prime" entities like Yahoo and AOL, with the Verizon telecom bureaucracy is going to produce anything "capable of taking on the likes of Google and Facebook"?

Telecom companies have a pretty horrible culture. It is not one of innovation or agility. They are bloated bureaucracies based on tenure and not merit. I speak from experience. To give one small example, I have been on conference bridges where Verizon project managers fell asleep and began snoring. I have many more such anecdotes about these folks, all similarly illustrative of the culture.

harshreality 1 day ago 2 replies      
I'm curious how this will play out for AT&T internet customers, given that their email is currently hosted by Yahoo. Is Verizon going to host their competitor's customers' email?
aceperry 2 days ago 3 replies      
I hope yahoo mail sticks around. I've been using it since forever, and would hate to change to gmail. Yahoo mail is kind of slow, but gets the job done, gmail is too chaotic for me. Maybe I'm old school about that, but I'm all in on yahoo mail.
Ping938 1 day ago 2 replies      
Here is math for people wondering about price of Yahoo:

Y! market cap is $36.38b. Bear in mind that Y! is selling only its core biz, so we have to subtract the values of Y!'s shares in Alibaba and Y! Japan, which are worth $33.74b and $8.56b respectively. However, that's pre-tax, and Y! could not get that full amount for them. Therefore the adjusted values (after a 38% tax haircut) are $21b and $5.4b, again respectively. There are also cash & marketable securities worth $6.38b and convertible debt of $1.4b.

So the final math looks like this: $36.38b - $21b - $5.4b - $6.38b + $1.4b (debt added back) = $5b
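
The comment's sum-of-the-parts arithmetic can be checked in a few lines (all figures are the commenter's; the 38% tax haircut on the stakes is their assumption, and the variable names are mine):

```python
# Sum-of-the-parts math from the comment above (all figures in $ billions).
market_cap = 36.38         # Y! market cap
alibaba_after_tax = 21.0   # $33.74b Alibaba stake after the assumed 38% tax
yjapan_after_tax = 5.4     # $8.56b Yahoo Japan stake after the same haircut
cash = 6.38                # cash & marketable securities (subtracted)
debt = 1.4                 # convertible debt (added back to reach the core price)

core_business = market_cap - alibaba_after_tax - yjapan_after_tax - cash + debt
print(round(core_business, 2))  # 5.0 -- roughly what Verizon paid
```

That is, the $5b price is what's left of the market cap once the Asian stakes and net cash are carved out.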

doppp 2 days ago 2 replies      
Can someone change the title to include the fact that this purchase is for the search and advertising operations part of Yahoo? Everyone here thinks it's for the entirety of Yahoo. It's like they didn't bother to read the article.
empath75 1 day ago 0 replies      
I wonder if Verizon worked out a deal with the Mozilla Foundation over the search bar exit clause? That's basically a billion dollar payout over 3 years if Mozilla decides to trigger it.
Grue3 1 day ago 1 reply      
That seems incredibly cheap when stuff like LinkedIn, Snapchat, Twitter and so on have bigger valuations, despite being just as unprofitable.
randomname2 2 days ago 1 reply      
YHOO used to be valued at $112B at its peak.
xorcist 1 day ago 0 replies      
It used to be said that Yahoo is where startups go to die. With AOL and Yahoo, is Verizon now where IT companies go to die?
kmfrk 2 days ago 2 replies      
While Flickr has decent export tools (you just get your photos, without tags, descriptions, or literally anything else), Tumblr's have always been non-existent aside from an unofficial, macOS-only tool by Marco whose download link (https://marco.org/2009/12/10/the-tumblr-backup-app-is-ready-...) no longer works.

Can anyone recommend a Tumblr export tool? The best, as far as I can tell, is jekyll-import (http://import.jekyllrb.com/docs/tumblr/), but I'm running into errors and getting weird results.

bitmapbrother 1 day ago 1 reply      
Note: this acquisition does not include their $30+ Billion USD worth of shares in Alibaba and their stake in Yahoo Japan which is about $12 Billion USD.
smegel 2 days ago 0 replies      
This sounds a lot like News Corp's purchase of MySpace back in 2005.
Esau 2 days ago 0 replies      
"US telecoms giant Verizon Communications is to buy Yahoo's search and advertising operations"

First, I was surprised to see search operations mentioned, since they farmed that out to Microsoft. Second, if this is only search and advertising, I wonder what will happen to things like Flickr and Tumblr.

It should be interesting to see what is actually in the announcement.

mirkules 1 day ago 0 replies      
Every time I hear about Yahoo these days, I am reminded of the movie Frequency. At the end of the movie, the protagonist is driving a fancy mercedes with "YAHOO" as the license plate, implying investments in Yahoo made him rich. How ironic that since the movie came out, Yahoo started its decline.


jlgaddis 1 day ago 0 replies      
A fair amount of FreeBSD's infrastructure is hosted in Yahoo!'s datacenter in Santa Clara.

I'm curious how this will affect that relationship, if at all. It's not like Yahoo! is going to stop using FreeBSD overnight or anything but Verizon may decide they don't want third-party infrastructure in "their" datacenter.

simulate 1 day ago 0 replies      
Yahoo retains a market cap of $36 billion. The reason Verizon paid only $5 billion is that Yahoo Japan and Alibaba are not part of the deal.

> Yahoo owns about 35 percent of Yahoo Japan and 15 percent of Alibaba, two overseas companies that have long dwarfed Yahoo in size.

orionblastar 2 days ago 1 reply      
Basically everything Yahoo tried to do failed to earn income, and Google came along and did those things better.

Verizon can get more users from Yahoo and merge them with their AOL users. No doubt this bigger user base can be sold advertising to earn more money.

Firefox stopped defaulting to Google search and switched me to Yahoo - after this change, will Yahoo no longer support Mozilla and be taken off the list?

josh-wrale 2 days ago 1 reply      
What is likely to happen to shareholders of YHOO?
forgotpwtomain 2 days ago 3 replies      
So less than 1/5 of a linkedin. Is Yahoo's core business really that bad?
dghughes 2 days ago 0 replies      
The oil trader folks are going to be miffed, did Yahoo! Chat get axed in the deal?


SCAQTony 2 days ago 1 reply      
That is so going to mess up my AT&T email with Yahoo. Both my AT&T account and Yahoo email are intertwined.
pasbesoin 1 day ago 0 replies      
Off-the-cuff sentiment:

Axis of... something.

As a Verizon (now, specifically Wireless) customer, I've watched things go from "worth the price" to "what am I paying for?".

And Yahoo. Once proud, pioneering Yahoo.

And the remains of AOL are in the mix, as well?

I mostly feel this is somehow primarily going to shovel more crap at me.

fractal618 2 days ago 0 replies      
How does this affect DuckDuckGo, who just partnered with Yahoo?

Does Verizon share the same values as duckduckgo?

jjawssd 1 day ago 0 replies      
5 billion dollars down the drain! Yahoo has excellent negotiators I must say!
discardorama 2 days ago 0 replies      
If you're a current $YHOO shareholder, what does this deal mean for you?
rekshaw 1 day ago 3 replies      
can someone explain why yahoo is worth a penny?
joering2 1 day ago 1 reply      
Shouldn't it be banned? Like car factory owning dealerships? Or Movie Theaters owning Hollywood Studios??
protomyth 2 days ago 1 reply      
Mods: What was with all the other submissions being marked DUP?
HoopleHead 1 day ago 0 replies      
I misread that on my phone's tiny screen as a "$4,88" deal. Even then I thought it was a lot to pay for... er... whatever it is that Yahoo does these days.
Google tags Wikileaks as a dangerous site google.com
355 points by xname2  4 days ago   149 comments top 25
nl 3 days ago 10 replies      
The biggest downside to the NSA revelations is how quickly people accept conspiracy theories.

Wikileaks just released a big email dump. People looked at it with Google Chrome, and it detected malware in the archive. That blacklisted the site it was downloaded from.

There is no big "Google is protecting the Democrats and hates Wikileaks" conspiracy here. Wikileaks was serving malware, and Google detected it.

user837728 4 days ago 2 replies      
This is technically accurate, as I found out myself this week when browsing the AKP email leak. Some of the attachments in the emails were clearly malware of some sort. See for example: https://wikileaks.org/akp-emails/emailid/27482
Sylos 4 days ago 1 reply      
I figure this link needs to stand here somewhere, even if it's just for someone trying to understand the political implications that this could have: https://wikileaks.org/google-is-not-what-it-seems/
AWildDHHAppears 3 days ago 1 reply      
I don't think there's anything to see here. Google now tags it as "safe." The mechanism worked; the website administrators removed the malware, and the warning was removed.

See! Everything works in a rational way. There's no conspiracy.

dpweb 3 days ago 0 replies      
More interesting is the debate here in the comments where people are unsure if it's legal for them to read something on the Internet. I doubt Google is censoring Wikileaks. Too obvious. But startling is the chilling effect nowadays.
astronautjones 4 days ago 1 reply      
it could be political, but it's probably because they're hosting all of the attachments from all of the e-mails that were leaked - including spam
tszming 3 days ago 0 replies      
https://www.google.com/transparencyreport/safebrowsing/diagn...

# Some pages on this website install malware on visitors' computers...

# Some pages on this website redirect visitors to dangerous websites that install malware on visitors..

brudgers 4 days ago 2 replies      
Shows me "not dangerous" at UTC 00:43 22.07.2016.
rbolla 3 days ago 1 reply      
It's not a dangerous site anymore...

as of 7:15 PM PST.


cesarbs 4 days ago 0 replies      
If you refresh the page multiple times, it switches between "Not dangerous" and "Dangerous downloads".
dljsjr 4 days ago 7 replies      
I'm not sure that this is completely tin-foil hat worthy.

I was working at a defense contractor in 2010 when the big leak of all the cables occurred, and was forced to learn a lot of things I wouldn't have otherwise, including something that maybe a lot of people don't fully grasp:

When stuff like this leaks, if any of the information is considered sensitive/classified/restricted in any manner, the act of it being leaked does not dissolve its restricted status. In other words, if you are a regular US citizen and you go to Wikileaks and look at something that is classified without having the proper security clearance, then you're now on the wrong side of the law.

I don't think there's any political shadiness going on here, I think Google is just trying to be on the correct side of the system. Whether or not that system is on the right side of some moral or ethical line is a different conversation entirely.

daveloyall 1 day ago 0 replies      
Update: Over the weekend, I encountered some guy at a store who probably doesn't read HN. He believed that Google was deliberately filtering out WL for political/conspiracy reasons.

When I explained the automated malware protection (Safe Browsing or whatever they call it), he accepted that explanation (I had him at "emails have viruses") but he countered that "google filtered out wikileaks last time".

This concludes today's observation from IRL.

throw2016 3 days ago 0 replies      
This really doesn't matter. The kind of people who are concerned about the information revealed by wikileaks, Snowden, Manning and the burgeoning surveillance infrastructure have little reason to trust what Google says or does.

What seems off is the default kneejerk response - especially in places like HN, where one would assume a far more informed audience, working in the industry which is spearheading this - to brush things under the carpet or make discredited, desperate and increasingly irrational references to conspiracy theorists.

There have always been conspiracy theorists and always will be, but the current narrative on surveillance has moved so well beyond that point that to have this discussion tarred by these tired and banal references to conspiracy theorists is completely disingenuous and makes those making these arguments look out of touch.

If you know what has been revealed so far it should not be difficult to engage with some degree of seriousness at the issues at hand without immediately resorting to strawmen.

fixermark 3 days ago 0 replies      
"Current Status: Not dangerous."

Did this change in the intervening (clock-check) 4 hours, or is there some definition of dangerous I'm missing?

retox 4 days ago 0 replies      
Andrew Simpson was possibly the first to report. It comes very soon after the DNC email leak: https://twitter.com/Andrewmd5/status/756529847762087936
smoyer 3 days ago 0 replies      
As of approximately 1000 EDT (US) on 07/23/2016, the link above gives the status of wikileaks.org as "Not Dangerous".
seoguru 4 days ago 0 replies      
I am not getting the warning on my browser: Chrome 52.0.2743.82 beta
faddat 2 days ago 0 replies      
Well, in that case, they probably really are working for Killary.

Damn, I thought google was one of the good guys.

mjwilliams 3 days ago 0 replies      
It says "not dangerous"
prashant10 3 days ago 0 replies      
Actually it doesn't anymore...
MooBah 3 days ago 0 replies      
Welp, GJ - Google Changed it back.
MooBah 3 days ago 0 replies      
Welp - looks like Google changed it back. GJ thread!
cLeEOGPw 3 days ago 2 replies      
> HN is a liberal safe space.

That is easily shown to be false by the number of people who make the opposite accusation. It's also common rhetoric to cast oneself as the brave freethinker standing up against a Goliath community; people on both sides of any divide do that as well.

In fact ideological enemies resemble each other more than they do anyone else, and are probably the biggest factor making threads on this site tedious for the rest of us.

We detached this subthread from https://news.ycombinator.com/item?id=12148604 and marked it off-topic.

colordrops 3 days ago 2 replies      
This crosses into personal attack and that is not allowed here. Please don't do it again.

We detached this subthread from https://news.ycombinator.com/item?id=12148835 and marked it off-topic.

The Raspberry Pi Has Revolutionized Emulation codinghorror.com
416 points by dwaxe  2 days ago   112 comments top 18
windlep 1 day ago 3 replies      
I've had an arcade cabinet with a 9-year-old computer in it that finally failed the other day. I tried the Raspberry Pi route awhile ago; it does fine on the oldest 80's MAME games, but has issues with most of the 90's era games of which I'm still quite fond. And as others have noted, it's prolly going to suck for NES/SNES emulation.

So when the arcade computer failed, I tried a different route. I realized my main desktop computer (a Core i7-4790k) is plenty powerful to do some arcade gaming on the side. A long VGA/USB/audio cable later, and my arcade is now running directly off a VM from my desktop. This works so much better than dealing with moving games on/off a SD card, and managing more physical things. It's easy to manage the VM, snapshot it, and change the config without even touching the arcade now.

With VT-d, PCI passthrough, and the ridiculous number of CPU cores everything comes with, this should be a more normal thing in the future. It'd be lovely to use those new multi-Gbit wireless standards instead of a cable, though...

mrob 2 days ago 4 replies      
"Viewing angle and speed of refresh are rather critical for arcade machines, and both are largely solved problems for LCDs at this point"

This is true, but not for the cheap IPS LCDs he advocates. The important point is image persistence. Each frame is a sample of a single point in time. To accurately represent motion it needs to be shown for as close to a single point in time as possible. Most LCDs sample-and-hold, i.e. they set the pixel and keep it there until the next frame. This results in blurring when your eye tries to follow motion. See:


Modern gaming LCDs can strobe the image like a CRT, eliminating this blur. It causes noticeable flicker at 60Hz, but it's the only way to get sharp looking motion from these fixed framerate games (motion interpolation adds latency which is no good for games).

corysama 2 days ago 4 replies      
I've long thought it would be interesting to make an education-oriented Game Boy Advance clone to teach low-level programming. I.e: Base it on the expired patent, but don't copy the copyrighted BIOS and don't bother being compatible with commercial game ROMs.

The conflict is that it sure looks like sourcing everything but the SoC (case, screen, controls, battery?) will cost more than a RasPi-like SoC capable of emulating a GBA. At that point the question becomes, What's more valuable inspiration-wise: Telling the kids "Your game is really running on the same physical hardware as a GBA" or telling them "in addition to GBA, this device can emulate a bunch of other devices and there's the whole RasPi ecosystem as well" ?

Disclaimer: I know nothing at all about sourcing hardware.

ac29 1 day ago 4 replies      
> For a budget of $100 to $300 - maybe $500 if you want to get extra fancy - you can have a pretty great classic arcade and classic console emulation experience.

...if you are OK with pirating games. Its a bit odd that for the number of times this article talks about how cheap and easy it is to get this setup going, they kind of handwave away the fact that even old games are still copyrighted with "Add additional ROMs and game images to taste."

starik36 1 day ago 3 replies      
I've attempted to do this. It's not as simple as Jeff Atwood states.

For starters, there is a lot more tinkering and messing around than is indicated in the article. You want to connect an old PS3 controller that's sitting around? Great...prepare to spend 3-4 days messing around with config file via SSH to get it just right. And even then, it fails intermittently and works in some games, but not others.

Secondly, while some N64 games do emulate reasonably nicely, most do not. There are either audio issues, or video issues. And on and on. PSX and Dreamcast games - I couldn't get those to work without lag at all.

kerkeslager 1 day ago 0 replies      
If you're interested in emulation, The Internet Archive also has something cool: https://archive.org/details/internetarcade
chrisguilbeau 1 day ago 0 replies      
I used a Pi 2 and an LCD I got off Craigslist to build an old Mac emulator with Basilisk II and a lot of patience (it took a while to get the right combo of compile options, settings and display environment). It's relatively stable, and my 4-year-old daughter has been playing Kid Pix, Cosmic Osmo and Hello Kitty on it for a while now. It's also fun to see the After Dark screen savers when I go into her room.
legooolas 1 day ago 0 replies      
One thing that's missing from a lot of arcade cab builds is that old games often don't run at a 60Hz refresh rate, and so you get a strange jittery effect as it has to skip or duplicate frames to display at 60Hz on normal LCD monitors.

FreeSync/G-Sync makes a tremendous difference, but unfortunately does this to the price as well :(

Edit: Or you can use a CRT :)
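
The mismatch is easy to see with a toy schedule: if a display simply samples the game's output at its own fixed rate, some source frames get shown twice. (The 57.5 fps figure and the function name below are illustrative assumptions, not from the comment.)

```python
def frames_shown(game_hz, display_hz, refreshes):
    """For each display refresh, which source frame is shown if the display
    samples the game's output at its own fixed rate (no sync, no blending)."""
    return [int(i * game_hz / display_hz) for i in range(refreshes)]

# A ~57.5 fps game on a 60 Hz panel: within this 25-refresh window one
# frame is displayed twice -- the periodic stutter described above.
schedule = frames_shown(57.5, 60, 25)
duplicated = len(schedule) - len(set(schedule))
print(duplicated)  # 1
```

A variable-refresh display (FreeSync/G-Sync) sidesteps this by letting the panel refresh exactly when the game produces a frame.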

Houshalter 1 day ago 4 replies      
Here's a possibly silly question. Is upscaling old games possible? In the past I have seen papers on upscaling algorithms that do amazing things to old pixel art and 8 bit sprites. Is it possible to run these in real time on something like a pi?

I ask because the suggestion of using 1080p resolution or higher for this sounded silly. But then I realized maybe it's not.
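
It is possible in real time: Scale2x (a.k.a. AdvMAME2x), long used by MAME-family emulators, doubles pixel art by copying neighboring pixels along detected edges instead of blending colors. A minimal sketch (function name mine, edge rules per the standard algorithm):

```python
def scale2x(img):
    """Scale2x / AdvMAME2x: double a pixel grid, smoothing diagonal edges
    without introducing new colors. `img` is a list of rows of pixel values."""
    h, w = len(img), len(img[0])
    out = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            E = img[y][x]
            B = img[y - 1][x] if y > 0 else E      # above
            D = img[y][x - 1] if x > 0 else E      # left
            F = img[y][x + 1] if x < w - 1 else E  # right
            H = img[y + 1][x] if y < h - 1 else E  # below
            # Each source pixel becomes a 2x2 block; corners copy a
            # neighbor only when a clean diagonal edge is detected.
            e0 = D if (D == B and B != F and D != H) else E
            e1 = F if (B == F and B != D and F != H) else E
            e2 = D if (D == H and D != B and H != F) else E
            e3 = F if (H == F and D != H and B != F) else E
            out[2 * y][2 * x], out[2 * y][2 * x + 1] = e0, e1
            out[2 * y + 1][2 * x], out[2 * y + 1][2 * x + 1] = e2, e3
    return out
```

It's all integer comparisons and copies, which is why even modest hardware can run it per-frame; the fancier research upscalers (hq2x, neural approaches) are much heavier.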

1hackaday 2 days ago 2 replies      
This is very neat. I want to have one, but don't want to have to assemble it. Any ideas about where I can buy one already assembled? (I wouldn't mind, say, paying a 20% surcharge over the prices mentioned in the article).
phreaky 1 day ago 0 replies      
I love my Raspberry Pi.

A few months ago, I started a project to convert my dad's barely used iCade cabinet [1] into a full-fledged RetroPie cabinet.

I used a GPIO-to-USB converter (which allowed me to easily interact with the buttons and joystick on my Raspberry Pi), a speaker with a 3.5mm line-out, and a 7-inch screen I got off of Amazon.

Here's a video of it in action: https://youtu.be/EiNI2vXAomg

[1] http://www.ionaudio.com/products/details/icade

parski 1 day ago 1 reply      
I guess it depends on what you consider good enough. I use my gaming PC to emulate because it lets me use more accurate emulators and allows me to configure them to my liking. Output the video to a CRT video monitor and it's a fantastic, authentic experience with liberties I could only dream of as a child. Heck, I don't even get the frame drops that are present on original hardware.
SmellyGeekBoy 1 day ago 0 replies      
Great article, but the tips about putting the Pi in a case and using heatsinks seem at odds with each other - especially if the Pi is going to be safely tucked away from danger inside some form of arcade cabinet. I just used self-adhesive PCB risers with mine and stuck it to the inside of the cab.
afro88 1 day ago 1 reply      
Does any RPi emulator do the nice slightly convex CRT emulation with scanlines, colour bleeding etc?

For me this is a big part of it. The game art is designed for these effects, and it kind of breaks the illusion if this isn't right (in a similar way to low FPS or delayed sound).
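
As far as I know, RetroArch on the Pi ships CRT-style shaders (e.g. crt-pi) that do curvature, scanlines and bleed on the GPU. The core scanline idea reduces to something this simple - a CPU-side sketch on grayscale values, with the function name and strength figure being my own illustration:

```python
def add_scanlines(frame, strength=0.35):
    """Naive scanline pass: double each row vertically and darken every
    second line, approximating the dark gaps between CRT scanlines.
    `frame` is a grid of 0-255 grayscale values."""
    out = []
    for row in frame:
        out.append(list(row))                               # lit scanline
        out.append([int(v * (1 - strength)) for v in row])  # darkened gap
    return out
```

Real shaders add curvature, phosphor masks and bloom on top, but even this crude pass restores much of the look the art was drawn for.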

pronoiac 2 days ago 1 reply      
This advice on displays made me laugh:

> Absolutely go as big as you can in the allowed form factor, though the Pi won't effectively use much more than a 1080p display maximum.

See, I just used a RetroPi to test a new tv, and the games I reached for were very low res and extremely pixelated.

fit2rule 1 day ago 0 replies      
While I think it's true that the rPi has been good for arcade emulation as a social phenomenon - i.e. the market has expanded drastically - I think it's disingenuous to think of the rPi as the main driver behind emulation becoming mainstream. Devices such as the GP2X, Caanoo, GPH Wiz and Open Pandora gaming consoles have contributed immensely to the subject of game emulation, and these systems have been around far longer than the rPi - which did indeed benefit from all the work done to make emulation work on these machines previously (they use a similar class of device).

I know for sure that dynamic recompilation, which is key to the way emulators gain the performance needed to run on these small machines, was well and truly happening in the scene before the rPi came along.

In my opinion, the rPi just delivered the last 5% of the missing equation: cheap, broad availability.

clevernickname 1 day ago 0 replies      
I wonder what affiliate links Jeff Atwood is shilling this time.
Thaxll 2 days ago 6 replies      
Raspberry Pis are slow as hell and don't emulate recent consoles.

"Why Perfect Hardware SNES Emulation Requires a 3GHz CPU": http://www.tested.com/tech/gaming/2712-why-perfect-hardware-...

Let's Encrypt now fully supports IPv6 letsencrypt.org
338 points by el_duderino  9 hours ago   140 comments top 11
matt4077 7 hours ago 5 replies      
So unfair! Comodo once, a while ago, also thought about using IPv6!

But seriously: letsencrypt is doing excellent work. It's a great case study in how inefficient a mostly-free market can be: SSL adoption doubled within a year. All that was previously deadweight loss.

sp332 7 hours ago 2 replies      
According to conversation on https://github.com/letsencrypt/boulder/issues/593 they couldn't support it because one of their datacenters didn't support IPv6 traffic.
JohnnyLee 4 hours ago 0 replies      
For any Go users out there, I'd recommend Russ Cox's package: https://godoc.org/rsc.io/letsencrypt. It automatically acquires certificates and keeps them up to date.
jdc0589 7 hours ago 6 replies      
Someone play devil's advocate and tell me reasons I might not want to use Let's Encrypt? (Aside from potential issues from short-lived certs.)
jimktrains2 7 hours ago 2 replies      
What's this mean? If a site only has an AAAA record it can now get a cert?
AndyMcConachie 6 hours ago 2 replies      
Does anyone know if Let's Encrypt supports DNSSEC validation? I mean, do their data center recursive DNS servers do DNSSEC validation?

I'm wondering how easy it would be to forge DNS responses to their servers checking that I control a domain name.

yeukhon 6 hours ago 2 replies      
Famous question - intranet.

We can do dns-01 verification on intranets (for a valid domain). But the downside is that our domain would be logged in the certificate transparency log. What is the downside of being on the log?

INTPenis 7 hours ago 3 replies      
Expected, but when is Tor support coming? I read a forum thread indicating it would be nigh on impossible due to the .onion TLD's status.
jo909 7 hours ago 10 replies      
This is in no way criticism against LE, where I work _nothing_ is IPv6 and we do not even have it on any agenda.

But when a "we are going to change the future of the internet" project makes IPv6 a Prio-2 feature (to be added later, not native from the start), it just shows that we are really not there yet.

serge2k 5 hours ago 0 replies      
> We're looking forward to the day when both TLS and IPv6 are ubiquitous.

Kudos to Lets Encrypt for their great work on the former.

A single sad tear for the state of the latter.

Animats 6 hours ago 1 reply      
You mean it didn't?
UK surveillance bill includes powers to limit end-to-end encryption techcrunch.com
361 points by wjh_  1 day ago   220 comments top 27
tetrep 1 day ago 8 replies      
I think the logic that is purportedly behind this bill would also require us to constantly record all of our vocal communications, as that would be the only way to ensure that criminals cannot have communications inaccessible to law enforcement.

This, of course, would require microphones on all citizens as well as many more in the surrounding environment, to ensure communications of unwilling citizens can be monitored as well. And, of course, we'd need video as well to get those pesky sign language users[0].

These sorts of bills always make me wonder if we'll ever see a moral stance taken by tech companies. There are a few skirmishes every now and then, but there doesn't seem to be any general consensus on what companies will tolerate in both themselves and their business partners. I'd love to see a "Fair Trade"-esque branding used as an indication that a product and its supply chain don't include actors who support government surveillance.

[0]: OT, but it makes me realize you can literally make illegal gestures due to https://en.wikipedia.org/wiki/Hate_speech_laws_in_the_United...

sklivvz1971 1 day ago 5 replies      
It's such a pointless war on its own law-abiding citizens. It makes me sad.

People who really care about privacy - people who need to hide what they do - will not be majorly impacted.

* The main threat is metadata anyways, not the data itself. Locating where you are (e.g. with millions of cameras and facial recognition) is a much worse threat.

* They will still use full disk encryption, free software, PGP or AES, etc. outside of the affected apps. That software won't stop to exist, nor the mathematics that powers it will stop working.

The sad part is that the people who will be disproportionately affected will be the common people, who have nothing to hide anyway and do not have the technical means, or the will, to protect themselves.

TLDR: useless and damaging.

zeveb 1 day ago 2 replies      
> 'If we do not provide for access to encrypted communications when it is necessary and proportionate to do so then we must simply accept that there can be areas online beyond the reach of the law.'

Yes, yes we must accept that, since it's reality. Queen Elizabeth can no more hold back encrypted communications than King Canute could hold back the tide.

wheaties 1 day ago 2 replies      
If I want to keep my communications encrypted online, I'm going to do so. The only people who won't have the same luxury as me are those that follow the law. I don't get it.
3v3rt 1 day ago 1 reply      
Interesting to see that at the same time the EU privacy watchdog is proposing to mandate encryption and outlaw these kinds of decryption methodologies [0]. While still an opinion, it is good to see that in this area the EU is among the most progressive governments around.

[0] https://secure.edps.europa.eu/EDPSWEB/webdav/site/mySite/sha...
JustSomeNobody 1 day ago 0 replies      
Anyone hell bent on killing people will likely succeed. Surveillance is not the answer. Too much data is just as bad as not enough. The solution is finding out WHY people want to kill you and fix THAT.
lb1lf 1 day ago 0 replies      
This belief that you can somehow force the strong encryption genie back in his bottle is fascinating, if sad.

I guess it is not as futile as it may appear at first glance, though - after all, you don't need all the world's suppliers of communication software to comply in order to be successful; just force the major ones to help you out, then simply assume that anyone using an insignificant (by user base) app is up to something nefarious.

Bah. Orwell was an optimist.

austinjp 1 day ago 0 replies      
It's time to call this stuff out for what it is: flat out idiocy or lies. Possibly both.

Here's a brief thought to uncover why:

There are two countries. Country A has security capabilities equivalent to today's UK. Country B, equivalent to today's UK plus the proposed changes.

Could maniacs based in country B commit attacks of equivalent fatality to maniacs based in country A? Of course they could.

Could a criminal gang in country B get away with crimes of similar magnitude to a similar gang in country A? Of course they could.

Other threads here have pointed out the minimal extra effort that would be required by perpetrators, if any.

So why propose these changes, and why give the stated reasons?

Perhaps the government doesn't understand the negligible impact they'll have. This seems unlikely, although perhaps they "can't see the wood for the trees" and are getting carried away with the current xenophobic mood in the air.

Perhaps the government is showing its true colours and exercising the basic Conservative desire to deny societal evolution, by tightening control over anything new and complex.

Perhaps they've had a good hard think to the best of their abilities, and have genuinely decided this is The Best Thing To Do.

Whatever the reason, it's either founded on idiocy or couched in lies.

inetsee 1 day ago 1 reply      
I can't help but wonder how this bill, on top of Brexit, will affect the state of technology entrepreneurship in the UK. Why should an entrepreneur start up a technology business in the UK if his efforts will be hampered by politicians who have no clue about how technology actually works?
49para 1 day ago 0 replies      
What can they possibly do with all this data?

It seems that current governments can't solve the drug war, the war on terror, gun crime, or the increasing number of terrorist attacks.

How much intrusion do they actually need, and what is the cost of the technology, before they can actually make headway on solving these issues?

CiPHPerCoder 1 day ago 1 reply      
Dear UK government,

Good luck with that.


- An open source software developer outside your jurisdiction

petre 20 hours ago 0 replies      
If privacy is outlawed, only outlaws will have privacy.

They created terrorism in the first place by bombing and occupying other countries, removing dictators.

reacharavindh 1 day ago 0 replies      
It is, to an extent, funny to think that governments believe they can sit on top of communications and implement mass surveillance. If you make it illegal to encrypt your stuff, the knowledgeable/tech-savvy people will start to work on using steganography. There will be an explosion of cat pictures on the Internet. Good luck finding the hideous cat :-)

All the government does now is inconvenience the majority of citizens, who have nothing to worry about anyway.
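The steganography idea is easy to make concrete. Here is a toy least-significant-bit sketch in Python, hiding bytes in the low bits of an arbitrary carrier buffer (real tools embed into image pixel data; the function names and carrier here are made up for illustration):

```python
def hide(carrier: bytes, secret: bytes) -> bytearray:
    """Pack each bit of the secret (length-prefixed) into the LSB of one carrier byte."""
    payload = len(secret).to_bytes(4, "big") + secret
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for this secret")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the low bit, set it to the payload bit
    return out

def reveal(carrier: bytes) -> bytes:
    """Read the LSBs back: a 32-bit length first, then that many bytes."""
    def read_bits(start: int, n: int) -> int:
        value = 0
        for i in range(start, start + n):
            value = (value << 1) | (carrier[i] & 1)
        return value
    length = read_bits(0, 32)
    return bytes(read_bits(32 + 8 * i, 8) for i in range(length))

cat_picture = bytes(range(256)) * 4          # stand-in for an innocuous file
stego = hide(cat_picture, b"meet at dawn")
assert reveal(stego) == b"meet at dawn"      # the message round-trips
assert sum(a != b for a, b in zip(cat_picture, stego)) <= 128  # only low bits changed
```

To a casual observer the carrier is unchanged except for imperceptible low-bit noise, which is the commenter's point: banning ciphertext does nothing about hidden plaintext.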

pre 1 day ago 0 replies      
So, how are companies supposed to keep customer data safe from hackers without encryption exactly?

This kind of thing can only make the people of the UK less safe, more at risk, and more likely to be hacked and otherwise digitally abused.

If you wanna keep the people safe, you don't ban encryption. Better would be to mandate it.

DanBC 1 day ago 0 replies      
This is a fairly obvious sacrificial anode bit of the legislation. They'll drop this, while making the "provide the keys" bit of RIPA stronger.
SeanDav 1 day ago 0 replies      
I am surprised they did not add the line "think of the children" in there somewhere...

Meanwhile in the real world, criminals will resort to sending encrypted USB sticks via post, or carrier pigeons, or implanted in mules. There is always a way around these things for those that absolutely do not want their communications compromised. It is safe to say that any criminal enterprise knows that live electronic communication of any sort is likely to be compromised.

Also of concern is that criminals will now have extra attack vectors to sensitive data, because if encryption has to be weakened for the Government, it will be easier for other parties to exploit.

mankash666 1 day ago 0 replies      
If the laws are this regressive and encompassing, the very least we as citizens can do is to lobby for full transparency in requests - after all, the data belongs to the individual (regardless of what the TOS claims) and the individual deserves to know about requests on his data immediately.
0xmohit 1 day ago 0 replies      
themartorana 1 day ago 0 replies      
"...there should be no safe spaces..."

Got it.

LinuxBender 1 day ago 1 reply      
Two can play at this game. Surely folks here at HN can create something that is not technically or legally encryption, but accomplishes the same goal.
hardlianotion 1 day ago 0 replies      
Just another little reminder that you must never confuse the government's interests with your own.
fweespeech 1 day ago 5 replies      
Has the UK lost their god damned minds?

I'm sorry but between this and everything else lately...they seem pretty committed to "Security at any economic and/or personal cost! Security for everyone!"

In the real world, that never works.

beedogs 1 day ago 0 replies      
Will the last tech company to leave the UK please turn off the lights?
known 19 hours ago 0 replies      
Govt should limit end-to-end encryption AFTER open sourcing all their software;
cloudjacker 1 day ago 0 replies      
If the UK finishes leaving the EU, they will just be excluded from the market given their diminished relevance. Sure, given the power vacuum in tech, I'll release a gimped software product for their citizens. $$$$$$$$$
brador 1 day ago 1 reply      
When does something become encryption?

Say I switch t and r in evetyrhing I rype, is that encryption? No? Then at what point of mixing does it all become encryption?
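The comment's t/r swap is easy to write down; a minimal Python sketch, which also shows why it isn't meaningfully encryption: the mapping is fixed and public, carries no key, and applying it twice undoes it.

```python
# Swap "t" and "r" (both cases) everywhere, exactly as the comment describes.
# This is a monoalphabetic substitution cipher: letter frequencies survive
# intact, so it falls to trivial frequency analysis.
SWAP_TR = str.maketrans("trTR", "rtRT")

def scramble(text: str) -> str:
    return text.translate(SWAP_TR)

print(scramble("everything I type"))            # evetyrhing I rype
print(scramble(scramble("everything I type")))  # everything I type (self-inverse)
```

Where exactly along the road from this to AES the law would draw the "encryption" line is the commenter's open question.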

saulrh 1 day ago 3 replies      
> Doors are now almost ubiquitous and are the default for most houses and buildings. If we do not provide for access to people's bathrooms when it is necessary and proportionate to do so then we must simply accept that there can be rooms beyond the reach of the law,

There are well-established and functional methods for extending law into areas that you can't see all the time. You don't need to ban encryption, in exactly the same way that you don't have to ban doors. Just because it's ooon the iiiinterneeettttt doesn't mean you need to break everything.

Nexus phones now identify suspected spam callers plus.google.com
294 points by bishnu  1 day ago   227 comments top 31
ohazi 1 day ago 13 replies      
Every time I get a call from a number I don't recognize I do this ridiculous dance where I try to Google the number (usually on my phone) before the ringing stops. If it shows up on enough spam caller sites (whocallsme, etc), I don't answer and add the number to my "SPAM" contact that currently contains about a billion phone numbers.

It's ridiculous that it took Google so long to implement such a basic feature on their phones.

cmurf 1 day ago 9 replies      
I think it's ridiculous you have to have a Nexus phone to get this capability. Anyone with a Google account and Android should be able to get this functionality.

Right now with Google Voice I get a dozen hang-up calls per day, always from a different number. When I don't answer, I've got a dozen 2-second-long voice mails. I used to spend a lot of time setting these to spam or blocking them, but between Google Voice and Hangouts' simply asinine and beyond-incompetent integration, where some calls show up in Google Voice but not Hangouts and vice versa, I'm losing interest.

So recently I just decided to make the default behavior for the Google Voice number not ring any of my phones or Hangouts, but set up contact groups where friends and family should ring through. Well that's not working, I'm still getting spam and hangup calls, and some friends ring through, others don't, and client calls don't.

It's really craptastic.

saghul 17 hours ago 6 replies      
Honest question: are spam calls common in the US? I don't remember when / if I got any in Spain, the Netherlands or the UK (places I've lived and had cellphones for a prolonged time). I do remember, however, that when I managed to score a US number with Google Voice I'd regularly get weird spammy voicemails.

If that is the case, I wonder why that is!

douche 1 day ago 4 replies      
Turning the ringer off also works.

I don't get talking on the phone. It's the lowest quality form of communication - it's ephemeral, and unlike actual face-to-face communications, all nuance and body language goes out the window. Not to mention the all-too-often piss-poor audio quality, mics that don't work half the time, and the "Can you hear me now? What was the last thing you heard?" and "Sorry, I was on mute" dances.

uptown 1 day ago 1 reply      
I get 'em all. Almost never answer, and block every number. I've tracked the origin of a few of them down. Found the personal cell phone of the CEO of one of the companies that was behind one daily call that changed numbers every day. That was a fun conversation, and it did stop the calls, but hopefully if both Google and Apple implement this (as is planned in iOS 10) it'll end this avenue of abuse.
ghouse 1 day ago 3 replies      
Tipping point for me last week: I now receive more spam phone calls than I do spam email.
est 1 day ago 1 reply      
This function has existed in Chinese phones/ROMs for quite some time now. Some of the interceptors will display which type of spam it is, where it originates, and the exact business entity name of the caller.
pavel_lishin 1 day ago 3 replies      
What's the procedure for removing yourself from the suspected spam caller database if you're incorrectly placed on it?
blackoil 1 day ago 1 reply      
Isn't Truecaller doing this and more on all Android/WP phones for years?
ersii 1 day ago 2 replies      
Does anyone know more on how this works? The original post references that "Caller ID must be enabled".

Is this feature going to send all my incoming phone call numbers to Google - to compare it against a list of "known/suspected spammers"?

Will this only work in the United States or will it work internationally?

Animats 22 hours ago 0 replies      
This means your incoming call data goes to Google. Previously, only the carrier saw it. Now they can use that info for marketing purposes.
allendoerfer 1 day ago 2 replies      
Spam calling seems to be a big deal in the US. Not at all where I live. Is it because of different features of the network or because there are cheap English speakers in other countries?
bikamonki 22 hours ago 0 replies      
If you are an Android user and you like programming you MUST install Automagic. It is, by far, the most useful app on my phone. You can automate pretty much any task on your droid, like say: if the number from an incoming call is in the whitelist, send me an alert; if not, hang up and reply with the SMS "Send name and number I will call back".
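The rule described is just a conditional on the caller's number. A hypothetical sketch of the same logic in Python (the numbers and the action strings are stand-ins; Automagic itself expresses this as a visual flow, not code):

```python
WHITELIST = {"+15550100", "+15550101"}  # hypothetical trusted contacts

def on_incoming_call(number: str) -> list[str]:
    """Return the actions to take for an incoming call, per the rule above."""
    if number in WHITELIST:
        return ["alert"]
    # Unknown caller: hang up and auto-reply by SMS.
    return ["hang_up", "sms:Send name and number I will call back"]

on_incoming_call("+15550100")  # ["alert"]
on_incoming_call("+15558888")  # ["hang_up", "sms:Send name and number I will call back"]
```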
dilemma 1 day ago 2 replies      
My Windows Phone already does this!
criddell 14 hours ago 0 replies      
Why is it so hard for the phone company to provide accurate caller id data? I can understand the need to block outgoing caller id info and I have no problem with that. But I also have the right to not answer anonymous calls.
kqr2 1 day ago 0 replies      
sonic.net uses nomorobo to filter out spam callers and it seems to be pretty effective.


Animats 21 hours ago 0 replies      
Caller ID info ought to appear with a data quality indicator. The trust value of the least-trusted SS7 node in the chain is the data quality.
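The proposed rule is simple to state formally: a record's quality is the minimum trust over the hops that carried it. A small illustrative sketch (the trust values are made up):

```python
def caller_id_quality(node_trusts: list[float]) -> float:
    """Data quality of a caller-ID record: the least-trusted SS7 hop in the chain wins."""
    return min(node_trusts, default=0.0)  # no chain info at all -> zero trust

# Two trustworthy carriers plus one sketchy international gateway:
caller_id_quality([0.95, 0.9, 0.3])  # 0.3: the weakest link sets the score
```

A phone UI could then surface anything below some threshold as "unverified caller" rather than displaying a spoofable name at face value.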
harryf 19 hours ago 0 replies      
In Switzerland this has been possible for quite some time now on both iOS and Android by installing the local.ch app - http://tel.local.ch/en/advertising-calls
jmspring 1 day ago 0 replies      
Will this include unsolicited google adwords calls?
agildehaus 1 day ago 1 reply      
Caller ID can be spoofed rather easily. This will only lead to more spoofing.

And are political calls and surveys considered spam? I certainly consider them spam.

beefsack 23 hours ago 0 replies      
In Australia, I've signed up my numbers to the Do Not Call Register[1] and it's been pretty effective. I actually couldn't tell you the last time I had an unsolicited call like that.

[1] https://www.donotcall.gov.au/

AWildDHHAppears 1 day ago 1 reply      
Of course, it's not hard for them to use a different caller ID for each call they make...
x0054 1 day ago 1 reply      
I think iOS 10 does this as well. The funny thing is, a lot of the time I get spam phone calls that are in Spanish (I don't speak Spanish much) or when I pick up the phone, they just cut off the line. So they waste my time and theirs.
dredmorbius 23 hours ago 0 replies      
Phone spam has all but killed phones. Pervasive surveillance has done the rest.

Maybe not for everyone. Yet.

But where a phone was for a time a liberating device, it's become what many of its early critics (and I'm talking about what late-19c and early-20c critics said, not late-20c/early-21c) claimed: an insistent, rude, inconsiderate, and noxious nuisance.

A phone can ring at any time, from a call initiated anywhere in the world. Low (or zero) costs mean the caller has very little reason not to call, and even a very slight probability of a positive financial return can support all measure of spam.

The fact that carrying a phone subjects you to sub-minute location tracking, puts an always-on microphone in your pocket, and leaks your identity, location, habits, and interests to the highest bidder (or marginally competent hacker) makes that a non-starter.

For the past several years, I've simply not carried a phone when I could possibly manage to, and the liberation is tremendous. (The trauma of having been on-call for years may or may not have contributed to my intense distaste for the devices.)

There are other options -- virtually any modern electronic kit has multiple messaging capabilities, from email to IRC to various messaging applications to full VOIP and voice/video messaging. Carrying a non-phone Android tablet affords some utility without the tremendous disutilities of a phone.

But, and this speaks to recent pain, the device (a Samsung Tab A 9.7" WiFi-only) is itself locked down -- not rootable, bootloader locked, and so far as I can tell, no CyanogenMod images available for it. I'd bought it whilst travelling under some duress, as an affordable and, so far as I could tell, least-bad option.

But without the ability to actually control the system, I'm still subject to spam, crud, poor management tools (simply being able to allocate and manage storage rationally appears beyond its meagre capabilities), etc.

What Google are offering is very little, very late. And the fact that other telcos are failing to step up and address the massive disutilities of their products is another immense failing of the market. Realising these are the same unspeakable idiots who'll be shoving Internet of Shit devices down our every orifice makes me cry for the future.

nstj 1 day ago 0 replies      
iOS 10 brings functionality similar to this. [0]

[0]: https://developer.apple.com/library/prerelease/content/relea...

> CallKit. CallKit also introduces app extensions that enable call blocking and caller identification. You can create an app extension that can associate a phone number with a name or tell the system when a number should be blocked.

ableal 13 hours ago 0 replies      
"Prank call" apps in the store do not help either.
gadders 15 hours ago 0 replies      
For those on a non-Nexus android phone, True Caller does this function pretty well for me in the UK.
fapjacks 21 hours ago 1 reply      
I use Extreme Call Blocker for Android. Best three dollars I ever spent in my life. My phone doesn't make any noise or visual interruptions unless the number is in my contacts list. Additionally, it answers the phone and immediately hangs it up, which prevents the number from going to voicemail. I adore knowing that strange numbers are essentially calling a sinkhole when they dial my number.
lintiness 13 hours ago 0 replies      
now if they could only get that camera to work ...
gesman 1 day ago 2 replies      
Any unknown caller that starts his conversation with "Congratulations! ..." deserves public waterboarding.
The Uber Engineering Tech Stack, Part I: The Foundation uber.com
381 points by kfish  2 days ago   180 comments top 19
Animats 2 days ago 12 replies      
It's interesting that they don't break the problem apart geographically. It's inherent in Uber that you're local. But their infrastructure isn't organized that way. Facebook originally tried to do that, then discovered that, as they grew, friends weren't local. Uber doesn't need to have one giant worldwide system.

Most of their load is presumably positional updates. Uber wants both customers and drivers to keep their app open, reporting position to Master Control. There have to be a lot more of those pings than transactions. Of course, they don't have to do much with the data, although they presumably log it and analyze it to death.

The complicated part of the system has to be matching of drivers and rides. Not much on that yet. Yet that's what has to work well to beat the competition, which is taxi dispatchers with paper maps, phones, and radios.

e1g 2 days ago 2 replies      
I'd love to know how many people are responsible for devops/operations/apps at various stages of any company's journey. Wikipedia says Uber employs 6,500 people, so even if only 15% of that is on the tech side of the business, that's still 1,000+ people allocated to tech. I think this metric would be a useful reality check for a "modern" SaaS project with 3-10 people that's trying to emulate a backend structure similar to the big leagues.

There are 20+ complex tools listed in the stack, and running a high-visibility production system would require a high level of expertise with most of them. Docker, Cassandra, React, ELK, and WebGL are not related in required skills/knowledge at all (as, for example, Go and C are). Is it 5 bright guys and girls managing everything, like the React team within Facebook? Or a team dedicated just to log analytics?

NotQuantum 2 days ago 4 replies      
Uber is really strapped for engineering talent, especially when it comes to SRE. Myself and many friends working SRE at various Bay Area companies get consistently hit up for free lunches and interviews. It's really weird considering that their stack doesn't NEED to be this complex...
sandGorgon 2 days ago 6 replies      
What I'm really wondering about is their app. The UI of the app can be changed without an app update. For example the UI during the pride parade, or the minute of silence ( http://gizmodo.com/uber-makes-riders-take-a-moment-of-silenc... )

I wonder what's the architecture of the app and the API for this.

marcoperaza 2 days ago 1 reply      
Quite an intricate architecture. I can't help but wonder if all of the complexity and different moving parts are worth it. Does it really make more sense than throwing more resources at a monolithic web service? Clearly the folks at Uber think it does, and they've obviously thought about the problem more than me, but I'd love to understand the reasoning.
sixo 2 days ago 0 replies      
This is just about all the tech there is, right?
mickyd54 2 days ago 0 replies      
'Wildly complex', wow. And they now have 'eaters'.
haosdent 2 days ago 0 replies      
"We use Docker containers on Mesos to run our microservices with consistent configurations scalably, with help from Aurora for long-running services and cron jobs."
ashitlerferad 1 day ago 0 replies      
Anyone know if Uber supports the projects they use with human and financial resources?
legulere 2 days ago 1 reply      
> Screenshots show Uber's rider app in [...] China

Interesting to see Google Maps being used; isn't that blocked in mainland China?

50CNT 2 days ago 0 replies      
So much technology, yet I still had to load the site 3 times and fiddle with uMatrix to get the page to scroll. Now, lots of people do silly things with javascript, but on a blog article on your tech stack it doesn't speak well of things.
tinganho 2 days ago 0 replies      
This sounds like a blog post emphasizing that the more buzzwords you use, the better.
creatine_lizard 2 days ago 0 replies      
If it is easy, it'd be nice to edit the title so it's not in all caps.
pfarnsworth 2 days ago 1 reply      
Even if you're correct in this reading, please don't get personally rude about it.

We detached this comment from https://news.ycombinator.com/item?id=12154325 and marked it off-topic.

stickfigure 2 days ago 3 replies      
Presumably by someone at Uber

Why would you assume that? Especially since the blog post is already a few days old, and the submitter doesn't have any other Uber-related posts.

marcoperaza 2 days ago 3 replies      
mikecke 2 days ago 1 reply      
For those of you complaining about the title being in all caps: it was done for aesthetic purposes. Which means the submitter somehow took the time to uppercase each character of the HN title before submitting.

 text-transform: uppercase;

joering2 2 days ago 0 replies      
Sounds like a very solid foundation! I'm glad to see they have a sufficient system in place to continue spamming the heck out of people who never opted into their advertising in the first place.

I only wish LE would treat CAN-SPAM seriously and put more resources into criminal enforcement.

ryanlm 2 days ago 3 replies      
I just got rejected from them. I applied for an SE position, but they didn't like me, I guess. They send you this really condescending rejection letter. I showed them my programming language that I built in C from scratch, and also my data structure library where I implement all the common data structures found in high-level languages, built from scratch in C, among the many projects I have.

It must have been my state school that turned them off. I know I could keep up there, but maybe they also turned me down because I'm 5 states away and they thought I wasn't worth the recruiter's time.

edit: downvoter, if you could provide your rationale that would be great.

Humans once opposed coffee and refrigeration: why we often hate new stuff washingtonpost.com
302 points by walterbell  1 day ago   266 comments top 33
btilly 1 day ago 7 replies      
This was all said very, very well by Machiavelli hundreds of years ago in chapter 6 of _The Prince_.

And it ought to be remembered that there is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things. Because the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly, in such wise that the prince is endangered along with them.

crusso 1 day ago 6 replies      
"In hindsight, opposition to innovations such as mechanical farm equipment or recorded music may seem ludicrous."

How about if the article talked about the problems with the people who rapidly embraced once "nifty and new" ideas like taking x-rays of feet at shoe stores, using Fen-Phen for weight loss, getting on airplanes in the early days of flying, etc.?

What the article completely ignores is the notion of idea survival bias. The article takes pains to cast reluctance to adopt new ideas as a defective mode of thinking, by not talking about the risk model for the adoption of new ideas.

Sir_Substance 1 day ago 7 replies      
This is a pretty sanctimonious article, again attempting to push the agenda that all new technology is good technology, and all resisters are Luddites.

Be cautious of this standpoint, technology suffers from a confirmation bias, we tend not to remember the technology that fails, the technology that lowers quality of life, or the technology that kills people.

Here are some counter-cases for you all:

1. The 1920s-era radiation craze: water energizers, x-ray shoe fitting, etc.
2. Communism
3. Airships

There's nothing wrong with scrutiny, and nothing wrong with taking your time exploring an idea, dealing with its repercussions at a manageable rate. Anyone who says otherwise is trying to sell you something.

For example, we're only just now starting to see countries bring the hammer down on companies that push their employees to be contactable 24/7 without paying them to be on call. Mobile phones have been around for how long? The legislative system has inertia, and sometimes it's worth giving it time to catch up.

anexprogrammer 1 day ago 1 reply      
We also have a happy habit of over-embracing the new. Radium cures, lead in petrol, Heroin, Thalidomide, electropathy (being buzzed by high voltages for the "health" benefits), arsenic pills to increase libido.

It's only some years later we realise whether it was a great idea, a really stupid one or a straight up con. So a little reluctance and wariness, especially in a world where everything is marketed as being a brilliant idea, is probably a very good thing.

Self-driving cars are definitely in the "don't know yet" category, for instance.

A firmly one-sided article promoting a new book.

ktRolster 1 day ago 12 replies      
We are seeing that still today: plenty of irrational opposition to GMOs. Eventually years from now they'll replace the old stuff, and people will wonder what all the fuss was about.
fossuser 1 day ago 0 replies      
This reminds me of a 'better' New Yorker article from a while back: http://www.newyorker.com/magazine/2013/07/29/slow-ideas.

It talks about how anesthesia was quickly adopted, but scrubbing in before surgery to reduce infection was not.

Interesting mixture of cultural issues and how people behave.

brendoncrawford 1 day ago 0 replies      
I am sure there were also people who opposed asbestos for home construction. We don't sit here with our hindsight telescopes and call them fearful simpletons. This article suffers from survivorship and confirmation bias.
_nedR 21 hours ago 0 replies      
> The same theme is playing out today as some lawmakers and consumers question the safety of driverless cars, the economic impact of automation or the security of mobile banking

Am I the only one who has serious reservations about mobile banking? It completely undoes 2FA (I hear banks in the US don't use 2FA, but it is mandatory for all online transactions in India).

I mean if you lose your phone you lose your bank a/c. Your Android phone is highly insecure. Microsoft got flak for dropping XP support after a decade; your typical Android phone stops receiving updates after 2-3 years. And since in the mobile security paradigm the user is just another security threat to be mitigated, there is nothing you can do about it (short of doing a risky root of your phone, which btw would stop many banking apps from working). Already there are viruses in the wild which are rooting millions (yes, millions) of unpatched phones. http://www.cmcm.com/blog/en/security/2015-09-18/799.html

Right now this malware is only looking for low-hanging fruit like Steam accounts, credit cards, and pushing ads. It's only a matter of time before they go after your bank a/c.

pkaye 1 day ago 0 replies      
There is a great book "A History of the World in 6 Glasses" that talks about how coffee and other drinks were discovered and spread around the world. Fascinating to read.
akeruu 1 day ago 0 replies      
I find a recent video by Veritasium [0] very much relevant here. As stated, cognitive ease makes us wary of new or unknown things. It is not simply fear of the unknown; we literally take the shortest path to something we know, hindering ourselves from new discoveries.

[0] https://www.youtube.com/watch?v=cebFWOlx848

pipio21 1 day ago 0 replies      
Not true. Some humans did, some did not.

There are always enthusiasts as well as critics of any new tech.

I have in my house big ads for commercial products ("With cocaine!!" and "Heroin!") along with nuclear stuff from when it was all the rage and we did not understand the secondary effects of those things.

For me this is a PR propaganda article in a Washington newspaper (the political center of the US) meant to discredit those that oppose, for example, GMOs, or to paint those that want control over any new tech as lunatics.

If you read it carefully you can identify the tone: "look, those changes improved the lives of those that opposed them, so governments have to do what we tell them is better for them and favor the new changes" (changes that will make the people who paid for this fluff article richer).

Nomentatus 1 day ago 0 replies      
Coffee caused a lot of harm to health right through maybe the late sixties, because people tended to use it in the evening and even right before bed, not realizing how that might be messing them up. My parents' habits at that time were typical: the later it got, the more likely they were to have a cup of coffee in their hand.
mrlyc 1 day ago 0 replies      
Extending the list:

9) New technology can be very expensive. For example, a colleague's father purchased a DVD player in 1998 that cost $3,000 in today's dollars.

10) New technology often has bugs. After Microsoft releases updates, I always wait a few days to see if there are any reports of problems before updating Windows (I run Windows 7 so I can still do that).

jejones3141 23 hours ago 0 replies      
The author has an awful lot of faith in government--with added expertise it may just move from being slow and reactionary to being fast and reactionary. Government is as much at risk of losing power and influence because of innovation as incumbent businesses.
mindcrime 1 day ago 0 replies      
Sounds interesting, and I kinda want to read the book. But... this also sounds a little bit rehashed, as it seems to cover similar ground to what Geoffrey Moore covered in Crossing the Chasm[1], or what Everett Rogers discussed in Diffusion of Innovations.[2]

Nonetheless, I think I'll buy this book and read it, just to see if there's any kernel of novel insight there. After all, for an innovator / entrepreneur, this is one of the most crucial issues out there.

[1]: https://www.amazon.com/Crossing-Chasm-Marketing-High-Tech-Ma...

[2]: https://www.amazon.com/Diffusion-Innovations-5th-Everett-Rog...

drawkbox 1 day ago 1 reply      
Drones for commerce/shipping, information, and fun definitely fall into this category. So many benefits, yet so many people against them. Probably the same with self-driving cars, yet we get on planes that are largely on autopilot.

In the end, humans as a whole change very slowly until you get to 50-60% support/usage.

jjgreen 1 day ago 0 replies      
Parmentier convinced a suspicious French public of the benefits of potatoes by surrounding them with armed guards.


studentrob 1 day ago 0 replies      
> 8) Innovation is not slow, linear or incremental but the government doesn't realize that.

The government knows tech moves fast. The issue is mostly #7,

> 7) Technologists often don't think about the impact their inventions have on society.

Public support for new tech sometimes moves slowly.

Politicians' heads are on the chopping block for anything the public feels they do wrong. One result of that can be slower support of new technology.

There is a societal discussion happening about self driving cars, as car companies have mentioned [1]

[1] https://youtu.be/a7mxrlDHv2E?t=1m2s

khedoros 20 hours ago 0 replies      
> But coffee took much longer, centuries longer, to catch on in Germany, France or England, where people were hooked on beer, wine and tea, respectively.

From what I understood, coffee was common in England before tea was, and tea was first introduced as another option within established coffeehouses.

exabrial 1 day ago 2 replies      
I don't care what you say. I want a damned gigabit ethernet port on my Macbook Pro. There is simply _not enough wireless spectrum available_ in an urban environment for us all to have full gigabit connections without dropped packets and retransmissions.

And there is nothing wrong with the standard headphone jack.

Apple's new Dongle Driven Development methodology sucks.

patcheudor 1 day ago 1 reply      
Articles like this, and frankly even the statistics on the safety of autonomous vehicles, are a red herring distracting us from where the focus needs to be in the development of this technology. Consider that if the same life-saving autopilot technology Elon Musk is pushing were rolled out as a backup, rather than a primary control, and turned on by default on every Tesla, far, far more lives would be saved than by allowing either full-time autopilot or a full-time human in control.

That's right: keep the driver engaged, because we know the outcome of the EULA whereby people swear they'll pay attention when autopilot is enabled. They won't; they can't; it's not how the human brain works. Rather, autopilot should function as a backup to the human driver in the near term as the technology is developed. If it senses a pending collision, it kicks in. Simple. A computer won't disengage because it's not in control; it will always be vigilant as a backup to the human. But a human will never be a backup to a computer; we aren't built that way.

But you might say: "Oh, Mr. Smarty Pants, if the driver knows there's an autopilot backup, wouldn't the driver just let it take over?" Well, there's an easy solution to that. Just like a computer can be taught to drive a car, it can also be taught to sense when the driver isn't paying attention. It can kick off the radio, turn off the AC, sound an alarm, or even just pull the car safely to the side of the road.

The little experiment we are currently playing with everyone's lives must be better managed. As a security researcher who's uncovered thousands of bugs over the course of the last two decades, I can say without question that the code Tesla or anyone else in this space is producing is not of sufficient quality for a life-critical system. Where are the independent lab certifications? Where's the university research? It's not there; it's too early in the game, and that's why people need to set their egos aside and do the right thing. Computers at this stage of the game are for backup, not primary control.

abtinf 1 day ago 2 replies      
"Throughout the centuries there were men who took first steps down new roads armed with nothing but their own vision. Their goals differed, but they all had this in common: that the step was first, the road new, the vision unborrowed, and the response they received -- hatred. The great creators -- the thinkers, the artists, the scientists, the inventors -- stood alone against the men of their time. Every great new thought was opposed. Every great new invention was denounced. The first motor was considered foolish. The airplane was considered impossible. The power loom was considered vicious. Anesthesia was considered sinful. But the men of unborrowed vision went ahead. They fought, they suffered and they paid. But they won." - Ayn Rand, The Fountainhead
Illniyar 1 day ago 5 replies      
Coffee, sure. Some still oppose it. But refrigerators? I doubt it, unless you were an ice man. I can think of nothing less controversial than keeping your food from spoiling.
pessimizer 1 day ago 0 replies      
The post-Bezos Post's stories are terrible, but their headlines are worse. There is absolutely no support given in this article for the premise that "humans" in general opposed coffee and refrigeration; so unless they were going for the weakest form of that ambiguous sentence ("at least two humans once opposed coffee and refrigeration"), using this premise as a springboard to diagnose all people against GM foods, AI, and the "gig economy" as falling into a historically predictable cognitive bias is just pop science garbage.

How about this as a headline: Some humans welcomed coffee and refrigeration, some didn't, and during their respective introductions to any particular culture, most people were unaware of the existence of either product or didn't feel like they knew enough about either of them to have an opinion: why all humans don't unanimously agree on everything immediately, or ever.

dredmorbius 1 day ago 0 replies      
As a balance to the optimism of this piece (it's not clear whether Juma's book is similarly technotopian, and his publishing history[1] suggests some temperance), there is Michael and Joyce Huesemann's Techno-Fix, which looks at some of the negative implications of technology, central of which is the law of unintended consequences: it's simply impossible for all the implications of a technology to be known in advance.

Comments here mention many of the false starts in technology: concepts which were overhyped, or which proved disastrous. Powerful new substances have long been used in medicine: herbs, minerals and elements (notably mercury), coal, coal tar, and petroleum, electricity, magnetism, and radiation. Some successfully. Many not.

There are ideas long hyped which have failed to materialise, particularly individual air transports, jet packs, and flying cars. But also more viable concepts such as personal submersibles. Yes, they exist. No, they're not commonplace.

And there are entire societies and cultures which have emerged based on the principle of being late adopters of the technology curve, probably most notably to Americans, the Amish.

I disagree with Juma's assertion that technology progresses at an accelerating rate. There are technologies (fire, the wheel, automobiles, airplanes, railroads (and in particular railroad brakes)) which show a distinct development curve: slow emergence, accelerating takeoff, inflection point, stable (perhaps slowly improving) higher bound. Trains today use a brake developed in the 1880s. Automobile and aircraft patent issues reached a peak in the 1920s, and in the case of automobiles, actually fell afterward. Information technology is among the odd men out, for reasons an ontology of technological mechanisms might help illustrate[2], but even it shows pronounced limits.

Google have recently announced an application of their AI technology in managing power within datacenters.[3] The multi-millionfold increase in computing power from 1970 to today has resulted in ... the ability to shave 15% off electrical consumption. Given a Moore's Law[4] doubling of compute capabilities and efficiencies every 18-24 months, that's roughly 3 months of increased chip efficiency gains, and a one-time benefit. The Jevons Paradox also suggests it won't result in actual reductions in energy use, but an increase.



1. https://www.worldcat.org/search?q=au%3AJuma%2C+Calestou&qt=r...

2. See: https://ello.co/dredmorbius/post/klsjjjzzl9plqxz-ms8nww Paired with information technology, I'd include cities, transport and communications networks, and trade networks.

3. http://www.csmonitor.com/Technology/2016/0721/Google-goes-gr...

4. Or Wright's Law, see J. Doyne Farmer.

rm_-rf_slash 1 day ago 0 replies      
I'm reminded of a section of Edward Bernays' "Crystallizing Public Opinion," where he argued that people are inherently tribal and assume identities for themselves, which, by necessity, requires that they also view their identity as being not a part of the other side.

It's like the Dr Seuss story where one guy gets a star tattoo, then everyone wants stars, then one person wants to be different so they remove their tattoo, then everyone wants to be different so they remove theirs, and it continues back and forth.

I think the perspective to keep in mind about this piece is not that people are afraid of new things, but rather, different things. When I lived in the city, I couldn't imagine life without walkable access and being in the middle of the action. When I moved to the suburbs, I couldn't imagine life without abundant space and tranquility. Now I'm back in the city. Technological advancement had nothing to do with those perceptions, but my resistance was still there, on both sides.

ObeyTheGuts 16 hours ago 0 replies      
If you drink coffee you are a broken individual.
dizzy3gg 1 day ago 0 replies      
"you know they refused jesus too" - bob dylan
ascotan 1 day ago 1 reply      
paywall fail
webtechgal 1 day ago 0 replies      
Why do we often hate new stuff?

Original piece tl;dr.

My simple, stupid take:

A combination of the law of inertia + fear of the unknown.

smegel 1 day ago 0 replies      
Especially new operating systems from Microsoft.
Finnucane 1 day ago 2 replies      
He's wrong about coffee and refrigerators. It's true that some places tried to ban coffee due to its stimulant effects, but coffeehouses spread through Europe pretty quickly in the 17th century.
carlesfe 1 day ago 0 replies      
Funny, just a few days ago I wrote on the topic. Please allow me a self plug: http://cfenollosa.com/blog/living-in-a-disrupted-economy.htm...

In summary, Luddism sells, though it's irrational and history usually proves it wrong. Technology mostly changes society for the better in the long term, even though it needs to disrupt old business models first.

Valve Handbook for New Employees (2012) [pdf] valvesoftware.com
342 points by taigeair  1 day ago   192 comments top 24
Det_Jacobian 1 day ago 5 replies      
So Valve is definitely idealized by people outside (and inside) the game industry, but much less so by people who have worked there. The flat structure is sort of a pipe dream that leaves nobody actually in charge of important decisions, while hiding a de facto power structure that certainly exists despite being non-explicit.

The company has transitioned to being the company that owns Steam as a platform (including and subsuming the Vive), and not much else. People who join Valve expecting to develop games there end up fired in less than a year, which is surely destructive but also serves a real purpose of perpetuating the Valve culture. A major shakeup is unlikely to happen; Gabe seems unable to decide whether he wants to be a super-public figure who is the face and decision body behind the whole company, or whether he wants to shrink into a hole and rub shoulders with tech legends hoping to determine the future of everything. The company will make money for a while, but they are open to platform disruption, even in the VR space where they have (more than Oculus) tried to be the open platform. Eventually the market will figure out that they don't need to pay Steam 30% of sales to host files on a server. If this view is right, Steam is about to find out that the PC world wants to be even more open than they are offering. Of course, the board of investors will certainly find a way to use Valve's intellectual capital regardless of whether they stay on top.

gavanwoolery 1 day ago 2 replies      
I researched Valve quite a bit before applying there (I did not get in, but one of their senior team members wrote me a nice message). Some interesting bits I found:

- The average engineer there makes at least $400k/year with bonuses, although it could be much more (or less, if they somehow wind up in a bad project). IIRC Valve makes around $2m of profit for every head in the company (they only have ~300 employees or so).

- In spite of the seemingly ideal flat organization, many people find themselves unhappy there. One former employee hints at some reasons here: http://richg42.blogspot.com/2015/01/open-office-spaces-and-c... From other employees, I have heard that the flat organization and bonus structure leads to unnecessary drama/rivalry, poor communication (or even fear of communication), lack of innovation (creating your own project is discouraged, and teams have a financial incentive to stick with projects that pay the highest bonus), etc. This is not to say Valve is a "bad" place to work; I am sure it beats the hell out of many other job environments, even ignoring the excellent pay.

- If you do want to work there, you will probably have had to ship multiple titles AND be recommended by an existing team member (alternately, writing a popular mod is equally effective). Typically, applying through their website will not get you a job; they usually hire by actively looking through a pool of candidates they already know of. They also look for candidates who are good at producing high amounts of customer value; they care more about this than technical ability.

sebastianbk 1 day ago 6 replies      
As a Microsoft employee, I am so happy that we got rid of stack ranking a few years ago. It encourages bad behavior and goes against helping your coworkers, with whom you are essentially competing for compensation. I am surprised to see that a company like Valve, which seems to be held in high regard by many developers in the industry, still operates with this compensation system. It's a system of the '80s if you ask me.
MIKarlsen 1 day ago 13 replies      
I find the whole "you are a person who spends every waking hour optimizing yourself to become the best YOU you can be" frame of mind very intimidating. Maybe it's because I'm not American, but even though I like to work on complex issues, I also like a 9-17 job with a decent income, and the ability to go home and relax when I'm not working. And by relax, I don't mean working on side projects, doing volunteer work or earning a second degree in something, but playing board games, working out/running or even just watching mind-numbing TV. I feel like the "100% dedicated 100% of the time" thing has become the only way to really make it in tech life.
stevebmark 1 day ago 1 reply      
This is a wonderful read, thank you for sharing it. I'm genuinely curious about this, if any Valve insiders have insights:

> That's why Valve is flat... You have the power to green-light projects. You have the power to ship products.

Is this really the case? On paper this sounds great. I've worked at companies that have a similar motto. Power to the employees, power to the developers. But it usually just means the hierarchy is unspoken and assumed. No structure means no one to go to with disputes about your job, problems with co-workers, etc. It can be worse than a traditional hierarchy because everyone sells the "flat" motto to newcomers, but as soon as you join you learn the hidden politics. The cognitive dissonance can be soul crushing.

So is Valve truly flat? Are there any examples of relatively new employees spinning up teams and shipping unique ideas? If it works, how do you handle inter-personal employee issues?

Thaxll 1 day ago 2 replies      
From what I heard from ex-employees working at Valve, it's not what you think it is, i.e. it's very political.
maho 1 day ago 3 replies      
We value T-shaped people. That is, people who are both generalists (highly skilled at a broad set of valuable things, the top of the T) and also experts (among the best in their field within a narrow discipline, the vertical leg of the T).

This is a nice metaphor. I try to be T-shaped, but I wonder how useful I am becoming... My expertise in high-precision mass spectrometry is not something companies are looking for....

elthran 1 day ago 2 replies      
Is there any change here, or is this just the fairly common repost of this?
sboselli 1 day ago 1 reply      
Yes, it would be really interesting if someone could get a person from Valve to give us an idea of how this all looks as of today.

Can anyone make it happen?

ixtli 1 day ago 1 reply      
I think anyone who likes what they see here needs to honestly ask themselves how they believe this applies to their world. Whenever this PDF is posted on HN, I am disappointed to see comparisons between how people perceive Valve and their own companies. Does your company have total equity > 2 billion US dollars? Probably not. Much of this flows from the ability to invest in and incentivize your employees at this level.

As another aside I do not think there is causal evidence that Valve became successful because of these ideals. On the contrary they seem like the result of success.

rsp1984 1 day ago 2 replies      
The problem is that in almost every group of people lacking a formal hierarchy, an informal, unofficial hierarchy will start to emerge, with a higher likelihood of manipulators, sociopaths or other political climbers on top. I'd rather not be part of one of those.
swanson 1 day ago 0 replies      
Let me start by saying that I love reading these kind of handbooks from various companies.

One thing that stuck out to me was the multiple mentions of "raising the issue" for tough topics (compensation issues, feeling uncertain) -- who is the issue "raised" to if there is zero hierarchy and I have no manager/adviser/counselor? Does the flat structure only apply to "individual contributors", and is there a more traditional HR/operations structure that is not shown?

bishops_move 1 day ago 0 replies      
Even if it's not accurate to Valve of today, it's a helluva reach goal for most dev-oriented companies.
Xyik 1 day ago 0 replies      
How is pay handled if there is no hierarchy? Who decides how to spend the company's money? This type of recruiting-marketing is almost as bad as Google's.
denzell 1 day ago 0 replies      
Keen to know how they are different now.
Pinatubo 1 day ago 2 replies      
The "flat structure" at Valve reminds me of the "unlimited vacation" policy at Netflix. It sounds liberating, but it also offers the potential for employees to be judged by rules that are no longer clearly spelled out.
ebbv 1 day ago 2 replies      
This is quite old, and paints an overly rosy picture. After this was published a lot of the SteamBox project people got canned and were less impressed with the reality of Valve:


perseusprime11 1 day ago 1 reply      
What worries me about this handbook is that it rarely gets updated. For a healthy culture to sustain itself, rules have to evolve with all the new employees who join Valve.
esaym 1 day ago 0 replies      
I'm going to assume they don't have any remote software dev positions?
cdevs 1 day ago 0 replies      
I wish my company was this organized
branchless 1 day ago 1 reply      
Cults focus on "we/us/them" not you doing what you feel is best for you.
roddux 1 day ago 0 replies      
Honestly I only skim read this wondering if there was a policy not to talk about Half Life 3! Interesting read, though.
Goodbye, Object Oriented Programming medium.com
367 points by ingve  2 days ago   326 comments top 55
overgard 2 days ago 7 replies      
Programming paradigms are a lot like political parties -- they tend to lump a lot of disparate things together with a weakly uniting theme. You don't need inheritance for encapsulation to be useful, for instance.

The problem is, sometimes you agree with only a small part of the platform. None of these things individually are terrible ideas if tastefully applied, but it all gets clumped together into one big blob of "the right way to do things" (aka object oriented programming). I blame languages like Java for selling certain ideas as The Right Way, and building walls that intentionally prevent you from using other techniques from different schools of thought ("everything is an object, no you can't write a function outside of a class").

I think the functional paradigm has a lot of good ideas too, but in my experience they're just as annoying if they're strictly and tastelessly applied in the same way OOP principles often are.

Don't be a "functional programmer", just take the ideas that are useful.

I tend to prefer languages and tools that adopt good ideas without promoting a single specific way of thinking.

millstone 2 days ago 5 replies      
Let me try to list the objections:

1. Inheritance creates dependencies on the parent class

2. Multiple inheritance is hard

3. Inheritance makes you vulnerable to changes in self-use

4. Hierarchies are awkward for expressing certain relationships

All true. But likewise, functions introduce dependencies on their arguments, and data structures introduce dependencies on their fields. You must consider your dependencies carefully when designing any software interface.

The task of software architecture is not to go around categorizing everything into taxonomies. Inheritance is just one tool in your software interface toolbox.

5. Reference semantics may result in unexpected sharing

This has more to do with reference semantics than objects.

6. Interfaces achieve polymorphism without inheritance.

Interfaces long for inheritance-like features. For example, see Java 8's introduction of default methods, or the boilerplate involved in implementing certain Haskell typeclasses.
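For readers who haven't met them, a minimal sketch of a Java 8 default method (the `Greeter` names are invented for illustration): shared behavior lives on the interface itself, so implementors get it without extending a base class.

```java
interface Greeter {
    String name();

    // Default method: behavior shared by all implementors,
    // with no class inheritance involved.
    default String greet() {
        return "Hello, " + name() + "!";
    }
}

class EnglishGreeter implements Greeter {
    public String name() { return "world"; }
}

public class DefaultMethodDemo {
    public static void main(String[] args) {
        Greeter g = new EnglishGreeter();
        System.out.println(g.greet()); // prints "Hello, world!"
    }
}
```

`EnglishGreeter` only supplies the one method it cares about; `greet()` comes for free, which is exactly the inheritance-like convenience the comment is pointing at.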

Illniyar 2 days ago 11 replies      
I think the functional vs OO debate is being done with a very narrow point of view.

Functional came before OO, and there are reasons why OO became much more popular: it had a much better, easier and simpler solution to the most common problems of the '90s and early 2000s, namely handling GUIs and keeping single-process app state (usually for a desktop app).

OO fares much worse in today's world of SaaS and massively parallel computing.

Frankly, I think the discussion would be much better if we debated the merits of each paradigm in the problem domain you are facing, rather than blindly bashing a paradigm that is less suited to your problem domain.

For instance, I have yet to see an easy and simple-to-use (and as such maintainable) functional widget and GUI library.

ryanmarsh 2 days ago 1 reply      
The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."

Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.

On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened.
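The koan can be made concrete. A sketch in Java (names are mine): a closure capturing mutable state behaves like a tiny one-method object, carrying private state that nobody else can touch.

```java
import java.util.function.IntSupplier;

public class ClosureDemo {
    // A closure as a "poor man's object": the lambda captures `count`
    // (boxed in an array, since Java captures must be effectively final)
    // and that hidden state travels with it, like a private field.
    static IntSupplier makeCounter() {
        int[] count = {0};
        return () -> ++count[0];
    }

    public static void main(String[] args) {
        IntSupplier counter = makeCounter();
        counter.getAsInt(); // 1
        counter.getAsInt(); // 2
        System.out.println(counter.getAsInt()); // prints 3
    }
}
```

Going the other way, an object with a single `getAsInt()` method and a private `count` field would be indistinguishable to callers, which is the point of the koan.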


saosebastiao 2 days ago 1 reply      
By some odd cosmic anomaly, I learned programming almost exclusively in functional programming environments. My first language was R, and I subsequently learned Scheme, Clojure, OCaml, and Haskell, and currently program primarily in Scala. Never having gone through the OOP trend, and realizing that my programming experience happened to be du jour, gave me some undeserved confidence. So much so that I would regularly make fun of all of the Java drones at my work for their insistence on using such an inferior paradigm.

Then, due to some directions I was taking at my job, it became very valuable to run millions of simulations of warehouse and transportation operations. After months of pain, I discovered object-oriented programming (luckily I didn't have to abandon my language of choice to get it). Comparatively speaking, there wasn't a functional design pattern I could find that came anywhere close to the simple elegance of OOP for modeling people, vehicles, warehouses, etc.

It's almost as if different ideas have different virtues in different domains.

skywhopper 2 days ago 0 replies      
OO is just a way of organizing code. You can simulate quite a bit of it in non-OO languages. But a lot of the problems are universal.

OO lets you abstract away a lot of detail, but locks you into some rigidity that doesn't map perfectly onto the real world. It's a leaky abstraction. But so is _everything_ real that we attempt to represent in a computer or in any formal system. Gödel proved this 85 years ago.

Code reuse is entirely possible with OO. The practical difficulties of code dependency management are not unique to OO. Anyone who's ever developed anything non-trivial in Node has seen how insane the dependency tree can get. Every language and platform has its own version of this problem and its own solution. From Windows DLL hell to Ubuntu Snap, from Bundler to Virtualenv, this problem transcends any particular style of programming.

It's good the author is skeptical of the promises of functional programming, but the total rejection of OO concepts as useless reveals that ultimately the author didn't really learn anything useful. The author fails to address how abandoning OO solves any of the problems he claims to have. "Ew, that's gross!" is not a useful analysis.

saticmotion 2 days ago 1 reply      
My biggest gripe with OOP is the Oriented part. If you design your entire codebase around OOP you will run into architectural problems, especially with so-called cross-cutting concerns[0]. The way I tend to write code is to just start with my main function and write whatever procedural code I need to solve my problem. If I start seeing patterns, in my data or algorithms, that's when I start pulling things out. I have heard this approach called "Compression Oriented Programming", but I don't care much what people want to call it.

This approach doesn't mean no objects, ever; it means objects only when your problem actually calls for them. Likewise, you will also end up with parts that are purely functional, data-oriented, etc., but they will be used where they make sense.

On top of that, I'm also using pure C99. It does away with a lot of the fluff and cruft of other languages. In the past I used to try to fit my problems into whatever fancy language features I was offered, which cost me a lot of time analysing. Now I just solve my problem.

Mind you, C is not a perfect language. There are features I wish it had. But for my approach to programming it is the most sensible to use, apart from maybe a limited subset of C++ (such as function overloading and operator overloading for some math).

[0] https://en.wikipedia.org/wiki/Cross-cutting_concern

whack 2 days ago 2 replies      
Most of the problems he brings up are already addressed in major OOP languages.

1) Inheritance can be confusing and messy.

Yes, hence the advice: Prefer composition over inheritance. Instead of having B inherit from A, declare an interface I, and have both A and B implement I. If B wants to reuse A's functionality, it's free to do so through composition, and not through inheritance.

There are some edge cases where inheritance is vastly simpler than composition - mostly when the interface requires you to implement 20 different methods, and there's only 1 method that you really care about changing. Using inheritance here gets rid of a ton of boilerplate, but that's a conscious choice you're making. If you don't like this, just revert to using composition.
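The composition-over-inheritance advice can be sketched in a few lines of Java, reusing the comment's placeholder names `A`, `B`, and `I`:

```java
interface I {
    String describe();
}

class A implements I {
    public String describe() { return "A's behavior"; }
}

// B reuses A through composition: it holds an A and delegates to it,
// instead of extending A and depending on A's internals.
class B implements I {
    private final I delegate = new A();

    public String describe() {
        return delegate.describe() + ", reused by B";
    }
}

public class CompositionDemo {
    public static void main(String[] args) {
        I b = new B();
        System.out.println(b.describe()); // prints "A's behavior, reused by B"
    }
}
```

Callers only see the interface `I`, so `B` can later swap its delegate (or drop it) without breaking anyone; that is the dependency-limiting property the comment is arguing for.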

2) Encapsulation can leak if you write buggy code

Any program can break if you write buggy code. Not sure what the author's point here is. In order to encapsulate your class carefully, either accept immutable inputs, or make deep copies of them. If neither happens to work, warn users that class behavior is undefined if they misuse it. This is what every non-thread-safe class already does anyway: it warns users that if you use them in a concurrent manner, things may break.

More importantly, when dealing with internal state that's created by the class, make it private and ensure no one else can access it. This also serves to encapsulate the internal implementation and algorithm from external users.
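A minimal Java sketch of those two points together, defensive copies of mutable inputs plus private internal state (the `Schedule` class is invented for illustration; `List.copyOf` requires Java 10+):

```java
import java.util.Date;
import java.util.List;

// Hypothetical class illustrating defensive copying: internal state is
// private, and mutable inputs are copied so outside code can't alter it.
public class Schedule {
    private final List<String> tasks; // private, never leaked directly
    private final Date start;

    public Schedule(List<String> tasks, Date start) {
        this.tasks = List.copyOf(tasks);        // immutable copy of the list
        this.start = new Date(start.getTime()); // copy of the mutable Date
    }

    public Date getStart() {
        return new Date(start.getTime()); // copy on the way out, too
    }

    public int taskCount() {
        return tasks.size();
    }
}
```

A caller who mutates the original list or `Date` after construction can no longer reach into the `Schedule`, which is the non-leaky encapsulation the comment describes.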

3) Polymorphism is... not unique to OOP languages?

Yes, using interface-based polymorphism is a good idea, and covers most of what people need. How does this make the argument that we should never use OOP languages?


The author brings up valid points about what to watch out for when coding in OOP. If you read other books like "Effective Java," they bring up the same points as well. But instead of acknowledging the benefits that come with OOP as well, and teaching people how to avoid these pitfalls and write code the right way, the author jumps to an extreme position that OOP languages should be abandoned entirely. Can we please avoid this type of wild overreaction, and pointless jumping from one shiny tool to the next, in a never-ending search for a silver bullet that will solve all of our problems? Because let's face facts: no such silver bullet exists.

stepvhen 2 days ago 0 replies      
In other literature the answer to inheritance is "composition" or "components" rather than "delegate and contain." A nitpick, but I think it better captures the meaning of the method.

Bob Nystrom wrote a very good chapter on composition in his Game Programming Patterns book [1] and is worth reading if you want to program in the OO paradigm.

[1] http://gameprogrammingpatterns.com/component.html

EdJiang 2 days ago 4 replies      
Interesting. I almost thought this was going to be an advertisement for Swift, since I saw this exact argument in a WWDC talk.

Apple calls Swift a "protocol-oriented" programming language, and with the addition of first class value types, tries to solve these problems in their own way.

I'd definitely suggest people frustrated by the problems outlined in this post to check out the Apple talk on protocol-oriented programming in Swift.


mk89 2 days ago 1 reply      
When I read such titles I feel sad.

In 2016 we are still talking about Cobol, which is spread across a relatively niche market and considered a pillar in fields like banking. How can the object-oriented paradigm be considered "the past", or even bad? It is the present and will be the future for at least the next 20 years, considering the billions of lines of code. From a management perspective, such statements are not strong enough to be justified.

I find this sort of article to be just bread and butter for code monkeys: people who learn the most recent paradigm, technology or whatever and think that it's the key to happiness, or people who read for the first time a book like the ones from Bob Martin and feel they already know how to develop good software - or poems, as mentioned somewhere in the book - and list the bad things about other types of software architecture or design or whatever.

kentt 2 days ago 0 replies      
This is just a rant. It's not about object-oriented vs functional. Perhaps it could have been, if it had said how functional programming helps with these issues.

The summary of the article is that programming is nuanced. You can attribute some nuances to OO design.

maxxxxx 2 days ago 2 replies      
Let's wait a few years and we'll see plenty of articles titled "Goodbye functional programming". You can write good and bad stuff with OOP; you can do the same with FP. There is no one-size-fits-all programming style.
GFK_of_xmaspast 2 days ago 2 replies      
The author's beef with encapsulation seems to be that when an object A is used as an argument in the constructor to object B, the latter needs to do a deep copy (as keeping a pointer is not "safe"), which is of course not always possible.

I'm at a loss as to what this has to do with encapsulation, and even less able to understand how any language with user-defined data types is going to be able to avoid it.

vinceguidry 2 days ago 2 replies      
Inheritance is overused in OOP. There are many ways to share object behaviors, inheritance only works well when you expect all objects of both classes to share all behavior except one or two things. Even then, you should investigate dependency injection before reaching for inheritance.

For the example given for the Triangle Problem, the author isn't clear about exactly what behavior is being shared among the classes. The top of the tree, PoweredDevice, gives an indication, but my guess is that there are more responsibilities than just power, these responsibilities aren't being reflected in the domain model as they should be.

Instances of a class share behavior with other instances, it is the state that differs, i.e. the data being stored in the instance variables. In the example hierarchy, the state being stored is left out of the analysis, but it's the first place I would look for a missing domain concept. My guess would be that the most concrete class is going to be models of consumer peripherals, of which instances are intended to represent actual devices.

In this case a copier, which contains both a scanner and a printer, but not an actual discernible model of scanner or printer, would simply inherit from PoweredDevice. That it has this functionality does not mean those classes need to be in its ancestry; that job is better suited to mixins, or injected dependencies.
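One way to sketch that suggestion in Java, with invented `Printer`/`Scanner` interfaces: the copier receives its printing and scanning behavior as injected dependencies rather than ancestors, sidestepping the diamond entirely.

```java
interface Printer { String print(String doc); }
interface Scanner { String scan(); }

class InkjetPrinter implements Printer {
    public String print(String doc) { return "printed:" + doc; }
}

class FlatbedScanner implements Scanner {
    public String scan() { return "page"; }
}

// Copier *has* a printer and a scanner (injected via the constructor),
// rather than inheriting from both and recreating the diamond problem.
public class Copier {
    private final Printer printer;
    private final Scanner scanner;

    public Copier(Printer printer, Scanner scanner) {
        this.printer = printer;
        this.scanner = scanner;
    }

    public String copy() {
        return printer.print(scanner.scan());
    }

    public static void main(String[] args) {
        Copier c = new Copier(new InkjetPrinter(), new FlatbedScanner());
        System.out.println(c.copy()); // prints "printed:page"
    }
}
```

Swapping in a different printer or scanner model is then a constructor argument, not a change to the class hierarchy.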

bsaul 2 days ago 0 replies      
Funny how some people believe software programming is one big problem to solve as a whole, rather than a craft. OO is one tool in your toolbox. A good craftsman doesn't use just one tool; he knows which tool to use for which job.
jhoechtl 1 day ago 0 replies      
Declaring functional programming to be the rescue at the very end of the post is just not right. FP will gain you something for particular programming requirements while being just wrong for others.

Looking back now on 25 years in software development, plain old imperative programming still bought me the most in terms of getting stuff done (banana problem). With a decent set of standardisation and sane language defaults, a mostly imperative approach will get you very far.

Golang hits that sweet spot very decently for me. Missing type generalisations are an impediment from time to time though.

graycat 2 days ago 3 replies      
I find many of the objects in .NET very useful and use them in my code.

Also in my code I define and use some classes.

I like the idea of classes. E.g., in my Web pages, I have a class for the user's state. When a new user connects, I allocate an instance of that class. Then I send that instance to my session state store server. To do that, I serialize the class to a byte array and then send the byte array via TCP/IP. The session state store server receives the byte array, deserializes it back to an instance of the class, and stores it in an instance of a collection class. Works great. It's really convenient to have all the user's state in just one instance of one class. Terrific.

Encapsulation? I don't know what the OO principles say about encapsulation, but it looks useful to me as a source of name scoping, keeping separate any members of two different classes that happen to be spelled the same. So, terrific: when I define a new class, I don't have to worry about whether the names of its members are also used elsewhere -- saved again by some scope-of-names rules.

Actually, I much prefer the scope of names rules in PL/I, but now something as good as PL/I is asking for too much!

But inheritance? Didn't think it made much sense and never tried to use it.

Polymorphism? Sure, just pass an entry variable much like I did in Fortran -- now we call that an interface. Okay. I do that occasionally, and it is good to have.

Otherwise I write procedural code, and the structure in my software is particular to the work of the software and not from OO.

I couldn't imagine doing anything else.

I've seen rule-based programming, logic programming, OO programming, frame-based programming, etc., but what continues to make sense to me is procedural programming with structure appropriate to the work being done. E.g., the structure in a piece of woodworking is different from that in metal working, residential construction, office construction, etc.

ryanmarsh 2 days ago 1 reply      
How about we just say this:

OO solves a set of problems albeit with tradeoffs

Functional solves a set of problems albeit with tradeoffs

There. We can all go back to our tea.

MarkMc 2 days ago 1 reply      
I love using object-oriented design and find it quite odd when I meet seasoned programmers who still don't 'get it'. It feels a bit like meeting someone who says Obama was born in Kenya.

Here's a concrete example of object oriented design:

To understand the problem domain, go to https://whiteboardfox.com and click Start Drawing > Create Whiteboard, then draw something. Play around with different colours, erase some lines, try undo and redo, etc.

Now here is my class diagram for implementing it: https://s1.whiteboardfox.com/s/7762255cabe34643.png

I honestly don't see how you could implement it without object oriented design. Surely it makes sense to have a Diagram class that encapsulates a list of strokes and pictures? Isn't it easier if the Diagram class exposes addStroke() and removeStroke() but does not reveal how it's implemented? And shouldn't I have a separate view class which encapsulates how much zoom and pan the user has applied to the diagram?

Could you implement Undo and Redo actions so neatly without a command pattern?

And isn't it lovely that the ViewController can switch between different modes (Pencil Mode, Eraser Mode, etc) without needing to know anything except a small interface that is common to all modes?

I actually get a little thrill when I think about how cleanly this design addresses the requirements. Could I get that feeling if this were implemented in a functional programming style?
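For what it's worth, the undo/redo machinery just described is the classic command pattern; a minimal Python sketch (names are illustrative, not taken from the actual Whiteboard Fox code):

```python
class Diagram:
    def __init__(self):
        self.strokes = []

class AddStroke:
    # One undoable command: it knows how to apply and reverse itself
    def __init__(self, diagram, stroke):
        self.diagram, self.stroke = diagram, stroke

    def do(self):
        self.diagram.strokes.append(self.stroke)

    def undo(self):
        self.diagram.strokes.remove(self.stroke)

class History:
    def __init__(self):
        self.done, self.undone = [], []

    def execute(self, cmd):
        cmd.do()
        self.done.append(cmd)
        self.undone.clear()  # a fresh action invalidates the redo stack

    def undo(self):
        cmd = self.done.pop()
        cmd.undo()
        self.undone.append(cmd)

    def redo(self):
        cmd = self.undone.pop()
        cmd.do()
        self.done.append(cmd)
```

Worth noting for the FP question: the same structure falls out of a list of immutable diagram snapshots plus an index, so "could I get that feeling in FP" is at least a fair fight.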

discreteevent 2 days ago 1 reply      
He quotes Joe Armstrong's criticism of OO, but Seif Haridi later corrected him, leading Armstrong to say:

"Erlang might be the only object oriented language because the 3 tenets of object oriented programming are that it's based on message passing, that you have isolation between objects and have polymorphism."


StreamBright 2 days ago 1 reply      
Thanks for writing this up. I work with OOP programmers a lot and I am tired of explaining the problems with OOP over and over. This article saves me that effort.
stillworks 1 day ago 0 replies      
Was there really a need for this article?

What if every Java developer who discovered the immense cerebral gratification in Scala decided to write an article with the theme "Aww shucks... Frick You Java, I wasted so much time on you damn it !!! I am going to Scala and I am never coming back."

Also, the examples the author gives may be weak. Inheritance breaks my code? If it's code I don't own, I use dependency management. If it's code within the same team, then code review before commit?

The reference-owning example for encapsulation assumes references are globally held?

(PS: Just using Java/Scala here, but feel free to vote me down if the experience is different with other language pairs. Oh, also, I am having dirty dreams of leaving Java and indulging in Scala's monads, as I recently discovered I wasted my time on Java.)

elgoog1212 2 days ago 0 replies      
OO is one of those things best used in strict moderation. Unfortunately, most people lack moderation, and strive not to necessarily solve the issue, but to show everyone just how smart they are. As a result we get object hierarchies 10 layers deep, and 1000-line source files (or worse, dozens of 100-line source files) which don't do anything meaningful.
pfultz2 1 day ago 0 replies      
C++ had already moved past OOP when it was standardized in the 90s, with a standard library built around regular types and generic programming. Here is Sean Parent's talk 'Inheritance Is The Base Class of Evil', which discusses some of the same issues with OOP and the solution in C++:


aibottle 2 days ago 17 replies      
God damn it, I'm beginning to hate Medium. Just another bullshit article. When I read this dipshit's description: "Software Engineer and Architect, Teacher, Writer, Filmmaker, Photographer, Artist" -- great. And you want to tell me that OO is dead and functional is the only future? Fuck off.
vlunkr 2 days ago 1 reply      
The king is dead, long live the king! Thinking that a new framework/language/paradigm will solve all your problems is naive. The author should know that if they've truly been programming for decades, as stated in the article.
Kequc 2 days ago 2 replies      
OO is treated almost like a religion by some people. It's useful to be able to create instances of some things, but where OO fails is the "oriented" part. Code is much easier to maintain and understand when written in a functional style.

If something doesn't need to be an instance, it probably shouldn't be one.

This article articulates a lot of problems I've noticed in OO code, and I think it would be foolish to ignore it. My life as a developer became 10 times easier, maybe even more, once I realised some of these same pain points and pivoted.

In school I was taught all about OO coding practice, and I think he's right: they were wrong.

finavorto 2 days ago 0 replies      
I'm at the point now where I just refuse to read any Medium post titled "Goodbye, {x}".
Artlav 2 days ago 5 replies      
I wonder if someone has invented "modular" programming yet.

Judging by the UNIX paradigm of the command line tools, the idea is clearly out there.

Instead of objects, do modules - things that do one thing, and carry minimal dependencies.

You need a banana? Grab the banana module. You need a banana with ice-cream center? Feed the "center" callback of the banana module with "ice-cream" instead of "banana intestines".

You need a copier? Grab both printer and scanner.

Is there any existing language that I'm describing right now?
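The "feed the center callback" idea maps directly onto higher-order functions, which most languages already have; a toy Python sketch (names invented for illustration):

```python
def make_banana(center=lambda: "banana intestines"):
    # A "module" exposing one thing; the filling is pluggable via a callback
    return {"skin": "banana skin", "center": center()}

# A banana with an ice-cream center: swap the callback, not the module
ice_cream_banana = make_banana(center=lambda: "ice cream")
```

The copier case is the same move: a `make_copier(scan, print)` function that takes the two capabilities as parameters instead of inheriting them.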

TheLarch 2 days ago 0 replies      
Lisp Weenie assertion that "OO" is a feature list, not a solution in and of itself, and that CLOS is embarrassingly better than the OO in C++/C#/Java.
ahmedfromtunis 2 days ago 0 replies      
I've enjoyed OOP more than anything else. The real issue here is that these pillars are but low-level building blocks. To fully take advantage of the OOP paradigm, you need to take a look at DESIGN PATTERNS. They'll solve (almost) any issue mentioned here. That is, if you know how to apply them, the right way, at the right time (just like everything else in this damn world).
davidad_ 1 day ago 0 replies      
The specific problem described in the "encapsulation" section is solved in modern C++ (11/14) by std::unique_ptr. While this may seem like a trivial quibble, I think it's part of why I find modern C++ quite tolerable despite disliking almost every other "object-oriented" language.
sebastianconcpt 2 days ago 1 reply      
It seems to me that my OOP is so functional that I didn't feel these issues that badly (it's true that I actively avoided them in my designs), and at the same time it sounds like falling into them is typical of not-so-great OOP programmers.

It's curious to see this OOP hate coming from someone who got a chance to work in Smalltalk.

halayli 2 days ago 0 replies      
OO paradigms are not magical and they have a learning curve. They can look simple and obvious, but knowing how to abstract your problems using these techniques is not simple, and it's what differentiates a good programmer from a bad one.

It's easy to complain about them but in most cases I see it's a misuse issue.

dhab 1 day ago 0 replies      
As someone who recently started learning FP in Haskell, I think one cannot compare OO to FP by looking at individual parts. I find that while both have strengths and weaknesses, in FP the sum of the parts is much greater than in OO for comparable energy invested, at least in problem areas where performance matters but is not too critical.

That has been my cumulative verdict so far learning FP; perhaps this view will sway one way or the other as I learn more.

adamnemecek 2 days ago 1 reply      
I think that fundamentally OOP and FP are both necessary for any language that wants to run relatively close to the metal.

The reason is that a computer is fundamentally all about state, and you need something to manage that state. This is the antithesis of FP. OOP manages state somewhat more gracefully.

ern 2 days ago 0 replies      
We keep getting caught in theoretical cesspits. Perhaps the way forward is to reduce our focus on philosophical discussions of programming paradigms, and to iteratively figure out, using well-defined metrics and outcomes, how best to develop software (and to define those metrics and outcomes in the first place). Taste, one-size-fits-all trends, and hype are what drive the industry, and we tend to ignore, or hopelessly lament, the (unmeasured) waste that results from them.

And then, once we have hard data, we should have the courage to follow the data, even if it means throwing away our cherished pet paradigms and methodologies.

prashnts 1 day ago 0 replies      
I find the `Printer + Scanner ~= Copier` example poorly designed.

Sure, the Copier has both a Printer and a Scanner; however, in practice, the "Start" function on a Copier differs from either: it starts the scanner and forwards the result to the printer. It might also print multiple copies.

Point being, the `start` functionality here differs from both Printer and Scanner; hence, the `start` method shouldn't be inherited.

mempko 1 day ago 0 replies      
This person has been doing class-oriented programming for years and calling it OO. He will now try structured programming with recursion and call it FP...
Yokohiii 2 days ago 0 replies      
Reading quickly through the article, I was already prepared for OP to shift to FP. OP should assess his own fallacies and not blame imperfect concepts. One can probably improve certain things by switching paradigms, but we as humans fail at conception, communication and complexity (although we can brute-force the latter). There is no language that can solve these problems sanely, and it is questionable whether any ever could.
skocznymroczny 2 days ago 0 replies      
Scanner and Printer can be made interfaces; then Copier can hold references to IScanner and IPrinter. It doesn't have to care about their concrete implementations: as long as it's something that has a scan() method and a print() method, for all the copier cares it doesn't have to be a powered device at all. It could be a cloud printer and a scanner located 1000 miles away.
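A sketch of this in Python using structural interfaces (typing.Protocol); names like IScanner and CloudScanner follow the comment's hypothetical example, not any real API:

```python
from typing import Protocol

class IScanner(Protocol):
    def scan(self) -> bytes: ...

class IPrinter(Protocol):
    def print_doc(self, doc: bytes) -> None: ...

class Copier:
    # Depends only on the two interfaces, not on PoweredDevice or any concrete class
    def __init__(self, scanner: IScanner, printer: IPrinter):
        self.scanner, self.printer = scanner, printer

    def copy(self) -> None:
        self.printer.print_doc(self.scanner.scan())

class CloudScanner:
    # Structurally satisfies IScanner; could live 1000 miles away
    def scan(self) -> bytes:
        return b"remote-page"

class MemoryPrinter:
    # Structurally satisfies IPrinter; records output for inspection
    def __init__(self):
        self.printed = []

    def print_doc(self, doc: bytes) -> None:
        self.printed.append(doc)
```

Note that neither concrete class declares any relationship to the protocols; satisfying the method shapes is enough, which is exactly the decoupling the comment is after.
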
Waterluvian 1 day ago 1 reply      
My experience with ReactJS has been the first time I felt I had the perfect balance of OOP and FP.

The components are so well defined as objects, since they have the luxury of being tangible. But using them in a pure manner with zero local state makes them so easy to reason about and reuse.

More can be said about Redux but I'll leave it there.

juliangamble 2 days ago 0 replies      
This article makes the same argument but with better reasoning: http://www.smashcompany.com/technology/object-oriented-progr...
matchagaucho 2 days ago 0 replies      
I would rather continue using the functional features of Java7 and C# than switch entirely to Erlang/Scala.

Interfaces, the final keyword for immutability, and anonymous methods are powerful and flexible enough to move beyond the constraints of pure OOP.

stevesun21 1 day ago 0 replies      
A programming paradigm gains mass acceptance not because people hate its predecessor, but because the new one is more intuitive and useful. If you hate OOP so much, then show how it is counter-intuitive compared to FP. Just complaining makes you sound too emotional; as a SE you should know how to analyse objectively.

FYI, in the OOP paradigm, inheritance, encapsulation, and similar concepts all serve one goal: designing a better interface. That also follows how the real world is designed -- for example, the power outlet in your home.

rukuu001 1 day ago 0 replies      
Makes me feel like writing "Goodbye, Functional Programming" and making my case with a bunch of bad development practices.

A good programmer writes good programs.

The tools don't really come into it.

mirap 1 day ago 0 replies      
So show me a better approach than object-oriented programming; offer me a solution. Otherwise this article is just a pointless complaint.
moron4hire 2 days ago 1 reply      
No project ever failed specifically because of the paradigm--or programming language, even--used to implement it. Project failure is a people problem.
Clubber 2 days ago 1 reply      

    Class Copier {
        Scanner scanner;
        Printer printer;
        function start() { printer.start(); }
    }


Placing a Start() in a PoweredDevice base class doesn't make sense in the real world. There are plenty of "powered devices" that don't have start buttons. A phone, a fish tank pump, a smoke alarm, none have a "start." A powered device should have just that, a PowerOn() and PowerOff() or SetPower(bool isOn). I wouldn't even create a PoweredDevice base class unless you have a reason. This is the main fault in your design.

Scanner.Start() should return a byte[] which is the result of the scan: byte[] Scanner.Start(); A scanner is an input device.

Printer.Start() should take an argument of byte[] as to what it is to print: void Printer.Start(byte[] byteArr); A printer is an output device.

Having said that, your Copier class would look like this:

    Class PoweredDevice {
        void SetPower(bool isOn) { ... }
        // Start() doesn't belong here.
    }

    Class Copier : PoweredDevice {
        Scanner scanner;
        Printer printer;

        void override SetPower(bool isOn) {
            printer.SetPower(isOn);
            scanner.SetPower(isOn);
            base.SetPower(isOn);
        }

        void Start() {
            byte[] document = scanner.Start();
            printer.Start(document);
        }
    }
This can easily be enhanced to handle copy counts:

    void Start() {
        byte[] document = scanner.Start();
        for (int x = 0; x < copyCount; x++)
            printer.Start(document);
    }
Ideally you wouldn't even make an inheritable Start() method. The Scanner class would have a byte[] Scan() method and the Printer class would have a Print(byte[] byteArr) method. You're trying to ram a square peg into a round hole. Use inheritance when it is convenient and makes sense to do so. Don't force it. Think about what a scanner and a printer have in common that works the same way, then put that in your base class. A power button is about it.

A lot of inheritance is done backwards. You should make your classes first, then find the commonalities and put those in your base class. Only create a base class up front if you've thought about your object model and already know the commonalities.

Also, there is no reason to make your inheritance chain deep just because you can. Build your objects in a way that makes sense. Don't write code or base objects you will never use. You can always insert a class into the chain when necessary.

Mastering OOP is hard, and people who have mastered it get paid a lot of money for their skill. It took me a few years to really understand how to design with it. It's invaluable though. A good object model is a thing of beauty, and a hell of a lot of fun to design.


adamconroy 1 day ago 0 replies      
Hey, ingve, stop trolling us. You wasted 5 minutes of my life by posting this.
JustSomeNobody 2 days ago 0 replies      
And Hello, Clickbait headline!
PhasmaFelis 1 day ago 0 replies      
Oh, is it time to declare a popular and widely-used thing dead again?
MawNicker 1 day ago 1 reply      
Object-oriented programming simulates the restrained reasoning capacity of the real world. This is done by weaving state into every conceivable unit of computation. The result is a universal and inescapable notion of identity. It's a state conspiracy! Sometimes you are actually interacting with the real world and this is an appropriate constraint, but only because, in the real real world, these things are pervasively intertwined, right down to the smallest phenomena we've been able to observe. We can't actually take them apart except in our minds. To do so is a very old idea, pervasively apparent in western thought, called platonic realism. I internalized it as an unknown known at some point; I imagine that's just how people did it before someone as smart as Plato was able to articulate it. It's the doorway to abstract thought, and most mathematically inclined people have ventured into the depths of the world it conceals. It's necessary in order to properly understand the concept of a "value".

When these people first start to program, they rely heavily on expressions and functions. They tend to atomize complex values with simple structs. They don't know they're doing it, but they're writing "functional" programs; it might be more apparent if we just called them mathematical or algebraic programs. They demonstrate a preference for referential transparency without knowing what it is. Much of their code is outright stateless. They're hesitant to use a "var" as anything but a "let". Many seem to immediately grasp the simplicity and generality of recursion, and it has to be pried away from them like a dangerous recreational drug. That recursion is not "optimal" is simply presented as an engineering reality. Always intent on incremental improvement, they diligently internalize the "optimal" representations utilizing loops and state.

They're tricked into feeling they've acquired a worthwhile skill; they don't know they're doing what a compiler ought to. They learn to reserve the truly optimal representations for their mind's eye. With the desire to utilize their new "skill", they move towards external representations that could only be considered "optimal" by an unconscious machine. All of this damage is done in the earliest stages of learning, probably before they've even attempted any significant programmatic interaction with the real world.

That's when everything gets worse. They start trying to coordinate too much state and they can't cope. They're told they need these object things. Everything seems to get easier: sockets, widgets, and even the lists that had been such a struggle to use before. They choke down the declaration syntax and hastily strap their newfangled constructor and destructor gadgets onto their toolbelts. These are excellent tools for arbitrating between the abstract world and the real one: the ability to hook into creation and destruction gives abstract objects a canonical state-of-existence, which is necessary to fully simulate the identity possessed by real objects. For the purposes for which they've learned them, objects are immediately and overwhelmingly useful. They come to appreciate the clarity of method-invocation syntax for manipulating state, and they're right to do so. The functional languages themselves even sort of "do" it.

Tragically, with their most fundamental notions of computation already brutally violated by the state conspiracy, they're vulnerable to seeing objects as a universal paradigm. Everything is an object. Everything. They ascribe pet-hood to their little objects and feel driven by the satisfaction of teaching them their own special tricks. Each and every one of them is an excessively black box. Some go so far as to make social networks called UML diagrams to protect them from inappropriate "friends".

They have forgotten the elegant abstract world that was left for them by the intellectual giants of history. They descended from it in pursuit of mere performance and are in serious danger of never returning. To act like it's just another way of looking at things is a brutal misunderstanding. It's a discipline that resides entirely within a much larger one that it is not a suitable replacement for, despite the confusing desperation of non-academics for it to be exactly that. Even its creators are disappointed by its dominance.
RantyDave 2 days ago 1 reply      
The author writes a tightly coupled architecture, discovers it sucks, and so, of course, moans about OOP.
NIST declares the age of SMS-based 2-factor authentication over techcrunch.com
285 points by Osiris  23 hours ago   220 comments top 39
Matt3o12_ 19 hours ago 9 replies      
I don't think eliminating SMS authentication is a good idea. While it is vulnerable to social engineering attacks (and sniffing if you are near the receiving phone), SMS authentication has proven invaluable for most ordinary users. Those users are anything but eager to use something more secure than their 6-letter password. By tricking many users into using SMS authentication, companies like Google have improved the overall security of most accounts by a lot. While it is possible to defeat an SMS authentication mechanism, it is a lot harder and probably not worth it for most accounts.

Most people will not bother to install Google Authenticator and will just not use 2FA (who wants to steal their account anyway /s). Even if they did install it, recovering their authentication codes if they lose their phone is incredibly hard (because too many won't use backups, even one as simple as Apple's iCloud backup).

What I think companies should do is give their users the choice not to use SMS authentication. Power users (and hopefully most high-profile users) will make use of that, and normal users can just use SMS.

In the end it is always a trade-off between convenience and security, and sadly convenience almost always wins for most users, even for easy solutions. So we (the developers) should provide them with the most convenient mechanism they will accept that offers the maximum security, and SMS does just that.

ams6110 21 hours ago 9 replies      
This is the perfect being the enemy of the good.

They should be calling for secure, validated SMS.

Thinking that "Joe and Jane Six-Pack" are going to use Google Authenticator, is frankly laughable to anyone who does end-user support. But everyone understands "get a text and enter the code here"

Kortaggio 21 hours ago 2 replies      
One of the biggest weaknesses of SMS 2FA that I didn't see the article cover is when an attacker can socially engineer their way into your account with your cell service provider.

I'm thinking of a high-profile example when an attacker tried to take over h3h3's YouTube account by requesting his SIM card from T-mobile by pretending to be a T-mobile employee: https://youtu.be/caVEiitI2vg

Stratoscope 22 hours ago 2 replies      
When I think of NIST, the first thing I think of is the old National Bureau of Standards (now NIST) WWV Time and Frequency broadcast on 2.5, 5, 10, 15, 20, and sometimes 25 MHz. (You old-timers can probably hear the radio voice already!)

This came to mind because I just read a great PDF with a detailed history and technical description of WWV and its sister stations WWVH and WWVB:


Not directly related to 2FA, of course, but that PDF is recommended reading. These people were hardcore hackers before any of us were born!

techsupporter 17 hours ago 1 reply      
> "...the verifier SHALL verify that the pre-registered telephone number being used is actually associated with a mobile network and not with a VoIP (or other software-based) service."

Now this bothers me. I deliberately use a service (RingTo, discontinued for new users) to park a handful of numbers and be able to exchange SMS and MMS with them. One of the things I do not do is give out my actual mobile number to every random web service that wants it for "2FA," primarily because that now opens me up to even more phone spam. With RingTo, I just set that number to always go to voicemail but am still able to use SMS through their app.

It is arbitrary to say "one number type is acceptable for SMS verification but another is not." I'm actually more concerned that my mobile carrier will cough up my account to an arbitrary attacker than I am about some out-of-the-way number parking service that I log into using credentials that are not able to be easily discovered (an alternate e-mail address and such). My mobile carrier is a much larger target and has scores of fallible humans working for it just waiting to be socially engineered.

viraptor 22 hours ago 3 replies      
> the verifier SHALL verify that the pre-registered telephone number being used is actually associated with a mobile network and not with a VoIP (or other software-based) service

Is it possible to do this reliably in any country right now? I know you can easily port numbers, and the old-school block assignments don't mean anything in a few countries.

wfunction 22 hours ago 7 replies      
The problem with 2FA apps is that they don't also serve as an instant notification when someone is trying to log in as you. 2FA SMS does. This needs to be addressed somehow before we declare the former superior.
niftich 22 hours ago 0 replies      
Github repo for the working documents: https://github.com/usnistgov/800-63-3

Issue tracker (discussion/request for comments): https://github.com/usnistgov/800-63-3/issues

sdm 20 hours ago 1 reply      
It's about time. Phone numbers change too much to be part of reliable 2FA. You go on a business trip or a vacation, and of course you get a pre-paid SIM card with a local number in your country of destination. It's simple and straightforward; most airports are lined with vendor kiosks. But then you can't access any of your services. You can't do your work. 2FA should be based on something that isn't tied to your location and doesn't change so regularly.
AdmiralAsshat 12 hours ago 0 replies      
Good riddance. I live in a basement. I can't tell you how many times I've been scrambling to log into a service that only allows SMS-based 2FA, requiring that I then run upstairs or outside, waiting for my phone's signal to get strong enough that it will receive the SMS, then dash back down before the code expires.
tombrossman 15 hours ago 2 replies      
I'm curious, do people still consider it "two factor authentication" when you have a mobile device generating (or receiving via SMS) one-time codes and that same mobile device syncing passwords?

For example, if your web browser or password manager is syncing your passwords to your mobile phone, and that's the same phone the SMS codes or TOTP app runs on, is this completely circumventing the whole concept of "two factors"?

Asking for a friend, because I'm sure no HN readers would be dumb enough to do this...

(also, The Register covered this same story yesterday, here's my dupe submission: https://news.ycombinator.com/item?id=12157529)

roywiggins 22 hours ago 3 replies      
I love Google Auth, and SMS really does have security problems, but you need a 2FA method for dumb phones, don't you? Are there Java apps for it?
kozak 19 hours ago 0 replies      
The problem with 2FA is that quite often it turns into (weaker) 1FA when users get the ability to reset their primary password via SMS.
original_idea 20 hours ago 1 reply      
Wouldn't push notifications over the Google and Apple cloud networks resolve this? I know Google Prompt and Authy already do this instead of SMS. Authy posted this a couple weeks ago: https://www.authy.com/blog/security-of-sms-for-2fa-what-are-...
mappu 22 hours ago 1 reply      
One possible impetus is the rise in SMS phishing like this: https://twitter.com/maccaw/status/739232334541524992

How do you verify that the SMS is really from your service?

pixie_ 22 hours ago 0 replies      
It's annoying that in a lot of services I can't use my Google Voice number to authenticate.
Glyptodon 10 hours ago 1 reply      
It's fine that there are flaws with using SMS, but the alternatives -- proprietary apps, proprietary dongles -- aren't any better. They also just create more parties you have to trust.

And if it comes down to using public/private keys, there's no reason an open source SMS app couldn't authenticate encrypted text messages or something.
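As a sketch of that idea, a shared-secret MAC is the simplest way an app could verify that a code-bearing message really came from the service. This assumes a pre-provisioned key (names and framing are invented for illustration; a real design would more likely use public-key signatures so the verifying side holds no secret):

```python
import hashlib
import hmac

def sign_message(shared_key: bytes, body: str) -> str:
    # Append a short MAC tag so the receiving app can check the sender
    tag = hmac.new(shared_key, body.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{body}|{tag}"

def verify_message(shared_key: bytes, message: str) -> bool:
    body, _, tag = message.rpartition("|")
    expected = hmac.new(shared_key, body.encode(), hashlib.sha256).hexdigest()[:16]
    # Constant-time comparison to avoid leaking tag bytes via timing
    return hmac.compare_digest(tag, expected)
```

This only authenticates the sender; it doesn't fix SIM-swap attacks, since the message still arrives at whoever controls the number.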

If SMS in the clear is bad (and it probably is), then whatever is okay needs to be broadly accessible, open, and usable.

mankash666 21 hours ago 0 replies      
Reading through the draft, the level-2 authentication and upwards (AAL-2 https://pages.nist.gov/800-63-3/sp800-63b.html#sec4 ) spec is encouraging. NIST is encouraging eliminating the password and fully embracing cryptographic authentication (like SSH public-private keys).
ComodoHacker 19 hours ago 0 replies      
What's wrong with SMS to a virtual number? I mean, how is it less secure than a regular number?
Dowwie 14 hours ago 0 replies      
The list of drafts and requests for comments on a range of topics can be found here: http://csrc.nist.gov/publications/PubsDrafts.html

Other than the SMS-based 2FA work, see:

- Identity and Access Management for Smart Home Devices
- Multifactor Authentication for e-Commerce: Online Authentication for the Retail Sector

ittekimasu 20 hours ago 0 replies      
davidhyde 16 hours ago 0 replies      
In South Africa, this scenario is becoming a big problem: a victim's cell phone, in their possession, is triggered into a no-signal state, which sometimes goes unnoticed for hours. During this time, criminals are somehow able to capture communication that would originally have gone to the cell phone, communication like 2FA passwords. This is then used to transfer money out of the victim's bank account. How can 2FA over SMS be considered safe if this is possible?
onetimepassword 12 hours ago 0 replies      
The biggest threat that other people have mentioned is using social engineering to get a new SIM card that works with your telephone number. I have a Google Alert for "sim swap fraud". It's oddly under-reported in the US, but quite common everywhere else. How bad is this? Well, what if an attacker obtained more information about you (i.e. security questions, possibly obtained from a keylogger), then was able to get your phone number, then contacted your bank or other investment broker and drained your accounts? Yes, it happens, all the time. It's about time that NIST declares this form of 2FA insufficient. Hopefully the rest of the world will take notice, soon.

I prefer OTP... hopefully there aren't any other RSA-type hacks in the future.

xg15 15 hours ago 1 reply      
So what exactly is the alternative? I should carry around a physical security token with me for every single account I ever made?
willvarfar 19 hours ago 0 replies      
Here's a story about a friend whose phone number was hacked in a banking Trojan attack: http://williamedwardscoder.tumblr.com/post/24949768311/i-kno...

In this case it was a land line, but it's still a relevant empirical data point for those weighing options.

billpg 12 hours ago 1 reply      
Does anyone have any experience of using hardware tokens (like the sealed key-fobs) running TOTP?

For some services, I would much rather have a key-ring-ful of these devices rather than an app on my phone which I also use for reading websites.
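For reference, the algorithm those sealed key fobs implement is small enough to sketch. This is an illustrative Python version of TOTP (RFC 6238, layered on RFC 4226's HOTP), not any vendor's firmware:

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    msg = struct.pack(">Q", counter)  # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from Unix time."""
    t = int(time.time()) if for_time is None else for_time
    return hotp(secret, t // step, digits)
```

With the RFC test key `b"12345678901234567890"`, `totp(key, for_time=59, digits=8)` yields `"94287082"`, matching the published SHA-1 test vectors. The fob simply runs this with a factory-sealed secret and its own clock, which is why clock drift is the usual failure mode for hardware tokens.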

xaduha 19 hours ago 0 replies      
Any 2-factor auth is better than none.
forgotpwtomain 18 hours ago 0 replies      
I'm somewhat surprised NIST is using github (a private company) rather than self-hosting.
ungzd 16 hours ago 0 replies      
And still, in "secure" messengers like Telegram, SMS is the primary authentication method, not even a secondary factor for 2FA, despite documented cases of accounts being hijacked this way.
turnip1979 11 hours ago 0 replies      
The headline seems a bit click-baity. It seems to merely add some suggested guidelines to how this is done.
retox 18 hours ago 0 replies      
I never used SMS 2fa because I don't want my phone number out there.
bdamm 20 hours ago 0 replies      
Eventually the government will issue identity cards with certificated key pairs.
cmurf 20 hours ago 0 replies      
My bank uses SMS based 2FA. I'll send them this link and reiterate they should support U2F, or at least TOTP supported by Google Authenticator.

If SMS is problematic for 2FA, why isn't it problematic for account recovery?

nutanc 18 hours ago 0 replies      
"For now, services can continue with SMS as long as it isn't via a service that virtualizes phone numbers"

How much of an effect will this have on companies providing such a service?

Illniyar 20 hours ago 0 replies      
I failed to find the reason it's frowned upon in the article. Are there reasons published in the guide?
tlrobinson 21 hours ago 2 replies      
> To avoid red tape, the Institute is trying out a new method for reviewing and commenting on the guidelines that isn't quite so official: GitHub.

Ironically, GitHub uses SMS-based 2-factor authentication...

Kiro 19 hours ago 1 reply      
I will never ever use 2FA if it's not via SMS. I just don't care enough to be bothered.
nxzero 14 hours ago 0 replies      
If SMS is out, any form of verification or identity tied to phone numbers should be too.
cutie_honey 22 hours ago 3 replies      
The double-speak and quack-speak is getting a little thick for me lately.

Government bodies frown heavily on end-to-end encryption, but also frown heavily on authentication methods that are less secure.

Why, whichever directive shall I adhere to? The more secure behavior or the less secure behavior?

Maybe I should just do whatever benefits everyone else but me.

How we broke PHP, hacked Pornhub and earned $20k evonide.com
324 points by KngFant  3 days ago   103 comments top 11
krapp 3 days ago 4 replies      
The takeaway:

 You should never use user input on unserialize. Assuming that using an up-to-date PHP version is enough to protect unserialize in such scenarios is a bad idea. Avoid it or use less complex serialization methods like JSON.
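The same advice maps to other languages: in Python, `pickle` plays the role of PHP's `unserialize`. A quick sketch of why the format choice matters (the `Evil` class is a contrived illustration of a deserialization gadget):

```python
import json
import pickle


class Evil:
    # pickle calls __reduce__ during deserialization, so an attacker-supplied
    # payload can name an arbitrary callable to invoke.
    def __reduce__(self):
        return (print, ("arbitrary code ran during deserialization",))


payload = pickle.dumps(Evil())
# pickle.loads(payload)  # would invoke print(...) -- never do this with user input

# JSON can only yield plain data (dict, list, str, numbers, bool, None),
# so parsing untrusted input cannot trigger object instantiation:
data = json.loads('{"user": "alice", "admin": false}')
```

This is the general principle behind the quoted advice: prefer serialization formats whose parsers cannot be steered into constructing live objects.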

danso 3 days ago 2 replies      
OT: Is there a site that curates these kinds of interestingly detailed hacks? Like Dan Luu does for debugging stories? (https://github.com/danluu/debugging-stories)
ckdarby 3 days ago 2 replies      
That moment when the company you work at is on the front page of Hacker News xD
watbe 3 days ago 0 replies      
This is an elaborate hack and a very detailed writeup. Thanks for sharing.
ndesaulniers 3 days ago 1 reply      
> Using a locally compiled version of PHP we scanned for good candidates for stack pivoting gadgets

Surprised that worked. Guess they got lucky and either got the compiler+optimization flags the same as the PHP binary used, or the release process can create highly similar builds.

aprdm 2 days ago 1 reply      
Really good write-up. Some people are really smart; I wouldn't ever be able to do that kind of stuff even after programming for years.
tjallingt 3 days ago 2 replies      
I have some questions about two things in the exploit code that puzzled me:

  my $php_code = 'eval(\'
    header("X-Accel-Buffering: no");
    header("Content-Encoding: none");
    header("Connection: close");
    error_reporting(0);
    echo file_get_contents("/etc/passwd");
    ob_end_flush();
    ob_flush();
    flush();
  \');';
1. They seem to be using PHP to code the exploit (solely based on the $ before the variable name), but I've never seen the 'my' keyword before. What exactly is this language?

2. If I understand the exploit correctly, they got remote code execution by finding the pointer to 'zend_eval_string' and then feeding the above code into it. Doesn't that mean the use of 'eval' in the code being executed is unnecessary?

Phithagoras 3 days ago 3 replies      
Appears to be experiencing the hug of death; may be quite slow.
cloudjacker 3 days ago 4 replies      

From a legal perspective how do companies and hackerone create a binding exemption from laws used to prosecute hackers?

fencepost 3 days ago 1 reply      
So does Pornhub's bug bounty program include some number of years of free paid membership along with financial bounties? Kind of a "treat us right and we'll let you treat yourself right" kind of thing?
given 2 days ago 1 reply      
Too bad they didn't just go ahead and:

> Dump the complete database of pornhub.com including all sensitive user information.

And of course leak the data to expose everyone that participates in this nasty business. It is such a sad thing that people are even proud to work at companies like this where humans are not worth more than a big dick or boobs.

And then you get around and say that child porn is so horrible. No, all porn is horrible and destroys our families and integrity. How can there be any dignity left if these things are held to be something good?

Ask HN: Why don't companies hire programmers for fewer hours per day?
461 points by maythrowaway  1 day ago   315 comments top 73
dustingetz 21 hours ago 8 replies      
The Mythical Man-Month says that even if longer hours have diminishing returns, the organizational overhead from adding people to the team is worse: fewer people with longer hours is the better alternative.

Here I will quote an HN post which quotes a secondary source analyzing The Mythical Man-Month:

"From a business point of view, long hours by programmers are a key to profitability. Suppose that a programmer needs to spend 25 hours per week keeping current with new technology, getting coordinated with other programmers, contributing to documentation and thought leadership pieces, and comprehending the structures of the systems being extended. Under this assumption, a programmer who works 55 hours per week will produce twice as much code as one who works 40 hours per week. In The Mythical Man-Month, the only great book ever written on software engineering, Fred Brooks concludes that no software product should be designed by more than two people. He argues that a program designed by more than two people might be more complete but it will never be easy to understand because it will not be as consistent as something designed by fewer people. This means that if you want to follow the best practices of the industry in terms of design and architecture, the only way to improve speed to market is to have the same people working longer hours. Finally there is the common sense notion that the smaller the team the less management overhead. A product is going to get out the door much faster if it is built by 4 people working 70-hour weeks (180 productive programmer-hours per week, after subtracting for 25 hours of coordination and structure comprehension time) than if by 12 people working 40-hour weeks (the same net of 180 hours per week). The 12-person team will inevitably require additional managers and all-day meetings to stay coordinated."


daef 12 hours ago 4 replies      
Living the dream here too:

When applying for my current job, I made my salary proposition. My future boss-to-be told me that he had a more or less fixed absolute upper limit he tries to keep employees' payroll below, so I told him that if the amount of money he is willing to pay is limited, we can simply adjust the hours, and decided to do only 6 hours a day. I work from my home office in the back yard, under a roof out of the sun, don't have to waste time every day traveling to some cramped-up office, get food delivered by my wife, and have the rest of the day to spend with our son.

And since there is quite some time left, I still have time to run my startup in my 'spare time'.


Don't give up. There are employers that honestly care about employees' wellbeing.

I hop in the car and drive to the office (3hr drive) every month or so, stay for 2 days, do some meetings and go home then. I guess it's important to have a beer with your git-mates now and then.

majewsky 18 hours ago 3 replies      
I'm living your dream. I'm on 70% part-time, originally because of the CS degree that I'm working on in parallel (I originally majored in physics before pivoting to IT), but I plan to continue part-time (maybe 80% instead of 70) after I have my degree.

When I originally reduced from 100 to 80%, during my next annual review, my boss was surprised that it didn't impact my apparent productivity in the slightest, which supports your theory: More hours have diminishing returns, but cost the same from the employer's POV, so it would be better to spread it out to multiple people.

Indeed, the part-time model has been gaining traction within my circle of colleagues (in Germany). Many of them are taking a day per week off to work on their private projects, or cutting away some hours to spend more time with their children.

dangrossman 1 day ago 14 replies      
Salary is only roughly half the cost of employing someone, so while a 25% salary reduction for 25% fewer hours is fair for you, the employer is not seeing a 25% cost reduction. They're not saving 25% on payroll taxes, benefits, workers comp, PTO, training time and management overhead. In fact, more people working fewer hours increases most of these costs.
niallwingham 15 hours ago 1 reply      
We work 20-hour weeks (remotely) at Apsis and it goes fine for us. We're a small consulting company with very little overhead, but I imagine it could work in other contexts as well. It certainly encourages you to eliminate unnecessary meetings!

I'm earning about 2/3 of the salary from my previous job, where I worked 60-80 hours a week. In practice this has meant a more-than-doubling of my free time outside work with only a small change in living standards, so I'm very happy with the tradeoff.

The company gets a pretty good deal too -- our work hours tend to be quite productive -- but I don't know if there's a purely economic argument on the company's side. Part of it may require looking at the company as a vehicle for employee benefit rather than an opposing force trying to extract maximum value.


PS: Apsis is a U.S. company, though I personally live and work in Toronto, Canada.

beefman 1 day ago 1 reply      
A lot of good responses here, but I think the main reason is being overlooked: someone willing to work more would take your job. They'd take your work by 'helping' with 'emergencies' that came up while you were out, then you'd be marginalized and you'd fail.

It would take a powerful credo from above to preserve such a position amid such politics. Most management probably doesn't even have enough power to do it. They already have problems getting people to take vacation for the same reason.

And it's the same reason you're probably already working more than 8 hours once e-mail and Slack hours are included (if not outright spending more than 8 hours at the office).

maxxxxx 1 day ago 1 reply      
I think especially in the US people are being paid for 8 hours, but in reality most of them work much more. So the overtime is essentially free. If you take a pay cut for working only 6 hours your employer loses much more working time than just the two hours.

The fact that overtime is free also accounts for the number of useless meetings. The time wasted is free for the employer. When I worked in Germany the bosses were much more conscious about wasting time because they had to pay for overtime or the union simply wouldn't allow it.

preek 19 hours ago 1 reply      
In my startup (voicerepublic.com) everyone works part-time. The technical and operational sides are incredibly complex, yet with a part-time team we managed easily. In fact, I could have been a 100% CTO (which of course means 180%^^), yet I chose to bring in a good friend and split the workload. In my experience, 2 * 50% of good programmers who know each other and their stacks well is much more than 1 * 100%. The rationale being that you can do architecture and reviews together, teach new traits, and go on vacation at different times so that someone is always available. We are still a smallish team of about 12, but work from Switzerland and Germany, and so far this methodology has scaled very well for 2 years.
Chinjut 14 hours ago 0 replies      
In response to many of the comments here:

There is a frequent pattern which concerns me of primarily justifying the desire for reduced work hours in terms of the alleged increase in productivity this will bring about (by allowing recharging, preventing burnout, etc.).

I worry that this already concedes too much. This allows for just as much stressful dominance of work over the rest of life, and shame over any deviation from this script, as maximizes productivity.

Even if my shorter-work-hours productivity doesn't match my longer-work-hours productivity, I'd still prefer shorter-work-hours, with no guilt over having those preferences. My goal in life is not to optimize everything I do for maximum benefit of my employer; I have my own priorities and trade-offs to worry about.

cjg 1 day ago 1 reply      
Some of the many reasons:

1) inertia - that's how it's always done;

2) it's easier to make your good employees work harder than it is to recruit more of those difficult to recruit developers;

3) they would rather you work more hours than you already do - not fewer - after all they make money hiring you, they can make more money if you work more (in some twisted linear productivity logic);

4) some people wouldn't be interested, they don't want to work shorter hours and earn less money, and that reduces the opportunity for discussing this / rolling it out;

5) understanding how productive developers are is very difficult - did that bug take a week to fix because it was challenging or because you were slacking;

6) larger teams are less productive than smaller teams.

SatvikBeri 19 hours ago 2 replies      
A lot of companies do, informally. At many tech companies programmers set their own hours, and high performers who have a lot of career capital sometimes use it to work fewer hours.

Personally as a manager I often work fairly different hours from my team, and try to avoid knowing when/how many hours they're working, since I consider it irrelevant.

juiced 1 day ago 2 replies      
Why don't companies hire programmers for 6 hours a day, but keep paying for 8 hours a day? As a company you will probably attract the best in the field - programmers who are 4 times as efficient - and the programmer is not able to keep his/her concentration for 8 hours a day anyway.
kwhitefoot 15 hours ago 0 replies      
It depends on your circumstances. Now that I am 60, I would quite like to cut my average hours per week, not because it is too tiring but because I have a lot of other things I would like to do. But thirty years ago I was able and willing to work a more-than-eight-hour day and then spend three or more hours at home learning more about my trade.

But as others have said, it gets complicated when you consider pensions, so I'll probably keep working full time for another couple of years and then ask my employer if they would be willing to let me drop to 70%. My net income at 70% hours will be a lot more than 70% of my current net because of progressive taxation.

Another option that can work for some people is to work longer hours for fewer days: instead of 7.5 hours per day for five days a week, work 9.375 hours a day for four days a week. You save 20% of your commuting time as well.

0xfaded 23 hours ago 1 reply      
I for the first time have an understanding with my employer that I have about 5-6 useful hours in me. I am not someone who paces well and will go into a coding frenzy until I drop, at which point I know I'm done.

After that I go home and do other things (piano, study languages, etc.), but I couldn't write another line of decent code if I tried.

Working a 30h workweek at a typical salary is awesome: I have approximately the same output as I would working 60+ hours, leave the office during daylight, and have time to explore other interests.

jrbapna 1 day ago 1 reply      
This also raises the question: why don't companies hire more part-time programmers? In my experience it's literally an order of magnitude easier to find a quality full-time gig than a part-time gig. As many of us enjoy working on side projects and don't want to work 40 hours a week for somebody else, why is nearly every hiring post for full-time?
ne01 20 hours ago 1 reply      
Most managers don't understand that a company is a group of people (employees). For it to succeed, you have to make sure employees are happy and productive.

If you are not productive (tend to procrastinate) it's because you don't have enough incentive to work.

People you hire have feelings, hopes, and dreams too. Like you, they want to become extremely successful (and if they don't, you have hired the wrong people). Embrace other people's feelings and help them fill their lives with happiness.

How? Simple.

Instead of hiring 3 people and paying them an average wage (or, god forbid, minimum wage), hire 1 person and pay her 3 times more.

When you fulfill someone's dream and help them achieve happiness, you have no idea how much more productive they become, and it requires zero effort to manage them.

Most managers run their company like dictators.

When my startup gets to the point where I have to hire someone, besides finding the right person, I'll make sure I have the resources to fulfill his dream life, and I'm sure he will be more productive than 3 programmers (just like me) and would never submit a question like this on HN.

a_imho 15 hours ago 0 replies      
Whenever I offered to work 66% of the regular office hours for 50% of my salary, my employer refused. Whenever I brought up in interviews that I prefer compensation other than money, e.g. reduced working hours, I was promptly reminded that I will work with others and need to fit within the schedule, as if days off, flexible work hours, and part-time work never existed. I only worked at one company that allowed parents to negotiate an early leave when they needed to take care of their children.

edit: it applies to small local shops and 1000+ SF based companies too

rejschaap 17 hours ago 1 reply      
Programmer salaries are quite low as it is. I think there is a good case for reducing working time without reducing salary. I would like to see more companies experiment with 6 hour work days (like in Gothenburg, Sweden). I don't think there will be a huge drop in productivity. Maybe it is wishful thinking but I think there is a good chance it will increase productivity.

One of the problems with a 6 hour work day is how to organise it. If left to the employee, they might choose to have no break, which is really bad for productivity. There probably has to be some agreement that you can only have a 6 hour work day if you have a good lunch break somewhere in the middle.

k__ 15 hours ago 0 replies      
Funny thing is:

If your output is good, people don't care how much of that full-time work you actually spent working. The only additional thing they care about is that "you are at the office at 9!"

cperciva 17 hours ago 0 replies      
> But, considering that no company does this, it looks like this isn't a good idea for employers. Why? Why do companies try to squeeze all the possible juice from employees instead of the alternative where they pay a little less, require a little less, and the employee becomes much more happy?

Tarsnap does this. My employee (yes, just one so far) submits semimonthly timesheets, and he gets paid for however many hours he worked. He likes the flexibility, and I know that I'm getting a good deal because productivity is correlated with motivation -- the hours when he feels like working are the hours when he'll get the most done.

kartickv 15 hours ago 1 reply      
Or entitle people to take a quarter or two off between projects, at loss of pay. Not, "the manager and HR may or may not approve", but as a matter of course.

That gives you more or less what you want, and in a form where you can do other things with the time (travel, take on a personal project for a month, etc.).

And it eliminates the concern of "But if we had him work full days, our project would get done sooner."

mahyarm 1 day ago 2 replies      
Also, there is a communication synchronization issue. If people have patchy, partially overlapping schedules and are not available for questions, talks, or whatever, it slows the company down. By having consistent core hours when everyone is working, you solve this issue.

Also, if you're still fucking around and not working super productively, you're still available to answer questions quickly, attend meetings, and handle other low-energy tasks. Whereas if you're not working and doing stuff at home, it's significantly harder to talk to you.

mancerayder 8 hours ago 0 replies      
The comments below about productivity are interesting. One person works 30 and not more, another is happy to do 55 hours a week for years. For others, it depends on how interesting the work is.

You salaried folks are fun to watch. The reason why the debate between 30 and 55 hours continues, and from the perspective of the bosses, how to steer you (in the U.S. at least) towards 55, or even higher, is because you don't get paid in proportion to what you work.

I'm on 40 hours a week in my current contract and (with full consent) soliciting more work on the side with other clients. I promise you, when you're paid hourly, putting in longer hours and doing more work is far more 'rewarding' because the rewards are less philosophical, psychological and political - they're material.

Edit: come to think of it, I am FAR more productive in my 40 hours now than in the 50, 55, etc. hours I was 'putting in' in my salaried years. But maybe I'm just an old cynic who doesn't drink the same kool-aid as those who run the company or own significant shares in it.

kayman 1 day ago 1 reply      
The Quaker mentality lives. "Idleness is the hands of the devil." Work hard, long... it's the only way.

I also get an "I work long hours, why don't you" feeling when I approach the subject.

pyoung 17 hours ago 0 replies      
I did 75% time for six months. In my case I was approached by an old employer because they really needed the extra help. I originally asked for a contracting arrangement, but according to HR they couldn't do that, so we settled on the 75% (w/ benefits) arrangement. A few thoughts on the subject:

1. Everyone I know who has managed this schedule has done so by negotiating with their existing employer (in my case a former employer). Doing this at the interview stage seems sub-optimal (unless you are really in demand) because they will probably just go with someone else. After you have been working for a while, you are a known quantity, and so it's easier for them to stick their neck out for you. If your current employer is not receptive, you can apply pressure by threatening to leave (worked for a buddy, but you have to be prepared to walk).

2. For this arrangement to make sense, you really need to be good at setting boundaries. I was pretty bad at that, and as such, I often put in more hours. Part of the reason they took me on was because they were having trouble finding people to handle all the workload, so I should have realized that it would be a struggle to keep the workload manageable on the 75% schedule (when I used to work full time I put in a lot more than 40/week).

3. For me personally, I would much rather have a (very) flexible schedule than a shorter work week. I don't mind cranking out 10-12 hour days if needed, but I like the option to get some of that time back, whether it be taking part or most of a day off on short notice, or coming in late to the office in order to enjoy the outdoors in the AM or do some chores.

Due to 2 and 3, I think an hourly consulting arrangement would have been much better (for me at least). If you charge enough, you will make up for the lost benefits, and because you are charging hourly, you don't have to stress too much if the workload spikes from time to time because you will get paid for those extra hours.

candu 10 hours ago 0 replies      
I would accept such a reduction, provided the reduced salary was still enough to live comfortably on while continuing to save money.

That said, there is another approach: work remotely, and avoid companies that insist on installing time / activity trackers on your work machine. This frees you from "seat time" metrics of productivity - if you can get the same thing done in 4 hours at home vs. 8 hours in the office, you gain 4 hours. (Versus an office environment, in which you'll either a) be expected to sit around doing nothing but looking productive for that time, or b) be "rewarded" for your efficiency with useless bullshit no one else wants to touch.)

Know that this will limit your opportunities for "career growth" within a company. The next level is to not care about that, and instead switch to consulting. Once you're doing remote work anyways, this is an incremental step where the only major difference is that your tax returns are now a bit more complicated. As a consultant, your "career growth" is being able to ask for higher rates over time.

If you have a good relationship from remote work with your previous full-time employer, you may even get them as a client! Just make sure that, when calculating your rates, you don't just divide your previous salary by 2000 hours - remember that you now have additional overhead that you didn't as an employee, like health insurance (in the US, at least), client development, and all those mini-breaks that employees take during the day that are now unpaid. The usual rule-of-thumb is 2x your previous hourly rate.

With the right setup, you can reduce hours without reducing salary ;)
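The rate advice above can be made concrete. A hedged sketch (the 2x multiplier and 2000 hours/year are the comment's rough figures, not universal constants):

```python
BILLABLE_HOURS_PER_YEAR = 2000  # ~50 weeks * 40 hours, the naive divisor


def consulting_rate(previous_salary: float, overhead_multiplier: float = 2.0) -> float:
    # The naive rate just converts salary to hourly; the multiplier covers
    # overhead you now self-fund: insurance, client development, unpaid breaks.
    naive = previous_salary / BILLABLE_HOURS_PER_YEAR
    return naive * overhead_multiplier


# e.g. a $120k salary suggests quoting around $120/hour, not $60:
assert consulting_rate(120_000) == 120.0
```

The multiplier also absorbs utilization risk: a consultant rarely bills anywhere near 2000 hours in practice.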

delinka 23 hours ago 0 replies      
"...hire programmers for fewer hours per day"

It takes time for me to get my head around the problem my employer wants solved. They can't plan to the tiniest detail the precise class hierarchy that I should produce, the exact procedure I should write. If they could, they'd only need a typist. They need someone who can comprehend the problem, formulate a solution, and then slice that solution thin enough to implement each layer in code. This process is necessarily creative and often does not involve producing code. Why don't they hire for fewer hours? Because they'd get less 'productivity' out of me.

Further, as a company grows, it takes time to communicate, to get everyone headed toward the same goal. Just because the founder said "protect data" doesn't mean that everyone automatically comes to the same conclusions about how that should be done. So there are meetings for communicating verbally, followed by documentation to solidify understanding of the problem and the solution, followed by more meetings to clarify intent in the documents...

After all that communicating, when I get time to focus on creating solutions, I've got to take the time to reconstruct state in my head so that I can be productive.

"...programming makes us tired and stressed, how often we spend hours and hours per day just procrastinating or being completely unproductive while still trying to be."

All those meetings make me tired and stressed. But more to your point, I'm not unproductive while I'm defocussed. While writing part of the solution, I hit a wall - how do I implement this so that it's readable, so that when I come back to this part of the code in six months I'll still be able to understand wtf I was thinking? This operation isn't permitted, so what's another way to solve the problem without creating a security issue? So I defocus - walk the floor; walk the block; look out the window; operate the flippers on the pinball machine - and let my subconscious work it out.

Between attempting to understand the problem, interruptions for communication and time required to find an acceptable solution, I'm left with little time as it is to actually implement the code that solves the problem. I'm already averaging about two hours of "programming time" per day. Any less and I probably can't even write the code.

segmondy 23 hours ago 1 reply      
If you work 20-25hrs, you are going to put in exactly that and go home. If you work full time, which is 40hrs, they are going to expect 60+hrs from you to show that you're a team player if you want to rise in the organization.
ankurdhama 21 hours ago 0 replies      
Productivity is a tricky word in the case of programming. The usual definition is how much you produced, where the produced thing should be tangible in some form or other, but we all know that good programmers spend a lot of time thinking about things, trying various alternatives, etc. until they make a particular decision and then implement it. In such cases it becomes very hard to measure productivity. Also, when I am working on some particular problem, the problem is always with me, in my mind. Whenever I get some free time I tend to think about the problem, no matter where I am or whether I am at the office or not. This makes the idea of "working for fixed hours" dumb in such cases.

Of course this doesn't apply to all programming jobs, specially where you are given a requirement and you just go ahead and implement it without thinking too much about the big picture of the whole software system.

A thought can come anywhere anytime, it doesn't care if you are in working hours or not :)

jacobr 17 hours ago 0 replies      
I work 80% and have been doing so for the last 5 years. In Sweden you have that right until your child turns 8 years old (so until they've finished their first year of school), with reduced pay of course. This lets you be available to pick up your kids from school when they still have very short school days.

I rarely play ping pong or do non-work related stuff at work now, because I want to make the most of my working hours. I go home around 15:00, when I often hit a slight down time anyway, so I feel I'm very focused when I'm at work.

I think I will have a hard time adjusting back to 100% when my youngest child turns 8, but I know people who have negotiated down their working hours to play more video games so it will probably be fine.

sjm-lbm 1 day ago 1 reply      
I have almost no data to back this up, but it's my suspicion that the current workweek is something of a lowest common denominator: across all job types, company needs, etc., it maximizes productivity while keeping most HR policies identical.

I mean, imagine the administrative overhead to create a system like you propose: you might want to work 25% less, but I really think a decent number of young and single software engineers could actually work something like 40% less and be OK at a 40% reduction in salary. Somewhere there is someone that is working on an interesting problem and, in spurts, wants to work 30% more. This variability increases as you look at, say, the sales team, the accounting team, etc, and all other positions that have different optimal working styles.

Now, imagine creating a system to make sure you pay people fairly in that world (remember, not even everyone with the same role/job title starts with the same "100%"). Or a way to find out who your real best/most essential employees are... and so on. I honestly think that the workweek, as it is, exists mostly because as a company becomes sufficiently complex a few assumptions need to be made in order to keep it functioning properly, and '5 days, 8-10 hours a day' (or whatever) is an assumption that everyone can more or less stomach - even if they don't enjoy it.

anupshinde 15 hours ago 0 replies      
If you are feeling stressed staying productive 8 hours a day, try comparing your productivity with others on the team. If yours is higher, you deserve better pay.

1. Not everybody has similar productivity. There could be an Nx (or 10x) programmer who finishes his work in, say, 2 hours, and a 1x programmer who finishes the same task in 8 hours. Both programmers are good, but the 10x programmer will get tired, burn out, and eventually become less than 1x if made to work 8 hours every day. For employers and management, this is pretty hard to figure out (or understand).

2. "Wouldn't you accept a proportional salary reduction?" - Yes, only if everybody in the team exhibited similar levels of productivity. Unfortunately that is rarely true. I have faced this scenario as a freelancer when I worked 32 hours a week. Some team members worked much, much slower than me, wasted my time in meetings outside my work time, asked for help on simple things that were well documented, and still got paid more than me. I almost doubled my rates for that client.

3. One needs a few hours to get in the zone, and task-switching costs are higher with fewer hours per day. At least that is true for me. Once I am in the flow, the outputs come much faster. I would work one 80-100 hour workweek and finish the work. If I were to do the same work in 40-hour workweeks, I might finish it in 3-4 weeks. And with 20-hour weeks, that same work would take 7-8 weeks. That is simply because of the switching costs. Obviously, I cannot do 100-hour weeks consistently.

timwaagh 17 hours ago 0 replies      
No, I would never accept that. At least not now. I get a below-average salary (not just for programmers but across the general working population). I have a mortgage which needs to be paid. And that should be enough explanation. If rich programmers started to demand this kind of thing, companies would just open an office elsewhere, and there would be plenty of people willing to work full time at a fraction of their salary. Actually, my preference is the reverse: I would gladly accept working 60 hours a week if they could pay me for 60 hours. However, this is illegal.
biot 9 hours ago 0 replies      
If your day is only 50% productive, then clearly working only 4 hours a day would make you 100% productive! Only it doesn't work that way. Unless you have laser focus, most people can't pick and choose the times that they are productive. Rather, they show up for 8 hours and warm up getting into things, then are productive for a while, get interrupted, think about that whole lunch thing, eventually get back into a productive mood, then gradually wind down and think about leaving work. Net result might be 4 hours of productive work, but it's interspersed with unproductivity.
beat 11 hours ago 0 replies      
Just because I'm procrastinating doesn't mean I'm being unproductive. Quite often, my brain needs to shove problems back into asynchronous batch processing in the back end.

Or, as someone once told me, "That's not programming. That's just typing!"

ci5er 21 hours ago 0 replies      
I have to imagine that economically there are fixed per-employee costs. There are, additionally, coordination costs.

When I am a "boss" - I try to be empathetic and treat others like I would like to be treated, but that starts with knowing how I work.

Recognizing first that I am not normal, I never ever really have "down days" - preferring to work a little every day of the week, month and year, but I am fortunate in that I can switch up what I am working on -- sometimes it's just reading, which I can do at the lake, sitting in the sun.

In any case, when coding, some days I am "on", and some days I am not. On the days that I am not, 3 hours of coding is too much, and on the days that I am, 12 hours isn't enough.

I don't think I am alone in this.

For me, working 6 hours a day would be the perfect amount to be ... completely not working for me. I would rather have 3 16-hour days, and the rest off. Now, I realize, that's just me ... and everyone else is different.

But, for a group of 5, 10, heck - even 1,000, how are employers supposed to be able to keep up with the coordination cost of knowing all of this and keeping everything smoothly functioning? It's kind of like throwing a barbecue in Texas and making sure that there are kosher and vegan and gluten free choices for people (or something).

For myself, when I have to wear a team management hat, I try to build an async remote-first kind of environment where every employee is supposed to work away from the office 40% of the time just to make sure our systems work well enough for our truly remote staff, but ... gosh! This is a lot of work and experimentation that can simply be avoided by bosses that have other things to worry about by saying: "You know what? Everybody. Here. 9. to. 5. done."

I am sympathetic to what you are saying - but mapping it to my own self, I would have to make adjustments. Then there are adjustments for everyone. Which sounds great (like a 100% flat organization), until, you know, it really doesn't. Because we do not have the tools and the generations of know-how on how to manage it. We'll get there. It will be 40 years.

fimdomeio 1 day ago 1 reply      
Because if you have more free time from a job you don't enjoy doing, it's a lot easier to find another job.
iopq 16 hours ago 0 replies      
How about this: hire people to work 32 hours a week for the same pay. Offer it as a competitive advantage to decrease employee turnover.
throwaway092343 1 day ago 0 replies      
I used to own a business and I did this with an employee. I was just starting out and had only 1 employee. I had a contract job that was paying my salary, and a lot more, which I was banking. But it wasn't enough for 2 salaries. So I hired someone part time to help with products I was writing to sell directly to customers. It worked out fairly well for a few years. He got to do other things (he might have had contracts of his own or been studying something at the time - I don't remember), and I got additional work done without having to pay a full-time salary. They're rare, but if you look for them you can find them.
Spooky23 23 hours ago 0 replies      
For exempt employees, it makes things more complicated and creates liability.

A friend of mine worked for a .gov that made a big political display about employees voluntarily reducing their workload. He went to 80% and took every Friday off.

It was hilarious... The HR people were so crazy about him potentially working 81% that his remote email was shut off, his boss had to log any calls made to him after hours, etc.

So he basically paid 8 hours to get about 20 hours off -- and the punchline is that he got a major promotion explicitly to make him ineligible to work part time.

robterrin 10 hours ago 0 replies      
There's some truth and wisdom in your question and also, the replies that people have given. Likely, there is no one answer. A short incomplete list might look like this:

- coordination problems
- organizational overhead
- micromanagement

David Graeber wrote a good piece on bullshit jobs that you might enjoy: http://strikemag.org/bullshit-jobs/

louisswiss 16 hours ago 0 replies      
As a startup founder with a technical team I've never understood the obsession with #hours worked.

Perhaps it is because we originally started off working remotely and logging hours worked wasn't an option. With our current tech team, # hours worked is nowhere near top of mind. We evaluate performance based on output, team dynamics and (perhaps related to # hours worked) availability.

While admittedly vague, availability is a measure of how the employee in question's work schedule negatively impacts our ability to work quickly and effectively as a team. It doesn't necessarily correlate directly to # hours worked, however, as more often than not other factors such as 'willingness to take part in group discussions' or 'prefers to work at night' have a bigger impact.

dustingetz 21 hours ago 0 replies      
Try freelancing. I've had several clients who are fine with weird arrangements, and I have worked fewer hours for less money on two different occasions.
nathan_f77 1 day ago 1 reply      
I'm doing this now. I've found a contract job where I can work 20 hours per week (4 hours per day). I don't think I ever want to go back. At least, not for someone else. I'm actually spending the rest of my time working on my own projects, and that doesn't really feel like work at all. My own projects are anything from mobile or web apps, to short films, art, music, electronics, inventions, etc. etc.

I wish I didn't have to spend any time at all on contract work, but unfortunately I'm not yet able to pay the bills with my side projects. Hopefully one day.

ahazred8ta 1 day ago 1 reply      
It actually costs a business money to have an employee, over and above the amount shown on a paycheck. If you hire a lot of employees who work 20 hours per week, the HR department has to screen, hire, monitor, discipline and/or fire twice as many people. Training costs also double. Part time employees usually quit sooner and turn over faster, which again means higher onboarding and training costs. It's also harder for managers to keep track of twice as many people. YMMV
VLM 9 hours ago 0 replies      
Someone should provide some military perspective. They have a bit of documented experience with extreme working conditions and hours and responsibilities.

On one hand in the reserves we were butts in seats about 1/10th the time of active duty soldiers so you'd theorize we did 1/10th the training of active soldiers, but I can assure you by the time you're done with semi-annual weapon cleaning, range qual time, aids briefing and test (do they still do that?), semi-annual review of DD214 and SGLI beneficiaries, PMCS the vehicles every saturday morning, group PT every sunday afternoon, once a year holiday party and once a year summer picnic, NBC training and mask cleaning, semiannual company wide PT tests, by the time you're done with all that we didn't really train in our MOS at all. Lower enlisted don't go to annual training they go to various classes. Somehow I never went to AT with my unit, somewhat unusual, but "most people" only go once or twice at most anyway. At any rate my point is if the BS load is a fixed amount of time, dropping the percentage of time you spend can have an extremely dramatic effect on the ratio of work to BS. I will admit that unlike a standard American 60 hr/wk workplace, when we worked, we actually worked, not goofed off while present. I would theorize that office cooler BS talk is a very fixed cost of not working remotely, people are going to goof off a minimum of 10 hours per week no matter how many hours of butt in seat you force on them.

Another thing to keep in mind is the military has an enormous amount of experience with having young people get no sleep or rest and therefore require two months to accomplish about two weeks of learning, its called basic training. If you're attending one of the more cerebral post-basic training schools they force you to spend less than 40 hrs/wk in classroom thinking. And after you graduate aside from deployment / emergency / wartime events you spend less than 40 hrs/wk in office actually working. I can assure you the army does not care about your happiness or long term health, they simply have billions of human-hours of experience that you can get butts in seats any arbitrary number of hours up to near 168 hours per week, but you can only get working brains in seats maybe 30 to 40 hours per week and maximum total brain productivity regardless of IQ is around 30 hrs/wk. And this applies from the lowliest E1 doing accounting audits and sysadmin duties all the way up to the officers flying fighter planes, so no "I'm special because I'm smart and/or highly trained" BS.

JamesBarney 20 hours ago 0 replies      
Companies say it is hard to find good developer talent, but if they were willing to switch to 4 days a week I'm certain they would have a long line of great developers lined up outside their door. I imagine it would quadruple the applicant pool and double* the talent of the average developer applying.

Also I think if anyone wants to convince some employers to allow for a 4 day week focusing on attracting and retaining talent is more important to companies than the 20-25% pay cut.

*Because the average developer who applies to a position is probably in the bottom quartile, but the large number of devs interested in a 4-day work week would bring the quality up, because these would be closer to median developers.

softwarefounder 1 day ago 1 reply      
For those looking to find such a job, I'd recommend going independent. Though I usually work 40+ hours a week, given my current client base and relationships, I could easily set up the contract such that I work <= 20 hours a week. Also, I really think that this model only works at an hourly level. We all know that being on salary always equates to 40+ hours a week, emphasis on the +.
penguat 1 day ago 0 replies      
The answers already here address the direct effect in terms of company costs - there are fixed costs of an employee as well as variable costs.

I am more interested in what it does to your typical team in your something-like-typical corporate environment: I think it would tank team productivity. Instead of programmers getting up to maybe 6 hours' writing software done, programmers would get significantly less time doing that. And we would all hate it.

This effect would happen because a) there are interruptions, and communications overheads to being part of a team; and b) because you need a bigger team to cover the reduced productivity, all of your things that scale poorly with the size of a team mean you need an even bigger team, with each person therefore doing less development.

I think companies which are smart about this whole building software thing are more interested in teams and their capabilities than individuals and their capabilities, and I think many of those would look at this in the same way as I just did.

monort 19 hours ago 0 replies      
Companies have a constant overhead per programmer, so you would see a larger than 25% pay cut, and most people are not OK with that. The solution is either freelancing or taking extended unpaid vacations when switching jobs.
bcoates 1 day ago 0 replies      
Nobody in a position to make the decision gains anything from lowering your salary. Reducing your pay doesn't solve a problem they have, but explaining how you don't work 10 hours a day like the employees of $BADMANAGER creates a problem they don't have.

Letting your devs goof off/work on the side while keeping the seat warm is easier for everyone.

stefs 15 hours ago 0 replies      
i used to have a 40+ week with fixed hours at my previous employer and it got me close to depression and a burnout in about two years.

now 30h/week (fixed salary). my current company hires a lot of university students to work part time, so this wasn't a problem. we have mostly flexible work time (this might differ between projects). i get comp time but no overtime pay.

in practice i usually work as long as i feel concentrated and productive. this means that i usually work longer hours if there's a lot to do and i have a clear vision what and how to do it and fewer hours if i'm distracted, tired or have urgent stuff occupying my mind.

we're required to be flexible though when it comes to projects - so generally your choices are respected, but there might be occasions where we've got standups at fixed times or longer hours. in my experience those are rare.

imo this benefits both me and my employer and i'm very happy with it even though i earn a bit less than i could. on the upside i'm usually motivated, well rested and concentrated.

pinouchon 13 hours ago 0 replies      
I work 4/5 days with a 4/5 salary. I had this idea for a while. The key for me has been to negotiate it before taking the job.

It's been a year, and it has worked fine for me, no tensions with fulltime coworkers or anything like that. Just the occasional "yeah I'm working part time" when meeting new hires.

toomim 17 hours ago 0 replies      
> Why don't companies hire programmers for fewer hours per day?

My companies all let programmers choose when they work, and for how many hours. It works quite well.

thallukrish 19 hours ago 2 replies      
More than programming, often a lot of time goes into discussions and meetings. I feel the best thing that can happen is hiring a programmer who works end-to-end and releases a product all by himself/herself. This whole job of parallelizing, scheduling and managing looks so artificial, thrust upon us unnecessarily just because we were not able to slice and dice the problems in a way that lets them be addressed completely individually.
JDiculous 1 day ago 1 reply      
Mostly unwillingness to challenge the status quo. It's the same reason why despite all the incredible advances in remote communication technology (eg. Google Hangout/Skype, Slack/HipChat), most companies haven't embraced remote work.
dboreham 1 day ago 0 replies      
Programmers are usually salaried employees therefore (in US and other western labor markets I'm familiar with) are not hourly paid. So they are paying you for the 10 minutes in the shower when you figure out the hard problem.

Also: fewer, not less for discrete quantities.

lrem 17 hours ago 1 reply      
Programmer working hours are a fiction. I'm contracted to work 8-17, actual requirements (schedule-wise) are for 11-19. Many days I might either crunch 11-21, or slack 13-17, not mentioning all the time spent in office but resting. And this is all fine, as long as both me and my manager don't perceive a problem.
magoon 14 hours ago 0 replies      
Even if you're a good developer, you can spend 6 hours coding; why not use the other two focused on business, building your team, improving processes, and researching new tech strategies? Then you don't feel burned out coding all day.
intrasight 12 hours ago 0 replies      
Somewhat depends what you mean by "hire". I've several consulting clients. For one I work ~4 hours per day. At the other extreme is one for whom I work ~4 hours per month.
aaron695 22 hours ago 1 reply      
I don't believe it's directly beneficial for the company, as others have listed.

But it's actually because programmers are mostly men.

Society doesn't accept the benefits of men working part time yet, so a mutually beneficial system like the ones many female/equal-ratio oriented positions have has not been pushed in IT.

jack9 1 day ago 0 replies      
Having the programmer available for communication (changes, updates, confusion, immediate maintenance) with the people who do work the full day is more productive than having them unavailable and paid less. This is eyeroll territory. When I don't have a set deadline for Problem A as a contractor (this has happened), I work intermittently on Problem A and bill appropriately.
mydpy 1 day ago 0 replies      
Exactly. This pains me all the time. I would love to take a 25% salary cut for a 25% reduction in working hours.
1_800_UNICORN 1 day ago 1 reply      
> ...it's very rare for me to have a fully productive day. If I moved to 6 hours per day, I'm not even sure if I'd become less productive, I'd probably just spend less time chatting at the coffee room, and have a smaller chance to get burnout.

I'm not sure I follow here. You want 75% of the salary to do the same amount of work? That doesn't really make sense to me. You also have to take into account that companies have to manage the overhead of having more employees (more mouths to feed at events, more HR costs, at certain thresholds it triggers new legal requirements for the business). I doubt you're going to find a lot of takers from either employees or employers.

It sounds like you're dealing with a personal issue, which is that you feel like your company is "try[ing] to squeeze all the possible juice" from you. This sounds like you're burned out on your job. Maybe it's time to find a new company, a new position, a new work environment, something that will make you happier than you are now. I'm not sure that fewer hours and less pay is necessarily the solution here.

randprogrock 20 hours ago 0 replies      
Here's an idea: don't worry about programming, worry about negotiating better. I was obsessed with programming in my 20s and 30s and put in god-knows-how-many hours and all-nighters, and it never really got me anywhere. Nowadays I work about 40 hours, and I make a sh*t-ton more money. Oh, and full benefits. Also, I'm the best at what I do.

Intelligence in programming isn't the only kind.

dschanoncanon 17 hours ago 0 replies      
In my opinion, 2 developers don't produce 2 times the work. This means that if a company hires more 6hr/day developers, the work doesn't parallelize perfectly.
sqldba 21 hours ago 0 replies      
I'd accept it. Management is not interested, however.

I'd also rather have it as 4 days on, 3 days off.

anotherhacker 15 hours ago 0 replies      

Fewer hours forces you to focus and work.

ebbv 14 hours ago 0 replies      
On the fatigue issue: try pair programming. The day goes by a lot faster and with less fatigue for me. I was skeptical of the idea for many years, but once I really tried it I realized it was much better.

On the time issue: employers in the US could hire part time developers. But I think good devs are so hard to hire in the first place that if you find one you want to have them work a full 40 hours per week to maximize their productivity (in theory.)

It is unrealistic to expect 40 hours of dev time out of a developer who is working 40 hours a week; there's a lot of bookkeeping, meetings, emails, etc. to deal with. But if you cut the dev's time by 25% you're not reducing the amount of that non-dev stuff that needs to be done, so you're not making the time more efficient. Other than, as you said, perhaps reducing burnout. But that's what vacations are for.

19 hours ago 19 hours ago 1 reply      
Flagged for being a bot dupe of https://news.ycombinator.com/item?id=12161992.
19 hours ago 19 hours ago 1 reply      
This seems to be a spammy re-post of what a different user said many hours ago: https://news.ycombinator.com/item?id=12161852
19 hours ago 19 hours ago 1 reply      
Why are you saying the exact same thing that Harkins said to dangrossman?


vumgl 1 day ago 1 reply      
If you work 8 hours a day, you are productive 4 hours a day (so you are basically paid double salary for your 4 hours of productive work). If you work 6 hours a day, you will most likely be productive 3 hours a day.
Resources for Amateur Compiler Writers c9x.me
262 points by rspivak  1 day ago   70 comments top 11
senko 1 day ago 4 replies      
IMHO, compiler construction as an advanced exercise for amateurs is a topic that has been beaten to death (as OP suggests, there are tons of available materials and projects ranging from high-quality not-so-amateur to quick or fun hacks; I'm guilty of one myself).

On the other hand, I would love to see "HTML5 and CSS parsing and rendering for amateurs". Given the state of modern HTML5 and CSS standards, and ignoring compatibility and real-world usage (just like for toy compilers), "Let's Build a Browser Engine" sounds more tempting than "Let's Build a Compiler".

(To preempt "contribute to existing actual real-world engine" suggestions -- while that's worthwhile, it's like saying "contribute to LLVM" to someone looking to write a toy compiler, ie. completely misses the point).

zzzcpan 1 day ago 0 replies      
Frankly, I don't see anything interesting in that list, especially for amateurs.

As an amateur compiler writer you would probably want to make something useful in a few weeks, not waste a year playing around. And that's a very different story. It's essentially about making a meta DSL that compiles into another language and plays well with existing libraries, tooling, and the whole ecosystem, but also does something special and useful for you. So you should learn parsing (possibly recursive descent for the code and something else for expressions), a bit about working with ASTs, and that's pretty much it.
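The "learn parsing" step mentioned above can be sketched in an afternoon. Below is a minimal recursive descent parser for arithmetic expressions; the grammar and all names are my own illustrative choices, not from the comment:

```python
import re

# Grammar (illustrative):
#   expr   := term (('+' | '-') term)*
#   term   := factor (('*' | '/') factor)*
#   factor := NUMBER | '(' expr ')'

def tokenize(src):
    return re.findall(r"\d+|[()+\-*/]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        tok = self.peek()
        if expected is not None and tok != expected:
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.pos += 1
        return tok

    def expr(self):
        node = self.term()
        while self.peek() in ("+", "-"):
            # tuple elements evaluate left to right: eat the operator,
            # then parse the right operand
            node = (self.eat(), node, self.term())
        return node

    def term(self):
        node = self.factor()
        while self.peek() in ("*", "/"):
            node = (self.eat(), node, self.factor())
        return node

    def factor(self):
        if self.peek() == "(":
            self.eat("(")
            node = self.expr()
            self.eat(")")
            return node
        return ("num", int(self.eat()))

def parse(src):
    return Parser(tokenize(src)).expr()

# parse("1+2*3") yields ("+", ("num", 1), ("*", ("num", 2), ("num", 3)))
```

Precedence falls out of the grammar structure: `term` binds tighter than `expr`, so `*` groups before `+` with no extra machinery.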

PaulHoule 1 day ago 2 replies      
Is amateur the right word?

I am in it for the money which I guess makes me a pro but I don't have a computer science background and frankly in 2016 I am afraid the average undergrad compiler course is part of the problem as much as the solution.

Another big issue is nontraditional compilers of many kinds, such as JS accelerators and things that compile to JavaScript, domain-specific languages, data flow systems, etc. Frankly, I want to generate Java source or JVM bytecode and couldn't care less about real machine code.

johan_larson 1 day ago 0 replies      
Compiler construction is a big field, so it's easy to get lost in the details.

If you are mostly interested in principles rather than the most recent tooling, there's a course by Wirth that makes it tractable.

More here: http://short-sharp.blogspot.ca/2014/08/building-compiler-bri...

qwertyuiop924 1 day ago 0 replies      
If you're interested in getting started with interpreters, which are easier, you might want to look into Daniel Holden's excellent Build Your Own Lisp (And Learn C). Although it has been criticized for many reasons, it's a great book, and if you find interpreters and compilers totally magic, it's a good place to start.

Also, after reading What every compiler writer should know about programmers, I finally understand why people hate C. Because this just shows definitively that C compiler writers have been in their own little world for the past few decades.

Man, now I want a C compiler that wasn't written by a bunch of mindless jerks that will be the first up against the wall when the revolution comes...

jfoutz 1 day ago 0 replies      
Dybvig's dissertation is great. [1] People might disagree that it's a compiler, it targets a fairly high level vm rather than a native machine. But it's got everything you need. Really, you can fire up dr racket, type it in and have a great framework in an afternoon.

Anyway, it's very readable.

[1] http://agl.cs.unm.edu/~williams/cs491/three-imp.pdf

barrkel 1 day ago 0 replies      
If you're more interested in the front end than the back end, then Crenshaw's Let's Build a Compiler is still worthwhile.
_RPM 1 day ago 2 replies      
I wrote a VM, I still can't get recursion to work. It's hard.
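For what it's worth, recursion in a toy VM usually comes down to giving every call its own frame and saving the return address. Here is a hypothetical minimal sketch; the instruction set is invented for illustration:

```python
# Toy stack VM. The key to recursion: CALL saves the caller's frame and
# return address, and gives the callee a *fresh* frame for its locals.
def run(code, arg):
    stack = []                    # shared operand stack
    frame = {"n": arg}            # current call's locals
    frames = []                   # saved (frame, return_pc) pairs
    pc = 0
    while True:
        op, *args = code[pc]
        if op == "PUSH":
            stack.append(args[0]); pc += 1
        elif op == "LOAD":
            stack.append(frame[args[0]]); pc += 1
        elif op in ("LE", "SUB", "MUL"):
            b, a = stack.pop(), stack.pop()
            stack.append({"LE": int(a <= b), "SUB": a - b, "MUL": a * b}[op])
            pc += 1
        elif op == "JZ":                      # jump if top of stack is zero
            pc = args[0] if stack.pop() == 0 else pc + 1
        elif op == "CALL":
            frames.append((frame, pc + 1))    # save caller's state
            frame = {"n": stack.pop()}        # fresh frame for the callee
            pc = args[0]
        elif op == "RET":
            if not frames:
                return stack.pop()
            frame, pc = frames.pop()          # restore caller's state

# fact(n): return 1 if n <= 1 else n * fact(n - 1)
FACT = [
    ("LOAD", "n"), ("PUSH", 1), ("LE",), ("JZ", 6),     # 0-3: test n <= 1
    ("PUSH", 1), ("RET",),                              # 4-5: base case
    ("LOAD", "n"), ("PUSH", 1), ("SUB",), ("CALL", 0),  # 6-9: fact(n - 1)
    ("LOAD", "n"), ("MUL",), ("RET",),                  # 10-12: n * result
]
```

The bug that usually bites here is reusing one shared locals table across calls; the inner `fact(n-1)` then clobbers the outer `n`.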
cocoflunchy 1 day ago 1 reply      
I recommend http://createyourproglang.com/ too if you want something very simple and you don't know where to start.
Ind007 1 day ago 1 reply      
Is there any similar kind of collection for static analysis?
poseid 1 day ago 0 replies      
nice collection - i keep some notes myself here, and was able to generate my own parser with Jison https://github.com/mulderp/mulderp.github.com/issues/13

Once the parser returns the AST, it gets more complicated: how to decorate an AST, add actions, etc. I'm still looking to learn more about compiler backends.

Approaching Almost Any Machine Learning Problem kaggle.com
340 points by Anon84  4 days ago   33 comments top 9
glial 4 days ago 5 replies      
Interesting post, but there are several errors, or at least suggestions that don't make sense to me:

> Doing so will result in very good evaluation scores and make the user happy but instead he/she will be building a useless model with very high overfitting.

Testing on your training data doesn't in-and-of-itself lead to overfitting, but it will hide overfitting if it does exist (and is a terrible practice for that reason).

> At this stage only models you should go for should be ensemble tree based models.

Not sure why this should be the case. Many ensemble models are very memory hungry & slow, Random Forests being a good example. They are flexible and have minimal assumptions, sure, but that doesn't mean you shouldn't try any other modeling technique, especially if you have domain knowledge.

> We cannot apply linear models to the above features since they are not normalized.

Not true at all. You won't be able to meaningfully compare coefficient magnitudes, but you can certainly apply linear models.
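A quick synthetic check of that claim (the data and numbers are my own construction): ordinary least squares recovers the true coefficients even when the features sit on wildly different scales.

```python
import numpy as np

# Two features on very different scales: [0, 1] vs [0, 1000].
# Normalization only matters for comparing coefficient magnitudes;
# the fit itself is fine without it.
rng = np.random.default_rng(0)
x1 = rng.random(500)
x2 = rng.random(500) * 1000.0
y = 2.0 * x1 + 0.003 * x2 + rng.normal(0.0, 0.01, 500)

A = np.column_stack([x1, x2, np.ones(500)])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Here `coef` comes out close to the true `[2.0, 0.003, 0.0]` despite the unnormalized inputs; the 0.003 only looks "small" because its feature spans a thousand units.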

> These normalization methods work only on dense features and dont give very good results if applied on sparse features.

Since normalization is done feature-by-feature, there is no reason why this should be true.

> generally PCA is used decompose the data

If your latent features add linearly, ok, but do they? Is it meaningful to have negative values for your features? If not, consider using something like sparse non-negative matrix factorization.
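For the curious, the non-negative factorization suggestion can be sketched with the classic Lee-Seung multiplicative updates. This is a bare-bones NumPy version; a real project would likely reach for sklearn.decomposition.NMF, which also supports sparsity penalties:

```python
import numpy as np

def nmf(X, k, iters=200, seed=0):
    """Lee-Seung multiplicative updates for X ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        # Multiplying by a ratio of non-negative terms keeps every entry
        # non-negative, which is the point when negative latent values
        # would be meaningless.
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

Rows of `H` play the role PCA components would, but every weight in `W` stays interpretable as a non-negative "amount of" each part.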

Lastly, using this approach means you have a HUGE model & parameter space to search through. Because of this you will need a ton of data to get meaningful results.

He seems to be treating machine learning methods as a bag of tricks. That's ok so far as it goes, but in my experience it's much more valuable to try and understand your data, and the process that generates it, and then build a model that tries to reflect that data generation process.

vonnik 4 days ago 3 replies      
This is a great post as far as it goes. While Abhishek mentions Keras for neural networks (and Keras is a great Python library), he doesn't really go into the cases where deep neural networks are more useful than other algorithms, and how that changes a data scientist's workflow.

DNNs are really well suited to unstructured data, which isn't the kind he highlights. One reason for that is because they automatically extract features from data using optimization algorithms like stochastic gradient descent with backpropagation. What that means is they bypass the arduous process of feature engineering. They help you get around that chokepoint, so that you can deal with unstructured blobs of pixels or blobs of text.

Because unstructured data is most of the data in the world, and because DNNs excel at modeling it, they have proven to be some of the most useful and accurate algorithms we have for many problems.

Here's an overview of DNNs that goes into a bit more depth:


lqdc13 4 days ago 0 replies      
Approaching (Almost) Any Machine Learning Classification Problem.

If you are doing sequence labeling, learning something about the data, tackling partially unlabeled data or time-varying data, you generally have to take a different approach.

pesfandiar 4 days ago 2 replies      
It's a very insightful article about the nitty-gritty details of working on ML problems. However, as an outsider, I can't decide if some of the very specific statements given without any reasoning (e.g. good ranges of values for parameters) are highly valuable wisdom coming from years of experience, or merely overfitted patterns that he's adopted in his own realm.
slv77 3 days ago 1 reply      
Throwing out a couple of stupid questions since we're talking about rules of thumb...

1) How much impact does feeding irrelevant features have on a model? For example, if you added several columns of normally distributed random numbers?

2) How much impact does having several highly correlated features have on a model?

3) If you had limited time and budget, would it be better spent on cleaning data (removing bad labels, noise in data), feature engineering (relevant features), or feature selection (removing irrelevant or redundant features)?
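Question 1 is easy to probe empirically. A rough sketch (toy data, illustrative only) comparing cross-validated accuracy with and without columns of pure noise:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           random_state=0)

# Append 20 columns of normally distributed random numbers.
X_noisy = np.hstack([X, rng.normal(size=(500, 20))])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clean = cross_val_score(clf, X, y, cv=5).mean()
noisy = cross_val_score(clf, X_noisy, y, cv=5).mean()

print(f"clean: {clean:.3f}  with noise columns: {noisy:.3f}")
```

In my experience tree ensembles usually degrade only mildly; unregularized linear models tend to suffer more. Question 2 can be probed the same way by appending duplicated or near-duplicated columns.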

danvayn 4 days ago 0 replies      
Awesome post and site, my only complaint about this blog is that you'd think the top left would be a link and 'No Free Hunch' would not be one, but in fact it's the opposite..
cmdrfred 4 days ago 0 replies      
I've been looking for a place to begin with machine learning, thanks.
elgabogringo 4 days ago 1 reply      
Awesome post. Too hungover to read all that today though. Definitely Saturday morning over a cup of coffee - assuming I'm not in the same shape then.
greenpizza13 4 days ago 1 reply      
Great post, but how am I supposed to find it credible when he uses quotation marks for emphasis?

> The splitting of data into training and validation sets must be done according to labels. In case of any kind of classification problem, use stratified splitting. In python, you can do this using scikit-learn very easily.
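For reference, a minimal scikit-learn sketch of the stratified split being described (toy data; names are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Imbalanced toy problem: roughly a 90/10 class split.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# stratify=y keeps the label proportions (nearly) identical
# in the training and validation sets.
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

print("train positives:", y_tr.mean(), "val positives:", y_val.mean())
```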

Introduction to Zipline: A Trading Library for Python quantinsti.com
310 points by kawera  1 day ago   145 comments top 12
chollida1 1 day ago 17 replies      
I've typed and deleted this post a few times trying to find a way that it doesn't sound kind of pompous, but if it helps save one person a lot of money then screw it, I'll sound pompous...

I get asked quite a bit on how to start doing algorithmic trading and the first thing I always tell people is don't.

I think I've said this many times now but the number of people who come at it with the thinking "I'm a computer scientist. I'll just fire up R or python and apply some machine learning to the markets and watch the money roll in" is staggering.

I mean, each day hundreds of PhDs start with clean market data, more data sources than you could possibly think of, and statistical backtesting systems that have thousands of man-hours put into them, trying to find a way to make money.

After all of that, if you really still want to, here's something I wrote in response to an Ask Hacker News a little while ago:


TL/DR:

- focus on time periods greater than a day

- expect to lose money

- expect to take a year to figure out some edge in the market

- most decent trading strategies that a normal person can use come from economic/market insights first and technology second.

The site: https://www.quantstart.com/ is also decent at bringing you up to speed on the math you'll need to know though I believe that the material there oversells how easy it is to find a decent trading strategy.

cowmoo 23 hours ago 0 replies      
Hi, a shameless plug: I went to the Quantopian (the company that is behind Zipline and essentially uses Zipline as the core backend to their cloud platform) algo-trading hackathon two weekends ago and came up with this algo:


Pair-trading VXX and XIV based on the StockTwits sentiments of the SPY at market open. The backtest did really well from 2011 to 2014, with a 1700-1800% return in 3 years, and has been flat from 2014 to the present.

I'd really love it if people would clone the algo, improve upon it, and come up with ways to mitigate the drawdowns and improve the performance!

unknown_apostle 1 day ago 4 replies      
Everybody's trading nowadays.

How about just investing :-)

I.e. focus on periods longer than a year, which so few people/professional market participants do. And on actual businesses instead of the crazy antics of a line.

I wonder if you could use something like Zipline/Quantopian to screen huge amounts of consolidated balance sheets for markers of undervaluation. You could reject 1000s of companies and focus your manual vetting on the few that remain.

If you can find the dollar selling for half a dollar and you can understand why it's selling for that price (e.g. because the entire market is down), you may have identified a winner. Then all you need is a little guts and lots of patience. And a predefined set of criteria that you would constantly monitor to decide if your thesis is still valid.

shortstuffsushi 1 day ago 0 replies      
General question: how do you take this and interact directly with the market? Is there some sort of general, public API that you're making calls against, and where do you get an account for it, etc.? Or is this going through some firm that interfaces with the market?
lordnacho 17 hours ago 1 reply      
I'm sitting at an HFT here, coding.

This zipline thing is quite interesting if you're new, but if you can code, I'm not sure what the advantage is. The idea of a backtest is quite simple, and you can easily fire up something like pandas to do it for you. The equity line is simply your positions x returns, minus costs. To determine your positions, you have to make sure you aren't looking at future prices, but apart from that you are flexible in doing whatever you like.
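For illustration, a minimal pandas version of that idea - equity = positions x returns, minus costs - with the signal shifted one bar so today's position comes from yesterday's information (random toy prices and an assumed cost model, not a real strategy):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 250))))
returns = prices.pct_change().fillna(0.0)

# Toy signal: long (1) above the 20-bar moving average, else flat (0).
signal = (prices > prices.rolling(20).mean()).astype(float)

# Shift by one bar so the backtest never looks at future prices.
positions = signal.shift(1).fillna(0.0)

cost_per_unit_traded = 0.001  # assumed, purely illustrative
costs = positions.diff().abs().fillna(0.0) * cost_per_unit_traded

pnl = positions * returns - costs
equity = (1 + pnl).cumprod()
print("final equity:", equity.iloc[-1])
```

The cross-sectional case mentioned below is the same arithmetic with DataFrames: positions and returns become matrices (dates x instruments) and the per-date pnl is summed across the rows.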

And this was a question for me. Suppose I want to code a cross-sectional strategy. How would I do that in zipline? It seems to be the kind of thing that gives you one backtest for one time series. Perhaps I just haven't looked into it enough. When we backtest, often we want to do things across the ensemble. We also take positions in a whole universe of instruments, so the backtest needs to be a matrix, rather than just one column.

Incidentally, the example strategy will work quite well for retail traders. You can add a bunch of futures together and get a sharpe well over 1, basically what every CTA does but won't admit to. If you're wondering what all those PhDs do all day, it's adding capacity and researching minor improvements on that MA strategy. A colleague of mine worked at one of these brand names, and another friend owns one.

So, does that mean anyone can simply do this? Well, yes. But you'd have a lot of leg work to do, and you might get discouraged before you start. You need an account from someone like Interactive Brokers. You need a fair bit of money, or you'll have increment problems trading the large contracts. And you'll have to set up all the data feeds and look at it each day.

overcast 1 day ago 0 replies      
I basically implemented this, and a lot of other features, for my own personal trading bot against the Cryptsy API, written in Python. The idea was to make tons of small trades on alt coins throughout the day, constantly buying and selling on short crossovers, making fractions of a percent profit after fees. It turns into an up and down roller coaster; sometimes you're way ahead, and other times you lose it all. The biggest issue was the low volume on most of the alt coins. At the end of the day, like many others have said, it's mostly luck, you will lose money eventually, but it's an interesting learning exercise. The only real way to win is to have some insight into the market, not machines.
ocschwar 1 day ago 0 replies      
As a veteran of one algo shop, I have this to say:

Play with the data all you like. Don't try to trade on it if you don't really know what you're doing. (Or, just recklessly trade other people's money. It's fun.)

What you're seeing here is the "napsterization of finance." (Google it, it will lead you to the article I am almost plagiarizing).

Basically, the market at large puts together a pot of money (called "alpha", debatably). The better you are at trading, the more of that pot you get.

BUT this is not a zero sum game. It's worse.

If the markets are functioning properly, then the better you are at this, the bigger the share of the pot you get, AND the smaller the pot of money gets.

It used to be that middlemen like the NYSE stock market specialists made very large amounts of money doing what Homer Simpson automated with a drinky bird. Now, the algo shops have already shrunk that pot considerably. Good news for your pension fund. Bad news for you if you try this yourself. So don't.

lootsauce 1 day ago 2 replies      
For the past year I have been trying to learn more about trading, risk management, etc. There are so many stories about how the markets work and how to make money in them. You could spend your lifetime throwing money down a hole trying each one and probably do worse than random. I can't say enough good things about the perspective I have gained from just listening to good interviews of people that trade and manage funds for a living. Take a look at https://chatwithtraders.com/podcast/ and https://realvisiontv.com
edsouza 1 day ago 2 replies      
I have looked at Zipline before, but it does not handle intraday trades, and makes some guesses about when the trade executes during the "day", so you may not get the best price.

Running an algorithm for multi-day trades for more than a few months does not make sense given how the markets move, as certain events like "brexit", earnings, M&A, etc. affect the stock price.

If you are really interested in algorithmic trading, and you have programming experience, it's best to build your own backtesting system with intraday market data (pay for this).

This way you will know the ins and outs of a trading system.

hellofunk 1 day ago 0 replies      
Before you jump in to try to make your millions, a fairly well-known and accepted statistic is that at least 95% of day traders (using any method) lose money.

All the very successful day traders I know lost lots of money in the beginning before learning how to do it properly. You need to have a solid source of funds to fuel your learning, and tremendous patience.

lifeisstillgood 1 day ago 0 replies      
I just want to check my understanding of the algorithmic trading "world", so please do jump in.

Once upon a time (1986ish) the equities and bond trading world was run by humans talking to humans and agreeing deals; the prices were then fed into computer systems and the exchanges passed them around to make things mostly fair.

Fair, of course, is relative; the ecosystem was very hierarchical, with major institutions at the top, trading between each other at low fees, with brokers feeding up into them and retail shops feeding into major brokers. The customer got a raw deal, being charged heavy fees per transaction and getting a poor "spread".

Spread was where the major institutions made their money. Human traders effectively bought very low and sold very high - both because they were human and could not easily handle algorithms in their heads and because who was going to stop them? At the top of the hierarchy traders got to see both sides of every trade - they could net trades off one against the other to make deals with little risk. And if it was not visible in a fair exchange they had even more leverage.

Spreadsheets took off around now, making it possible for one trader to plan and monitor his trades and look really good to his boss.

And then it became obvious that having a human in the spreadsheet-to-trade loop was suboptimal. A human with a spreadsheet still needed to dial a phone, make a decision, go to the toilet. A Perl script could outperform him.

And at the time the algorithms were simple. If Exxon's share price dropped then pretty obviously other oil companies would drop too, but so would, say, car company stocks, but maybe coal miner shares would go up. And that's just on the LSE - the same goes for Hong Kong and Chicago. Those correlations I could work out in a Perl script. (OK, 1980, maybe some Basic :-)

And so algo trading was feasible with really tiny hardware - because the correlations in the world markets were simple, and large. And so low latency trading started. Because if I can use my ZX Spectrum or my Commodore 64 to beat major traders to the punch, then all you need is a faster computer than the Commodore and you beat me to the punch. And so it goes.

Fast forward twenty years and

- the hierarchy of the past is mostly still in place. Retail shops pull in the customers money, pass it upwards to brokers and they deal with traders at large banks. However the traders are much reduced, the volumes they do are orders of magnitude larger now.

- the spread has gone. Major institutions make money on tiny margins and tiny fees and just do vast vast volumes. Major FX desks will make maybe 10 USD on a billion dollars of Eurodollar trades (I think).

- the spread has gone for the algo traders. The reason PhDs are needed is that the correlations and arbitrage are all eaten up. The wins are few and far between and mostly need real-world events (Brexit)

- this is generally good: there is more trade on open exchanges (good for everyone) and there are smaller spreads (good for customers). The breakneck automation is also good for contractors like me :-)

I'm not sure where I am going with this to be honest - but mostly it's that I am sure Zipline is a good library, that the core part is written the way a proprietary engine would look if someone took a year to rewrite it, but the core tech will not give you any edge - that edge has gone. The correlations have gone except in esoteric areas.

If you want the edge, you need to be at the top of the tree again.

seibelj 1 day ago 0 replies      
I use the Magic Formula[0] strategy because index funds are too boring for me. It's a value strategy and you have to hold for a year, but it's fun to see your stocks rise (and fall).

[0] https://www.magicformulainvesting.com/

Functional Programming Jargon github.com
333 points by adamnemecek  2 days ago   99 comments top 18
erikrothoff 2 days ago 5 replies      
Seriously, what's up with these comments? If the examples are plain wrong, wouldn't it be more prudent to create an issue in the repo? Also, there seems to be a lot of hate for the use of JavaScript. It's a great thing that they chose the most widely available language. We get it, you know a purely functional language. Good for you! I am really disappointed in the general holier-than-thou attitude here.
greydius 2 days ago 8 replies      
I took a (mandatory for cs) class in college that introduced these concepts. For a long time I thought that was a normal part of cs curriculums, but it seems not.

Also, it's very strange to see these concepts illustrated with JavaScript. I imagine that's something like trying to learn Chinese using the Roman alphabet. Not that it can't be done, but a lot of important details are necessarily missing.

DeadReckoning 1 day ago 1 reply      
Why on earth are people complaining about using Javascript to illustrate functional programming language concepts? The whole point of this is helping non FP programmers to wrap their head around these terms. Javascript is by far the best mainstream language for doing functional programming.
white-flame 2 days ago 1 reply      
Defining what things are doesn't describe what they're useful for. Consider the Monad entry; does this bring any clarity to a beginning functional programmer who's looking up why people use monads?
dexwiz 2 days ago 3 replies      
It uses Javascript in the examples, but Javascript has slightly different definitions for some.

Partial Application and .bind(). Function.bind() in JS is about setting `this` for when the function is called[1].

Constant, `const`, declaration in JS governs reference reassignment. Is that the same as referential transparency? Either way the example is wrong. The following is not an invariant. `five` cannot be changed, but `john.age` can be. The object reference is constant, but not the object value.

`john.age + five === ({name: 'John', age: 30}).age + (5)`

Lazy evaluation should be called a generator if we are doing JS-specific naming conventions.[2] Seems like the Haskell camp likes the term Lazy Evaluation, but the Pythonic yield camp likes the term Generator.
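For the record, a tiny Python example of the yield-style laziness in question - nothing is computed until a value is actually requested:

```python
import itertools

def naturals():
    """An infinite sequence; each value is produced only on demand."""
    n = 0
    while True:
        yield n
        n += 1

# Building this pipeline does no work at all...
evens = (n for n in naturals() if n % 2 == 0)

# ...until we pull values out of it.
first_five = list(itertools.islice(evens, 5))
print(first_five)  # [0, 2, 4, 6, 8]
```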

[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[2] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

mavelikara 2 days ago 2 replies      
This could well lead to a GoF-style book being published. And 15 years later, the cool kids of that era can point and laugh at "Traversable Semigroup Isomorphism" like they do at "Abstract Factory Factory" now.
openasocket 2 days ago 1 reply      
Some of these are basic, some are more complex, and others I've never ever seen actually used. Does someone here actually know a practical (or at least non-toy) example usage of a co-monad?
catnaroek 2 days ago 0 replies      
How come the definitions of functor, applicative functor, etc. don't mention the most important part? The laws.

The definition of equational reasoning is also very weak:

> When an application is composed of expressions and devoid of side effects, truths about the system can be derived from the parts.

Truths about any system can be derived from the parts. What equational reasoning gives you is the ability to study the parts in isolation from each other.

sdegutis 2 days ago 1 reply      
What's most interesting to me here is that this article uses JavaScript for all its examples.

Also, this may well be the thing that finally helps me understand what the crap Monads are.

TheMagicHorsey 2 days ago 0 replies      
I wish the utility of these concepts was illustrated with some code. Although, I concede perhaps this is more difficult to do with short code snippets.
33degrees 2 days ago 1 reply      
For anyone looking for something more in depth, check out https://drboolean.gitbooks.io/mostly-adequate-guide/content/
amelius 2 days ago 1 reply      
Nice idea, but that must be the worst explanation of monads I've seen so far :)
ninjakeyboard 1 day ago 0 replies      
Some feedback for discussion - some of the FP stuff is sort of blurry to me after a point, but here is what I feel scratching at the surface as a Scala dude. Not trying to be critical - just whatever is off the top of my head, a la review.

Lazy: I feel like the definition of lazy needs to be demonstrated in contrast to eager evaluation.

val x = { println("evaluating x"); 5 }

lazy val lazyX = { println("evaluating lazyX"); 5 }

This would print "evaluating x" to the console. lazyX would not be evaluated, because it is not invoked. Here the eager definition of x demonstrates that it is evaluated immediately; lazyX is only evaluated when invoked.

Monads and higher-kinded types: the informal vs formal definition of some of the higher-kinded types is worth noting. You're either describing the most formal definitions (identity/associativity laws) or the informal definition (has flatMap, processing things in a series, as described in the Wikipedia article https://en.wikipedia.org/wiki/Monad_(functional_programming) ). Monad here seems to be the informal definition; otherwise the laws should be included, maybe? E.g. in Scala, Try is not a formal monad, as the monad laws don't hold true: https://wiki.haskell.org/Monad_laws Also, why is it "of" and "chain" here? Isn't it traditionally flatMap that indicates a type might be a monad, and then the laws that truly confirm it? Maybe this is JS land? If so, the doc should say "this is JS", as flatMap is pretty descriptive of map+flatten. It's hard to talk about - it's not at all clear to me what people mean when they talk about monads either - just throwing thoughts around.

I think you should outline how currying and partial application are related, maybe? E.g. currying is taking a function with one argument list and making it take multiple argument lists, and partial application is, more or less, the application of some arguments of a curried function. I think the relationship between these should be highlighted to draw out what part of the life of a function we're talking about, if you're trying to illuminate the subject for someone.
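To make the distinction concrete, a small Python sketch (the idea is language-agnostic; Python is used here only for illustration):

```python
from functools import partial

def add3(a, b, c):
    return a + b + c

# Currying: turn f(a, b, c) into f(a)(b)(c) - one argument list at a time.
def curry3(f):
    return lambda a: lambda b: lambda c: f(a, b, c)

curried = curry3(add3)
print(curried(1)(2)(3))  # 6

# Partial application: fix *some* arguments now, supply the rest later.
add_1_2 = partial(add3, 1, 2)
print(add_1_2(3))  # 6
```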

toolslive 2 days ago 1 reply      
What about homographs? For example, 'functor' means something different depending on the programming language (Haskell, OCaml).
jtchang 2 days ago 0 replies      
Hmm first time I had a semblance of grasping what the heck a monad was. Cool.
intjk 1 day ago 0 replies      
This is great! Throwing all of these into an anki deck. :)
iandanforth 2 days ago 0 replies      
This is great. Very helpful for understanding / translation.
namanyayg 2 days ago 1 reply      
Requesting @dang to change the URL to remove the #list fragment.
Pokémon Go Is Teaching Americans the Metric System gizmodo.com
206 points by ahmedfromtunis  3 days ago   161 comments top 22
nickcw 3 days ago 6 replies      
I always approximate the number of kilometers in a mile to phi (which it is pretty nearly - it is about 0.5% out), then you can use the Fibonacci series to convert between them.

1 1 2 3 5 8 13 21 ...

So if you want to know what 5 km is, find 5 in the Fibonacci series and the number of miles is the one just below - 3 in this case. Just use the next higher for miles -> km, e.g. 5 miles is 8 km.
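The trick is easy to sanity-check in a few lines of Python (a toy sketch that only handles exact Fibonacci inputs):

```python
def km_to_miles_fib(km):
    """Approximate km -> miles by stepping one place down the Fibonacci
    series. Works because 1 mile ~ 1.609 km, and the ratio of consecutive
    Fibonacci numbers approaches the golden ratio (~1.618)."""
    a, b = 1, 1
    while b < km:
        a, b = b, a + b
    return a if b == km else None  # None for non-Fibonacci inputs

for km in (2, 3, 5, 8, 13, 21):
    print(f"{km} km ~ {km_to_miles_fib(km)} miles (actual {km / 1.609344:.2f})")
```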

bigdubs 3 days ago 10 replies      
It is interesting that it would be trivial for them to localize the units; it seems like they specifically chose not to.

As a datapoint of one I was fine with thinking in kilometers because those are generally the units I use when running (you don't typically enter 3.11 mile races).

My sense of metric vs. imperial is that metric will seep into the public consciousness slowly over time, and hopefully some time in the future we'll realize it's better for everyone to use the same units, and have a referendum.

chriswarbo 3 days ago 4 replies      
Urgh, more discussions involving some contrived correspondence between 1 inch and 0.01m, 1 foot and 1m, and 1 mile and 1000m. There is only the metre; divide it up however you like, compare it to whatever imperial units you like, but "centi", "kilo", etc. are just a different way of writing ÷100 and ×1000.

The "real" reason to use metric is that all of the constants of proportionality are defined as 1. "1m = 100cm" is trivially true, by the definition of "centi", but 1m is also equal to all of the following, which I wouldn't even want to attempt to figure out in imperial units:

  1 metre
  = 1 Joule / 1 Newton                      -- applying 1N of force over a 1m distance requires 1J of energy
  = 1 Volt * 1 Coulomb / 1 Newton           -- 1 Volt is a potential difference of 1 Joule per Coulomb of charge
  = 1 Volt * (1 Amp * 1 second) / 1 Newton  -- 1 Amp of current is 1 Coulomb per second
  = 1 Watt * 1 second / 1 Newton            -- 1 Volt * 1 Amp gives 1 Watt of power
  = 1 Watt * 1 second / (1 kilogram * 1 metre / (1 second * 1 second))  -- from Newton, force is mass * acceleration
  = 1 Watt * 1 second^3 / (1 kilogram * 1 metre)                        -- (by rearranging)
  = (1 Joule / 1 second) * 1 second^3 / (1 kilogram * 1 metre)          -- 1 Watt is 1 Joule per second
And so on. We just need to get rid of anachronisms like litres (1l = 0.001m^3), defining "kilogram" as the name of a unit, and of course prevent the use of weird language-juggling like reporting the distance to Saturn as "1000 million kilometres", or measuring energy in "kilowatt hours".

Genuine question: does the imperial system even have distinct units for mass and force? I tend to see ounces, pounds, stones, etc. converted to kilograms, which implies they measure mass; but pressure seems to be measured in pounds per square inch, which would be a 2D density (e.g. used as a measure of paper quality).

chillaxtian 3 days ago 0 replies      
> Unfortunately, there's no way to change the in-game units in Pokémon Go.

I think they meant 'fortunately'.

bunkydoo 3 days ago 1 reply      
Finally, a way to teach Americans the metric system that doesn't involve weed
fpgaminer 3 days ago 1 reply      
I switched to using primarily Celsius a year or two ago. It's been tremendously useful for me, more so than switching to metres (which I've been doing as well, though not as aggressively). Like most tech companies we have to frequently interact internationally, and everyone uses metric outside of the U.S. Knowing Celsius has made even basic conversations easier. I mean, "How's the weather where you are?" is such a common conversation starter, and it's one I can actually meaningfully have with people outside the U.S. now. No more "oh, pretty hot", instead I can just say "It's over 35 here!"

And as an engineer I obviously need to deal with metric for most everything (except those damned PCBs; thou, really!?), so it's very useful to have an intuitive understanding of what temperature my chip is at, or how thick 3mm is without grabbing a ruler.

Switching to Celsius is also a lot easier than switching to metres. You can get through a day using only Celsius, but you'll have a lot of trouble traveling around in a car using metres.

p4wnc6 3 days ago 0 replies      
When I read the title of this post, I first imagined a happy and earnest grade-school teacher saying it, perhaps sharing the story with some fellow teachers in a teacher's lounge, all discussing it positively.

Then I imagined an angry state legislator slamming his or her hand down on a brown lacquered table in some nondescript political conference room and yelling, "Pokémon Go Is Teaching Americans the Metric System!" and a bunch of interns scurrying in fright.

joezydeco 3 days ago 1 reply      
Metric is in a lot more places than you'd care to believe. And it's been there for decades.

Ask any high school student if their science classes are in metric units or English units.

What units does the AP Physics test use? How about the SAT? And college chemistry 101?

fencepost 3 days ago 2 replies      
It's also teaching Americans to go ride bikes again, or it will be eventually. I say this because I currently have 35km worth of eggs "stacked up" beyond the 2 that I'm gradually working on, and I'm sure that I'll accumulate more before I manage to get rid of all of those.

On the upside, it's provided some gamification to the increased walking that I was starting to do anyway, but it's also making me think about getting my bike down from the garage ceiling where it's been hanging for a couple years. Perhaps I'll start riding for quick trips out instead of driving...

gpvos 3 days ago 0 replies      
Given the obesity epidemic, it would actually be reasonable to let Americans walk 1.6 times as far.
spullara 3 days ago 0 replies      
Americans know the metric system. They just don't use it regularly for things outside science and technology. Everyone has to learn it though.
imgabe 3 days ago 0 replies      
Anyone interested in running has known how many miles are in 5km for a long time.
chriswarbo 3 days ago 1 reply      
I don't know much of the imperial system, but since the metric system only has one unit of length (the metre) it would presumably be easiest to convert that directly into imperial, rather than choosing arbitrary multiples (e.g. 1000m).

In fact, the metre coincides quite nicely with 1 yard (or 1.1 yards if you want more precision; which is still "just ones"). That seems much easier to remember than, say, 1610 metres/mile, or 0.62 miles/1000m.

Once you're in imperial you can presumably use the various conversion rules to get feet, miles, inches, furlongs, etc.

fma 2 days ago 1 reply      
More importantly, people: Americans are now walking 5km... maybe there will be a small dip in the obesity rate while the app is trending.
mhartl 2 days ago 0 replies      
Next thing you know we'll be buying soda in two-liter bottles and measuring our pills in milligrams.
beefield 2 days ago 1 reply      
I think the world would be a much better place if we compromised so that the US moves to SI and the rest of the world gets rid of the decimal comma and starts to use the decimal point.

And as icing on the cake, everyone could switch to ISO 8601 date formatting everywhere.

iainmerrick 3 days ago 0 replies      
The Fibonacci sequence is a pretty good rule of thumb for converting between miles and kilometres. 2 miles is about 3km, 3 miles is 5km, 5 miles is 8km, etc.
truantbuick 3 days ago 1 reply      
So are drugs.
ragebol 3 days ago 0 replies      
ap22213 3 days ago 2 replies      
Shopping for groceries outside of the U.S. is so straightforward that the U.S. food industry should worry about loss of profits in moving to metric.


$4.49 / lb

0.67 lbs



$9.89 / kg

0.3 kg


free2rhyme214 3 days ago 1 reply      
No it's not. People just ask Google or Siri. They still don't know that 1km equals 0.62 miles.
sandworm101 3 days ago 0 replies      
No it isn't. At most it is teaching a single unit, not the system. And if all it's doing is teaching them how to convert that unit into something they know, rather than actually using the kilometer for what it is, then it teaches nothing.

Until you live with Americans you don't appreciate how focused they are on accent and vocabulary. Social groups are defined by their vocab. Someone using the wrong synonym for the context betrays their foreignness. I remember once using the phrase "by whatever metric" in a lecture, only to be informed that the correct wording was "by whatever measurement". Even the word is a red flag. That was a law lecture. Had it been a physics lecture my choice would have been apt. It isn't a lack of knowledge that holds them back. It's a class system in which the use of the metric system is an important identifier.

Also, pokemon can do nothing in comparison to the US military's increased use of metric, but even there its use in casual conversation remains an identifier of background/rank history.

9th Circuit: Its a crime to visit a website after being told not to visit it washingtonpost.com
259 points by walterbell  1 day ago   240 comments top 29
ramblenode 1 day ago 9 replies      
This opinion is based on a fundamental misunderstanding of how the internet works.

A client which is on the same network as a server can request data from the server, and the server chooses how to reply. The internet is just a really big network, so for an internet-facing computer the assumption is that anyone can request data. The analogy of the request-response process to physical space isn't--as the judge claimed--someone entering a building, but rather someone asking permission to enter the building. Asking permission to enter a bank with a shotgun is very different from actually entering and is not itself a crime (as far as I'm aware).

Every interaction between the client and server begins as a request from the client. If the server replies "yes, have some data" it's nonsensical that this could be construed as gaining unauthorized access, as Facebook claims. Ultimately, Facebook still controls access to its servers; it just hasn't figured out a good policy for denying certain requests. This is a technical problem which should be solved through technical means (e.g. blocking IP addresses) or by changing its terms of service with its users.

Unfortunately, rather than solving the problem itself, Facebook has convinced the federal government to work on its behalf through an especially blunt weapon known as the CFAA. This risks changing the entire legal framework in which everyone else is required to operate. Technology moves much faster than the laws governing it, so we may not appreciate the full impact of this decision until later. That should give even proponents some pause.

Animats 1 day ago 6 replies      
The court seems to have it right, while the article author doesn't see the court's reasoning. The issue is whether accessing a web site after being sent a cease and desist letter telling you to stop is an act for which the accessor can be assessed damages. (Despite the title, this was a civil case, not a criminal case.)

A previous case established that accessing a web site in violation of its terms of use is not a offense under the Computer Fraud and Abuse Act. That's just a contractual issue.

Here, though, the site operator (Facebook) explicitly sent a cease and desist letter to the accessing service, telling them to stop doing what they were doing (which apparently involved some kind of message sending on behalf of Facebook users). That put the accessing service on notice that they were accessing the site without authorization. After that, access was considered unauthorized. Seems reasonable enough.

donkeyd 1 day ago 3 replies      
With Facebook buttons being on many websites now, wouldn't Power technically breach the cease-and-desist by visiting any web site that has a Facebook button? That's what troubles me about this: websites aren't fixed to a single location like physical locations are. You can 'access a computer' without really being aware of it.
numlocked 1 day ago 5 replies      
The physical space analogies in the comments seem reasonable:

"If I am given a formal trespass warning by Wal-Mart which covers the property of Wal-Mart, going back is a criminal offense. Going into the store is an offense. Driving in the parking lot, if it is owned by them is an offense. The different norms of public spaces vs private spaces vs semi-public space goes out the window when you have been formally trespassed from a place. I don't see why the exact same rules shouldn't be applied in the context of computer trespass."

It doesn't quite feel right and my intuition says that the same rules should NOT be applied in the context of a computer trespass, but I haven't yet thought of a satisfying counterargument.

vertex-four 1 day ago 0 replies      
To intentionally access a computer system after being told not to by its owner is, by definition, unauthorized access of a computer system - the core of the Computer Fraud and Abuse Act. That should be pretty clear.

However, the law in question is old and probably doesn't make much sense any more. Claiming that the courts made the wrong decision is nonsense - the law needs to be rewritten.

ktRolster 1 day ago 0 replies      
The law is not good, but the court's ruling seems correct and consistent with the law.

The law says, "Unauthorized access to computer systems is a crime." Facebook sent them a cease-and-desist letter saying not to access their system anymore. The court agreed that at that point, they were no longer authorized to access the system, and that they knew they were not authorized.

So unless the law is unconstitutional, then the only way to solve the problem is by getting congress to change the law. Maybe that sucks but welcome to America.

rabboRubble 1 day ago 1 reply      
Late to the party, but my 2 cents.

If I'm ordered to avoid a certain house or small non-social website, I can do that easily.

Facebook, on the other hand, goes out of its way to make contact with me. I have to use special browser add-ons to keep it from tracking my web behavior across other sites. Is this contact? What happens if a 3rd party sends me something that is Facebook related and I accidentally click on it?

The order to not contact Facebook reminds me a little of an order to stay away from any physical location where post or shipping activities are conducted or offered. Potentially an order so broad that it would preclude me from visiting any location in my community (grocery stores with post offices) or any website online (www.momandpopwebsite.com with integrated FB social media stuff).

enjo 1 day ago 0 replies      
I assume this is the same Power Ventures Facebook previously won an important judgement against: https://en.wikipedia.org/wiki/Facebook,_Inc._v._Power_Ventur....

In that case Facebook asserted that it was a copyright violation when Power Ventures copied data to its servers, parsed out the pieces they cared about (for contact aggregation), and dropped the rest. The court agreed, ruling that making even a temporary copy in computer memory was indeed a copy.

This appears to be another part of that same case, this piece dealing with the promotional campaign Power Ventures ran. These two have a long history of litigation.

__b__ 1 day ago 0 replies      
What if Zuckerberg had received a cease and desist letter when he was accessing computers without authorization at Harvard?

Before any student willingly sent him personal information, he had to exfiltrate such information, i.e. photos of other students, so that other students would be compelled to look at websites he created using said photos.

He did eventually receive a cease and desist letter, and he ignored it. But of course it was not from the people charged with protecting students' personal information nor the students themselves. You know the story.

As with Google, under today's culture it's acceptable for Facebook to aggressively collect personal information in bulk and pay little attention to obtaining permissions, but it is not acceptable for anyone to attempt to collect information in bulk from Google or Facebook. This makes no sense to me, but I guess I am just obtuse.

Maybe what Kerr is wondering is when the necessity of sending costly snail mail cease and desist letters will give way to some less expensive digital form of notice. When that happens, the threat of the CFAA can be used on a mass scale. Perhaps then we would see it in every Terms of Service. Maybe we could create a new HTTP response code: HTTP/1.1 606 CFAA Notice.
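
Playing along with the joke: 606 is not a registered HTTP status code, but nothing stops a server from emitting one. A sketch of what such a handler might look like with Python's stdlib `http.server` (everything here is imagined, starting with the status code itself):

```python
from http.server import BaseHTTPRequestHandler

class CFAANoticeHandler(BaseHTTPRequestHandler):
    """Answers every GET with the imagined 606 status."""
    def do_GET(self):
        body = b"Further access constitutes unauthorized access under the CFAA.\n"
        # 606 is NOT a registered HTTP status code; it exists only in this joke.
        self.send_response(606, "CFAA Notice")
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

Clients would parse it fine; whether a court would is another matter.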

cm3 1 day ago 1 reply      
How would you know that the link was requested deliberately by a user and not by JavaScript or some other autonomous network client? You cannot, unless there's a tamper-proof and indisputable (aka we know how to interpret brain waves) 24/7 audit log of a person's brain activities. Clients prefetching resources or even looking up DNS is fully automatic and already happens today. If this becomes law, a random website need only insert URLs on an otherwise innocuous page, for someone to get into trouble.
chatmasta 1 day ago 2 replies      
I wonder if the ruling would be different if Power had done the scraping client-side instead of using its servers. For example, if they had distributed a tweaked Facebook app for jailbroken devices that allowed users to "log into" Facebook and scrape data using their own device and Internet connection.

Or, more simply, a third party app that users install on their phone and calls the "private" Facebook API directly.

Would the same arguments hold?

morekozhambu 1 day ago 1 reply      
Does this apply to google bots and other spiders/crawlers?
ChicagoBoy11 1 day ago 0 replies      
Could this insane logic be applied in reverse? Suppose I just announce that I don't allow Facebook to "invade" my computer and to track me while I visit websites (FB now does this even with logged off users). Could I then sue FB when those scripts appear inside of pages I visit?
gm-conspiracy 1 day ago 1 reply      
Can somebody please clarify: Facebook initiated a civil suit based on the CFAA violation, after the cease-and-desists?

> Facebook then sued, claiming that Power's conduct violated the CFAA

So, was there a government initiated prosecution for this criminal violation?

How can you use CFAA violations in civil court without a criminal conviction of the violation?

Using preponderance of the evidence in lieu of reasonable doubt seems troubling for a criminal violation, or is this not the situation?

OJFord 1 day ago 0 replies      

 > when it continued to access Facebook's computers
Isn't this the crux of the problem? Viewing it as "accessing" a company's "computer", rather than making use of a service in any truer "analogue world" manner?

sriku 1 day ago 1 reply      
If I understood the ruling right, then I send FB a "cease and desist" letter prohibiting FB from accessing my computers, et voilà, all carte-blanche privacy violations by FB will go poof because they would be committing a federal crime if they so much as showed their icon on another website I visit?
mankash666 1 day ago 0 replies      
Disappointing! Especially since the data in question is technically the property of the individual (regardless of what Facebook and its BS lawyer-speak makes you believe).
tmaly 1 day ago 0 replies      
Regardless of how you feel about the opinion, the one thing that stood out for me was the walled garden. If you really want more and more of these cases to happen, keep using giant walled gardens like Facebook.
ReFruity 1 day ago 0 replies      
I misread the title as "It's a crime NOT to visit a website after being told not to visit it" at first and it made a lot more sense that way.
lisper 1 day ago 0 replies      
Previous submission 11 days ago:


seesomesense 1 day ago 0 replies      
The article badly misunderstands the court's ruling
tomohawk 1 day ago 0 replies      
Just stop using Facebook
csydas 1 day ago 1 reply      
I'm not convinced that the article is entirely on point with the reading of the case. From the summary, Power Ventures, Inc (Powers) wasn't just visiting Facebook, they were interacting and removing userdata via a method that was not through the Facebook Connect program at the time:

>"Facebook has tried to limit and control access to its website. A non-Facebook user generally may not use the website to send messages, post photographs, or otherwise contact Facebook users through their profiles. Instead, Facebook requires third-party developers or websites that wish to contact its users through its site to enroll in a program called Facebook Connect."


>"In many instances, Power caused a message to be transmitted to the user's friends within the Facebook system. In other instances, depending on a Facebook user's settings, Facebook generated an e-mail message. If, for example, a Power user shared the promotion through an event, Facebook generated an e-mail message to an external e-mail account from the user to friends. The e-mail message gave the name and time of the event, listed Power as the host, and stated that the Power user was inviting the recipient to this event. The external e-mails were form e-mails, generated each time that a Facebook user invited others to an event. The 'from' line in the e-mail stated that the message came from Facebook; the body was signed, 'The Facebook Team.'"

>"On December 1, 2008, Facebook first became aware of Power's promotional campaign and, on that same date, Facebook sent a cease and desist letter to Power instructing Power to terminate its activities. Facebook tried to get Power to sign its Developer Terms of Use Agreement and enroll in Facebook Connect; Power resisted." [1]

It's not made clear from the summary exactly what technical means Powers was using, but it seems like they were using functionality intended to be available only to developers via the Connect program, accessing the API through some other means.

That is, the access that is being described by the judge isn't just visiting a page, it's doing things like sending messages or photos or creating events as the user.

I don't really use Facebook and most definitely have not read the ToS for either developers or users, but it seems to me like the contention is that Powers was performing actions that should have been done through the Facebook Connect platform, not via the links they made on their page to have users post, and the users likely are not permitted within the TOS to grant someone developer access like this.

I'm not able to comment intelligently on how that should be handled, but I think that this is very different from the author's position of "it is a crime to visit a website you've been told not to visit". Powers was very clearly not just visiting, they were interacting with and exfiltrating data. Facebook said "as a dev, you need to access this data this way", Facebook blocked their method, and Powers circumvented these blocks. That is the contention, not that Powers "visited".

The rest I will leave for more intelligent people to argue.

[1] - https://cdn.ca9.uscourts.gov/datastore/opinions/2016/07/12/1...

Aelinsaar 1 day ago 1 reply      
...This is why we have a SCOTUS.
fapjacks 1 day ago 1 reply      
So then am I able to put an alert() on my page demanding that all law enforcement personnel not visit my site? And is this now enforceable? What about if I demand that black people not visit my page?

In my mind, it's a distant cousin of Sony telling people they can't mess with their Playstations even though they bought them. This puts so much power into the hands of the site owner. And it's for a criminal offense!

Zigurd 1 day ago 0 replies      
I'm usually very wary of slippery slopes but this is a business-to-business dispute. Based on the description, it appears that Facebook has a legitimate beef because Power's monetization depends on using users' accounts for Power's gain, and not simply to provide a useful tool for Facebook users. Facebook has a reasonable case that Power was underhanded. Building contact management tools that spam your contacts is, sadly, a pretty common dark pattern and Facebook should want to prevent that happening to Facebook users.
pasbesoin 1 day ago 0 replies      
So, I suggest a few prominent web presences communicate to the judge and prosecution that they are no longer welcome at those sites.
lasermike026 1 day ago 0 replies      
goto supremeCourt;
EGreg 1 day ago 1 reply      
Here's how I usually do it: send the email from a domain name that's not gmail, and your new fake law firm has to have LLLP in the company name

I've gotten so much stuff removed from the internet without even touching the DMCA, it's kind of sad actually

Linux 4.7 Released lkml.org
222 points by jrepin  2 days ago   78 comments top 7
voltagex_ 2 days ago 5 replies      
Honest question: how many years do you think it'll be until embedded device manufacturers (routers, various TV boxes, wifi hard drives [1]) ship recent kernels? A $200 modem/router bought 6 months ago ships with a hacked-up kernel that's barely buildable - mainly because the wifi chipset vendor refuses to open source their code and refuses to update the BSP [2].

1: http://www.seagate.com/au/en/support/downloads/item/wireless...

2: http://www.tp-link.com.au/gpl-code.html?model=Archer%20D9

jrepin 2 days ago 2 replies      
dominotw 2 days ago 1 reply      
>CPU accounting controller: Split cpuacct.usage into user usage and sys usage commit

Has anyone used this to calculate how many containers can be packed onto a machine based on historical usage data?
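
For anyone wanting to try: a minimal sketch of reading the new split counters, assuming a cgroup-v1 cpuacct hierarchy (the directory layout and the example path below are assumptions about a typical mount, not something from the changelog):

```python
import os

def read_cpuacct_split(cgroup_dir):
    """Read a cgroup's user/sys CPU counters (nanoseconds of CPU time).

    Assumes a v1 cpuacct hierarchy, e.g. a hypothetical
    /sys/fs/cgroup/cpuacct/docker/<container-id> directory containing the
    cpuacct.usage_user and cpuacct.usage_sys files added in 4.7.
    """
    def read_ns(name):
        with open(os.path.join(cgroup_dir, name)) as f:
            return int(f.read())
    return read_ns("cpuacct.usage_user"), read_ns("cpuacct.usage_sys")

def sys_fraction(cgroup_dir):
    """Share of a cgroup's CPU time spent in the kernel."""
    user_ns, sys_ns = read_cpuacct_split(cgroup_dir)
    return sys_ns / (user_ns + sys_ns)
```

Sampling these per container over time is one way to build the historical usage data the parent asks about.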

Sir_Cmpwn 2 days ago 4 replies      
I see there's support for the new Radeon RX480. I've been thinking about picking up a Radeon card. Can anyone speak to their experience of Radeon support on Linux and whether or not you think it's a good idea?
dopeboy 2 days ago 2 replies      
Could anyone comment on the state of power management in Linux? Are we up to par with OSX and Windows yet?
jshap70 2 days ago 4 replies      
too bad we'll never get a name as good as 'Hurr durr I'ma sheep' again
qwertyuiop924 2 days ago 3 replies      
Just so long as kdbus hasn't been merged yet...
Log Structured Merge Trees benstopford.com
257 points by kushti  1 day ago   25 comments top 8
MaulingMonkey 1 day ago 1 reply      
>> Yahoo, for example, reports a steady progression from read-heavy to read-write workloads, driven largely by the increased ingestion of event logs and mobile data.

> Sorry if this is an ignorant question. Are these applications really that sensitive to write throughput? Event logs are only read out of band, so what's the rush. Seems like any benefits are offset by the GC stall anyways.

There's two variables here: Latency, and volume.

I'm not sure how much of an issue latency is for event logs / analytic metrics (RE: "what's the rush"), but if you've written e.g. Pokémon Go with several million installs, there's a sheer volume of inbound data that - no matter how little you care about latency - leaves you with one of two options: throttle it and throw some of it away, or write something that can scale to handle that much sheer volume.

And the first option is not one: Throwing away player gameplay progression will piss them off.

mutex007 1 day ago 3 replies      
I am curious how an inverted index will be implemented efficiently atop an LSM tree. Maybe batch parts of the posting list under a numeric key and, when performing an intersection operation between two lists, iterate using a range scan. Could work but I wonder how fast read operations will be. Anybody attempted it yet?
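
A rough sketch of that scheme, modelling the LSM store's sorted key space with a plain sorted list of (term, doc_id) keys; the layout and all names are illustrative assumptions, not any particular engine's API:

```python
import bisect

def range_scan(index, term):
    """Yield doc_ids for `term` from a sorted list of (term, doc_id) keys,
    mimicking a key-range scan over a sorted KV store."""
    lo = bisect.bisect_left(index, (term,))  # first key with this term
    for t, doc in index[lo:]:
        if t != term:
            break
        yield doc

def intersect(index, term_a, term_b):
    """Merge-intersect two posting lists; each scan yields doc_ids in order."""
    a, b = range_scan(index, term_a), range_scan(index, term_b)
    out = []
    try:
        x, y = next(a), next(b)
        while True:
            if x == y:
                out.append(x)
                x, y = next(a), next(b)
            elif x < y:
                x = next(a)
            else:
                y = next(b)
    except StopIteration:
        pass
    return out
```

Since both scans come back in doc_id order, the intersection is a single forward merge; the open question is how many SSTable levels each range scan has to touch.
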
onetwotree 23 hours ago 3 replies      
Thanks for this. I really feel like I understand the algorithm, and more importantly, the tradeoffs involved, after reading that.

My company makes a product that involves a sort of primary feature that's insanely read heavy (something like 10k to 1 read to write ratio), and a secondary feature that's exactly the opposite. We just use Postgres for both (and it's obviously the winner for the read heavy workload) but I think I'll have to look more into this stuff.

One interesting thing to note about our write heavy workload is that the reads are usually in chronological order, with a filter. Anyone know what that implies for LSM based stores?

coleifer 22 hours ago 0 replies      
I wrote some python bindings to the sqlite4 lsm store:


serialx 23 hours ago 2 replies      
It would be more interesting to see more modern approaches with KV stores like forestdb[1]. Couchbase already replaced their storage engine with forestdb[2].

[1]: https://www.computer.org/csdl/trans/tc/preprint/07110563.pdf
[2]: https://github.com/couchbase/forestdb

cmrdporcupine 1 day ago 3 replies      
Why is sequential access still faster than random access on an SSD (author claims this)? I understand why for mechanical magnetic media but I do not understand why this is still a thing for SSD? I remember reading about LSM a long time ago, and thinking that with the advent of cheap SSD that this kind of optimization technique wouldn't be as relevant.

Is the advantage of sequential reading a product of the interface and protocol used to speak to the SSD device? Or is it actually a product of the physical media?

EDITED: (I had written sequential slower than random; I meant to write the opposite)

bogomipz 16 hours ago 0 replies      
In the section on read workloads the author states:

"External file: leave the data as a log/heap and create a separate hash or tree index into it."

I am confused by this because a heap with a separate hash or tree index would be a "clustered index" and not a heap at all correct? Or am I misunderstanding?
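
For what it's worth, a heap with a separate index pointing into it is usually called a non-clustered index (a clustered index would store the rows themselves in index order). A toy sketch of the layout being described, with records appended to a heap/log file and an in-memory hash index mapping key to byte offset (all names are made up, and values may not contain tabs or newlines in this toy):

```python
import os

class HeapWithIndex:
    """Append-only heap file plus a separate hash index of byte offsets."""

    def __init__(self, path):
        self.path = path
        self.index = {}           # key -> offset of latest record in the heap
        open(path, "wb").close()  # start with an empty heap file

    def put(self, key, value):
        with open(self.path, "ab") as f:
            offset = f.tell()     # append position = current file size
            f.write(f"{key}\t{value}\n".encode())
        self.index[key] = offset  # only the index moves; data stays put

    def get(self, key):
        offset = self.index.get(key)
        if offset is None:
            return None
        with open(self.path, "rb") as f:
            f.seek(offset)        # one seek straight to the record
            _, _, v = f.readline().decode().rstrip("\n").partition("\t")
            return v
```

The heap stays unordered (writes are pure appends); only the index gives you ordered or keyed access into it.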

kristianp 1 day ago 1 reply      
Wow, that thin font-weight is hard to read. In Chrome: hit F12, and in the Styles tab change the font-weight to 400.
Employee #1: Apple themacro.com
221 points by craigcannon  9 hours ago   61 comments top 10
GuiA 7 hours ago 3 replies      
One of the first projects they collaborated on was this huge sign of a hand with the middle finger raised. It was a huge cloth poster and they put it up on the roof of our school and weighted the ends with rocks, I think. This was the end of the building that all of the parents faced during graduation. And the idea was that during graduation they would cut some strings which would release this thing to roll down over the side of the building and it said, "Best Wishes, Class of '72!" and it was giving them the finger. [...] So that was like, their first prank together.

Sometimes, like when a student was entering the telephone booth, Woz would call the telephone booth and it would ring and the student would answer it. Then Woz would say, "This is Ramar the Mystic. I see wetness in your future," and as the guy is saying, "What?" Woz would throw a water balloon at him from the second floor. The guy would be all angry and Woz would say, "Well, Ramar was only trying to help."

These kind of remind me of those YouTube pranks where guys randomly kiss girls on the street or pick fights with people for "social experiments". Not really "pranks", just idiotic fun at other people's expense.

Jobs got a printed circuit board made and he figured out where to get all the parts.

Jobs often gets put down as just "the marketer" from Apple's early days, with Woz doing all the execution, but when you get deeper it sounds like Jobs enabled a lot of the logistics, supply chain, etc. Without him, Woz's prototype would have remained just that - a one off. In this light, Woz almost appears as the "idea guy" (where idea includes initial concept + first execution), whereas Jobs is the one who made it a viable product and company.

aresant 7 hours ago 2 replies      
I've always considered Jobs's advertising campaign - "Here's to the Crazy Ones" - which announced his return and Apple's new direction to be such a masterpiece because it was so PERSONAL to Jobs.

A recognition of what it had taken to START Apple and the recognition that to survive, and thrive, they would need to get back to those roots, toss the beige boxes to the wind, devil may care, here come the blue Bondi iMacs dammit!

Fernandez' quote puts a finger on it:

"It wasn't like this glamorous thing. It was this huge risk. Basically people would say, 'Why would you quit Hewlett-Packard to go work for a couple of lame ass guys, you know, one of whom is like this hippy guy who wears Birkenstocks and torn jeans and dropped out of school and had to sell a beaten up VW van to just afford to get started on this.' . . . The short way of saying this, I guess, is there was no startup culture."

And as much of an egoist as Jobs was, he nevertheless realized that he had to squarely define & embed that culture of "be crazy / think different" into Apple, and make it bigger than just another addition to the Jobs mythos:

"According to Jobs's biography, two versions were created before it first aired: one with a Richard Dreyfuss voiceover, and one with a Steve Jobs voiceover.[5] On the morning of the first air date, Jobs decided to go with the Dreyfuss version, stating that 'it was about Apple.'" (1)

(1) https://en.wikipedia.org/wiki/Think_different

m1c0l 42 minutes ago 1 reply      
FYI, "the Hamurabi game" links incorrectly to http://themacro.com/articles/2016/07/employee-1-apple/[https... which causes an error 404.
bluedino 7 hours ago 3 replies      
Look at the diversity in the first three employees: Jobs, half Middle-Eastern; Woz, an everyday American descended from a mix of a handful of European countries; and Bill Fernandez, a Hispanic/Latino.
smegel 2 hours ago 1 reply      
> We became fast friends. I got him interested in electronics and so

> Craig: Wait, really?

I don't know what is so surprising about this. Jobs was not an engineer or even really a geek. He was a visionary and a businessman. If he had gotten Woz into electronics, that would have been a big deal.

ktRolster 1 hour ago 0 replies      
Cool guy. His analysis of his economic situation is much different than we'd do today:

I figured that this could be pretty interesting and I was living with my parents and my car was paid off and I was very employable. So I figured that if this fell through it would be easy for me to get another job and there's no big loss, right?

bluedino 7 hours ago 0 replies      
mahyarm 7 hours ago 1 reply      
So what is his new company that he is starting?
robotmlg 8 hours ago 7 replies      
Technically speaking, Woz was employee #1 and Jobs was employee #2, although Jobs assigned himself #0 and always had 0 printed on his badge.
HillaryBriss 4 hours ago 0 replies      
> The infrastructure was there so you could say, "I want sheet metal done. I want a printed circuit board made." You could just go out and someone would do it for you. "I want to buy parts," someone could do it for you.

The ability to reach out locally and have hardware components custom made seems kind of important for the small innovator in today's fast economy. The speed of revisions and product evolution and all that.
