hacker news with inline top comments    27 Mar 2017 News
1
Gcam, the computational photography project that powers the Google Pixel camera x.company
95 points by PleaseHelpMe  3 hours ago   36 comments top 13
1
mihaifm 8 minutes ago 1 reply      
> One direction that we're pushing is machine learning

Is machine learning the answer to everything these days? Is it over-hyped...or should I start worrying that I don't know anything about it (as a developer I mean).

2
valine 3 hours ago 1 reply      
> So the team started to ask: what if we looked at this problem in an entirely new way? What if, instead of trying to solve it with better hardware, we could do it with smart software choices instead?

I feel like the lofty language takes something away from the article. It's not like using software to improve image quality is anything new. The image stacking technique they describe isn't particularly novel; I've done stuff like that for years. It even works on 3D renderings from a path tracer. It's cool that their camera is fast and provides a nice UX when stacking images, but that's only possible because of the camera's hardware.
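For readers curious what "image stacking" means concretely, here is a hedged toy sketch (nothing like Gcam's actual pipeline): averaging several aligned noisy exposures of the same scene reduces zero-mean sensor noise by roughly the square root of the frame count.

```python
import random

def stack_frames(frames):
    """Average aligned frames pixel by pixel to reduce noise."""
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

# Simulate 8 noisy captures of a flat gray scene (true value 100).
random.seed(0)
true_scene = [100.0] * 1000
frames = [[p + random.gauss(0, 10) for p in true_scene] for _ in range(8)]

single_err = sum(abs(p - 100) for p in frames[0]) / len(true_scene)
stacked = stack_frames(frames)
stacked_err = sum(abs(p - 100) for p in stacked) / len(true_scene)
# Averaging N frames cuts zero-mean noise by roughly sqrt(N).
```

The hard part in a real camera is aligning the frames before averaging (hand shake, moving subjects), which is where the clever software actually lives.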

3
boulos 2 hours ago 1 reply      
Sigh. I'm disappointed that this piece seemingly gives no credit to Andrew Adams, Jiawen (Kevin) Chen, and all the folks that made gcam happen. There isn't even a mention of Halide...

Edit: just to be clear, I like Marc just fine, but I don't see the point in excluding the team.

Disclosure: I work at Google and know the folks involved.

4
kumarm 3 hours ago 6 replies      
Surprised by the negative tone of the comments.

I have been using the Pixel XL since launch, and every one of my friends who has looked at the photos is amazed by the camera (many of them iPhone users). The Pixel XL made me the default photographer at parties (along with the fact that I can share the photos with everyone at the party using Google Photos).

The Pixel camera is amazing compared to any existing smartphone camera.

Side note: the only issue I have with the Pixel is that its glass is easy to break.

5
legohead 10 minutes ago 0 replies      
Had an interesting/cool moment with my Pixel at the theatre today. There was a line to get your picture taken in front of the show's giant poster. The attendant would take your phone and snap a pic.

When the attendant used my phone, she couldn't tell it took a picture -- because when you press the button it's instant. She was so used to other phones taking a second or two before a picture was taken.

6
felixfurtak 3 hours ago 1 reply      
Surely many of these improvements are simply due to the Sony Exmor sensor? HDR processing is only really possible when you have fast (and full resolution) burst mode available on the sensor.
7
srtjstjsj 3 hours ago 1 reply      
The article does not deliver on the headline (of course). It says that X wrote software that became HDR+, Google's version of HDR.
8
tambourine_man 2 hours ago 1 reply      
Sorry to hijack the thread but I found your handle a bit disturbing.

What do you need help with?

9
ipsum2 1 hour ago 0 replies      
This blog post is remarkably light on details. For the actual research paper behind the software in the Pixel camera app: http://www.hdrplusdata.org/hdrplus.pdf
10
ChuckMcM 3 hours ago 1 reply      
[Heh, the cynic in me is amused when these things come out on the heels of a lot of bad news about Google.]

I think computational photography is perhaps the biggest change to picture-taking since Ektachrome. Seriously: it takes better pictures with existing hardware, and does it by applying some interesting science to the mechanisms in the pipeline. I've been very impressed with the results and with how rapidly the camera produces the image after taking it. I wish my Canon SLR had this as part of its software load.

11
iamleppert 2 hours ago 1 reply      
I like how they have taken credit for implementing HDR along with a smartphone-sized burst camera.

This is purely a technical implementation and is hardly novel in any way.

12
pasbesoin 1 hour ago 0 replies      
Ok, since maybe Google people will be looking at this:

I have a Nexus 5x. The Google camera app recently, finally, received a revision that fixed some significant bugs, e.g. crashing on "zoom out".

It also introduced ISO offset/adjustment. However, the widget to make that adjustment appears only briefly, at the top of the screen, immediately after touching the screen -- to set a metering point or to zoom. This makes that control both difficult to trigger, at the top of the screen, and difficult to use with its small scale and position that leaves one's finger obscuring the readout. On my 5x with a Pleson glass screen protector, the resulting value also tends to jump a bit as my finger is removed (1). Finally, the setting only remains for a matter of seconds, until the app clears any custom (i.e. via touch point) metering setting (2).

I like to adjust the ISO offset setting until I get the exposure I'm after on the screen. I wish the control was easier and always available to trigger, easier to use, and that the value set would persist at least until the shot is taken and preferably, or optionally, until it is manually changed back.

You finally gave me ISO offset control -- thank you! But the control is difficult to the point where I mostly don't even try to use it.

The app update also introduced a widget along the right side that displays an ungradated meter of (digital) zoom level. Beyond seeing the relative zoom setting, this control doesn't really do much that I find useful. (Actually, sometimes I find myself waiting for it to clear so that I can see my composition better.)

That widget position on the right side would be the perfect place to optionally display a touchable ISO offset scale. More real estate, making fine adjustments easier. Finger not in the way of seeing both the values and the shot, while adjusting it.

If not by default, then maybe something that could be selected in the options?

Friends really like my photos from the Nexus 5x. I'd love to have more-better controls when composing them. Thanks.

--

1. My zoom level can similarly jump a bit when I lift my fingers from the screen. I assume this is because of the screen protector, as opposed to the underlying phone/app, and have come to live with it as the price for the screen protection. The jump in ISO offset, maybe because of the small size of its widget as well as its placement, is harder to control.

2. This momentary clearing was already happening before the app update. I like to take my time composing my shot, and this clearing already made doing so somewhat frustrating at times.

13
ktta 3 hours ago 1 reply      
The surprising part about this is that X decided to go with Medium as their publishing platform.
2
Python for Android github.com
126 points by gregorymichael  5 hours ago   29 comments top 8
1
danso 3 hours ago 1 reply      
OT but this made me think of RubyMotion, which I remember being wildly popular (among Rubyists) back in the day but don't hear much about it now [0]. How many people still use it?

[0] https://hn.algolia.com/?query=rubymotion&sort=byPopularity&p...

2
guelo 3 hours ago 1 reply      
From a very cursory look, this doesn't seem to hook into the Android framework at all, including the native UI or other system goodies like notifications, location, etc. The apps would run in their own little world with their own provided non-native UI.
3
rtpg 2 hours ago 1 reply      
This project (Kivy) doesn't use the native Android UI components, which is a bit of a negative. You don't get any of the stuff like accessibility.

So I remember hearing that a difficulty with hooking Python up to Android is that there's some sort of limit on the number of Java methods you can access through the JNI, but I could never find a proper explanation of this.

If I just had a version of CPython compiled for Android, given Java's nature of running on a virtual machine, wouldn't it be possible to dynamically load arbitrary classes through the JNI and use the native Android stuff?

What are the primary difficulties with this, if you ignore performance?

4
Humphrey 3 hours ago 0 replies      
Firstly, it looks like it's been around for a couple of years. Is the idea that you could deploy your Python code as an Android app? I'd like to see some example apps in the wild, so I could check out performance, etc. But it could be cool. I can imagine redeploying my web app with this, and then doing some funky database syncing with the cloud.
5
fencepost 3 hours ago 1 reply      
Can't do a lot of digging on this right now (phone), but for just running Python of various versions take a look at Termux, QPython and SL4A. Some may now be defunct, but searches on those should guide you to current info.

If this is instead a way to package and distribute compiled Python, it may be interesting, but I'm not sure Android is the best target.

6
voltagex_ 4 hours ago 1 reply      
From the issue tracker, it looks like the project could use some help - anyone want to hop on and triage?
7
tyingq 4 hours ago 1 reply      
How big are the generated APKs compared to a normal Android app of similar functionality?
8
agumonkey 29 minutes ago 0 replies      
I used QPython for a little while. I think it's based on Kivy.
4
Facebook activated my dormant account and wont let me deactivate it smashcompany.com
28 points by lkrubner  2 hours ago   8 comments top 6
1
averagewall 6 minutes ago 0 replies      
Since his Facebook account is clearly important to him, he shouldn't have reused a password that he shared with unimportant sites.
2
darkFunction 21 minutes ago 0 replies      
For those who still have access and don't just want to deactivate: https://www.facebook.com/help/delete_account
3
deadbunny 12 minutes ago 0 replies      
My guess is that someone is using password DB dumps and trying those credentials to log in to Facebook accounts, which in this case reactivated his account.

TL;DR Don't use the same password on multiple accounts, no matter how "unimportant".

4
krashidov 2 minutes ago 0 replies      
Is this a possible phishing attack?
5
Velox 25 minutes ago 2 replies      
What would the alternative be? Allow anyone to request that an account is deleted? Facebook needs some form of verification because the password was breached (and it's great that they are checking for that sort of thing).
6
Nugem_ 1 hour ago 0 replies      
I uploaded a picture of my driver's license with my finger covering my D.O.B. and address. They responded by deleting my account, which I was locked out of and did not want online any longer.
5
Logic Production Systems: A new mixed logic/imperative programming language lambda-the-ultimate.org
61 points by zephyrfalcon  5 hours ago   10 comments top 5
1
notthemessiah 2 hours ago 1 reply      
I recall trying a few mods that made Kerbal Space Program programmable so you could automate your rocket (some have used it to make SpaceX-style reusable boosters: https://www.youtube.com/watch?v=sqqQy8cIVFY), and mods that provided a domain-specific language were more convenient than mods that used an existing conventional programming language. However, the kOS language was still ultimately an imperative script, and too inflexible to be a complete programming language. It got me thinking about what kind of language would best be suited for writing controllers (I was also writing PID controllers at the time), and I noticed there wasn't a language for it.

I also recall a Minecraft mod called ComputerCraft where you program a "Turtle" (a la Logo) to perform automated digging actions, but the task of programming it in the easily embedded Lua proved somewhat inconvenient for simple actions, if only for the ALGOL-style syntax. Another mod tried Forth, but I get the sense that while one could be productive in Forth, it's a lot to learn.

The developers have already incorporated it in a proof-of-concept game: https://bobthesimplebot.github.io/, but I could imagine RTS-style games or mods where you not only have to scavenge for resources but also program your bots to do your bidding, like Dwarf Fortress with programmable drones instead of independently minded Dwarves.

2
andrewflnr 12 minutes ago 0 replies      
"Domain specific causal theory" means "program you wrote", right? Your if-then statements?
3
trapperkeeper79 2 hours ago 1 reply      
Logic programming and productions are a wonderful idea that I studied in undergrad back in pre-2000. These days, people seem hostile towards them. People just don't like the idea of specifying rules. I couldn't find modern intro textbooks on the topic or classes that cover it on MOOCs. I'm very surprised and confused.
5
hbk1966 3 hours ago 0 replies      
This is pretty cool.
6
AsciiMath An easy-to-write markup language for mathematics asciimath.org
266 points by mintplant  12 hours ago   80 comments top 25
1
jwmerrill 9 hours ago 1 reply      
To everyone who says "why do we need this when we have LaTeX?" I ask the question "why do we need Markdown when we have HTML?"

The nice thing about Markdown is that it's quite legible in its source form, which makes it less distracting to edit. Same deal with AsciiMath and LaTeX: AsciiMath is more legible in its source form which means that it has lower overhead during editing.

One of these is more legible than the other:

 AsciiMath: (f'(x^2+y^2)^2)/(g'(x^2+y^2)^2)
 LaTeX:     \frac{f'\left(x^2+y^2\right)^2}{g'\left(x^2+y^2\right)^2}
In my experience, most people who learn LaTeX don't do so until sometime around the middle or end of their undergrad career (certainly in physics; maybe mathematicians learn it sooner). Earlier than that, people struggle with junk like the Microsoft equation editor. No big deal?
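As a toy illustration of how mechanical the shorthand is (this is a sketch, not AsciiMath's real grammar), even a single rewrite rule already turns simple fractions into LaTeX:

```python
import re

def ascii_frac_to_latex(expr):
    """Toy rule: rewrite (A)/(B) as \\frac{A}{B}.
    Handles only one level of non-nested parentheses."""
    return re.sub(r"\(([^()]*)\)/\(([^()]*)\)", r"\\frac{\1}{\2}", expr)

print(ascii_frac_to_latex("(x+1)/(x-1)"))  # \frac{x+1}{x-1}
```

The real converter is a proper parser (it must handle nesting, like the example above), but the point stands: the shorthand carries the same information with far less markup.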

2
techwizrd 12 hours ago 2 replies      
I don't know about other math departments, but most of the students and professors I knew during my math degree knew LaTeX.

From my cursory glance over the page, this isn't much simpler than LaTeX; it mostly just reduces the number of backslashes. It doesn't save me much time when typesetting equations. Nowadays, I mostly type LaTeX for MathJax or Jupyter notebooks. Adding AsciiMath to Jupyter seems to be on the backlog [0], and it's dependent on CommonMark coming up with an extension system.

0: https://github.com/jupyter/notebook/issues/1918

3
sameera_sy 17 minutes ago 0 replies      
Everything is about how quickly we get used to it, though! It takes a little time to get used to LaTeX, but this is definitely something worth trying. Especially the word-based syntax seems much more effective here. Thanks! The website http://www.mathifyit.com/ helps in producing LaTeX syntax from plain English. Seems like something I'll use!
4
tarjei 11 hours ago 1 reply      
AsciiMath has one large benefit over LaTeX: it fits how you would write math in an email.

AsciiMath is perfect for users who do not know LaTeX (or code, for that matter) but need to use mathematical notation on a daily basis.

I applaud that AsciiMath has resurfaced. I've used it in combination with KaTeX a couple of times with great results.

5
jostylr 11 hours ago 1 reply      
This is about a decade old, from before MathJax and back when its predecessor, jsMath, was still pretty new. It was targeting students the most, not those who use LaTeX professionally. I used it to create TiddlyWiki notebooks for my students and it was something they actually used!

The goal was also about being translated into MathML. LaTeX is not necessarily concerned with mathematical sense while MathML (sometimes) is. I think this was also a motivation.

And, quite frankly, replacing \frac{a}{b} with a/b is a huge win for ease of writing basic math.

6
davesque 5 hours ago 0 replies      
It's probably worth somebody investigating a more short-hand notation for this kind of task. However, I feel compelled to say that I've never found the syntax which is commonly used to typeset equations with LaTeX to be all that complicated. When I was first learning it, I remember repeatedly thinking to myself, "This is it? This really isn't so bad!" Furthermore, the thing I like about LaTeX is that the syntax is very extensible. You can easily add more directives or macros and there are really only a few syntactic constructs that you can use to represent them. If I'm not mistaken, AsciiMath's approach requires that more specialized syntax would be needed when adding more features.
7
harmonium1729 7 hours ago 0 replies      
Despite knowing LaTeX, this is much more intuitive for me when communicating in plaintext. It just matches how I'd write it anyway. In an email I'd always use 1/2 or (f(x+h)-f(x))/h over their LaTeX alternatives.

If, however, the goal is to more easily edit LaTeX -- especially for folks who are less confident with LaTeX -- I suspect WYSIWYG is frequently a better option. MathQuill (mathquill.com), for example, is a fantastic open-source WYSIWYG editor for LaTeX.

Disclosure: we use MathQuill heavily at desmos.com, where I work, and have contributed to its development.

8
TheRealPomax 10 hours ago 3 replies      
I'm curious who the audience is for this. If it's people who actually care about maths, then this doesn't have any real value, because they already know LaTeX and will most likely appreciate the higher precision it offers (I personally fall into that category). If it's people who normally don't really need to write mathematics, then for the few times they need to, LaTeX might still make more sense thanks to convenient, quickly-googled online LaTeX creators like https://www.codecogs.com/latex/eqneditor.php

If there's a demographic between those two groups, then I might simply have a blind spot, but... whose problem does this solve, and what is that problem? If the answer is just "LaTeX is too much effort", the immediate counter-question is "for whom, exactly?", because it won't be people who already use LaTeX or need reliable maths typesetting on a daily basis, and it probably isn't people who need to use maths maybe a handful of times a year. So who's left, and what problem do they have where AsciiMath makes life easier?

9
lilgreenland 8 hours ago 0 replies      
After using MathJax to render LaTeX on my website I switched to KaTeX and saw a dramatic decrease in load times. I hope that AsciiMath doesn't suffer from the same speed issues as MathJax.

https://github.com/Khan/KaTeX

10
runarberg 8 hours ago 0 replies      
One of my earlier programming experiences was writing a more expressive alternative to AsciiMath [1]. I learned later that this was also the first compiler that I ever wrote. It fixes some of the shortcomings of AsciiMath: you should never have to resort to LaTeX-like syntax, you can enter Unicode characters directly, and it has a nice mapping to MathML.

1: https://runarberg.github.io/ascii2mathml/

11
andrepd 11 hours ago 0 replies      
What about this is so much different than LaTeX? It seems to have the same basic syntax but the commands don't start with a backslash. Also the symbol list seems severely limited.
12
throwaway7645 6 hours ago 0 replies      
Not that this and LaTeX aren't great, but I wonder if there is a more outside-the-box solution. APL can represent mathematics very well, using Iverson notation as its design, and it is executable to boot. I haven't spent a ton of time with it, so I'm not sure if I could read complex equations as easily with it once suitably trained. Other benefits of APL's notation are that there is no order of operations, and all you need is the character set, which I would guess is really easy to deal with. If it hasn't become popular after half a century, perhaps there really is something to the critical mass of our current notation.
13
lenkite 10 hours ago 0 replies      
AsciiMath is more readable than Latex for just about everyone except perhaps professional mathematicians. Simplicity versus power.
14
polm23 3 hours ago 0 replies      
Anyone remember eqn?

https://en.wikipedia.org/wiki/Eqn

I've enjoyed the part of this interview with Lorinda Cherry, one of its creators, talking about it in comparison with TeX (incorrectly transcribed as "tech").

http://www.princeton.edu/~hos/mike/transcripts/cherry.htm

15
krick 3 hours ago 0 replies      
This is fabulous. It seems crazy to me that some in this thread are like "meh, no big deal, LaTeX is fine".

Except, I guess it would be better if this could be compiled to LaTeX instead of being rendered directly. LaTeX is still the de facto standard, and surely there are situations where it would be more powerful. So this mid-layer would still be useful, I guess. But otherwise, I would gladly write everything I need in Markdown+AsciiMath instead of pure LaTeX.

16
a3_nm 8 hours ago 1 reply      
This looks nice, with a much more legible syntax than LaTeX. I'd love to use this, e.g., on my blog. The reasons why I won't:

- No server-side rendering. I don't want to burden my reader's browser with Javascript. (With MathJax, you can do it server-side, I explained how here: https://a3nm.net/blog/selfhost_mathjax.html)

- The project looks dead: https://github.com/asciimath/asciimathml/pulse/monthly

17
slaymaker1907 7 hours ago 2 replies      
Thank you! I've been wanting to do this for myself for a while, after becoming fed up with the verbosity of LaTeX. My strategy has been a little different, in that I've been working on plugging into equations using a Pandoc filter.

Instead of rolling my own or hacking into SymPy, I'll use AsciiMath.

18
thyselius 10 hours ago 3 replies      
I would love to have the opposite: get the code from writing maths as it is printed. Has that been made?
19
stu_douglas 4 hours ago 0 replies      
I actually wrote a little compiler that converts AsciiMath to LaTeX for a course in school.

Hooked up the executable to an Automator service so I could highlight some AsciiMath text and replace it with LaTeX from the right-click menu. Much faster for writing math notes in LaTeX!

If anyone's interested, the project's at https://github.com/studouglas/AsciiMathToLatex. Haven't touched it since I made it, so don't judge too hard :p

20
dbranes 7 hours ago 1 reply      
Great. I would love to have some support for drawing diagrams.

This seems to tackle the issue that LaTeX equations are not very readable, which is great. A related problem is that TikZ code for drawing commutative diagrams in LaTeX is basically completely incomprehensible. It looks like this project is in a good position to start tackling that problem.

21
wodenokoto 11 hours ago 1 reply      
What are the benefits of this over just using latex with mathjax?
22
dylanrw 8 hours ago 0 replies      
As someone who doesn't have a math background and doesn't know LaTeX, I find this very handy as a teaching and learning tool.
23
sigvef 7 hours ago 0 replies      
In a perfect world, everything is generated from ASCII: https://github.com/sigvef/sigvehtml .
24
devereaux 10 hours ago 2 replies      
That's nice, but we are in 2017. It may be better to support unicode. I mean I prefer to write things like:

, , =, ...

a + b ...

What I think is needed are generic 2d composition diacritics for unicode, to have text above/below/to the upper left/UR/LL/LR angle -- I mean, some more generic version to write things like =, with composition characters instead of the dedicated numbers, or letters.

I don't like LaTeX because I want WYSIWYG, which is what unicode is for. Even in the body of an email. Even in a reply on YN.

25
seesomesense 5 hours ago 0 replies      
At my kid's school, some children used Latex for their maths. Surely, adults can grok Latex too.
7
We like impostor stories because were afraid were impostors (2016) nautil.us
36 points by happy-go-lucky  4 hours ago   7 comments top 4
1
fauria 4 minutes ago 0 replies      
A young man pretends to be the son of actor Sidney Poitier, conning his way into wealthy Manhattan homes. http://www.imdb.com/title/tt0108149

A wily Frenchman passes again and again as a long-lost orphan. http://www.imdb.com/title/tt1966604/

2
sametmax 43 minutes ago 1 reply      
But we are impostors. Nobody gives you a manual on how to handle work, children, people, sex, money, or life in general. You are supposed to figure it out yourself, and everybody acts like the others know what they are doing. We are literally faking it until we make it for the most basic things in life.
3
Namrog84 1 hour ago 1 reply      
I think this also touches on an 80/20-type scenario. Many professions take serious time and experience to master, yet anyone could probably do 80% of the job with minimal training. It's the smaller, less common cases that matter and make up the rest.

It's easy to be a general when you're winning; perhaps there are no wrong moves. But to be one on the losing side and turn it around requires, I think, a special type of person or experience.

It reminds me of a 1990s movie where some rich people make a successful and an unsuccessful person switch places, and prove that many things are a matter of circumstances more than anything else.

But then again, watching The Most Hated Woman in America recently reminds us how much an individual can pioneer something.

4
chme 1 hour ago 0 replies      
We also like murder stories because we are afraid of being murdered.
8
Stockfish Strong Open-Source Chess Engine stockfishchess.org
102 points by mabynogy  7 hours ago   22 comments top 8
1
xal 4 hours ago 3 replies      
One aspect of chess that's really interesting is UCI, which stands for Universal Chess Interface. It's a simple protocol that all engines "speak" and almost all pieces of chess software support. Chess players tend to collect engines that are good at certain things and then use whatever UX they are most comfortable with. At least that was the case in the past, before Stockfish became so dominant.

The chess community in general is pretty technical. lichess.org is an absolute marvel of a website. It's full of brilliance and it's all open source. Interestingly, its computer analysis feature went through a notable progression of approaches:

First it was server-side Stockfish. Later they compiled Stockfish via Emscripten to JavaScript so that analysis could run locally. This recently became PNaCl for speed, and finally, even more recently, WebAssembly for portability. Pretty cutting edge for the community of such an old game!
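For anyone who hasn't seen UCI, it is just line-oriented text over stdin/stdout. A small sketch of building and parsing the two most common messages (the helper functions here are hypothetical conveniences, but the command strings are the actual protocol):

```python
def position_cmd(moves):
    """Build a UCI 'position' command from moves in long algebraic
    notation (e.g. 'e2e4'), starting from the initial position."""
    return "position startpos" + (" moves " + " ".join(moves) if moves else "")

def parse_bestmove(line):
    """Pull the move out of an engine's 'bestmove' reply."""
    parts = line.split()
    return parts[1] if len(parts) >= 2 and parts[0] == "bestmove" else None

print(position_cmd(["e2e4", "e7e5"]))               # position startpos moves e2e4 e7e5
print(parse_bestmove("bestmove g1f3 ponder b8c6"))  # g1f3
```

In practice a GUI pipes these lines to the engine process and reads replies back, which is why any UCI front end can drive any UCI engine.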

2
komaromy 5 hours ago 1 reply      
Not just strong; it hasn't failed to place at least second in any TCEC season (the de facto computer chess world championship) since 2013 [0].

[0] https://en.wikipedia.org/wiki/Top_Chess_Engine_Championship#...

3
billforsternz 3 hours ago 0 replies      
For Windows computers, Tarrasch Chess GUI (http://www.triplehappy.com) is a very simple way to experiment with Stockfish (it's installed as the default engine). Disclosure: I am the author of the Tarrasch Chess GUI.
4
shpx 1 hour ago 0 replies      
Someone organized a tournament of iterated prisoner's dilemma with access to the opponent's source code. I can't seem to find examples of anyone trying this with chess on Google.

http://lesswrong.com/lw/hmx/prisoners_dilemma_with_visible_s...

5
Aron 26 minutes ago 0 replies      
So how are all the magical constants optimized?
6
glinscott 3 hours ago 0 replies      
Very cool to see Stockfish up here :). One of the cool parts is that the testing framework is fully open as well, at http://tests.stockfishchess.org/tests. It's been pretty amazing to watch the progress of SF since it was put on GitHub by Marco and Joona. http://spcc.beepworld.de/ has some nice graphs. Those graphs are on an Elo scale as well, so a straight line means it is getting exponentially stronger.

Currently, for example, there are 87 machines contributing 341 cores of computing power for testing patches. That's pretty awesome for a completely volunteer-driven community!
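On the Elo point: a fixed rating gap corresponds to fixed odds, so equal rating steps multiply playing strength. A quick sketch of the standard expected-score formula:

```python
def expected_score(r_a, r_b):
    """Elo expected score for player A against player B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

print(expected_score(1500, 1500))            # 0.5: equal ratings, even odds
print(round(expected_score(1900, 1500), 3))  # 0.909: +400 Elo is roughly 10:1 odds
```

So each +400 on the graph multiplies the odds by ten, which is why a straight Elo line means exponential growth in strength.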

7
posnet 4 hours ago 0 replies      
Wow, it's only 8000 lines of code.
8
mrcactu5 4 hours ago 4 replies      
is this an effective tool for beginners?
9
Show HN: Cross-Platform Apple IIe Emulator Written in Go octalyzer.com
32 points by empressplay  4 hours ago   5 comments top 3
1
yobert 1 hour ago 0 replies      
So cool. I'd love to read the source if it's available?
2
empressplay 2 hours ago 1 reply      
Currently the Octalyzer features full-screen Apple IIe emulation with decent compatibility, USB and mouse-controlled joystick support, cloud disk library (when logged-in to Octa-Link), 3D camera support, 3D LOGO, enhanced BASIC interpreters, custom file browser and editor, and remote screen sharing.
3
loppers92 1 hour ago 0 replies      
That is awesome
10
Michael Puett's book The Path draws on the insights of Chinese philosophers theguardian.com
9 points by a_w  2 hours ago   1 comment top
11
Catching integer overflows with template metaprogramming (2015) capnproto.org
68 points by jgeralnik  9 hours ago   22 comments top 5
1
DannyBee 5 hours ago 4 replies      
"Some day, I would like to design a language that gets this right. But for the moment, I remain focused on Sandstorm.io. Hopefully someone will beat me to it. Hint hint."

Some language already did beat him to it: Ada

http://www.ada-auth.org/standards/12rat/html/Rat12-2-4.html

Ada's rules for type invariants check them in all the places you would expect, and are straightforward to use.

It's about as correct as you can get without it being impossible to compute :)
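The compile-time bounded-integer trick the article (and Ada) gets at can at least be mimicked at run time. A loose Python sketch of the idea, with the class name borrowed from the post but none of its actual semantics: every value carries a bound, arithmetic widens the bound, and narrowing back down is an explicit checked step.

```python
class Guarded:
    """Runtime sketch of a bounded integer: each value carries an
    upper bound, and arithmetic widens the bound, so an overflowing
    result can never be silently squeezed into a smaller range."""

    def __init__(self, value, max_value):
        if not 0 <= value <= max_value:
            raise OverflowError(f"{value} does not fit in [0, {max_value}]")
        self.value, self.max_value = value, max_value

    def __add__(self, other):
        # The bound of a sum is the sum of the bounds.
        return Guarded(self.value + other.value,
                       self.max_value + other.max_value)

    def narrow(self, max_value):
        """Checked narrowing back to a smaller bound."""
        return Guarded(self.value, max_value)

a = Guarded(200, 255)
b = Guarded(100, 255)
c = a + b  # fine: value 300, bound widens to 510
# c.narrow(255) would raise OverflowError: 300 no longer fits a byte
```

The C++ template version does the same bookkeeping in the type system, so violations fail at compile time instead of at run time, which is the whole point of the article.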

2
jbapple 5 hours ago 1 reply      
Two other ways of checking integer bound invariants:

1. Undefined behavior sanitizer, which checks integer overflows at run-time, like INT_MAX + 1. Can optionally check other invariants; see https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.h... and https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html.

2. Abstract interpretation - using a lot of constraint solver cycles and enough math to choke a dinosaur, you can check some integer bounds at compile time "relationally", like knowing that x < y. See, for example, "Pentagons: A Weakly Relational Abstract Domain for the Efficient Validation of Array Accesses": https://www.microsoft.com/en-us/research/publication/pentago.... I'm not sure, but that code might be available in https://github.com/Microsoft/CodeContracts.

3
staticassertion 5 hours ago 1 reply      
That's a pretty impressive response - committing to fuzzing with AFL, improving current fuzzing test coverage, adding new coding guidelines, and a paid security audit.

In regards to 4 (valgrind) perhaps utilizing other sanitizers would be a good idea.

Awesome that they tested whether those methods would actually have caught this bug - and multiple did. That's a smart metric.

And a writeup about TMP and compile time safety to boot.

I'm struggling to come up with anything more that I'd want in a disclosure.

4
winstonewert 2 hours ago 1 reply      
I'm unclear on how this would work out practically. Cases where my integers belong to a fixed range seem rare. I'd like to have seen more discussion of how it caught the bugs it did.
5
danschumann 4 hours ago 0 replies      
Boo, I thought it was a reference to the fictional Tom Paris's holodeck program.
12
Last call for the phone booth? cbsnews.com
21 points by walterbell  5 hours ago   15 comments top 6
1
TheGRS 41 minutes ago 1 reply      
I'm just going to take a moment to say that Sunday Morning on CBS has been one of my favorite news programs for... well, as long as I've been watching television. It was usually on before we went to church in the morning when I was a kid; I'd catch one every now and then in high school, and later I would put it on my DVR because I tended to sleep through the early-morning slot in college. I'm actually super excited to learn that they host segments on their website, and I think I know what I'll be binging on this week. Also, I'm super glad Mo Rocca is still with them; it really entertained me that they hired him after his tenure with The Daily Show (he was one of my favorite correspondents), and he's put out some great segments on the show since. Alright, trip down Nostalgia Lane over.
2
smelendez 1 hour ago 0 replies      
One ongoing use of payphones: people with cheap, pay-by-the-minute cell phone plans calling toll-free numbers for utilities, banks, etc. so they don't have to use up their airtime while they're on hold.

About two years ago I was in Chicago and noticed a well-maintained payphone. The company that operated it had even attached a decal with the toll free numbers for Comcast, Bank of America, the local electric company, etc.

3
unimpressive 1 hour ago 0 replies      
I'm not sure exactly how long it will be before pay phones are gone completely, but their density has been decreasing for nearly two decades now:

1998 - http://www.channelpartnersonline.com/articles/1998/12/paypho...

2002 - http://www.casa.ucl.ac.uk/cyberspace/requiem_for_the_pay_pho...

2008 - http://www.phonelosers.org/2008/09/pay-phones-are-doomed/

2010 - http://www.mnn.com/green-tech/gadgets-electronics/questions/...

2010 - http://www.cnn.com/2010/TECH/04/02/pay.phones.irpt/

2011 - http://www.phonelosers.org/2011/04/alternative-uses-for-pay-...

2017 - https://www.theatlantic.com/technology/archive/2017/02/objec... (Some are even ready to say they're already dead.)

From the perspective of computer history, pay phones are interesting because of their being the main host for phreaking subcultures. The best book on such is still probably Lapsley's Exploding The Phone:

http://explodingthephone.com/

4
cbanek 1 hour ago 1 reply      
Can't help but be reminded of The Adventures of Pete & Pete episode "The Call". It all revolves around a payphone out in the middle of nowhere.
5
lacampbell 2 hours ago 3 replies      
"I walked past this phone booth every day with my kid when he was three years old," Ackerman said. "And at a certain point, he said to me, 'Why is that phone in a box?' And I realized that he didn't know what a phone booth was, which is so bizarre!"

I am constantly amazed at people's lack of awareness of time. Of course a 3 year old in 2017 wouldn't know what the hell a phone booth was. How could that possibly surprise you? I bet this person also sees a celebrity they haven't seen for twenty years and then says something insightful like "wow, he looks so old!".

6
LeoPanthera 2 hours ago 1 reply      
How am I going to Hack The Planet now?
13
How to Be Someone People Love to Talk To time.com
156 points by knrz  8 hours ago   105 comments top 14
1
onmobiletemp 57 minutes ago 0 replies      
I started paying attention to people and discovered a lot of this on my own over the course of three years. At some point I realized that whenever I talked to someone their eyes would glaze over and their face would go stony. Then they'd talk to someone else and their eyes would become focused and their face alive and animated. Laughter. I figured out this was because they didn't care about what I was saying or about my opinions. So I tried various things and looked at their eyes. Sometimes their eyes would become alive again and I could tell they cared. Slowly you learn what people want to hear. And it's so true about smiling and body language; people feel uncomfortable if you don't project wellbeing. What you need to understand is that there is no logic in any of it. Humans are machines, and the algorithms that they employ for attention and emotion are surprisingly uniform and very unintuitive for autists like me and you. Don't worry about the logic of what's happening, just think of what their algorithm is doing. It's very difficult because you can't verify what people are thinking, you can't debug it and you can't start over -- you have to guess a lot. Overall people want to see big smiles and confident body posture. If you are slouched over people don't like it. If you stand up straight you will be amazed at how differently you are perceived. But it all has to be genuine. If you're trying to manipulate and understand people in a clinical way you will fail. All you need is a genuine desire to bond with people and the patience to pay attention to what seems to work and what doesn't.
2
xor1 2 hours ago 6 replies      
I think the article severely downplays the importance of attractiveness. If the other party finds you attractive, the bar is lowered to the point of you simply being normal/average in terms of intelligence, wit, and whatever else you want to include in your definition of what makes a person "interesting". You basically need to be a vapid idiot to give anyone a bad impression as an attractive person.

It's a huge factor. I've started putting some of my big programmer bucks into improving my appearance before I hit 30, starting with braces (family couldn't afford them as a kid), eyelid surgery to fix some mild ptosis, and a nose job. I've also started using sunscreen and moisturizer on my face on a regular basis.

The past few years have made me realize that your appearance only becomes more important as you age and progress in a white-collar career -- not less, as I was led to believe as a child. This is especially disheartening to realize while working in CA/NYC tech, which has always been billed as among the most meritocratic and progressive sectors. Getting into shape only takes you so far. I consider myself average now, but I want to be hot.

3
tyingq 7 hours ago 5 replies      
The best bit of advice in the article:

The right question is "How do I get them talking about themselves?"

I've noticed that even if the only thing you do is ask someone their opinion, and listen attentively, there is some sort of distortion field effect.

They will often later recall you as knowledgeable, insightful, etc...even though you never did anything but ask questions.

4
non_sequitur 5 hours ago 0 replies      
I learned a while ago that just asking questions isn't enough - sometimes people don't want to talk, or are really boring, there's too big a group to focus on one person, or just constantly interrogating a person gets weird, etc. So you should have some good stories in your back pocket as well. If you think about the most popular people you know, they aren't well received in social settings because they pepper everyone with questions - they're usually funny, chatty, quick witted, and can either carry or let someone else carry a conversation. Be like that guy/gal, not the one that can only ask questions.
5
nunez 5 hours ago 0 replies      
Skimming through the article, I observed that they missed the most important thing one must do to get better at talking to people:

Practice!

One doesn't learn how to write code without writing code. One also doesn't learn how to tie their shoes without actually tying shoes. So it follows that one doesn't learn how to become good at people without talking to people.

You've gotta go out! And I'm not talking about grabbing a drink and staying on the sidelines or going to that conference and being glued to your Mac the entire time. You've gotta approach people, and you have to get rejected.

People will walk away. People will ask to be excused. This stuff hurts, but just like a startup, you treat the mistakes as learnings and try again next time. It helps a lot to have a buddy that will help you through the process and give you feedback, since learning on your own (like I did) generally sucks.

How did I learn how to talk to people? I approached hundreds of women to start conversations with them during the morning rush and on the street. Nothing deep; usually stuff about food. My dating skills improved slightly, but my conversation skills went through the roof.

There are other things to keep in mind, too. People care way more about appearance than they let on, so dressing well and staying healthy go a long way. Body language is also something that people look out for without knowing that they're looking out for it. Fixing posture goes a long way towards fixing this too.

6
brownbat 2 hours ago 0 replies      
> How can you strategically make a good impression? From the outset, frame the conversation with a few well-rehearsed sentences regarding how you want to be perceived.

Klosterman comes to the same realization in Sex, Drugs, and Cocoa-Puffs, though from an unlikely angle--the dawn of reality television.

On The Real World, producers had no time to explain anyone's personality in depth, so they boiled each housemate down to a simple stereotype and selectively edited to play up that caricature. On the one hand, it was a trick of production that was massively distorting and harmful to several (most?) of the housemates.

On the other hand, we're all just like the producers when recalling our own interactions.

Like a 20-minute episode, there's just too much ground to cover to get a perfect reproduction of any person's life in a first meeting. A short working draft is the best anyone can hope for. If you help people form that, you can nudge it in a positive direction while also making yourself more memorable.

7
superasn 1 hour ago 1 reply      
Oh, internet and self-help gurus. Why do you have to be the "best" at everything and get the most out of stuff?

It's like those techniques they teach you, where before giving a bad review you first start with the good points and then add a "but". Sounds great in theory but just absurd when you realize someone is doing it to you on purpose.

You know you can do all this and create great rapport and win the title for best conversationalist but if this is not your nature you still won't have fun nor create that connection which you can have by just being you, with all your flaws, moles and warts. If you're not a total asshole, people like you anyway.

Just imagine if your friends were like this, trying to be the best conversationalists they can be with you instead of being the usual silly dickheads they generally are.

8
vinceguidry 1 hour ago 1 reply      
Everything about this is highly contextual and varies across cultures. Smiling in some situations makes you look powerful, in many others it makes you look weak. Being very animated can make you look carefree in some situations and just wild in others.

The more time I spend in cultures that I didn't grow up in, the more convinced I become that there just aren't any universalities in this direction. Any attempt to find one amounts to generalizing over all human behavior, and the effort will either be wrong, since there will be some cultures or situations where the rule doesn't hold, or it'll be useless, essentially telling you what you already know.

9
dkarapetyan 4 hours ago 0 replies      
There was that one time I observed a peculiar quality about a certain CEO. No matter what he was talking about it somehow would always circle back to talking about whatever company he was currently at and the conversation would always end with a joke and hearty laugh for all involved. This happened consistently enough that I thought it was a pre-determined act on his part.

Once I realized he was always practicing I kinda stopped talking to the guy because there was never any genuine interaction. He was always on the job and he was always practicing selling. Every conversation was just another opportunity for him to practice his messaging. I dubbed this mode of interacting and talking "ceoesque".

10
hamandcheese 1 hour ago 0 replies      
"Ask people questions since people love talking about themselves" is common conversational advice I hear.

In general I agree, but it's a bit disheartening when you realize that many people are so happy to talk about themselves that they never bother to ask you about yourself.

11
hoodoof 5 hours ago 5 replies      
I know a small number of very charismatic people. It seems to just be a natural function of who they are.

I have always wondered if there was a way to "become charismatic".

12
bitL 6 hours ago 13 replies      
Am I the only person that feels "hacking other people" for my own benefit is wrong?
13
AndrewOMartin 4 hours ago 2 replies      
I've been the recipient of active listening on more than one occasion and it's made me want to tear that person's lungs out through their mouth. It feels like you're the victim of a corrupt bureaucrat's evil stalling tactic.
14
mythrwy 3 hours ago 0 replies      
In some ways being a person people love to talk to is a burden. It takes time. Sometimes it's worth it. Quite often (and this sounds cold but it's true) it isn't.

Other people do make life good though and it's certainly a valuable skill. Just, it comes at a price.

14
Residual Value: Electric Batteries vs. Internal Combustion Engine Vehicles ark-invest.com
32 points by hunglee2  7 hours ago   19 comments top 7
1
URSpider94 4 hours ago 3 replies      
This is a terrible analysis. First of all, it assumes that the cost of a kWh of battery remains at today's high value. If battery costs are falling rapidly (and they are), then the value of a used battery pack will be driven down as inexpensive new packs come on the market. Second, it ignores the fact that 10-year-old packs will have a substantially higher rate of catastrophic failure than new packs, and if reliability is paramount then a used pack that is closer to failure won't have that much value. Third, the fact that every car brand uses very different pack designs means that it would be a nightmare to design some kind of heterogeneous power storage system made up of batteries from Leafs, Bolts, Fiats, Teslas, i3's, etc.

The current situation is that EV resale prices for everything but Teslas are plunging like a rock. Three year old Fiat 500e's are selling at auction for $4k, which is already less than the author's purported resale value of their batteries at 10 years.

2
Aron 13 minutes ago 0 replies      
I believe that as batteries get on in life their rate of decay accelerates. It's not a linear process.

Maybe there's room for recovering the raw materials effectively, and I wonder if there are upfront tradeoffs that could be exchanged to make that easier. Musk has repeatedly said that scale cost reductions lead ultimately to the material cost limit, and it's already at ~$50-$130 per kWh.
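To see why accelerating fade matters for residual value, here is a toy model with made-up numbers (the initial loss rate and growth factor are illustrative assumptions, not measured data): if each year's capacity loss is worse than the last, remaining capacity falls away sharply from the straight line that a constant loss rate would give.

```python
# Toy model of accelerating battery fade (illustrative numbers only):
# year-over-year loss grows by a fixed factor, so decay is super-linear.

def capacity_after(years, initial_loss=0.02, growth=1.3):
    cap = 1.0
    loss = initial_loss
    for _ in range(years):
        cap *= (1 - loss)   # apply this year's fade
        loss *= growth      # next year's fade is worse than this one
    return cap

for y in (5, 10, 15):
    print(y, round(capacity_after(y), 3))
```

With these assumed parameters the pack loses far more capacity in its second five years than in its first, which is exactly the non-linearity the comment describes.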

3
kristianp 4 hours ago 2 replies      
Do the batteries retain so much of their value because the energy density (J/kg) of batteries hasn't improved for a long time? If a new battery tech comes out that is sufficiently energy dense (and is mass produced), these old batteries won't have much value except as scrap.

https://www.technologyreview.com/s/602245/why-we-still-dont-...
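For a rough sense of the gap (approximate, commonly cited figures; exact numbers vary by cell chemistry and fuel blend), gasoline stores around 46 MJ/kg while good lithium-ion cells sit near 250 Wh/kg:

```python
# Rough energy-density comparison (approximate, widely cited figures).
WH_PER_MJ = 1e6 / 3600  # 1 MJ = ~277.8 Wh

li_ion_wh_per_kg = 250        # optimistic current Li-ion cell
gasoline_mj_per_kg = 46       # lower heating value of gasoline
gasoline_wh_per_kg = gasoline_mj_per_kg * WH_PER_MJ

ratio = gasoline_wh_per_kg / li_ion_wh_per_kg
print(round(gasoline_wh_per_kg), round(ratio, 1))  # 12778 51.1

# EV drivetrains are far more efficient (~90% vs roughly 25-30% for a
# gasoline engine), which narrows the *effective* per-kg gap a lot,
# but still leaves batteries more than an order of magnitude behind.
effective_ratio = ratio * 0.27 / 0.9
print(round(effective_ratio, 1))
```

That per-kilogram gap is why a genuine step change in density would, as the comment says, send today's packs straight to scrap value.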

4
intrasight 1 hour ago 1 reply      
I am of the opinion that the EV market is going to be held back until we have standard battery packs of known price, quality, and warranty. There's no good reason that this can't be taken on by some standards body. ANSI? Milspec? Please, somebody!
5
madengr 4 hours ago 1 reply      
I was surprised there was only about 10% battery loss after 100k miles.
6
tyingq 4 hours ago 0 replies      
If there is sufficient demand from utilities for these batteries, wouldn't the supply get ramped up?

I assume at a certain scale, the costs go down for everyone. Part of why used combustion engines don't retain much value.

7
lsternlicht 4 hours ago 2 replies      
This argument avoids the issue of battery depreciation, as well as the fact that if you took the battery out of the car, its value goes to the scrap value.
15
Who Killed Ötzi the Iceman? Clues Emerge nytimes.com
144 points by gerbilly  11 hours ago   34 comments top 8
1
woogiewonka 11 hours ago 6 replies      
I wonder if Otzi was the instigator in the described conflict. The piece describes him as someone who was not a laborer but did plenty of walking. I can imagine a vagabond type individual traveling from place to place, taking his pickings from the weaker groups (or perhaps stealing and stabbing in the process, seeing how he had little upper body strength but carried a dagger). His clothing is made of multiple furs of multiple species, yet he doesn't have a finished bow? I mean, it's possible he was an excellent trapper and caught all them animals, OR did he take his possessions elsewhere? I can just picture him coming up on a small village, stealing some stuff and stabbing the individual who confronted him, then retreating back into the mountains. Upon discovery of the crime, a more practiced village member tracks the villain into the mountains and without so much as a flinch pierces him in the back from a distance far off.
2
berberous 6 hours ago 2 replies      
After reading the article, I was curious about all the objects they mentioned he had carried. This site has descriptions and pictures of each:

http://www.iceman.it/en/equipment/

Very interesting.

3
simonhn 9 hours ago 1 reply      
The museum in Bolzano, Italy where Ötzi is held is well worth a visit, and the items found with the body mentioned in this article are also on display, with theories about their use and significance: http://www.iceman.it/
4
xutopia 11 hours ago 1 reply      
If this fascinates you I recommend you read https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Natur.... It's a book that talks about violence in history. Ötzi is mentioned and so are a few others.
5
hn_throwaway_99 9 hours ago 1 reply      
Every time I read stories like this, I still am amazed at modern science. A man is killed thousands of years ago, but we can still deduce with fairly reasonable certainty the circumstances of his death, and all sorts of facts about him. Fascinating!
6
Jabanga 4 hours ago 1 reply      
Otzi looks remarkably similar to Kris Kristofferson

http://www.writeups.org/wp-content/uploads/Abraham-Whistler-...

I wonder if this is an indication that modern people from Scandinavia, where half of Kristofferson's ancestors are from, are more similar to older European populations, while the European South and Centre saw more change through migrations.

7
srinivaskag 1 hour ago 1 reply      
Where are the mummies of the other villagers? Every one of the villagers should also be preserved as an icy mummy in the same region. Right?
8
dannylandau 11 hours ago 0 replies      
Glad to see this piece on HN. Very fascinating.
16
An Illustrated Book of Bad Arguments (2013) bookofbadarguments.com
228 points by rbanffy  9 hours ago   76 comments top 21
1
woodruffw 5 hours ago 3 replies      
I've seen lists like these floating around the Internet.

While it's important to be able to recognize a fallacious proposition, it's equally important to realize that the presence of an informal fallacy in an argument does not imply that the argument is either invalid or that the conclusion is false. To give two examples: ad hominems are frequently used in moral characterizations, and most moral analogies are functional strawmen by virtue of exposing the circumstances under which a particular claim is weakest.

At best, the presence of informal fallacies indicates that an argument needs more attention from its audience.

2
SCHiM 7 hours ago 3 replies      
I've found the No True Scotsman a hard one to judge. There's a grey area where both parties could argue that something is/isn't a logical fallacy.

Popular/controversial/recognizable example:

Terrorists are no true believers of <faith>.

Those in the 'you are committing a fallacy' camp: The terrorists profess to follow the core tenets (perform the rituals, say the prayers, etc...) of <faith> and adhere to <faith>; therefore they are true believers of <faith>, and therefore saying that they are not true believers of <faith> because of their terrorist actions is a fallacy.

Those in the 'not a fallacy' camp: The in/out borders of religious groups are defined by consensus; it's normative. A large group of people may feel that <trait> is actually part of true belief in <faith>, and then this is a truth for them. If <trait> were abstinence from violence, then obviously terrorists are not true believers of <faith>.

Truly, language is too unspecific to properly pin down the meanings of normative/subjective facts. This fallacy can probably only truly be committed when one has codified a closed loop of rules, and then breaks those rules. But religion is too nebulous a subject to be clear about such things.

3
nabla9 5 hours ago 3 replies      
The problem with lists of logical fallacies is that people quote them in discussion without understanding what they exactly mean.

For example, slippery slope is only an argumentative failure if the slope is implied without justification. If you can provide a good argument that the slippery slope exists, it's not a bad argument.

Good idea to name appeal to authority as appeal to irrelevant authority.

Appeal to relevant authority is often an important shorthand. Some issues are too complex or require too much domain knowledge to go through. For example, I consider IPCC reports authoritative on the subject of climate change.

4
okket 8 hours ago 0 replies      
Previous discussion: https://news.ycombinator.com/item?id=8061469 (3 years ago, 75 comments)
5
WheelsAtLarge 7 hours ago 7 replies      
I've always wondered about the usefulness of books that point out argument fallacies. The reality is that most people don't follow any rules when it comes to arguments. They just follow what feels good, i.e. what they (we) have learned via social immersion. Such books are as useful as using a tea cup to rescue a sinking ship. Recognizing an argument fallacy may make us feel good, but it does little towards persuading someone to your side of the argument. I can tell you that most people don't care about their argument fallacies; they just know that they have chosen a side in an argument and they are right.

I'd like to see more books about how people actually argue in life and how to change their view.

6
jampa 7 hours ago 2 replies      
Reminds me of the "thou shalt not commit logical fallacies" poster:

http://i1.kym-cdn.com/photos/images/original/000/531/718/f03...

8
mikeash 8 hours ago 1 reply      
I really love these collections of logical fallacies. Knowing what to look for can really help in spotting bad arguments.

Has anyone built a similar collection for "bad arguments" that are logically correct but completely unpersuasive? I'm often frustrated with these coming from people I agree with, because a bad argument for something is often worse than no argument at all.

9
renegadesensei 6 hours ago 0 replies      
I don't know if it's accurate to call these "bad" arguments. If we judge arguments by their ability to convince people, then many logical fallacies are actually quite good arguments. One need only watch a cable news debate or have a five minute conversation with a typical voter for evidence. Most people vastly overestimate their own rationality. Fallacies endure because they work.

Perhaps the idea is that they are morally bad arguments. Still I would rather call it a collection of illogical arguments.

10
zw123456 6 hours ago 1 reply      
Excellent work and very entertaining. However... over the years I have read many similar books, although not as fun as this one, and what I have found is that the people who most need to read and understand them won't and wouldn't. A man whom I greatly admire once said "never argue with a stupid person and never try to reason with an irrational one; both are a waste of time" (probably an adaptation of previous similar quotes, admittedly).
11
spenrose 7 hours ago 0 replies      
Ali's followup has just been published: https://bookofbadchoices.com
12
myst 7 hours ago 0 replies      
How come there is no mention of "The Art of Being Right" [1,2]?

[1] https://en.wikipedia.org/wiki/The_Art_of_Being_Right

[2] http://coolhaus.de/art-of-controversy/

13
Steeeve 5 hours ago 2 replies      
Those that rely on logic as a weapon in a debate limit their arsenal significantly. Logic and critical thinking are fantastic problem solving tools. Their utility in an argument is minimal.
14
egwynn 7 hours ago 0 replies      
I love how on the "This tiny print serves no purpose" page it has:

 For more information, please contact JasperCollins Publishers, 99 St Marks Pl New York, NY 94105.
I used to live on this exact block in nyc, and that is not the correct zip ;)

15
gabrielgoh 2 hours ago 0 replies      
these fallacies are vague enough that most arguments can be twisted to be in violation of one or more of these and dismissed
16
draw_down 8 hours ago 2 replies      
Just please be aware that knowing the Latin name for a logical fallacy is not the same thing as an actual argument for or against something. Thanks
17
dredmorbius 3 hours ago 1 reply      
Understanding logical fallacies is useful, but an incomplete strategy.

As I've been exploring various topics of logic, argument, epistemology, propaganda, "fake news", and more, over the past several years, I've turned up a few concepts I'm finding particularly useful.

Dialectics vs. rhetoric. There are numerous modes of communication. In exploring areas of disagreement or uncertainty, two principal modes are dialectic discussion, in which all parties are engaged in identifying the truth of a matter, regardless of their initial position, and rhetorical discussion, in which at least one party is engaged in promoting their initial position, regardless of the truth.

There's a heated debate on the merits of both methods, going back to Plato (he had some very unfriendly things to say of the Sophists, who practiced rhetoric).

https://en.m.wikipedia.org/wiki/Dialectic

https://en.m.wikipedia.org/wiki/Rhetoric

The useful takeaway for me is to recognise when I'm in a dialectic or rhetorical discussion. My own default is to engage dialectically (more chances to learn), though, as noted above, if any one participant isn't playing the same game, they tend to spoil things for the rest. I'm not aware of any way of ensuring that all are engaged in dialectic discussion, though again, recognising the game you're in is quite useful.

There are a number of diversionary tactics which are surprisingly effective at derailing discussion. Being aware of these, calling them out, and, if at all possible, eliminating those who are engaged in such practices from discussions you're hoping to maintain as productive, is particularly effective. Several sets of common tactics (one attributed, though not confirmed, to Karl Rove) are listed here:

https://www.reddit.com/r/dredmorbius/comments/2d0r1d/the_rea...

There's another long line of tactics adopted by bullshitters here:

https://www.reddit.com/r/dredmorbius/comments/28ge14/on_nons...

For those interested in epistemology and criteria of truth, I recommend Wikipedia's "Criteria of Truth" or the Stanford Encyclopedia of Philosophy's "Truth" pages:

https://en.m.wikipedia.org/wiki/Criteria_of_truth

https://plato.stanford.edu/entries/truth/

This isn't a greenfield. There are authorities to consult.

18
JCzynski 4 hours ago 0 replies      
I have this book. It's cute.
19
muninn_ 6 hours ago 1 reply      
Does this include "Love it or leave it" ?
20
21
esaym 8 hours ago 1 reply      
This crap is confusing!
17
Ask HN: What are the best resources to properly learn Scala?
17 points by godmodus  1 hour ago   10 comments top 8
2
vasshu 1 hour ago 1 reply      
There is a course on Coursera taught by the Scala creator: https://www.coursera.org/learn/progfun1
3
ohmygeek 24 minutes ago 0 replies      
Scala Cookbook: http://shop.oreilly.com/product/0636920026914.do (most of it is available online here: http://scalacookbook.com/)

Once you are done with this, you can pretty much pick any of the projects here and start reading the source code:

https://github.com/lauris/awesome-scala

https://github.com/adamw/awesome-scala

Reading code is probably the best way to learn any programming language IMO

4
rollulus 14 minutes ago 0 replies      
I started with Scala a year ago for work. I used a mixture of the resources listed here, but I found this [1] series the most valuable, after the other resources gave a basic foundation.

[1]: http://danielwestheide.com/scala/neophytes.html

5
SatvikBeri 13 minutes ago 0 replies      
I found Functional Programming in Scala to be the best source: https://www.amazon.com/Functional-Programming-Scala-Paul-Chi...
6
raystar 44 minutes ago 1 reply      
I used https://twitter.github.io/scala_school/ to learn, and I found it useful.
7
gh0zt 33 minutes ago 0 replies      
As you are asking for books:

- Scala for the Impatient (http://www.horstmann.com/scala/)

- Programming in Scala (https://booksites.artima.com/programming_in_scala_3ed)

I found those books very good resources. The Scala website lists a few others (https://www.scala-lang.org/documentation/books.html)

Apart from that, I found Daniel Westheide's blog a very good starting point (http://danielwestheide.com/scala/neophytes.html)

8
partycoder 23 minutes ago 0 replies      
"Another tour of Scala": http://naildrivin5.com/scalatour was informative for me.

It's a bit dated, mostly targeting Scala 2.8. The current version is 2.12, but it should not be fundamentally different.

18
Mandatory File Locking for the Linux Operating System (2007) kernel.org
30 points by luu  8 hours ago   19 comments top 3
1
Animats 4 hours ago 4 replies      
As a retrofit, that never worked out.

I've suggested before that files should be divided into several types - unit, log, temp, and managed. This also addresses the locking problem.

Unit files are always updated as a unit - once written and closed, they can't be changed, only replaced. Using "creat" creates a new file, which replaces the old one on clean close. A crash or a program abort reverts to the old file. So they implicitly have a form of mandatory locking. Should two processes be able to create new updates at the same time? Probably not. This is the default kind of file.

Log files should be append-only. Multiple users can write, but only at the end. That deals with the locking problem there. That's what open for append should do.

Temp files disappear at reboot, and should have N-readers or 1 writer locking. Anything in /tmp gets this treatment.

Managed files are for databases. Shared access and partial file locking are supported. Managed files support an API where you get a callback when the data has been safely committed to disk, something ACID databases care about. Only a few programs use managed files, and you know which ones they are. Managed file mode has to be explicitly turned on for a file.

That's how locking ought to work.
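The unit-file and log-file behaviors can already be approximated in userspace today; here is a sketch of the proposed semantics (an illustration, not a kernel implementation, and without the mandatory enforcement against other writers that the proposal implies):

```python
import os
import tempfile

def write_unit_file(path, data: bytes):
    """Unit-file semantics: readers see either the old file or the new
    one, never a partial write. A crash before os.replace leaves the
    old file untouched."""
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)  # temp file on the same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())       # data durable before the rename
        os.replace(tmp, path)          # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp)                 # revert: old file is still intact
        raise

def append_log(path, line: bytes):
    """Log-file semantics: O_APPEND makes each write land at the current
    end of file, even with multiple writers (on local filesystems)."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, line)
    finally:
        os.close(fd)

write_unit_file("config.json", b'{"v": 2}')
append_log("events.log", b"started\n")
```

`os.replace` is the key piece: rename within a filesystem is atomic, which is exactly the "replace on clean close" behavior described above, minus any OS-level enforcement against concurrent updaters.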

2
tedunangst 4 hours ago 1 reply      
Mostly solved this problem with other solutions. Maildir for mail, SQLite for everything else.
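A minimal sketch of why handing the problem to SQLite works: the library serializes writers itself and, in WAL mode, lets readers proceed concurrently, so the application never manages file locks directly (file names here are just examples):

```python
import sqlite3

# Writer connection: SQLite takes and releases the database lock
# internally; WAL mode allows readers to run while a write is underway.
con = sqlite3.connect("app.db")
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE IF NOT EXISTS mail (id INTEGER PRIMARY KEY, body TEXT)")
con.execute("INSERT INTO mail (body) VALUES (?)", ("hello",))
con.commit()

# A second connection (typically another process) reads without any
# application-level lock management at all.
reader = sqlite3.connect("app.db")
rows = reader.execute("SELECT body FROM mail ORDER BY id").fetchall()
print(rows[0][0])  # hello
```

Compare this with the mandatory-locking retrofit the article describes: all the coordination lives in one well-tested library instead of in every program that touches the file.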
3
emmelaich 4 hours ago 0 replies      
When UNIX started its commercial success, some mainframers sneered at the newcomer because it didn't even have mandatory locking.

Turns out, neither does MVS. The locking was done by the DB system, not the OS.

20
Self-Experimentation and Its Role in Medical Research (2012) pubmedcentralcanada.ca
25 points by wslh  6 hours ago   2 comments top 2
1
beefman 1 hour ago 0 replies      
The team that discovered ibuprofen routinely tested candidates on themselves.[1] It wasn't uncommon. My dad did it at Merck in the '70s. In one case, he and his lab partner took a candidate appetite suppressant (serotonergic) and then went out for Chinese. They ate like kings and concluded it didn't work.

[1] http://www.bbc.com/news/health-34798438

2
kirrent 4 hours ago 0 replies      
I remember attending an interview where Barry Marshall jokingly claimed that he was the only person who'd ever earned a Nobel prize with a study size of n=1. At the time it seemed like a reasonable claim, not just a joke, and it's now interesting to me to see how wrong he was with that joke.
21
Ask HN: Is Accidental Complexity Growing Faster Than Essential Complexity?
92 points by ern  4 hours ago   35 comments top 23
1
jnbiche 6 minutes ago 0 replies      
I really can't address the issue of overuse of microservices in a line-of-business app type project, but I want to push back a little about "SPAs" being "accidental complexity" here. I imagine by "SPA" you're referring to the use of a JS framework like AngularJS, Backbone, EmberJS, React (and some type of models), etc.

While I agree that sometimes there is no need for these in a simple CRUD app, I'd also caution that depending on your basic needs (frontend validation, autosave, keyboard shortcuts, etc.) and, frequently, on more complex client requests (like fast JS-driven form navigation that requires some sort of frontend routing, or saving to some industry-specific file format), you can easily end up with a big spaghetti ball of jQuery if you're not careful.

I've seen these balls of tangled JS all too often in line of business apps, and they're a nightmare to maintain or modify. The point of using a framework is to provide some sort of organization for frontend JS, and to provide common features.

If your team has the maturity to organize their frontend JS into neat MVC/MVP apps without the use of a JS framework, great. But in my experience, few "full-stack" developers are comfortable enough and/or care enough about frontend JS to do this. Thus the use of frameworks.

If you need a very minimal framework, BackboneJS is both tiny and simple to use. Even Backbone can save simple apps like this a lot of grief.

The alternative to something like this is to stick to a full-stack framework like Ruby or Django and stay strictly in that environment/best practices. But even there you're bringing a lot of complexity into the picture, or sometimes you still end up needing some sort of organizing principle or framework for the frontend JS.

2
alistproducer2 2 hours ago 0 replies      
I work for a large company in the automotive space. I can say that most of the accidental complexity I see is related to shoehorning solutions into technologies that we happen to have licenses for, as opposed to whatever the simplest and most robust solution would be.

The kind of over-engineering for CV polishing almost always comes from our college hires. It's not really their fault, as I understand the attraction to the shiniest object when you're a newb. Their monstrosities are only allowed to make it into production, however, when the dev manager doesn't really understand software engineering.

For my part, I used to scream from the mountain tops that we were over-engineering everything, but now I just let them fuck shit up cause I'm tired of trying to save people from themselves.

3
cjhanks 3 hours ago 0 replies      
It sounds to me like you are in a software company with a lot of inexperience.

I have seen people spin up massive map-reduce clusters to perform something AWK could do in half the time. And I have seen people spend enormous amounts of time training neural nets for something a simple SVM could do. In both cases (and most similar cases I have seen) a lot of needless time and energy was expended.

But why? This is my hypothesis anyways...

There seem to be two primary pools of people who feed into software engineering: those from engineering disciplines and those from research disciplines. Engineering schools can be fairly cutthroat, limiting the number of candidates who can move on at various stages, or designing courses to reduce the number of enrolled students. Those from research disciplines are required to create novel ideas worth publishing just to graduate their programs. And those that actually like academia believe (and rightfully so) that creating new ideas is the way towards notoriety and success.

That means a lot of the craftsmen who would prefer making tiny refinements on well understood paradigms are simply weeded out from the discipline before they can even enter it.

I have known very smart people who quit engineering in college because they could simply not stand being in a room of people where everybody wanted to be the "smartest in the room".

Of course... the incessant CV polishing in Silicon Valley probably doesn't help either.

4
_Codemonkeyism 1 hour ago 0 replies      
Many companies have too many engineers (and some not enough). This is mainly due to #engineers = f1(revenue || vc money) while it should be #engineers = f2(problem). Mostly because our industry has no clue about f2 and so falls back to f1.

But this also works with business thinking, e.g. CEO thinking

techbudget = 0.1 * revenue

With the law of the used budget (everyone always uses up their budget, for fear of not getting the same budget next year when they might need it), CTOs use the whole budget

#engineers = techbudget/salary

so f1 is often

#engineers = f1(revenue) = (revenue * 0.X - hosting - laptops - licenses)/ salary

and with salary >> hosting, salary >> laptops, licenses -> 0 due to open source usage,

f1(revenue) = revenue * 0.X / engineer salary

(the 0.X is determined by VC experience/push, negotiation between CEO and CTO, how much of a tech company the CEO considers his company to be, or his previous experience, most probably with a wholly different tech business model).

with no need to understand the tech challenge at hand.
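The f1 above can be spelled out as a toy calculation; every number below is invented purely for illustration:

```python
def engineers_from_budget(revenue, budget_fraction=0.1,
                          hosting=200_000, laptops=50_000,
                          licenses=0, salary=150_000):
    """f1: headcount as a function of revenue, not of the problem."""
    tech_budget = budget_fraction * revenue
    return int((tech_budget - hosting - laptops - licenses) / salary)

# With salary >> hosting, salary >> laptops and licenses -> 0, this
# collapses to roughly revenue * 0.X / salary -- no term anywhere
# depends on the technical challenge at hand.
print(engineers_from_budget(100_000_000))  # prints 65
```

Nothing in the function knows what the engineers are supposed to build, which is the point.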

For startups and high-margin tech businesses there is often a large tech budget, so many engineers are hired (other forces apply too, like capability building, the war for talent, etc. - some startup CEOs tell me their VCs said they need to hire 100 engineers by the end of the year - again without knowing anything about the tech problem at hand).

Coming to your point:

If there are too many engineers for the essential complexity, they create accidental complexity.

5
Denzel 4 hours ago 0 replies      
You'll enjoy these two videos from Alan Kay and Uncle Bob Martin, respectively:

"Is it really 'Complex'? Or did we just make it 'Complicated'?" - https://www.youtube.com/watch?v=ubaX1Smg6pY

"The Future of Programming" - https://www.youtube.com/watch?v=ecIWPzGEbFc&list=PLcr1-V2ySv...

6
mikekchar 2 hours ago 0 replies      
I honestly don't think it's any worse than it has been before. Back in my youth it was "components". If you can encapsulate functionality, then you should be able to compose it like Tinkertoys. So let's build these "reusable" components which act like black boxes. We'll nail the API up to the wall, and use some kind of versioning scheme to ensure that it's always backwards compatible. We can even build frameworks that standardise the communication between these black boxes and provide language-agnostic object representations. Hell, we can even build database interfaces around that concept and pass business domain objects back and forth directly. And we can hide the complexity of the object-relational mapping in another framework.

Yeah, that will simplify everything :-). It only looks shiny because the people doing it weren't around in the 90's to get sick of it the first time around. Ah... who am I kidding? 4GLs have been around since the 70's... :-P

7
donatj 1 hour ago 1 reply      
I'm just getting to the point in my career where I've seen this happen a couple times. I believe it's cyclical. OO came and complicated everything and that died down. WYSIWYGs came and complicated everything, those have died down. Huge frameworks came and complicated everything, and largely died down. SPAs and Microservices came and will soon die down. I think every few years of new blood introduces a critical mass of complexity.
8
Nomentatus 2 hours ago 0 replies      
The consequences of accidental complexity are now much greater because everything's more intertwined, and the systems as a whole are larger. This puts us in "Genghis John" (John Boyd) territory, close to the point where we don't have enough attention to fix all the problems that arise, including those arising from the fixes themselves. Probably you youngsters are actually better about not introducing complexity needlessly, since the consequences of doing so are ever more obvious.
9
holri 1 hour ago 0 replies      
"... perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away ...", Antoine de Saint-Exupéry

This is true for requirements but also for used technology.

10
molticrystal 2 hours ago 0 replies      
It is easy to look at this from a functional and practical standpoint, take any program in your stack, and look at what it does vs what you need it to do.

There is a parallel to this where people writing C++ programs in Visual Studio realize they can often do without the Microsoft Visual Studio runtime and its routines: their program drops from 68k to 2k, plus a dash of code to bring back what is missing, if it is even needed. https://hero.handmade.network/forums/code-discussion/t/94

I love looking at things and thinking about this. People in this field often find themselves over-implementing (or using software that is a source of over-implementation), setting up a tremendous amount of foundation to get a little thing done, getting carried away, and finding out it got too involved and over-engineered when there is a tight and simple solution that matches exactly what is needed.

Of course there are situations where microservices are still needed: https://semaphoreci.com/blog/2017/03/21/cracking-monolith-fo... Everything should be thought out, and joining the latest craze isn't necessary. If you really need it, you will find yourself looking for or implementing the solution, regardless of what it is currently called; microservices have been around in one form or another since before the hype, used as needed when needed, instead of being used to wax a resume.

Your accidental vs essential comparison has real-world monetary consequences. There are many examples where, even though it might take 2-3 months to stop and think and analyze what is going on instead of adding to the pile, you can save 100k-300k a month for even smaller-scale deployments.

11
rtpg 2 hours ago 0 replies      
Are microservices more complex or less complex?

There's more operational complexity, because now you're running two things instead of one. But there's also less complexity in other areas, because parts of your code are firewalled off from each other, so you have a smaller problem space when making changes/fixing bugs.

I think that there's a tendency to view things that are harder to set up initially as more complex, when the long-term complexity might be lower. It mainly depends on what you think is important.

I think a lot of the difference in viewing complexity is due to differences in foundational knowledge. People doing Haskell see monads for DSLs as a nice and easy abstraction, but people outside see them as a crutch, needed because of a lack of native effects in the language.

Some people see Javascript toolchains as needlessly complex, others see people using Javascript instead of <some compile to JS language with better abstractions> as needlessly complex.

Differentiating between what kind of complexity is brought is important. Because a team of people bad at operations shouldn't be rolling out a new service every week. But a team good at operations but bad at separation of concerns might gain a lot from being "forced" to chunk things out.

12
arikrak 2 hours ago 0 replies      
There have been various improvements in programming that reduce accidental complexity but they don't apply in every area. For example, Ruby on Rails makes it easier to create an MVP, but scaling an application to millions or billions of users will still require working on many layers of "accidental" complexity (since scaling isn't an inherent part of solving the problem). Running applications in the browser also adds "accidental" complexity since certain details of the product now need to be defined in both the browser and the server. Testing can also get more complicated as there are more layers and services interacting with each other. It seems as hardware and software improve, the demands on them increase as well. So we can't just relax and enjoy a world of Essential complexity.

Meanwhile as machine learning gets more advanced, it's able to tackle certain "essential complexity" problems that were supposed to always require hand-coding by a human programmer. So there are trends working in many different directions.

13
Alex3917 3 hours ago 1 reply      
> A lot of it seems to be driven by a need for developers to keep their CVs shiny

This is immaturity, not a real need. The people who make the most money in tech aren't the ones with the most buzzwords on their resumes.

14
_Codemonkeyism 2 hours ago 0 replies      
Yes. This is one of the most prevalent problems in engineering I see when consulting for startups.
15
kevan 2 hours ago 0 replies      
The software field is still very young, and there's lots of things we haven't explored yet. A great example of this exploration is the ongoing evolution of javascript from simple scripts to SPAs and a universal compile target. It seems like a lot of effort is being wasted (because it is), but it's similar to startups in the economy. Most efforts fail, but enough succeed that we end up in a better position overall.

The field may hit a point where there's an obvious standard way to build things (e.g. most houses in the US are wood-framed with drywall because it's cheap and durable enough for most use cases), but this won't happen for a long time. We've had thousands of years to figure out how to build houses and roads, but software really only started in the second half of the last century.

16
dcosson 2 hours ago 0 replies      
> tamed through more responsive (agile) dev practices

Sure, for a given project, you can make it more or less complex by taking different approaches. But there's some lower bound on essential complexity and when the product itself requires a lot of features that interact in complex ways, your essential complexity is going to be high. And once the complexity gets too high, every new feature you add starts making future features even harder & slower to add. (A few caveats: 1. You can of course argue that it doesn't really "need" all those features, and everyone would be better off by sacrificing a little on the user experience to make it a lot less complex, less buggy, cheaper to build/maintain, etc. 2. By the time you run into these issues, the program is large enough that some accidental complexity has snuck in, and it can be hard to estimate how much of your problem is from that vs how much is from the essential complexity.)

Anyway, If a project never gets to that size where the essential complexity becomes unmanageable, then great. I'd agree it's probably a mistake to introduce extra complexity with microservices or anything else, and certainly some people make this mistake (or miscalculate whether their project will stay small and simple).

But if it does get to that point, some of these solutions you mention like microservices can be an independent, constant complexity cost. Ideally each microservice itself is a small app with manageable complexity. On the other hand, a naive, monolithic app will continue to have its essential complexity grow super-linearly with variables like time, lines of code, number of developers, etc. The bet is that by introducing the new thing, you can bring the overall complexity of the entire system down.

So my short answer would be "no" - in many large projects, essential complexity is growing faster than accidental complexity. This has led to a proliferation of tools to help bring it under control, and yes, if you look at these tools in isolation they are pretty complex.

(And there are hundreds of billions of dollars worth of companies using these tools, if it is all just a waste of time then there's a massive business opportunity waiting for someone to undercut them)

17
jnbiche 3 hours ago 1 reply      
Can you please cite some examples of this accidental complexity? I'm very curious to hear, if for nothing else than to avoid it if I agree that it's indeed accidental.

I do agree that because of the "GitHub resume" phenomenon, lots of devs are engaging in the type of engineering you describe.

18
mpweiher 1 hour ago 0 replies      
Yes.

Sort of.

One of the reasons for this is actually the success of both OOP and COTS/FOTS to provide us with workable reuse (just as Brooks predicted, by the way). Just about every GUI program reuses huge libraries of graphics and UI code, usually provided by the OS vendor. Every Rails app reuses significant functionality. We scoff at the huge dependency lists of programs, yet in some sense this is a sign of success: we no longer have to rewrite all of this functionality from scratch like we used to.

However, we now have to glue all of this stuff together. With glue code. Lots and lots of glue code. Which is by definition accidental complexity. And which appears to be growing non-linearly.

So as is usually the case, our success in the last iteration (OOP/COTS) gets us to a new stage, and at this stage we face new problems that are a result of our previous success. In this case, I'd venture that the problem is that we don't really have a good approach to gluing these pieces together. Yes, we can do it, but we don't know how to do it well.

I think John Hughes in Why Functional Programming Matters[1][2] hit the nail on the head when he said that we need new kinds of glue, and where FP has been successful I think it is largely because it provides one new type of glue: function composition. (He says two, but whether pervasive lazy evaluation is actually a good kind of glue is at best controversial).

Architectural Mismatch: Why Reuse Is [Still] So Hard[3](1995)[4](2009) shows the general problem, while Programs = Data + Algorithms + Architecture: Consequences for Interactive Software Engineering[5] shows that the problem is particularly pernicious in GUI programs, which are notoriously tricky.

For me, the solution is to make it possible to (a) use different types of glue within a program, (b) define your own kinds of 1st class glue and (c) adapt existing glue to your needs. I've built a language to enable this[6] and have been applying the ideas both in research and practice. So far it appears to be working.

[1] http://www.cse.chalmers.se/~rjmh/Papers/whyfp.html

[2] https://www.infoq.com/interviews/john-hughes-fp/

[3] http://www.cs.cmu.edu/afs/cs.cmu.edu/project/able/www/paper_...

[4] http://repository.upenn.edu/cgi/viewcontent.cgi?article=1074...

[5] https://www.semanticscholar.org/paper/Programs-Data-Algorith...

[6] http://objective.st

19
tabeth 4 hours ago 3 replies      
In regards to the SPAs, what exactly is the alternative?

In my mind you have two possibilities:

1. The most ideal is to "pick the right solution for the problem." Meaning, you will analyze the problem and do the right thing, e.g. content mainly? server side rendering. web application? SPA.

2. Do what you know, consistently. In many cases, this means using an SPA basically all of the time, whether or not it's necessary. This also might mean server side rendering all of the time, adding javascript only when absolutely necessary.

In regards to SPAs, I feel there must be a solution. Ember tries to do this with FastBoot, allowing you to essentially have the flexibility of an SPA with the SEO and other advantages of server side rendering, but I'll ignore that for now.

Is there a paradigm/language that allows you to write the same templates for both the server side and SPA view? I guess this would mean the view is the same, and the model and controller would be your choice.

I think it's natural to just do what you know, which is why you're probably seeing what you are.

20
nstart 2 hours ago 0 replies      
Context intro to my answer: I come from a service economy. Much of what I've seen is from companies that go out there, get clients from various types of businesses - construction to finance - and create custom solutions for them that are then maintained by the same service companies. I've also seen a few product companies and I have worked with one for a year.

---

After reflecting on this question, I feel like a good way to think about it might be this:

Companies are hitting extreme points right now. On the one hand you have companies that have stuck to their way of working for years. It worked, and in the process of getting customers and making the money come in, they never really thought to upgrade the company processes. After all, back in the day, tech moved at a much slower pace and that's what they are used to. There's been little to no incremental improvement in the technology or even the practices that have been used throughout the company's life. These types of companies still use FoxPro for ERPs, and much, much older Java enterprise tech stacks (and practices) for their server-side stuff. I don't say that that's bad, mind you. The ERP company has been around for over 25 years.

On the other hand you have the fresh companies started by people who really don't want to enter the juggernauts of the market. The establishment. Guided by the excitement that reaches us from Facebook listicles and TechCrunch, they want to ride their own paths and build a fresh future. They want to go in and show businesses that the establishment is giving them "boring" and that they can do it better. I've met with lots of these people as well. Generally they'll start explaining their business by starting from their tech stack: "We are a SME ERP business using React and a full JS-based stack." That is not an exaggeration. That is near verbatim.

In between these companies you have the graduates who need to pick a side to go work in. And those that are entering the establishment, want to make their mark. Their impact. They walk into a company that is using SVN for their source code management, and they groan. They see Java being used and they say "why not NodeJS?". And what I've seen that happens, is that they run into the people of the establishment who have no interest in upgrading. Instead of having mentors who work with them towards incremental improvements where each improvement is justified by developer productivity and improvements to the customer they hear "what we have is good enough. Forget it". Or the more common "too much work. Don't bother". As a personal note, the latter really bugs me. Of course it's too much work. You've set it up to be too much work. But, what happens then is that some people will become the establishment, while the others will bide their time waiting for a greenfield project to come and for them to be given a PM role (mind you I see PM roles being handed out to people with almost no technical knowledge, little ability to evaluate tech or specs, after being in a company for 1.5 years). And when they get it, "LET'S DO NODE JS!!!!".

Oh, and what of those graduates who left college to join the "entrepreneurs"? They too have gone through university being taught web app development using "ASP MVC" and "Microsoft SQL", and they long to be let out into the world to play with the tech they hear their peers are working with around the world. Admittedly, the open-source world of React and Angular and whatnot is super exciting in terms of the pace of announcements these days. And then they join their peers, and everyone gets to feel excited that they are working with new things, because they believe new must mean better.

---

Ultimately the answer is yes. Accidental complexity is growing faster than essential complexity. Business practices change much slower than tech in today's world. I also feel like understanding the reasons behind it is worth pondering on.

For me, I feel that a lack of mentorship has a lot to do with it. There are far too few Uncle Bobs - veterans of the software industry who've kept pace with the change and understand things with a deep historical context - who lead architectural decisions and project management at a company. It doesn't even have to be Uncle Bob-level veterans. But my observation of companies from where I am shows me very clearly that after 3-5 years in the industry, the deeper constant learning vanishes. Which means that the older employees and the relatively new employees have stopped growing, and the mentorship they provide is based on a tiny amount of work they did at some point in their life.

So to recap, we have people who've come out into the industry fascinated by the flashy stuff (which is fine! That kind of youthful enthusiasm is also needed in the ecosystem). These people quickly make their way up to the management level within a couple of years, and then get to push the flashy stuff, and it doesn't go beyond there. Understanding the balance between tech decisions and what's required to solve the problem efficiently for the customer doesn't happen. And then these people guide the next generation, which multiplies the effect.

From my POV and from where I live, this is the service industry today. Accidental complexity is a growing beast heading towards an exponential curve.

21
good_vibes 1 hour ago 0 replies      
I agree 100%. I'm beginning to feel a lot of 'developers' live in a circle jerk where they share how awesome they are and other developers tell them how awesome they are, unless it makes the other 'geniuses' (what many above-average-intelligence people in my generation were told they were from a young age) feel too inferior.

Hubris is a real observable phenomenon in the history of economic bubbles, paradigm shifts in scientific/technologic progress, and in recent events in media, politics, and business.

We need to start questioning business-as-usual as much as possible. Everything can be simpler, less egocentric, and more beneficial to all parties.

22
api 3 hours ago 1 reply      
It's always been a problem. Premature optimization may be the root of all evil, yeah, yeah, but that quote is sort of a waste of breath as premature optimization is not that big of a problem. Over engineering is the great disease of the software profession.

I suppose premature optimization is one source for it, but more common is premature generalization and excessive levels of abstraction. Other major sources include backward compatibility needs, the creation of virtual layers to escape calcified bad design, and of course the need to show off and look smart.

23
trowaway8u6 3 hours ago 0 replies      
I get your drift, mostly, but I'd say accidental complexity comes from forces more powerful than technical geekery: money.

I work at a fairly well known startup that has raised many tens of millions from VCs. Having raised this money we are now fully expected to hit quarterly numbers. In this pursuit we often end up adding random shit to the product or doing extra non-core shit to get a deal done. Product roadmap be damned.

Unless you have a principled and powerful (dictatorial, even) leader who can say "no, we should stick to our vision rather than introduce new tech debt" and enforce it through product and engineering, the accidental complexity will continue to pile on as selfish quota-carriers eat up all resources available to them.

22
The High-Speed Trading Behind an Amazon Purchase morningstar.com
94 points by prostoalex  8 hours ago   57 comments top 10
1
Spooky23 7 hours ago 2 replies      
Sounds like a solution looking for a problem to me.

For all of this complexity, Amazon rarely has a significant price advantage versus most retail stores.

From a customer point of view, competitive categories have a flea market quality to them. For a company that is usually optimized for customer experience, this is a weird science experiment.

IMO, they should make price adjustments less fluid. Require merchants to get to the lowest price as soon as possible, and punish stupid resellers that throw up divergent high/low prices by forcing them to live with it.

2
ikeboy 6 hours ago 0 replies      
I've used several repricers; they're a must when selling competitive products with a large number of SKUs.

One thing I'll note is that there's up to a 15 minute delay before a price change takes effect.

Also, Amazon will often "suppress" the buy box if no sellers have a good price. You can still buy, but there's an extra step needed. "Other sellers", then add to cart.

3
KKKKkkkk1 7 hours ago 3 replies      
I don't see what's so special about continuously updating your price as a seller. This happens in pretty much every grocery market in the Middle East or Eastern Europe.

I also don't understand how the supposedly fierce competition between sellers ends up yielding prices that are higher than offline stores. Seems the reporter missed the true story.

4
thinkloop 7 hours ago 2 replies      
It's interesting to know that products on Amazon might not be the lowest price, or always close. I thought there was more manual price research and setting, like Walmart.
5
israrkhan 5 hours ago 3 replies      
This is a very unfortunate trend where sellers are pricing not based on the value of the item/service, but on the demand/need of the consumer. It has been a common practice in the airline industry, and to some extent in healthcare. But now this practice is creeping into online shopping too. This can soon fall into unethical territory. Imagine an item that costs $1. Just because it could be life-saving for someone, or someone needs it badly, you sell it at a much higher premium. You are not charging for your service, but exploiting other people's need.
6
mirimir 5 hours ago 0 replies      
Amazon's $23M book about flies (2011)

https://news.ycombinator.com/item?id=10289742

7
downandout 5 hours ago 1 reply      
I try to put this on all WSJ articles that make the front page. Here is a direct link to this story that will auto-kill the paywall:

https://m.facebook.com/l.php?u=https%3A%2F%2Fwww.wsj.com%2Fa...

If you want a draggable bookmarklet that will bypass the paywall for all WSJ articles, go here:

http://salzeko.com/wsj/

It would be nice if HN would make a change to their link submission code so that it automatically changes WSJ URLs to use the Facebook redirect workaround, since they are the one major site that disabled the Google workaround. But until then we have to do it this way.

8
whyenot 8 hours ago 3 replies      
9
berberous 5 hours ago 1 reply      
Off-topic: Am I the only one who can never load the comments on WSJ articles? This is across browsers (Firefox/Chrome) and platforms (Windows/Mac/iOS), and even with adblockers turned off. This article says it has 36 comments, so it must be working for most people, but I can't figure out why it never works for me.
10
QuadraticFizz 8 hours ago 1 reply      
Does anyone have a mirror? The article is behind a paywall.
23
A Wall Street Informant Who Double-Crossed the FBI bloomberg.com
20 points by chollida1  5 hours ago   6 comments top
1
trendia 1 hour ago 4 replies      
Does anyone know how Bloomberg magazine is funded? It seems to produce very high-quality articles that are provided for free (unlike WSJ which has a very aggressive paywall).
24
Leverage Points: Places to Intervene in a System donellameadows.org
35 points by openfuture  9 hours ago   5 comments top 2
1
dkarapetyan 6 hours ago 2 replies      
I've recently started reading up on systems theory and cybernetic systems, and contrary to what the names suggest, learning more about the theory does not make you better at designing systems. You get much more attuned to how large and complex systems fail, but there is no crank you can turn that will provide insights and then let you implement changes that lead to positive progress.

No matter what you do, at the end of the day you still have to convince people that what they're doing is probably wrong, and no one ever wants to hear that. Cognitive dissonance and sunk-cost thinking almost always trump any kind of analysis, and the system continues to operate as it has always operated until a large enough shock shakes it and changes are made. Most often the changes are made too late and great human misery is the result. The current climate crisis is one prominent example that comes to mind; Uber and Zenefits are other examples I can think of where earlier interventions would have helped.

2
saosebastiao 1 hour ago 0 replies      
One of the more interesting theory-driven rabbit holes I've dug into was the bullwhip effect in inventory management, which was a failure mode model popularized by Jay Forrester, who was Donella Meadows' mentor and colleague. One of the interventions that can be taken to minimize susceptibility to the bullwhip effect was to decrease information flow latency. In other words, faster information about demand fluctuations improves the ability to respond to them. This type of intervention is ranked #5 by Donella in this article.

The problem with that intervention is what happens after you've improved your information flows. Faster information flows about demand lead the profit-seeking enterprise towards tighter inventory tolerances. They eliminate "extra" safety stock that is no longer needed due to their faster information flows. And it works out phenomenally well...for a while. But strong demand fluctuations can still appear, and without equivalent response mechanism improvements, the risks become more fat-tailed: failures become less common but much worse in severity. And in the context of this article, I'm claiming that a leverage point ranked at #1 (the profit incentive) overpowered a leverage point ranked at #5, negating its benefits.

The new topic du jour on HN seems to be self driving cars, and one of the many claims is their ability to improve traffic, and one of the other claims is their ability to improve safety. And interestingly, a systems model of these claims would likely show situations that are suspiciously similar to a bullwhip effect. In other words, it is a stateful model with latent information flows (roadway conditions), physical responses to the information flows (brakes), and buffers (space between vehicles) which protect from failure due to information latency and reaction capability.

Self driving cars can improve upon baseline reaction times to changing road conditions. They have more and better sensors and well known algorithms for detecting dangerous situations. This fact isn't speculative at this point, we already have some proof of it [0]. The question becomes, what do we do with that improved information flow? Do we tighten buffer tolerances? If so, you improve roadway capacity the majority of the time and possibly still reduce the risk of accidents...but what happens to accident severity? Maybe traffic throughput isn't the be-all objective that we want it to be, and we should be content to let that information flow improvement result in increased safety and traffic resilience instead.

[0] http://www.nbcnews.com/tech/tech-news/tesla-autopilot-begins...
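The buffer-vs-latency tradeoff described above can be sketched in a few lines. This is a toy model of my own (the order-up-to policy, the demand step, and all the numbers are invented for illustration — nothing here comes from Meadows, Forrester, or the article): a stale demand signal extends the stretch of periods the system spends eating into its safety buffer.

```python
def low_inventory_periods(delay, periods=100, step_at=50):
    """Periods spent below the safety threshold after a demand step,
    given an information delay in the demand signal."""
    target = 200.0
    inventory = target + 100.0  # steady state: target + last period's demand
    demand = [100.0 if t < step_at else 120.0 for t in range(periods)]
    low = 0
    for t in range(periods):
        inventory -= demand[t]           # serve this period's demand
        if inventory < target - 10.0:    # eating into the safety buffer
            low += 1
        observed = demand[max(0, t - delay)]          # possibly stale signal
        inventory += observed + (target - inventory)  # order up to target + forecast
    return low
```

With these toy numbers, a 5-period information delay keeps inventory depressed for 6 periods instead of 1 — shrinking that exposure is exactly what the information-flow intervention buys.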

25
Attack of the Killer Microseconds acm.org
53 points by jgrahamc  10 hours ago   9 comments top 2
1
angry_octet 6 hours ago 1 reply      
This is a timely analysis. The virtual memory system, with its concept of paging to disk, is obsolete in the sense that hardly anybody that does bigger-than-ram computations rely on the kernel's algorithms to manage it (https://scholar.google.com.au/scholar?q=out+of+core+algorith...).

The current paging system doesn't have a sensible mechanism for flash-as-core memory (10x RAM latency, e.g. DDR4 12ns for first word, so 120ns), persistent memory in general, or using SSDs as an intermediate cache for data on disk. ZFS has some SSD caching but it is not really taking advantage of the very large and very fast devices now available.

So we do need new paradigms to use this effectively. I'd like to be able to reboot and keep running a program from its previous state, because it all sits in flash-core.

Also there is huge potential to move to more garbage collected memory storage systems. This goes hand in hand with systems which can progress concurrently, without the overhead of difficult multi-threaded code, such as parallel Haskell.

On the negative side, I find the use of the term 'warehouse scale computing' to be stupidly buzzwordy.

From https://gist.github.com/jboner/2841832

L1 cache reference: 0.5 ns
Branch mispredict: 5 ns
L2 cache reference: 7 ns (14x L1 cache)
Mutex lock/unlock: 25 ns
Main memory reference: 100 ns (20x L2 cache, 200x L1 cache)
Compress 1K bytes with Zippy: 3,000 ns = 3 us
Send 1K bytes over 1 Gbps network: 10,000 ns = 10 us
Read 4K randomly from SSD*: 150,000 ns = 150 us (~1 GB/sec SSD)
Read 1 MB sequentially from memory: 250,000 ns = 250 us
Round trip within same datacenter: 500,000 ns = 500 us
Read 1 MB sequentially from SSD*: 1,000,000 ns = 1 ms (~1 GB/sec SSD, 4x memory)
Disk seek: 10,000,000 ns = 10 ms (20x datacenter roundtrip)
Read 1 MB sequentially from disk: 20,000,000 ns = 20 ms (80x memory, 20x SSD)
Send packet CA->Netherlands->CA: 150,000,000 ns = 150 ms
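To make the gulf between those numbers tangible, here's a quick sketch (mine, not part of the gist) that rescales a few entries so an L1 cache hit takes one "second":

```python
# Latencies from the gist above, in nanoseconds.
LATENCIES_NS = {
    "L1 cache reference": 0.5,
    "main memory reference": 100,
    "4K random SSD read": 150_000,
    "datacenter round trip": 500_000,
    "disk seek": 10_000_000,
    "packet CA->NL->CA": 150_000_000,
}

def human_scale(ns, l1_ns=0.5):
    """How long this would take, in 'seconds', if an L1 hit took one second."""
    return ns / l1_ns

for name, ns in LATENCIES_NS.items():
    print(f"{name}: {human_scale(ns):,.0f}")
```

On that scale a disk seek takes twenty million "seconds" — around 230 days.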

2
YZF 8 hours ago 2 replies      
Really? We are good at nanoseconds and milliseconds but not at microseconds? Last I checked a microsecond was 1000 nanoseconds so you can't really be good at nanoseconds but somehow bad at microseconds.

This ties in a little to the recent HN discussions about the cost of a context switch. I think what they're trying to say, and not very well, is that there is somewhat of a discontinuity when you move between different levels of abstraction. There are examples of this phenomenon in operating systems, where the overhead of making a system call can be so high you can't get the latency down, or in programming languages, e.g. running over a virtual machine or an interpreter. But this is far from new, and there's a continuum of solutions from hardware like DSPs through real time operating systems, lightweight threads, lower level languages, kernel bypass. Abstractions have cost: you want protected/virtual memory, there's a cost, and you pay that cost in your context switches. Not sure you can have your cake and eat it here, but there's plenty of different choices on the menu for different situations.

26
Interview with Philip Zimbardo of the Stanford Prison Experiment nautil.us
18 points by happy-go-lucky  6 hours ago   3 comments top
1
jplayer01 1 hour ago 2 replies      
It's been nearly fifty years. It was a flawed experiment. He's been interviewed countless times. Why do we need yet another article about it that doesn't say anything new?
27
I Built a Bot to Apply to Thousands of Jobs at Once fastcompany.com
105 points by miraj  6 hours ago   26 comments top 9
1
donovanm 4 hours ago 5 replies      
So according to this article, most jobs aren't posted and most jobs are filled through referrals. Even if you somehow make it through the arbitrary automated tracking system gatekeeper you're still really unlikely to get the job from a random application. Any hiring managers here that can share if this matches their experience?
2
bigiain 5 hours ago 1 reply      
And people wonder why the hiring process is so broken. Of course recruiters and HR departments need to treat every application/resume like crap... Except yours. Because you're a unique snowflake and they can easily discern your application from this guy's spam...

(And I fully expect commission-remunerated recruiters are doing this pro-actively without even having any candidates yet to get ever-so-slightly-warmer intros than their cow orkers...)

3
abraae 4 hours ago 3 replies      
Fascinating but flawed.

> By targeting internet companies in particular, I'd chosen an industry with a high likelihood of reliance on resume-processing algorithms.

Internet companies hire knowledge workers/creative types/snowflakes.

By contrast banks, insurance companies and departments like call center hire armies of drones.

You are much more likely to find automated resume analysis in the latter, it works much better there.

4
KarinneLima 5 hours ago 0 replies      
Good article! I've found out while working for the career center at a major university in the US that one should spend 80% of their job-searching energy in networking vs 20% applying. Even though applications are part of the process and personalized cover letters and thank you notes take up a lot of time, the odds are definitely in favor of those who make an effort to connect with the right people, who show genuine interest in their experience and learn as much as possible about the opportunities they pursue. Keeping a good relationship with former coworkers and employers is another great way to keep a healthy network. You never know where the next job or project will come from.
5
failrate 5 hours ago 1 reply      
The best jobs are not only not posted, they are jobs that don't exist until you create them yourself.
6
JSoet 2 hours ago 0 replies      
I was disappointed with the direction this article took - when he said he built a robot to send personalized applications I was hoping it would be reading the requirements and then sending personalized resumes and cover letters specifically built to get past the robot stage, but then waste the recruiter's time to show how stupid the automated system is and how easy it is to game... Something like that anti-spam bot that was posted here a few months ago...
7
ploggingdev 2 hours ago 0 replies      
Does anyone have experience with job marketplaces? I don't mean the mainstream ones, but ones like Stackoverflow jobs or Github jobs. Do they get closer to solving the problem of resume blackholes?
8
mcguire 4 hours ago 0 replies      
"It's not how you apply, it's who you know. And if you don't know someone, don't bother."
9
ghufran_syed 3 hours ago 1 reply      
I think this data is interesting; like the author, I would have assumed some difference between the version that effectively said "written by a bot" and the other one. On the other hand, the conclusions are identical to what Richard Bolles [1] has been telling job seekers for the last 47 years. If you've never read this book, you really should. I remember when I first read it and realized "Oh, THESE are the actual rules of how to get a job, no wonder my job application success rate is so dismal."

[1] http://www.penguinrandomhouse.com/books/537247/what-color-is...
28
StarCraft Remastered starcraft.com
258 points by alxmdev  5 hours ago   84 comments top 30
1
niftich 3 hours ago 0 replies      
Amazing news! Though I didn't play it competitively, StarCraft was a big part of my life -- my friends and I made custom scenarios to be played in 'Use Map Settings' mode, like dozens of iterations of tower defense and the like.

Given the impressive list of tune-ups, I'm hoping that some improvements will be made to mapmaking -- or perchance, even "modding", which was never really supported aside from third-party hacks. On the low end, support for mp3, Vorbis, or some other compressed audio format for custom sounds would be nice instead of .wav, but if they aimed higher they could rejuvenate a player-generated content community for years to come.

2
rublev 4 hours ago 4 replies      
Ridiculously excited. Nothing in my life was better than BW+1.6. Rotated those games religiously and went pro, wasted about 5 years of my life but damn it was the most fun 5. Love that new life is being breathed into such a mechanically engaging (but graphically lacking) game.
3
shiado 3 hours ago 1 reply      
For the uninitiated, StarCraft Brood War is perhaps the most mechanically demanding game there is: https://www.youtube.com/watch?v=uQRIxq_cJDE
4
richdougherty 4 hours ago 1 reply      
Sorta related: a fun podcast interview with someone who's been involved in the Age of Empires II modding community and is now helping release official expansions. Some anecdotes about the performance impact of changing from assembly-coded 2D sprites to 3D graphics.

http://hanselminutes.com/568/forgotten-empires-amazing-games...

I played a lot of AoE II and StarCraft back in the day.

5
cwyers 4 hours ago 3 replies      
There may only be so much they could do and maintain compatibility with the old game, but I don't think the remastered graphics look that great. A clear step up from the original on modern screens, but not that great.
6
terrywang 1 hour ago 0 replies      
As a huge fan of Starcraft and its expansion Brood War, I am really excited.

I bought StarCraft 2 after it was released but I simply didn't like it; maybe I grew older and life was busy then. However, I think it's more that the focus shifted: with the fancy graphics effects added, the RTS elements are gone, plus added complexity.

I think the remastered version will re-ignite the faded friendship between a group of guys/gals. The original game used to be our bond. However, the bond has been fading over the years since I moved overseas; most folks have their own family/people/things to care about.

The best thing is: "Most importantly, the strategy gameplay that StarCraft perfected years ago remains unchanged."

Nuclear launch detected ;-)

7
Waterluvian 4 hours ago 3 replies      
Widescreen? Do I get to zoom out in multiplayer?

I'm very curious to see just how much they feel okay changing game-altering systems or behaviours. I.e. even widescreen alters how the game will get played to some extent.

I think the hard part is knowing where to draw the line in order to keep it "authentic"

8
lebanon_tn 2 hours ago 0 replies      
"Revised dialogue and audio" makes me nervous. Hopefully they don't mess with perfection, could ruin an otherwise awesome sounding project.
9
ericzawo 3 hours ago 1 reply      
Boy am I optimistic for this game. I fondly remember UMS Sunken Defense, tower defense originators and of course Big Game Hunters maps. I'm really hoping a robust map editor is included.

Mac friendly would be nice too, especially considering Overwatch(?!) is Windows only.

10
Synaesthesia 2 hours ago 0 replies      
The following may be of interest to HN types; the Student Starcraft AI Tournament. http://sscaitournament.com
11
tomc1985 3 hours ago 3 replies      
Total Annihilation was waaay better! :)
12
intruder 4 hours ago 0 replies      
This must feel a bit like a blow to the shieldbattery developers. I hope the community will continue using it despite this announcement.
13
jshmrsn 4 hours ago 0 replies      
I had to double-check that it wasn't April 1st already. This is really cool and I personally didn't think something like this would happen (I was worried Blizzard would consider it an admission of defeat for SC2's competitive play).
14
edoceo 3 hours ago 1 reply      
Remember the HTML5 version? Too bad they shut that down. Would have preferred a paid/hosted version rather than their DMCA action.

Missed an opportunity there folks.

15
keithwhor 4 hours ago 0 replies      
'Member StarCraft? 'Member the Matrix? I 'member.

(This game was a huge part of my childhood. Love it.)

16
pfarnsworth 4 hours ago 2 replies      
I remember the competition between Starcraft vs Total Annihilation. I was very much into TA at the time, which was a beautifully crafted game. I could play against my friend across the country on 33.6kbps modem with 500 units each without crashing. Sure, each frame would take a second or so, but we would set up our attacks and just sit there and watch without worrying it would crash, it was amazing.

Obviously Starcraft ended up winning, but boy was TA an amazing game.

17
jimyl 3 hours ago 1 reply      
Will the remastered version attract some new gamers? It would be great if people got interested in RTS again. Right now it seems everyone is playing MOBAs.
18
cik2e 3 hours ago 1 reply      
I've played countless hours of this game on 56k when I was a kid. Needless to say, I was ridiculously excited after hearing about this. But seeing the before and after footage in the video on StarCraft.com has left me pretty disappointed. The maps look nearly identical, and that's where I'd hoped the biggest improvement would lie. A little more environmental detail than the original would go a long way towards me wanting to shell out some cash. What I've seen looks like a new skin on the old engine and literally nothing else in terms of aesthetics.
19
blackguardx 4 hours ago 1 reply      
I was going to say that they should offer a discount if you own the original version, but then I remembered I bought the game almost 20 years ago and probably have no proof of purchase.
20
imjustsaying 2 hours ago 1 reply      
Never got into Starcraft, Warcraft II on the other hand
21
cocochanel 2 hours ago 0 replies      
I want a Warcraft III Remastered!
22
notaplumber 4 hours ago 1 reply      
Anyone able to scan the qr code on the terran face? My phone can't pick it up.
23
melling 4 hours ago 1 reply      
24
mmgutz 3 hours ago 0 replies      
When? I want to buy NOW!
25
chj 4 hours ago 0 replies      
Hope it won't take years.
26
orionblastar 4 hours ago 1 reply      
I think there was a FOSS project to remake it.

https://github.com/Wargus/Stratagus

I think it needed Starcraft data files to work. I got an old CD from 1998 but never tried it.

I heard it is cross platform.

27
jdubs 4 hours ago 2 replies      
Not sure why a carrier is shooting a blue beam at some planet, but what ever.
28
cykr0n 3 hours ago 0 replies      
3v3 ZC No Rules Xperts

See you soon...

29
trothamel 4 hours ago 1 reply      
Is that QR code on the Terran's forehead the introduction to an ARG? I don't have time to decode it at the moment, so I'm wondering if anyone else has.
30
justicezyx 2 hours ago 0 replies      
Many acclaim this. I am dismayed.

Remastering Starcraft should have been done almost 10 years ago. While swimming in the money pool of WoW, Blz lost their charm of relentless demand for quality, innovation, and dedication.

They forcefully killed Starcraft with Starcraft II. They abandoned Warcraft III, one of the most popular games (among all genres) of its time. Warcraft III got fewer balance patches than Starcraft, the game they intended to kill.

They remastered Diablo II and rebranded it Diablo III. They released a great MOBA game called Overwatch. After almost 10 years they witnessed Warcraft III's demise and the rise of the original Dota.

Now they start reaping profit from their most loyal fans with a remastered Starcraft, the very game that they tried to kill.

To me, this is a milestone of Blz's own demise. Farewell Blz; you have truly redefined yourself as a mediocre game developer.

29
Mathify Simple Text Equation to LaTeX mathifyit.com
89 points by wenqin123  14 hours ago   50 comments top 10
1
ephimetheus 14 hours ago 4 replies      
I don't get it, why not just use LaTeX? The syntax is already almost the same anyway..
2
simon_acca 13 hours ago 0 replies      
Related: http://asciimath.org/

Also, MathJax accepts asciimath as input, not sure if you can get LaTeX out of it though.

3
kevindong 3 hours ago 0 replies      
For Mac users, you can also use the built-in 'Grapher' application which comes with a decent GUI for writing equations/formulas.

1) Open up the 'Grapher' application from the 'Applications/Utilities' folder.

2) Click on 'Choose' (it doesn't matter what other options you pick from the initial loading screen).

3) From here on, just type your equation into the main input field. You can also use the equation palette from the dropdown menu on the right side of the main input field to access the templates for things like integrals and summations. From the dropdown, you should be able to click on 'Show Equation Palette' to get a window of all of the math symbols Grapher supports.

4) Once done writing your equation, select it all, right click, and then click on 'Copy LaTeX Expression'.

4
lucb1e 10 hours ago 1 reply      
Feedback for the author: this site is currently best viewed at 80% zoom and a browser width of 320 pixels. As it is (on a 15.6" full-HD screen), I have to keep jumping from center to left with nothing in between, which feels kinda weird on my eyes.
5
amenghra 11 hours ago 1 reply      
Is there any OCR-to-LaTeX tool? Are they any good?

For desktop-based equation writing, something like Microsoft's equation editor is perfect. For tablet/touch screen, a pen based OCR might work really well?

6
idreyn 14 hours ago 2 replies      
Bookmarked as my new fastest way to generate a small LaTeX graphic when I need one, though actually having the ability to input LaTeX might be useful as well for edge cases.
7
wenqin123 14 hours ago 4 replies      
By the way if you find any bugs or something doesn't work the way you expect it please let me know!
8
goerz 14 hours ago 0 replies      
I could see this as being useful if it was available as a library
9
gargarplex 12 hours ago 1 reply      
Combine this with Detexify?
10
umanwizard 12 hours ago 1 reply      
Edit: apparently I didn't read the site closely enough!
30
How much your computer can do in a second computers-are-fast.github.io
466 points by srirangr  19 hours ago   203 comments top 25
1
userbinator 16 hours ago 7 replies      
Alternatively, this could be titled "do you know how much your computer could do in a second but isn't because of bad design choices, overengineered bloated systems, and dogmatic adherence to the 'premature optimisation' myth?"

Computers are fast, but not if all that speed is wasted.

A recent related article: https://news.ycombinator.com/item?id=13940014

2
gizmo 16 hours ago 2 replies      
Pretty cool, but a number of the questions are totally unknowable.

For instance, the question about web requests to Google: depending on your internet connection, there's more than an order of magnitude difference in the outcome.

In the question about SSD performance, the only hint we have is that the computer has "an SSD", but a modern PCIe SSD like the one in the new MacBook Pro is over 10 times faster than the SSDs we got just 5 years ago.

The question about JSON/Msgpack parsing is just about the implementation. Is the python msgpack library a pure python library or is the work of the entire unpackb() call done in C?

The bcrypt question depends entirely on the number of rounds. The default happens to be 12. Had the default been 4 the answer would have been 1000 hashes a second instead of 3. Is the python md5 library written in C? If so, the program is indistinguishable from piping data to md5sum from bash. Otherwise it's going to be at least an order of magnitude slower.

So I liked these exercises, but I liked the C questions best because there you can look at the code and figure out how much work the CPU/Disk is doing. Questions that can be reduced to "what language is this python library written in" aren't as insightful.
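gizmo's point about implementation language is easy to check for the md5 case with a small stdlib-only harness (bcrypt isn't in the standard library, so only the md5 half is sketched here; hashlib.md5 dispatches to C, so the loop below mostly measures real hashing rather than interpreter overhead):

```python
import hashlib
import time

def hashes_per_second(payload=b"x" * 64, duration=0.2):
    """Count how many md5 digests of `payload` complete per second."""
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        hashlib.md5(payload).digest()
        count += 1
    return count / duration
```

A pure-Python md5 run through the same harness would be orders of magnitude slower — which is exactly why "what language is this library written in" dominates the answer.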

3
realo 15 hours ago 6 replies      
Yes, modern computers are fast. How fast?

The speed of light is about 300,000 km/s. That translates to roughly 1 ns per foot (yeah, I mix up my units... I'm Canadian...)

THUS, a computer with a clock speed of 2 GHz will be able to execute, on a single core/thread, about 4 (four !) single-clock instructions between the moment photons leave your screen, and the moment they arrive into your eye 2 feet (roughly) later.

_That_ should give you an idea of how fast modern computers really are.

And I _still_ wait quite a bit when starting up Microsoft Word.
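realo's arithmetic checks out; here is a one-liner version of it (using the same one-instruction-per-cycle simplification realo makes):

```python
C_FEET_PER_NS = 0.9836  # light: ~299,792,458 m/s, i.e. ~0.98 feet per nanosecond

def instructions_while_light_travels(feet, clock_ghz=2.0):
    """Single-cycle instructions one core retires while light covers `feet`."""
    travel_ns = feet / C_FEET_PER_NS
    return travel_ns * clock_ghz  # cycles == instructions at 1 IPC
```

instructions_while_light_travels(2) comes out just over 4, matching the screen-to-eye estimate above.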

4
chacham15 12 hours ago 5 replies      
Be careful what conclusions you attempt to draw from examples when you aren't sure what exactly is happening. These examples are actually very wrong and misleading.

Take, for example, the first code snippet about how many loops you can run in 1 second. The OP fails to realize that since the loop isn't producing anything that actually gets used, the compiler is free to optimize it out. You can see that that's exactly what it does here: https://godbolt.org/g/NWa5yZ. All it does is call strtol and then exit. It isn't even running a loop.
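As a contrast to the C behavior described here: CPython performs no such dead-code elimination, which you can verify by inspecting the bytecode (a quick check of mine, not from the thread):

```python
import dis

def dead_loop(n):
    s = 0
    for i in range(n):  # the accumulated value is never used
        s += i
    return 0  # a C compiler could legally drop the whole loop

# Collect the opcodes CPython actually emits for the function body.
opnames = [ins.opname for ins in dis.Bytecode(dead_loop)]
# The loop machinery (FOR_ITER etc.) is still present: the Python quiz
# entries really do measure the iterations, even where the C ones may not.
```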

5
munificent 14 hours ago 4 replies      
If, like me, you spend most of your time in high-level, garbage collected "scripting" languages, it's really worth spending a little time writing a few simple C applications from scratch. It is astonishing how fast a computer is without the overhead most modern languages bring in.

That overhead adds tons of value, certainly. I still use higher level languages most of the time. But it's useful to have a sense of how fast you could make some computation go if you really needed to.
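You can feel a slice of that overhead without leaving Python: the same summation run through the bytecode interpreter vs. handed to the C-implemented builtin (a rough sketch; absolute times vary by machine):

```python
import time

def timed(fn, *args):
    """Return (result, elapsed_seconds) for one call."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

def python_sum(n):
    total = 0
    for i in range(n):  # every iteration pays interpreter dispatch cost
        total += i
    return total

n = 1_000_000
slow_result, slow_t = timed(python_sum, n)
fast_result, fast_t = timed(sum, range(n))  # the loop runs in C
```

The builtin is typically several times faster for identical work, and a C program doing the same sum is faster still.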

6
dom0 16 hours ago 1 reply      
More impressively, sum.c could likely go an order of magnitude or so faster when optimized.

> Friends who do high performance networking say it's possible to get network roundtrips of 250ns (!!!),

Well stuff like Infiniband is less network, and more similar to a bus (e.g. RDMA, atomic ops like fetch-and-add or CAS).

> write_to_memory.py

Is also interesting because this is dominated by inefficiencies in the API and implementation and not actually limited by the memory subsystem.

> msgpack_parse.py

Again, a large chunk goes into inefficiencies, not so much the actual work. This is a common pattern in highly abstracted software. msgpack-c mostly works at >200 MB/s or so (obviously a lot faster if you have lots of RAWs or STRs and little structure). Funnily enough, if you link against it and traverse stuff, then a lot of time is spent doing traversals, and not the actual unpacking (in some analysis I've seen a ~1/3 - 2/3 split). So the cost of abstraction also bites here.

If you toy around with ZeroMQ you can see that you'll be able to send around 3 million msg/s between threads (PUSH/PULL) from C or C++, around 300k using pyzmq (this factor 10 is sometimes called "interpreter tax"), but only around 7000 or so if you try to send Python objects using send_pyobj (which uses Pickle). That's a factor 430.
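The send_pyobj penalty dom0 describes is mostly serialization cost, which can be sketched with the stdlib alone (no pyzmq required; the numbers this produces are illustrative, not a reproduction of the factor-430 measurement):

```python
import pickle
import time

def roundtrips_per_second(obj, duration=0.1):
    """Messages per second through a full pickle round-trip of `obj`."""
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        assert pickle.loads(pickle.dumps(obj)) == obj
        count += 1
    return count / duration
```

Try a raw bytes payload vs. a nested dict of the same size: the structured object is far slower per message, which is the gap between send() and send_pyobj().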

7
bane 9 hours ago 0 replies      
This is awesome. The real lesson here is, when you make a thing, compare its performance to these kinds of expected numbers and if you're not within the same order of magnitude speedwise, you've probably screwed up somewhere.

My favorite writeups are the ones that gloat about achieving hundreds of pages served per second per server. That's terrible, and nobody today even understands that.

8
Eliezer 13 hours ago 1 reply      
What an excellent teaching pattern - you're far more likely to remember what you learned if you first stop to think and record your own guess, and this is excellent UI and UX for doing that routinely and inline.
9
tomc1985 3 hours ago 0 replies      
One second on what?

A Core i7? A raspberry Pi? A weird octo-core dual-speed ODROID? An old i915-based Celeron? My cell phone? An arduino?

"Your computer" has meant all the above to me, just in the last few weeks. The author's disinclination to describe the kind of hardware this code is running on -- other than "a new laptop" -- strikes me as kind of odd.

10
sriku 1 hour ago 0 replies      
The grep example should search for one character. Grep can skip bytes, so longer search strings are faster to search for. On my machine, searches take 22%-35% longer if I change "grep blah" to "grep b".
11
alkonaut 15 hours ago 0 replies      
Don't some of these examples run in O(1) time because the value in the loop isn't used? E.g. in the first example, 0 is returned instead of the sum.

Obviously we are talking about real-world C compilers with real-world optimizations, so presumably we'd also have to consider whether the loop is executed at all?

12
asrp 15 hours ago 0 replies      
This reminds me of "Latency Numbers Every Programmer Should Know"

https://gist.github.com/jboner/2841832

Edit: Just realized halfway through that there's already a link to this from their page!

13
bch 10 hours ago 0 replies      
Hard to believe there are 124 comments here and nobody has brought up Grace Hopper's talk[0][1] yet. With good humour she gives an example of what various devices' latencies are, and a simple tool to comprehend the cost and orders of magnitude.

 [0] short - https://www.youtube.com/watch?v=JEpsKnWZrJ8
 [1] long - https://www.youtube.com/watch?v=ZR0ujwlvbkQ

14
gburt 14 hours ago 0 replies      
The `bcrypt` question seems out-of-place. It has a configurable cost parameter, so almost any of the answers is correct.
15
gibsjose 15 hours ago 0 replies      
I'm curious to see the data collected on guesses. Some were quite difficult to guess, like hashes per second with bcrypt not knowing the cost factor, but I guess we can assume some sane default.

I would have really liked to see all these numbers in C, and other languages for that matter. Perhaps add a dropdown box to select the language from a handful of options?

16
alcuadrado 14 hours ago 0 replies      
This reminds me of this email from LuaJIT's list:

Computers are fast, or, a moment of appreciation for LuaJIT https://groups.google.com/forum/#!msg/snabb-devel/otVxZOj9dL...

17
paulsutter 12 hours ago 1 reply      
That's nothing. Here's code that does 77 GFLOPS on a single Broadwell x86 core. Yes, that's 77 billion operations per second.

http://pastebin.com/hPayhGXP

18
wtbob 2 hours ago 0 replies      
Well, my computer won't display an image apparently inserted with JavaScript, although it could if I wanted to grant execute privileges on it to computers-are-fast.github.io

Does anyone have a link to the image(s)?

19
kobeya 8 hours ago 0 replies      
Was disappointed to find that nearly all the examples were Python and shell script. I'm not interested in knowing random trivia about how slow various interpreters are.
20
Lxr 15 hours ago 3 replies      
Why isn't the first Python loop (that does nothing but pass) optimised away completely?
21
thomastjeffery 13 hours ago 0 replies      
Or "how fast can one of my 8 CPU cores run a for loop?" To put that in perspective: all 8 cores together give me about 40 GFLOPS. I have 2 GPUs that each give me more than 5,000 GFLOPS.
22
joelthelion 13 hours ago 0 replies      
This could make a pretty good hiring test. Not expecting perfect answers, but a rough correlation with the results, and some good explanations.
23
d--b 15 hours ago 0 replies      
or "computers are fast, so we might just slow things down by using python for numerical calculations"
24
partycoder 11 hours ago 0 replies      
Computers are fast unless your algorithm is quadratic or worse; then there's no computer that can help you.
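A minimal illustration of the point (my own example, not from the comment): an accidentally quadratic dedupe vs. the linear version — same output, very different scaling.

```python
def dedupe_quadratic(items):
    """Preserve order, drop duplicates -- with an O(n) scan per element."""
    seen, out = [], []
    for x in items:
        if x not in seen:   # linear scan of a list: O(n^2) overall
            seen.append(x)
            out.append(x)
    return out

def dedupe_linear(items):
    """Same result, but membership checks hit a hash set: O(n) overall."""
    seen, out = set(), []
    for x in items:
        if x not in seen:   # O(1) expected per lookup
            seen.add(x)
            out.append(x)
    return out
```

At a million elements the quadratic version takes minutes while the linear one stays well under a second — no hardware upgrade closes that gap.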
25
grepthisab 12 hours ago 2 replies      
Edit: I'm an idiot
cached 27 March 2017 07:02:02 GMT