hacker news with inline top comments    .. more ..    19 Jan 2015 News
Node.js and io.js: Very different in performance
points by mschoebel  3 hours ago   20 comments top 6
bnoordhuis 1 hour ago 3 replies      
Interesting results, thanks for sharing. I can perhaps shed some light on the performance differences.

> Buffer 4.259 5.006

In v0.10, buffers are sliced off from big chunks of pre-allocated memory. It makes allocating buffers a little cheaper but because each buffer maintains a back pointer to the backing memory, that memory isn't reclaimed until the last buffer is garbage collected.

Buffers in node.js v0.11 and io.js v1.x instead own their memory. It reduces peak memory (because memory is no longer allocated in big chunks) and removes a whole class of accidental memory leaks.

That said, the fact that it's sometimes slower is definitely something to look into.

> Typed-Array 4.944 11.555

Typed arrays in v0.10 are a homegrown and non-conforming implementation.

Node.js v0.11 and io.js v1.x use V8's native typed arrays, which are indeed slower at this point. I know the V8 people are working on them, it's probably just a matter of time - although more eyeballs certainly won't hurt.

> Regular Array 40.416 7.359

Full credit goes to the V8 team for that one. :-)

richmarr 19 minutes ago 0 replies      
> Depending on what your Node application does, my findings may or may not apply to your use-case.

I'm going to go out on a limb and say that the proportion of real-world Node apps that will be noticeably affected by this is less than 1%.

explorigin 29 minutes ago 0 replies      
There are two very good comments at the bottom of the article. Here for your consumption:

Author: (Unknown), 2015-01-19 12:54 UTC

io.js is based on node v0.11, so you need to compare:

- v0.10 (nodejs) vs v0.11 (nodejs)
- v0.11 (nodejs) vs v1.0 (iojs)

Author: Michael Schöbel, 2015-01-19 13:01 UTC

I also downloaded sources and compiled the latest master branch of Node yesterday evening. Performance was within 2% of io.js for all three tests.

But most people won't compile themselves. Most will use the latest stable release.

wolframhempel 47 minutes ago 0 replies      
These are very interesting findings. On a higher level though: are there any significant performance differences between the APIs of node and io? E.g. TCP packet processing, file system access, etc.? I know that a lot of them are effectively C, and so independent of the V8 version.
SixSigma 1 hour ago 0 replies      
On a slight tangent, there's an article using the Sieve of Eratosthenes to demonstrate Communicating Sequential Processes (CSP) on Russ Cox's website (he's one of the developers of Go)


Kiro 1 hour ago 6 replies      
> This can be extremely important if you have a project with heavy CPU-use

Would you recommend using something different than JavaScript when writing CPU-heavy apps? I was under the impression that it's better suited to dealing with high I/O.

Using Go to improve your Ruby application's performance
points by vinnyglennon  1 hour ago   15 comments top 8
mitkok 5 minutes ago 0 replies      
And the point of this is what, exactly? The author advertises Go's "Easy and cheap concurrency", "Low memory overhead" and "Easy deployment", but the example does not show how Go will help in this particular case.
oinksoft 45 minutes ago 0 replies      

  > Ruby uses green threads (meaning only one CPU will get used) and there is no easy equivalent to Go channels.
Restriction to a single core is a consequence of the GIL. Also, Ruby has used native threads, not green threads, since 1.9.

The author's `execute` method is dangerous because it doesn't escape its arguments.
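The danger oinksoft mentions can be sketched at the shell level; the payload string below is a made-up illustration, not the article's actual code:

```shell
# An attacker-controlled string interpolated into a command line becomes
# two commands instead of one argument (payload is hypothetical)
payload='data.json; echo INJECTED'
sh -c "echo processing $payload"
# prints "processing data.json" on one line and "INJECTED" on the next -
# the semicolon inside the data was interpreted by the shell
```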

JonnieCache 23 minutes ago 0 replies      
If you want to extract parts of your Ruby app to Go in a simple way, may I suggest a look at http://www.goworker.org instead of following this article's advice.

(It's basically a golang port of Resque::Worker)

_mikz 21 minutes ago 0 replies      
For those who don't know: the escaping issues could be handled either with Shellwords.shellescape or by passing the arguments individually to IO.popen & friends (Kernel.system, Kernel.spawn, ...)
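The same fix can be seen in pure shell terms: pass untrusted data as a separate argument instead of splicing it into the command string (the payload here is a hypothetical example, not from the article):

```shell
# Safe: the payload travels as a single positional argument ($1), so the
# embedded semicolon is never parsed as shell syntax
payload='data.json; echo INJECTED'
sh -c 'echo processing "$1"' _ "$payload"
# prints the entire payload literally as one line
```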
thinkingkong 43 minutes ago 1 reply      
It might make sense to put some compute-intensive tasks in Go, or any other compiled language that's more efficient, but exec'ing and returning JSON isn't really a great idea.

If you were going to consider this seriously, you should have the target app start once, listen for commands on some channel like a socket, and return information. JSON is probably a little too verbose for this kind of use-case as well, but it depends how efficient Ruby would be at unpacking alternative serialization formats.

obeattie 54 minutes ago 2 replies      

    JSON.parse(`go run json.go`)

joshcrowder 31 minutes ago 1 reply      
I agree there are some pretty big issues with this tutorial. However, it's still a pretty good article; the "ideas" here are useful, and it got me thinking about how I could use Go in my Ruby stack.
chuhnk 54 minutes ago 1 reply      
Why would you do this?! Please tell me this is some form of elaborate trolling?
Show HN: The Silicon C++14 Web Framework
points by matt42  3 hours ago   16 comments top 6
vitriol83 1 hour ago 1 reply      
The requirement for pre-processing makes me a bit suspicious; it's going to be a pain to interact with other parts of the toolchain, so I would hope there's a very good reason for it. I would state this at the top of the README.md.

The same comments apply, to a lesser extent, to the C++14 requirement: does your framework really benefit from C++14 features over, say, C++11? Again, I would make this clear at the top of the README.

jasode 1 hour ago 1 reply      
If the author has seen Facebook proxygen[1] (another C++ http framework), a comparison to it would be helpful.

At first glance, it looks like the Silicon dependency microhttpd is somewhat analogous to proxygen. Is the Silicon value-added proposition the extra layers on top such as auto gen of client-side code, etc?

[1] https://github.com/facebook/proxygen

grandalf 58 minutes ago 0 replies      
the point of this kind of thing is that it can allow super fast performance for critical endpoints:

Suppose your app consumes a JSON api and it turns out that one or two endpoints constitute most of the server load.

Rewriting one or two endpoints using a lightweight, fast framework can be a great solution, so long as you are already being smart about caching and there aren't other bottlenecks in your architecture.

TillE 3 hours ago 1 reply      
I have to admit, I have no idea what the @ symbol does. Can anyone clarify?

It can't be a macro, so I assume it's a C++14 feature I'm not aware of.

igoldny 1 hour ago 1 reply      
why not Assembly? it's faster... :)
easytiger 2 hours ago 2 replies      
Building on an unused standard ensures failure and is little more than masturbation. The use of preprocessor sugar proves this yet further.
Path Tracing 3D Fractals
points by arunc  6 hours ago   6 comments top 3
joshu 5 hours ago 0 replies      
I've been following this blog for a bit. Fragmentarium is pretty interesting.
maulik13 3 hours ago 0 replies      
The results look great!
higherpurpose 3 hours ago 1 reply      
What gaming engines already implement or plan to implement path tracing in them in the future? Path tracing looks like the future of gaming graphics, and would do especially well with VR in a decade or so (hopefully by then path tracing will also be accelerated in hardware).
Command-line tools can be faster than your Hadoop cluster
points by wglb  18 hours ago   234 comments top 45
MrBuddyCasino 16 hours ago 4 replies      
To quote the memorable Ted Dziuba[0]:

"Here's a concrete example: suppose you have millions of web pages that you want to download and save to disk for later processing. How do you do it? The cool-kids answer is to write a distributed crawler in Clojure and run it on EC2, handing out jobs with a message queue like SQS or ZeroMQ.

The Taco Bell answer? xargs and wget. In the rare case that you saturate the network connection, add some split and rsync. A "distributed crawler" is really only like 10 lines of shell script."

[0] since his blog is gone: http://readwrite.com/2011/01/22/data-mining-and-taco-bell-pr...
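Dziuba's xargs-and-wget pattern can be sketched in a few lines. To keep this runnable offline, the worker below is a stand-in echo; in real use it would be something like `wget --quiet -P pages`, and urls.txt and the flag choices are assumptions, not his exact script:

```shell
# Taco Bell crawler skeleton: xargs fans each line of urls.txt out to up
# to 8 parallel workers, one URL per process
printf 'http://example.com/a\nhttp://example.com/b\n' > urls.txt
xargs -n 1 -P 8 echo fetched < urls.txt | sort
```

Saturating the network connection is then handled exactly as the quote says: split the URL list and rsync the results between boxes.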

danso 17 hours ago 3 replies      
I'm becoming a stronger and stronger advocate of teaching command-line interfaces even to novice programmers... in many ways it's easier to think of data being worked on by "filters" and "pipes"... and more importantly, every time you try a step, something happens, making it much easier to iterate through a process interactively.

That it also happens to be very fast and powerful (when memory isn't a limiting factor) is nice icing on the cake. I moved over to doing much more on the CLI after realizing that something as simple as "head -n 1 massive.csv" to inspect the headers of corrupt multi-GB CSV files made my data-munging life substantially more enjoyable than opening them in Sublime Text.
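That style of quick inspection looks like this in practice (the CSV here is generated on the spot; massive.csv in the comment is the real-world case):

```shell
# Peek at a big CSV without loading it into an editor
printf 'id,name,score\n1,alice,10\n2,bob,20\n' > /tmp/massive.csv
head -n 1 /tmp/massive.csv                   # header row only
wc -l < /tmp/massive.csv                     # row count, header included
cut -d, -f2 /tmp/massive.csv | tail -n +2    # one column, header skipped
```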

crcsmnky 17 hours ago 5 replies      
Perhaps I'm missing something. It appears that the author is recommending against using Hadoop (and related tools) for processing 3.5GB of data. Who in the world thought that would be a good idea to begin with?

The underlying problem here isn't unique to Hadoop. People who are minimally familiar with how technology works and who are very much into BuzzWords will always throw around the wrong tool for the job so they can sound intelligent to a certain segment of the population.

That said, I like seeing how people put together their own CLI-based processing pipelines.

a3_nm 17 hours ago 0 replies      
I think it is unsafe to parallelize grep with xargs as is done in the article because, beyond shuffling the delivery order, the output of the parallel greps could get mixed up (the beginning of a line comes from one grep and the end of the line from a different grep, so, reading line by line afterwards, you get garbled lines).

See https://www.gnu.org/software/parallel/man.html#DIFFERENCES-B...
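One portable way to keep parallel grep output line-atomic is to have each job write whole results to its own temp file, then concatenate. This is a sketch on made-up files, not the article's pipeline:

```shell
# Each parallel grep owns its output file, so no line can be torn
# between two processes; the final cat is sequential
mkdir -p /tmp/ppar
printf '[Result "1-0"]\nmoves...\n' > /tmp/ppar/p1.pgn
printf '[Result "0-1"]\nmoves...\n' > /tmp/ppar/p2.pgn
ls /tmp/ppar/*.pgn | xargs -n 1 -P 4 -I{} sh -c 'grep "Result" "$1" > "$1.out"' _ {}
cat /tmp/ppar/p1.pgn.out /tmp/ppar/p2.pgn.out
```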

zokier 16 hours ago 1 reply      
The author begins with a fairly idiomatic shell pipeline, but in the search for performance the pipeline transforms into an awk script. Not that I have anything against awk, but I feel that kind of runs against the premise of the article. The article ends up demonstrating the power of awk over pipelines of small utilities.

Another interesting note: there is a possibility that the script as-is could mis-parse the data. The grep should use '^\[Result' instead of 'Result'. I think this nicely demonstrates the fragility of the sorts of ad-hoc parsers that are common in shell pipelines.

pkrumins 17 hours ago 1 reply      
The example in the article with cat, grep and awk:

    cat *.pgn | \
      grep "Result" | \
      awk '{
        split($0, a, "-");
        res = substr(a[1], length(a[1]), 1);
        if (res == 1) white++;
        if (res == 0) black++;
        if (res == 2) draw++;
      }
      END { print white+black+draw, white, black, draw }'
Can be written much more succinctly with just awk, and you don't even need to split the string or use substr:

    awk '
      /Result/ {
        if (/1\/2/) draw++;
        else if (/1-0/) white++;
        else if (/0-1/) black++;
      }
      END { print white+black+draw, white, black, draw }
    ' *.pgn

aadrake 15 hours ago 1 reply      
Hi all, original author here.

Some have questioned why I would spend time advocating against the use of Hadoop for such small data-processing tasks, since that's clearly not what it should be used for anyway. Sadly, Big Data (tm) frameworks are often recommended, required, or used more often than they should be. I know to many of us it seems crazy, but it's true. The worst I've seen was Hadoop used for a processing task of less than 1MB. Seriously.

Also, much agreement with those saying there should be more education effort when it comes to teaching command line tools. O'Reilly even has a book out on the topic: http://shop.oreilly.com/product/0636920032823.do

Thank you for all the comments and support.

notpeter 12 hours ago 1 reply      
This article echoes a talk Bryan Cantrill gave two years ago: https://youtu.be/S0mviKhVmBI

It's about how Joyent took the concept of the UNIX pipeline as a true power tool and built a distributed version atop an object filesystem, with a little map/reduce syntactic sugar, to replace Hadoop jobs with pipelines.

The Bryan Cantrill talk is definitely worth your time, but you can get an understanding of Manta with their 3m screencast: https://youtu.be/d2KQ2SQLQgg

TeeWEE 1 hour ago 0 replies      
The gotcha here is: he is talking about 1.75GB of data. Of course you don't use Hadoop for this. Hadoop is for Big Data, not for a few gigs.

Use the right tool for the job. If you think you will scale to terabyte size, don't start out with command-line tools.

rkwasny 14 hours ago 2 replies      
Bottom line is: you do not need Hadoop until you cross 2TB of data to be processed (uncompressed). Modern servers (bare metal ones, not what AWS sells you) are REALLY FAST and can crunch massive amounts of data.

Just use proper tools: well-optimized code written in C/C++/Go/etc., not all the crappy Java framework-in-a-framework^N architecture that abstracts away thinking about CPU speed.

Bottom line, the popular saying is true: "Hadoop is about writing crappy code and then running it on a massive scale."

sam_lowry_ 16 hours ago 1 reply      
Besides using `xargs -P 8 -n 1` to parallelize jobs locally, take a look at paexec, a GNU parallel replacement that just works.

See https://github.com/cheusov/paexec

2ion 1 hour ago 0 replies      
Data analysis using a shell can be amazingly productive. We also had a talk about this at TLUG (http://tlug.jp/wiki/Meetings:2014:05#Living_on_the_command_l...).
mabbo 16 hours ago 2 replies      
I had an intern over the summer, working on a basic A/B Testing framework for our application (a very simple industrial handscanner tool used inside warehouses by a few thousand employees).

When we came to the last stage, analysis, he was keen to use MapReduce, so we let him. In the end, though, his analysis didn't work well, took ages to process when it did, and didn't provide the answers we needed. The code wasn't maintainable or reusable. *shrug* It happens. I had worse internships.

I put together some command-line scripts to parse the files instead: grep, awk, sed, really basic stuff piped into each other and written to other files. They took 10 minutes or so to process and provided reliable answers. The scripts were added as an appendix to the report I provided on the A/B test and, after formatting and explanations, took up a couple of pages.

jacquesm 15 hours ago 0 replies      
See this very good comment by Bane:


ricardobeat 11 hours ago 0 replies      
Don't shoot me, but out of curiosity I wrote the thing in javascript: https://gist.github.com/ricardobeat/ee2fb2a6d704205446b7

Results: 4.4GB[1] processed in 47 seconds. Around 96MB/s; it can probably be made faster, and node.js is not the best at munging data...

[1] 3201 files taken from http://github.com/rozim/ChessData

NyxWulf 17 hours ago 1 reply      
This also isn't a straight either-or proposition. I build local command-line pipelines and do testing and/or processing. When the amount of data to be processed passes into the range where memory or network bandwidth makes the processing more efficient on a Hadoop cluster, I make some fairly minimal conversions and run the stream processing on the Hadoop cluster in streaming mode. It hasn't been uncommon for my jobs to be much faster than the same jobs run on the cluster with Hive or some other framework. Much of the speed boils down to the optimizer and the planner.

Overall I find it very efficient to use the same toolset locally and then scale it up to a cluster when and if I need to.

sleepythread 5 hours ago 1 reply      
One common misconception is to use Hadoop whenever your data is large. The choice of Hadoop should be driven more by the growth of the data than by its size.

I agree that for the given use case the solution is appropriate and works fine. The problem mentioned in the post is not a Big Data problem.

Hadoop will be helpful if, say, millions of games are played every day and we need to update the statistics daily, etc. In that case the given solution will hit a bottleneck, and some optimisation/code change will be needed to keep the code running.

Hadoop and its ecosystem are not a silver bullet and hence should not be used for everything. The problem has to be a Big Data problem.

decisiveness 5 hours ago 0 replies      
If bash is the shell (assuming recursive search is required), maybe it would be even faster to just do:

    shopt -s globstar
    mawk '/Result/ {
        game++
        split($0, a, "-")
        res = substr(a[1], length(a[1]), 1)
        if (res == 1) white++
        if (res == 0) black++
        if (res == 2) draw++
    } END {
        print game, white, black, draw
    }' **/*.pgn

philgoetz 8 hours ago 0 replies      
First, you don't score points with me for saying not to use Hadoop when you don't need to use Hadoop.

Second, you don't get to pretend you invented shell scripting because you came up with a new name for it.

Third, there are very few cases if any where writing a shell script is better than writing a Perl script.

linuxhansl 12 hours ago 0 replies      
So don't use Hadoop to crunch data that fits on a memory stick, or that a single disk spindle can read in a few seconds.

Why is this first on the HN front-page?

Reminds me of the C++ is better than Java, Go is better than C++, etc, pieces.

Yes, the right tool for the right job. That's what makes a good engineer.

Somebody who thinks there is _no_ valid use case for Hadoop is a fool. (The author did not say that, but many of the comments here seem to imply that view)

kylek 17 hours ago 1 reply      
I feel ag (the silver searcher, a grep-ish alternative) should be mentioned (even though he dropped it in his final awk/mawk commands), as it tends to be much faster than grep, and considering he cites performance throughout.
wglb 15 hours ago 0 replies      
A similar story: http://blogs.law.harvard.edu/philg/2009/05/18/ruby-on-rails-...: Tools used not quite the right way.

edit: with HN commentary: https://news.ycombinator.com/item?id=615587

sgt101 18 hours ago 2 replies      
On a couple of GB this is true. Actually, if you have SSDs I'd expect any non-compute-bound task to be faster on a single machine up to ~10GB, after which the disk parallelism should kick in and Hadoop should start to win.
sabalaba 14 hours ago 1 reply      
I've had the pleasure and displeasure of working with small datasets (~7.5GB of images) in shell. One often needs to send SIGINT to the shell when it starts to glob expand or tab complete a folder with millions of files. But besides minor issues like that, command line tools get the job done.
colin_mccabe 12 hours ago 0 replies      
About 5 years ago I worked at a company that took the "pile of shell scripts" approach to processing data. Our data was big enough and our algorithms computationally heavy enough that a single machine wasn't a good solution. So we had a bunch of little binaries that were glued together with sed, awk, perl, and pbsnodes.

It was horrible. It was tough to maintain (we all know how hard even the best awk and perl are to read). It was difficult to optimize, and you always found yourself worrying about things like the maximum length of command lines, or how to figure out what the "real" error was in a bash pipeline, and so on. When parts of the job failed, we had to manually figure out which parts had failed and re-run them. Then we had to copy the files over to the right place to create the full final output.

The company was a startup and the next VC milestone or pivot was always just around the corner. There was never any time to clean things up. A lot of the code had come out of early tech demos that management just asked us to "just scale up." But oops, you can't do that with a pile of shell scripts and custom C binaries. So the technical debt just kept piling up. I would advise anyone in this situation not to do this. Yeah, shell scripts are great for making rough guesses about things in a pile of data. They are great for ad hoc exploration on small data or on individual log files. But that's it. Do not check them into a source code repo and don't use them in production. The moment someone tries to check in a shell script longer than a page, you need to drop the hammer. Ask them to rewrite it in a language (and ideally, framework), that is maintainable in the long term.

Now I work on Hadoop, mostly on the storage side of things. Hadoop is many things-- a storage system, a set of computation frameworks that are robust against node failures, a Java API. But above all it's a framework for doing things in a standardized way so that you can understand what you've done 6 months from now. And you will be able to scale up by adding more nodes, when your data is 2x or 4x as big down the line. On average, the customers we work with are seeing their data grow by 2x every year.

I feel like people on Hacker News often don't have a clear picture of how people interact with Hadoop. Writing MapReduce jobs is very 2008. Nowadays, more than half of our users write SQL that gets processed by an execution engine such as Hive or Impala. Most users are not developers, they're analysts. If you have needs that go beyond SQL, you would use something like Spark, which has a great and very concise API based on functional programming. Reading about how clunky MR jobs are feels to me like reading an article about how hard it is to make boot and root floppy disks for Linux. Nobody's done that in years.

uxcn 14 hours ago 0 replies      
This kind of approach can probably scale out pretty far before actually needing to resort to true distributed processing. Compression, simple transforms, R, etc... You can probably get away with even more by just using a networked filesystem and inotify.
liotier 17 hours ago 1 reply      
'xargs -n' elicits fond memories of spawning large jobs on my OpenMosix cluster! I miss OpenMosix.
weitzj 12 hours ago 0 replies      
There is also an interesting and fun talk by John Graham-Cumming of CloudFlare: http://www.youtube.com/watch?v=woCg2zaIVzQ — using Go instead of xargs. Kind of fits into "using the right tool for the job". There is no Big Data involved, but it shows a sweet spot where it might make sense (be easier) not to use a shell script (i.e. retries, network failure).
dundun 12 hours ago 0 replies      
What is missed in the article and many of these comments is that Hadoop isn't always going to be the best tool for one job. It shines in its multitenancy: when many users are running many jobs, each developed in their favorite framework or language (bash/awk pipeline? No problem), over datasets bigger than single machines can handle.

It also comes in handy when your dataset grows dramatically in size.

zobzu 16 hours ago 2 replies      
Here's a probably unpopular opinion... Pipes make things a bit slow. A native pipeless program would be a good bit faster, including an ACID db. Note that doing this in Python and expecting it to beat grep won't work...

The other thing is that Hadoop and some others are slow on big data (peta or more) versus your own tools. They're necessary/used because of massive clustering (10x the hardware deployed easily beats building your own, financially).

I suspect it's a general lack of understanding of the way computers work (hardware, OS, i.e. system architecture) versus "why care, it works, and Python/Go/Java/etc. are easy for me; I don't need to know what happens under the hood".

nraynaud 5 hours ago 0 replies      
On a tangential note, sometimes I use slower methods for UI reasons: for example, avoiding blocking the UI, allowing the computation to be canceled, or displaying partial results during the computation (that last one might completely trash the cache).
raincom 15 hours ago 1 reply      
Hadoop is replacing many data warehousing DBs like Netezza, Teradata, and Exadata. In the process, many data warehousing developers have become Hadoop developers who write SQL code; after all, Hadoop got a SQL interface via Hive.

Informatica (another ETL tool) also provides another tool called powerexchange, which automatically generates MR code for hadoop.

Whenever you hear Hadoop, first ask yourself whether it is just data warehousing in disguise.

JensRantil 17 hours ago 1 reply      
Reminds me of "filemap" - a commandline-like map/reduce tool: https://github.com/mfisk/filemap
dkarapetyan 17 hours ago 0 replies      
Oh yeah, and it turns out that when all is said and done, the average data set for most Hadoop jobs is no more than 20GB, which can again fit comfortably on a modern desktop machine.
ronreiter 14 hours ago 0 replies      
Hadoop is highly inefficient when using the default MapReduce configuration. And a single MacBook Pro is much more powerful than 7 c1.medium instances.

Bottom line - run the same thing over Apache Tez with a cluster that has the same computational resources as your laptop, and I'm pretty sure you'll see the same results.

greenyoda 18 hours ago 4 replies      
> Shell commands are great for data processing pipelines because you get parallelism for free. For proof, try a simple example in your terminal.

    sleep 3 | echo "Hello world."
That doesn't really prove anything about data processing pipelines, since echo "Hello world." doesn't need to wait for any input from the other process; it can run as soon as the process is forked.

    cat *.pgn | grep "Result" | sort | uniq -c
Does this have any advantage over the more straightforward version below?

    grep -h "Result" *.pgn | sort | uniq -c
Either the cat process or the grep process is going to be waiting for disk I/Os to complete before any of the later processes have data to work on, so splitting it into two processes doesn't seem to buy you any additional concurrency. You would, however, be spending extra time in the kernel to execute the read() and write() system calls to do the interprocess communication on the pipe between cat and grep.

Also, the parallelism of a data processing pipeline is going to be constrained by the speed of the slowest process in it: all the processes after it are going to be idle while waiting for the slow process to produce output, and all the processes before it are going to be idle once the slow process has filled its pipe's input buffers. So if one of the processes in the pipeline takes 100 times as long as the other three, Amdahl's Law[1] suggests that you won't get a big win from breaking it up into multiple processes.

[1] https://en.wikipedia.org/wiki/Amdahl%27s_law

Edit: As someone pointed out, my example needed "grep -h". Fixed.
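The equivalence of the two pipelines is easy to check on throwaway data (the files below are generated on the spot; the article's *.pgn set is the real case):

```shell
# Both pipelines produce identical counts; grep -h just drops the extra
# cat process and the pipe traffic between cat and grep
mkdir -p /tmp/pgndemo
printf '[Result "1-0"]\n[Result "1-0"]\n' > /tmp/pgndemo/a.pgn
printf '[Result "0-1"]\n' > /tmp/pgndemo/b.pgn
cat /tmp/pgndemo/*.pgn | grep "Result" | sort | uniq -c
grep -h "Result" /tmp/pgndemo/*.pgn | sort | uniq -c   # same output, one process fewer
```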

robbles 14 hours ago 0 replies      
Is there a cached version of the original article that's referenced in this anywhere? Site appears to be down.
davecheney 8 hours ago 0 replies      
1.75gb is not big data. It's not even small data.
haddr 16 hours ago 1 reply      
Great article! PS: probably some hardcore Unix guy would tell you that you are abusing cat. The first cat can be avoided, and you might gain even better performance. Also, using GNU grep seems to be faster.
vander_elst 17 hours ago 0 replies      
Until ~10GB you'd better keep going with single machines; you'll see some improvements with bigger sets (> 100GB).
exabrial 17 hours ago 2 replies      
tl;dr: You do not have a big data problem.
skynetv2 16 hours ago 0 replies      
It's a sensational headline... the reality is someone applied the wrong tool and got bad results.
dschiptsov 16 hours ago 1 reply      
Anyone with a basic knowledge of CS could realize that Hadoop is a waste here.

Unfortunately, it isn't about efficiency at all. It's just memeization. Big Data? Hadoop! Runs everywhere. Same BS as the "Web scale? MongoDB!" meme.

smegel 16 hours ago 2 replies      
What about if you are processing 100 Petabytes? And you are comparing to a 1000-node Hadoop cluster with each node running 64 cores and 1TB of main memory?
wallflower 17 hours ago 1 reply      
Awk and Sed aren't very accessible to most people who did not grow up learning those tools.

The whole point of tools built on top of Hadoop (Hive/Pig/HBase) is to make large-scale data processing more accessible (by hiding the map-reduce as much as possible). Not everyone will want to write a Java map-reduce in Hadoop. However, many can write a HiveQL statement or Pig textual script. Amazon Redshift takes it even further: it is a Postgres-compatible database, meaning you can connect your Crystal Reports/Tableau data analysis tool to it and treat it like a traditional SQL database.

Hacked. A Short Story
points by skazka16  11 hours ago   26 comments top 6
derFunk 3 hours ago 3 replies      
What's really important is that you should never keep a compromised system like this running, even if you think you found all the modifications the attacker made. You probably didn't. So save your configs and set this machine up from scratch.
jwildeboer 39 minutes ago 0 replies      
AFAICS the system wasn't updated for a year? Well, that's just plain stupid.
ludwigvan 2 hours ago 2 replies      
He did not ask for money? Why on Earth? Dear fellow developers, know your importance and always ask to be paid for the work you have done.
ronnier 1 hour ago 1 reply      
Off-topic: blog headers are becoming larger by the year, it seems. Have there been any studies on what such large headers do to readership?
olalonde 4 hours ago 3 replies      
Stories like this are what makes me believe immutable infrastructure is the future.
Romkinson 3 hours ago 1 reply      
Read it at Habrahabr months ago. Great story.
After Gmail blocked in China, Microsoft's Outlook hacked, says GreatFire
points by ForFreedom  1 hour ago   3 comments top 3
DangerousPie 6 minutes ago 0 replies      
The title makes it sound like Outlook.com itself was actually hacked, but according to the article they just MITM the connection with a bogus SSL certificate.
chvid 23 minutes ago 0 replies      
I've been using Outlook since gmail IMAP was blocked here in China, and it is working just fine. As far as I can see, the block on gmail IMAP is now gone again. It is still a lot slower than Outlook, though.
mtmail 1 hour ago 0 replies      
"The following screenshot shows what happens when a Chinese user accesses Outlook via an email client"


How we Failed at OpenStack
points by jsnell  5 hours ago   16 comments top 6
jpgvm 3 hours ago 1 reply      
Doesn't surprise me in the slightest, to be honest. Having worked on a customized fork of OpenStack that used a pure L3 networking model, I know that you are set for pain the moment you don't want to run everything on a single Ethernet segment.

It doesn't help that the Neutron data model at the time I was working on it (say, 12 months ago or so) was terrible and basically impossible to scale or make performant.

Inevitably you were then stuck with the deprecated and janky nova-network interface, which, while efficient and fast, was also old and missing tons of stuff, meaning more monkey-patching and janking around. Not to mention that, because of its deprecation, many completely ridiculous bugs befell it in later releases (Grizzly onwards, basically).

TBH I am so disillusioned with the project that I hope I don't have to work in or around it again.

marktangotango 3 hours ago 3 replies      
Sounds like these guys are doubling down on the IaaS model: "premium bare metal"? Certainly there are a lot of people who'd like to run on bare metal with a more configurable network, but how realistic is it at this time?

>>You see, physical switch operating systems leave a lot to be desired in terms of supporting modern automation and API interaction (Juniper's forthcoming 14.2 JUNOS updates offer some refreshing REST APIs!).

This. Network hardware vendors have no incentive to make their devices more easily automated, and in fact have a disincentive to do so.

Does anyone remember the excitement and promise around Google App Engine when it was first announced, before they changed the pricing model to per-instance? The ability to put your app on the cloud, scale up within the free tier, then out of the free tier onto a paid plan if that's what you needed.

That model entirely disappeared. I miss it. Is anyone doing that now?

bhaisaab 3 hours ago 0 replies      
Not sure if people know about Apache CloudStack; it has all those IaaS features and it just works with various basic-to-advanced networking models.
Luyt 1 hour ago 1 reply      
The author writes:

"As we finalize our installation setup for CoreOS this next week (after plowing through Ubuntu, Debian and CentOS)"

Pity he doesn't elaborate on that. I understand that CoreOS is his choice, but it would be nice to know why the other distros aren't.

Rapzid 2 hours ago 0 replies      
Shouldn't take more than an evening for somebody experienced with hosting to pick up these red flags reading through the OpenStack documentation/source.

From what I recall the documentation left a ton to be desired. Just trying to figure out how Neutron and their "VPC" equivalent was supposed to be implemented left more questions than answers :|

kordless 4 hours ago 1 reply      
> Premium Bare Metal

Given that's the offering, it doesn't surprise me a bit they didn't go with OpenStack. That said, I guess they think running containers on bare metal is a better way to roll.

N.S.A. Tapped into North Korean Networks Before Sony Attack, Officials Say
points by sciurus  12 hours ago   116 comments top 14
shutupalready 56 minutes ago 0 replies      
> General Clapper praised the food; his hosts later presented him with a bill for his share of the meal.

Not only are they evil, but they're cheap too.

But the fact is that the hosts would have billed for the meal because the U.S. government asked to be billed.

The USG requires that officials traveling on business not accept gratuities, gifts, dinners, or anything above a certain value (which is about US$100 -- it gets adjusted for inflation, so it might be higher today).[1]

There is an exemption to allow acceptance of gifts of travel expenses of more than $100 when officials travel outside the United States on business, but only if "such acceptance is appropriate, consistent with the interests of the United States, and permitted by the employing agency".[1]

In this case, General Clapper and his staff probably didn't want to deal with the question of whether it was "appropriate" or deal with reporting requirements, so they just asked for the bill. Or, their North Korean hosts, knowing U.S. policy, were proactive in making up a bill.

Either way, the NYT article should have mentioned the USG policy. If they can't get that little thing right, it makes me wonder about the accuracy of the rest of the article.

[1] http://www.gpo.gov/fdsys/pkg/USCODE-2010-title5/html/USCODE-...

chroma 10 hours ago 11 replies      
> Mr. Jang said that as time went on, the North began diverting high school students with the best math skills into a handful of top universities, including a military school specializing in computer-based warfare called Mirim University, which he attended as a young army officer.

I realize I'm not engaging the core topic being discussed, but stories like this are why I'm surprised people like Will Scott haven't gotten in trouble. (I don't want to single him out, but he's the best example I have at hand.) For the past two years, he's gone to North Korea to volunteer teaching computer science.[1][2] At best, his students' skills will be wasted on some silly Android apps praising the supreme leader. More likely, these students will go on to make software for less-than-ethical purposes: wargame simulation, nuclear explosion modeling, missile guidance systems, or network/server subversion.

I'm not saying this software shouldn't exist, just that the world would be better off if the DPRK had more difficulty writing it. And I'm surprised the State Department hasn't fined, or revoked the passport of, any American who has aided the DPRK in this manner.

1. https://news.ycombinator.com/item?id=8869265

2. https://news.ycombinator.com/item?id=6829558

sandworm 7 hours ago 2 replies      
Anyone else notice this:

"We realized there was another actor [South Korea] that was also going against them [North Korea] and having great success because of a 0 day they wrote. We got the 0 day out of passive and were able to re-purpose it. Big win."

NSA learned of a 0-day exploit being used by South Korea (not Five Eyes) and re-purposed it. They had knowledge of an exploit in the wild. Did they share this with anyone in order to close the security flaw? No - they exploited it. This is not a case of the NSA developing an exploit in-house; they took it from the wild. This would seem to confirm suspicions that the NSA is/was willing to let active 0-days fester, leaving the general public exposed.

finid 12 minutes ago 0 replies      
So we hacked them first, and now we've imposed sanctions on them for hacking one of our companies.


xnull1guest 11 hours ago 2 replies      
We know that the NSA tapped into computer systems and the backbone of essentially every country on Earth - I don't see how NK would have somehow been excluded.

What's interesting is what information the New York Times includes that is not covered in the NSA document, presumably from unidentified officials and former officials.

The document on Der Spiegel speaks primarily about taking copies of intelligence from SK hacking efforts against NK, and also taking copies of intelligence from NK hacking efforts that had in turn been hacked by SK (and in turn by the NSA - "fifth party collection").

The document mentions the NSA's unwillingness to rely on intelligence filtered through so many third parties, and its efforts to establish its own foothold.

Essentially none of the article is backed by the document as a first source; it must have come from the unnamed sources.

GabrielF00 10 hours ago 2 replies      
This is the second NYTimes article I've seen that has suggested that the NSA was collecting information on a group while that group was planning an attack, but that the collection or the analysis was not sufficient to stop the attack. (The other article was on the Mumbai terrorist attack).

This is interesting and you could look at it a number of different ways:

- Collecting data is one thing, but understanding what it means is incredibly challenging and the NSA might not be doing a great job.

- Even when they can't prevent an attack, there is still value in having this data so that they can attribute the attack and understand something about the motives and methods of the attackers.

lucb1e 11 hours ago 1 reply      
Might be me, but I'd be surprised if they hadn't. They hacked so many countries including China[1], Mexico[1], Belgium[1], Syria[3], Iran[4], etc. (after saying that a digital attack is an act of war[2]). I don't remember each and every leak and I don't feel like looking up everything, but they seem to have targeted loads of people in various countries. I doubt North Korea (which is not even an ally) is the exception.

[1] https://en.wikipedia.org/wiki/Tailored_Access_Operations#Kno...

[2] https://en.wikipedia.org/wiki/Cyberwarfare_in_the_United_Sta...

[3] http://www.theverge.com/2014/8/13/5998237/nsa-responsible-fo...

[4] Stuxnet http://rt.com/news/snowden-nsa-interview-surveillance-831/

Estragon 11 hours ago 3 replies      
Typical for the NYT to bury the strong countervailing evidence against the official war-mongering story in a couple of paragraphs 2/3rds of the way through the article.

  Still, the sophistication of the Sony hack was such that many experts
  say they are skeptical that North Korea was the culprit, or the lone
  culprit. They have suggested it was an insider, a disgruntled Sony
  ex-employee or an outside group cleverly mimicking North Korean
  hackers. Many remain unconvinced by the efforts of the F.B.I. director,
  James B. Comey, to answer critics by disclosing some of the American
  evidence.

  ... it would not be that difficult for hackers who wanted to appear to
  be North Korean to fake their whereabouts.

snissn 9 hours ago 0 replies      
Gulf of Tonkin. Iraq having WMDs. It's important to hold governments to a very high standard in matters like these.
phkahler 11 hours ago 1 reply      
If that's true, who's to say our guys didn't launch the attack from their computers? Why would they even admit to being in there? The NSA doesn't say anything unless 1) they have to, or 2) they want to. I don't see why they would make this claim.
Eye_of_Mordor 2 hours ago 0 replies      
Does this mean the NSA hacked Sony (from NK)? Would explain both the 'Sony internal' nature of the attack and the FBI's assertion that this was 'from North Korea'.
timmytokyo 11 hours ago 2 replies      
According to the article, NSA noticed the first spear-phishing attacks against Sony in September. Yet they didn't realize admin credentials had been stolen until much later. Nor did they seem to notice terabytes of data being exfiltrated out of Sony. Fishy story.
grecy 8 hours ago 1 reply      
In 5 years' time, when this tit-for-tat results in some massive disruption in the US (a power outage or something), people are going to be severely angry and say NK attacked them for no reason, etc. (i.e. 9/11).

The US yet again going around the world making enemies, and giving them perfectly valid reasons to retaliate.

astkaasa 9 hours ago 0 replies      
I've read enough comments here along the lines of "We already knew that blah blah blah..." and "What's interesting is that blah blah blah...". It seems you guys get used to the reality so fast that the only thing you can do is dig into the details of this kind of news, avoiding from the start any discussion of whether this kind of thing is RIGHT or WRONG!

I'm planning to watch POI for the second time. May your god bless you, America, and may there be a real hero like Reese or Carter.

But we all know that most people are just as normal as Lionel; they don't have the courage to face the problem alone. So let's just wait for your bright future. LOL

Why Some Teams Are Smarter Than Others
points by HillRat  9 hours ago   36 comments top 11
lisa_henderson 1 hour ago 1 reply      
This part:

"Online and off, some teams consistently worked smarter than others. More surprisingly, the most important ingredients for a smart team remained constant regardless of its mode of interaction: members who communicated a lot, participated equally and possessed good emotion-reading skills."

can be read as a variation on John Boyd's OODA loop. Boyd made the point, repeatedly, that in war and sometimes in business, victory typically goes to whoever can iterate through ideas more quickly, as new information comes in. The winners are not necessarily smarter, they simply iterate faster based on the information they have. And the same seems to be true of the teams being described here.

simula67 2 hours ago 3 replies      
Finally, teams with more women outperformed teams with more men. Indeed, it appeared that it was not diversity (having equal numbers of men and women) that mattered for a team's intelligence, but simply having more women. This last effect, however, was partly explained by the fact that women, on average, were better at mindreading than men.

I wonder how this article would have been received if it had stated: 'Teams with more men performed better. This can be explained by the fact that men, on average, are better at solving certain problems than women'. I almost never hear about a scientific study where they discover that men are better than women at something.

stolio 4 hours ago 0 replies      
My guess is that what makes a group successful in the short run - like if you throw a group of strangers together and tell them to do tasks together in a monitored environment for 5 hours - may not be the same things that make a group successful in the long run.

The importance of emotional intelligence should decrease as roles are carved out and members define their niches, and the importance of individual ability should increase as the group optimizes itself.

armed10 5 hours ago 5 replies      
People overvalue (democratic) teamwork. I'd like to argue that a good leader with a team of followers is more effective than a team where everyone is equal. For example: the pyramids could not be built by one man, but wouldn't have existed if it weren't for central leadership.

Take Steve Jobs, it was his vision that made Apple successful.

Teams need skill, but they must also be undivided. Democracy in teams essentially divides the team; those opposed and forced to act according to the majority will not be cooperative. The best results are those of a single visionary, with or without a team of followers.

hnhg 5 hours ago 0 replies      
I might be wrong, but I had a look at the original paper's methods, and the experimental process seems to be based around producing many p-values and then trawling for significance. They don't seem to correct for multiple-testing outcomes though. This resembles what's been described as p-hacking (http://www.nature.com/news/scientific-method-statistical-err...)

I could well be missing something though, so please correct me if I'm off the mark.

Hermel 4 hours ago 2 replies      
The inherent flaw of teamwork is that responsibility is actually atomic. As soon as you assign the same responsibility to a group of two or more persons, things will start to go wrong sooner or later. If you look more closely at successful teams of the past, you will notice that they succeeded because they actually worked as a well-coordinated group of individuals, with a clear separation of responsibilities. Ringo Starr never played the guitar, and John Lennon kept his hands from the drums. If a team is unable to assign responsibilities to its individual members, you don't get a team, you get a committee. And it is no accident that "design by committee" has a negative connotation. What you get out of a committee is not the greatest idea a single member had. What you get out of a committee is the lowest common denominator. And if that denominator happens to be low enough, you won't get any usable results at all.
blueskin_ 4 hours ago 0 replies      
cLeEOGPw 6 hours ago 0 replies      
I wonder how they measured the individual skill of participants, because as far as I am aware, skill is what helps teams succeed most. And if everyone was of the same skill, it's not that surprising that those with other advantages, like better emotion reading, performed better.
doczoidberg 4 hours ago 0 replies      
Emotional intelligence is also intelligence; I.Q. tests don't capture that.

So smart teams are better than less smart ones. But it's not about the I.Q. score; it's about the whole intelligence (emotional and logical) of the team members.

dang 6 hours ago 0 replies      
gogeek 3 hours ago 4 replies      
I am surprised that they say that more women lead to better performance. There is no single female-dominated team that comes to my mind that achieved exceptional performance. All the great achievements and inventions of humanity, top-notch startups, product teams etc. - all consisting of few or no women. Am I missing something here?
I think I've solved the Haskell records problem
points by nikita-volkov  15 hours ago   65 comments top 8
stingraycharles 10 hours ago 3 replies      
For the uninitiated: this solves a major (major!!) pain in the Haskell community, the lack of namespacing. If you declare two records with an 'id' field in the same module, you get conflicts. People have been either using prefixing (fooId and barId) or splitting records across modules to work around this, both of which can be considered ugly hacks.

I am so glad the author has been able to pull this off; it is one of those "why didn't anybody think of this before, it is so obvious!" moments.
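The clash described above is easy to reproduce in a minimal sketch (my own illustration, not the article's code). GHC rejects this module because every record field also generates a top-level accessor function:

```haskell
data Foo = Foo { id :: Int }  -- generates accessor id :: Foo -> Int

-- Rejected ("Multiple declarations of 'id'"): this field would generate
-- a second top-level accessor id :: Bar -> Int in the same module.
data Bar = Bar { id :: Int }

-- The two common workarounds mentioned above, both arguably ugly:
data Foo' = Foo' { fooId :: Int }   -- prefix the field names, or
data Bar' = Bar' { barId :: Int }   -- put Foo/Bar in separate modules
```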

ximeng 2 hours ago 1 reply      
Nice article.

Could anyone comment on this:

"Haskell syntax parsing needs to be reimplemented in the record library"

That sounds like a big, unending task. How hard would it be for this to be implemented as an actual Haskell feature, without quasi-quoting etc.?

yummyfajitas 10 hours ago 3 replies      
It seems to me that most of these issues could be solved at the syntactic level simply by allowing recordval.fieldname syntax. I truly don't understand why Haskell doesn't allow this - anyone have some insight?
eru 9 hours ago 0 replies      
And I thought lenses were the canonical solution by now?

EDIT: Reading the article a second time enlightened me.

millstone 6 hours ago 3 replies      
What's a "quasi-quoter?"
slashnull 11 hours ago 1 reply      
String typing to the rescue!
pekk 10 hours ago 4 replies      
I think I have a radical idea: use a language which solves this problem by design from the beginning. Much simpler.
Why Kim Dotcom hasn't been extradited 3 years after the US smashed Megaupload
points by thursdayb  13 hours ago   10 comments top 4
corin_ 2 hours ago 2 replies      
When will the US DoJ hire a decent PR firm? Again and again during these three years they've made cock-ups that push people (like myself) towards supporting Dotcom, even if we don't particularly like the guy, and even if we think that on the actual charges he's likely in the wrong.

> In the forfeiture case, prosecutors will argue why Dotcom's claim on the frozen assets should not be allowed - and therefore forfeited to the US government - under the "doctrine of fugitive disentitlement." That idea posits that if a defendant has fled the country to evade prosecution, then he or she cannot make a claim to the assets that the government wants to seize under civil forfeiture.

Hooray, a loophole in the laws that might let them seize his assets: call him a fugitive! Even though he hasn't visited America a single time in his life. And even though he is still in the country he calls home, and had been for more than a year before this case started.

Maybe it's not the DoJ's fault; maybe Dotcom and his lawyers (and PR) are really, really good. But nearly every story in the past 3 years has made him look good and them look bad, which shouldn't be the case when you've got a government department charged with delivering justice up against a guy with a track record of being an asshole and breaking the law.

acqq 1 hour ago 1 reply      
This was insightful for me:

""Congress, initially as part of the War on Drugs but later expanded to include most federal offenses, criminalized almost every financial transaction that flows from funds that are the proceeds of specified unlawful activity," Bruce Maloy, an Atlanta-based attorney and an expert in US extradition law, told Ars by e-mail.

"In simplest terms, if you possess funds from a crime and do anything with the money other than bury it in the ground or hide it under the mattress, you have committed a new crime. Spending the money is a new crime, opening a bank account is a new crime. These expenditures do not have to be in furtherance of the original crime, but my recollection is that here it alleged that they are. In short, throwing in a money laundering allegation is quite common in US federal indictments."

Page Pate, another Atlanta-based defense lawyer who has also worked on international extradition cases, agreed. "It's almost automatic to add money laundering charges to any offense whether it's drug-related or not," he said. "I haven't seen it that often in criminal copyright cases. The US has been very aggressive in adding money laundering and forfeiture in criminal cases.""

Ain't it great, War on Drugs?

caractacus 4 hours ago 0 replies      
1. Because he had a lot of money to spend on lawyers

2. Because there are some fundamental issues - slowly getting cleared - on issues of search, procedure, etc

3. Because the legal system can take a bloody long time to achieve anything

brador 1 hour ago 0 replies      
Might be that thing where in exchange for working for them you get to keep your freedom.
Lightbot: A simple programming game
points by abraham_s  10 hours ago   17 comments top 15
wolf550e 3 hours ago 0 replies      
A long time ago I wrote a simplistic "genetic algorithm" and JIT for solving these puzzles as an exercise. The code is not good, but it did work.
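For anyone curious what a simplistic genetic algorithm for puzzles like this might look like, here's a toy Python sketch (entirely my own illustration, not the code mentioned above): it evolves a fixed-length instruction sequence against a stand-in fitness function, where a real solver would instead score programs by simulating them on the level's board.

```python
import random

# Lightbot-style instruction set (no procedures, to keep it tiny).
INSTRUCTIONS = ["forward", "left", "right", "light"]

def fitness(program, target):
    """Toy stand-in for running the program on a board: count how many
    leading instructions match a known solution."""
    score = 0
    for got, want in zip(program, target):
        if got != want:
            break
        score += 1
    return score

def mutate(program, rng):
    """Copy the program with one random instruction replaced."""
    out = list(program)
    out[rng.randrange(len(out))] = rng.choice(INSTRUCTIONS)
    return out

def evolve(target, pop_size=30, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(INSTRUCTIONS) for _ in target] for _ in range(pop_size)]
    best = max(pop, key=lambda p: fitness(p, target))
    for _ in range(generations):
        # Elitism: keep the best program and fill the rest of the
        # population with its mutants, so fitness never goes backwards.
        pop = [best] + [mutate(best, rng) for _ in range(pop_size - 1)]
        best = max(pop, key=lambda p: fitness(p, target))
        if fitness(best, target) == len(target):
            break
    return best
```

The JIT in the parent presumably existed to make the fitness evaluation (simulating thousands of candidate programs) fast.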


userbinator 32 minutes ago 0 replies      
Definitely much easier than this one: http://en.wikipedia.org/wiki/Robot_Odyssey
madlag 48 minutes ago 0 replies      
My daughter has been playing it since she was 7. It's really cool for teaching the basics of programming, and more generally for improving accuracy of spatial and temporal reasoning.
Orangeair 7 hours ago 0 replies      
I find this interesting for how it demonstrates recursion as a means of looping. Most simple code tutorials like this just follow the procedural style, and make you throw a set of instructions inside of a loop block.
anomie 4 hours ago 1 reply      
http://pleasingfungus.com/Manufactoria/ is another great game along the same lines
oflordal 6 hours ago 1 reply      
Try http://www.robozzle.com for a game in a similar vein with a bit more depth.
mjmahone17 6 hours ago 0 replies      
This game has a similar code mechanic to CargoBot: https://itunes.apple.com/us/app/cargo-bot/id519690804?mt=8

CargoBot has given me hours of having to really think about how to structure my "code" to get the recursive calls just right. I haven't gotten deep into Lightbot yet, but it should be a fun puzzle if later levels have similar types of recursive challenges.

NaNaN 7 hours ago 0 replies      
There is an old one from the same team: http://armorgames.com/play/2205/light-bot
glesica 2 hours ago 0 replies      
Was anyone able to actually play the game? It took me to an introduction, the last slide of which was "I'm finished with my hour of code" and I couldn't go any further.
tensorproduct 2 hours ago 0 replies      
I played this game some years ago before I learned anything about programming. Coming back to it with some understanding of recursion makes it a lot easier (though still tough in places).
akkartik 3 hours ago 0 replies      
Right after the first level the question in my mind was, "hmm, I wonder what the tile for conditionals looks like." That question was never answered. Anybody have any ideas on how to answer it?
gasping 5 hours ago 0 replies      
This was really good but there were a few levels that left a bit to be desired. Some of them were basically "find a way to mash this code into these functions without exceeding the instruction cap" while others encouraged quite elegant programming. The last one of the final stage was probably the best example, I thought it was quite elegant.
artgon 7 hours ago 0 replies      
This is a really clever way to help people get an idea of what programming is all about.

Great work!

kolev 6 hours ago 0 replies      
It's not that simple. My 6-year-old son is having a hard time - even with the Junior version on Android. Sometimes I have issues with the regular version as well; it's much easier to write code than to use the constructs.
otikik 2 hours ago 0 replies      
Auto playing music on load -> instant tab close.
Vega ready to launch spaceplane
points by lelf  2 hours ago   discuss
SPARTAN: Procedural Tile Generator
points by jsnell  17 hours ago   7 comments top 5
leafo 7 hours ago 2 replies      
Nice seeing itch.io here. I make itch.io, check it out if you're into indie games: http://itch.io
rcfox 3 hours ago 0 replies      
It would be awesome to see this turned into a library for dynamically generating aperiodic Wang tiles[0] for one "idea" (i.e. grass or cobblestone that doesn't appear to repeat).

[0] http://en.wikipedia.org/wiki/Wang_tile
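For the curious, the stochastic edge-matching idea behind non-repeating tile textures can be sketched in a few lines of Python (a toy illustration, not anything from SPARTAN): with a complete tile set, a valid tile always exists for every cell, and picking randomly among the matches breaks up visible repetition.

```python
import random

# A Wang tile is a 4-tuple of edge colors: (top, right, bottom, left).
# Using every combination of two colors gives a complete set, so a
# matching tile always exists for any (top, left) constraint.
COLORS = (0, 1)
TILES = [(t, r, b, l) for t in COLORS for r in COLORS
                      for b in COLORS for l in COLORS]

def tile_grid(width, height, seed=0):
    """Fill a grid left-to-right, top-to-bottom, choosing each tile at
    random among those whose top/left edges match the placed neighbors."""
    rng = random.Random(seed)
    grid = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            need_left = grid[y][x - 1][1] if x > 0 else None  # neighbor's right edge
            need_top = grid[y - 1][x][2] if y > 0 else None   # neighbor's bottom edge
            candidates = [t for t in TILES
                          if (need_left is None or t[3] == need_left)
                          and (need_top is None or t[0] == need_top)]
            grid[y][x] = rng.choice(candidates)
    return grid
```

An art pipeline would then map each abstract tile to a pre-rendered bitmap whose edges are drawn to match by color.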

Animats 6 hours ago 0 replies      
At last, a solution to a problem that's plagued the developers of 2D sidescrollers since 1980!

It would be more useful today to have something like Photoshop's content-aware fill: something that generates plausible tiles given a sample image. Check out ZenBG, which does that. http://mudcu.be/bg/

jwatte 9 hours ago 0 replies      
I want to see a comparison to other similar tools with a longer history:

- Filter Forge
- Substance
EpicDavi 9 hours ago 0 replies      
Would be nice to see a cross platform release in the future! :)
Psyop Mistakes?
points by cbd1984  9 hours ago   2 comments top
chris-at 1 hour ago 1 reply      
"However, the operation failed when the rubber stamp on those cards intended for the city of Hamburg misspelled Hansastadt as Hanfastadt. Apparently, German is difficult language."

Funnily enough, it's actually "Hansestadt".


Why not DANE in browsers
points by Nimi  2 hours ago   2 comments top 2
mike-cardwell 9 minutes ago 0 replies      
I agree with his comments on HPKP. I looked into adding HPKP headers to a couple of my sites and figured out how to do it, but I'm nervous about enabling it. It seems far too easy to make a mistake and lock people out of being able to visit your site. The trouble is, if you make a mistake, they're not just locked out until you get around to fixing it - they're locked out until the expiry date you set in the HPKP headers, which could be months away.

We lack the proper tools to make this safe.
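For context, the whole mechanism lives in one response header. A sketch of what deploying it looks like (the pin values here are placeholders, not real hashes; per RFC 7469 you must send at least two pins, one of which should be a backup key kept offline):

```http
Public-Key-Pins:
    pin-sha256="<base64 SHA-256 of the live key's SPKI>";
    pin-sha256="<base64 SHA-256 of an offline backup key>";
    max-age=604800; includeSubDomains
```

Starting with a short max-age like 604800 seconds (one week), or with the spec's report-only variant (Public-Key-Pins-Report-Only), at least limits how long a bad pin set can lock visitors out.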

jaset 1 hour ago 0 replies      
Salary negotiations for techies (2011)
points by pmoriarty  19 hours ago   149 comments top 33
No1 16 hours ago 8 replies      
The main tenet of this blog post is that you should argue for your compensation based on the amount of value you add to the company.

That's nice in theory, but the techies' dilemma is that it's often difficult or impossible to put a hard number on the value they have added.

How many customers were retained because you decreased response times by 100ms? How many customers were gained because of that slick UI you created? How many discounts were not handed out because of downtime that wasn't suffered because of your cautious error handling and exemplary testing? How much money was saved because of that documentation you made that helped the new hire ramp up faster?

Even when you can put a hard number on work you've done, like decreasing hosting costs by $15k per month, isn't that why you're paid so handsomely already? How are you going to do that again next year? (Why haven't you done it already?) Wasn't it a group effort?

The reality is, you're basically going to get paid based upon what your employer has deemed everyone else in your position is earning, plus or minus some % based upon experience level, your reputation, and how badly the company needs the position filled. If you don't like that, time to go into management or sales.

vproman 1 hour ago 2 replies      

Recruiters always ask this up front and INSIST that they must know. I have NEVER been denied the opportunity to interview for refusing to give a number upfront.

If you're applying at a company, it means you've done at least a little research on what they can be expected to pay, and you see somewhere around that range as acceptable. You don't have to tell them that you've researched their rates and find them acceptable, because that too would be like giving them a range; the research is simply to avoid wasting your time. You wouldn't want to interview for a job that pays at most $60k when you're already making at least $100k.

This way, you have an advantage: you know roughly how much they pay but they have almost no idea how much (i.e. how little) you will accept as compensation. Best case scenario, they offer you MORE than what your research said they would, and you negotiate a little more on top of it and accept, assuming you actually like the job. Even if they say no to your counter offer, you're still ahead. Worst case scenario, they offer you less, they say no to your counter offers, and you have to decline. Either your research was wrong or they were lowballing you, either way you've got multiple other interviews in process (right?) so move on. If you find your research is repeatedly off the mark, find better sources.

No matter what, don't give them a number. Make them give you a number first and negotiate from there.

NinjaTime 15 hours ago 7 replies      
Another B.S. article about negotiating IT salaries. If IT salaries had kept pace since the dot-bomb days, everyone would be making 150k starting.

Here is why we don't

IT has for the most part never been a money maker for a lot of companies; they see it as a cost center. IT is the "Oh boy, here comes the IT budget again" line item. Unless of course your business is making software for the masses - but then, I have been at 3 companies like that and IT was always first on the chopping block. You are paid only as much as it takes to replace you, unless you walk on water, in which case you should be at Google or Facebook.

The only way to get a 20 to 25k raise is to find another job - unless of course you are doing the job of 4 people, which in IT, 99.9% of the time, you are. Don't believe me? Go ask for a 20k raise; if their eyebrows shoot up like Mr. Spock's, then you know how much you are worth. I sure hope you have another job lined up, because that's the cue to them that you are looking.

This has been my experience and I have done exactly what I have stated above.

I now do security and compliance. No more wake-up calls, no more "you can't go on vacation, because October through January is the time of year when we make our money."

Oh and that 100K salary mark gets harder to justify every year, especially when the CFO looks at the books and has a list of who is making over 100K per year. Unless of course you live in California or New York - then that's welfare wages.

clairity 17 hours ago 1 reply      
this post can be summarized by two concepts that are typically taught in business schools:

1) do value based pricing not cost-plus for your labor (ask for what the market will bear, not your cost of living plus some small 'profit' on top of that)

2) have a batna (https://en.m.wikipedia.org/wiki/Best_alternative_to_a_negoti...), i.e., another offer so you have leverage in the negotiation

i've ignored these tenets exactly once, and that remains an unpleasant memory for me. every other time, it's resulted in increases of 10-40% in comp & benefits. of course, you should balance comp with qualitative advantages, but the point is not to just let yourself get screwed. =)

frodopwns 18 hours ago 3 replies      
Here is another similar guide. Easier said than done of course. http://www.kalzumeus.com/2012/01/23/salary-negotiation/
dustingetz 14 hours ago 2 replies      
here's an idea:

The market for developers is a lemons market.

That means, if you have 100 good cars selling at $2000, and 100 lemon cars selling at $1000, all the lemons will sell first, because the people making purchasing decisions aren't expert enough to tell the difference.

So it is very rare to be paid based on value, because why would a [CEO|CTO|hiring manager|whatever] pay $1M for this guy, when he can pay $150k for that guy, and he can't really tell the difference between them?

Just an idea, I haven't seen this discussed anywhere, what do you guys think?

mikerichards 15 hours ago 0 replies      
Well, there's all sorts of factors involved, but I've found that the more you know about the company, the better you are at negotiating. That sounds like a no-brainer, but that's probably just as important as market value.

For example, I was contracting at a company that had most of their software development teams working as contractors. About 6 months into contracting there, I learned that they had made a strategic decision to try to hire everybody instead of contracting, and the reason they did that was to save money and, more importantly, improve retention.

So they would ask me to convert, and I would pretty much blow them off for a couple months, and they would ask me again a couple months later. So after a few times of this routine, I knew I was in a better position to start the negotiations. Also, I had waited for several other contractors to convert and got some tidbits from them.

At the end of the day, I negotiated for much more than average salary in my market.

If you're contracting, it pays to be contracting with a staffing firm that knows the decision makers in the company.

zupa-hu 13 hours ago 0 replies      
In (my) theory your salary is between the company's best alternative (hiring someone else for X) and your best alternative (go work for other company for Y).

Companies do well because they interview many candidates. Developers generally do worse because they go to fewer interviews - obviously due to resource constraints.

If you are shit hot good, both X and Y go up. The black magic of negotiating is a rounding error compared to how great & unique you are.

galfarragem 1 hour ago 0 replies      
> Technology people are without a doubt the most inept group when it comes to negotiating for compensation.

I would rephrase it as:

People that love what they do are without a doubt the most inept group when it comes to negotiating for compensation.

I think it doesn't matter if you are in IT, an architect, or any other job. If you really like what you do, you don't mind doing it for minimal money, and business people will exploit that weakness. For example, in the architecture field, prominent architects are sometimes invited to be guest teachers in colleges. One once said during his class:

If they discover that I like to come here, they will stop paying me.

gaius 18 hours ago 4 replies      
Free food is such a no-brainer, if you're a manager. For 5 of pizza you can get 50-100 of work, easy, and the workers think they're getting the better deal...
heisenzombie 17 hours ago 2 replies      
Nice advice, but I feel like articles like this make marginal improvements to my comfort and skill at negotiation.

I think perhaps it boils down to unfamiliarity: I've done it a very small number of times, and it feels alien and weird. Especially some of the language used. Compare that with a manager who has hired $x employees a month for a decade, and doesn't think anything of it.

I wonder: Are there any videos available of negotiations (real or realistically faked) where one could build familiarity with what a successful negotiation looks and sounds like? I think that might be a useful addition to textual "how to" guides like this.

Bahamut 19 hours ago 1 reply      
There is one danger to taking stock that should be made more explicit - you weaken your future negotiation power with the company, since payment by stock already ties your hands some, especially if you haven't gotten past the 1 year cliff.
watty 16 hours ago 7 replies      
I'm about to ask for a raise for the first time but I'm not entirely sure how to handle it. I've been with this company for 7 years and only received 2-3% increases the past 3 years. I interviewed at a competitor and they offered me a 20% increase, which I turned down.

During negotiations should I simply ask for them to match? I don't want them to think I'm actively looking for a new job but want to get paid my worth.

siliconc0w 12 hours ago 0 replies      
my short version:

Every two to three years, start looking for another job - chances are you are being underpaid, and holding out for a meaningful raise or promotion is usually a bad idea. Few companies seem to keep salaries properly indexed to the market, at least in IT.

When you get an offer - always always counter it. No reasonable business is going to withdraw an offer if you counter. Figure out what you're worth and add 5-10%. If the first offer is already on the 'high end' of the spectrum still counter and ask for a signing bonus, options, better year end bonus, or more vacation. That will most likely be the most profitable email/phone-call you'll have - at least until you repeat the process in 2-3 years.

ryanmonroe 16 hours ago 1 reply      
Mandatum 15 hours ago 0 replies      
As an intermediate developer I've managed to quadruple my wage after going from full-time (40 hours) to contracting rates (30+ hours, 6-month term). I brought up the fact that my side projects are starting to gain traction, but I know the work I'm doing for the company is important and I'd like to finish what I started.

I also brought up that I've had a few job offers with highly reputable firms, and my current career growth had stagnated where I was currently positioned.

Basically; me me me.

atom-morgan 18 hours ago 0 replies      
This post is a condensed version of an already short book I recently read - Breaking the Time Barrier [1]. I recommend reading both.

Like the author of this post says, calculating the value you're adding with your work is the only true way to accurately price yourself. It may not be as easy for you as it is for someone who works in sales, but it's worth your time to do so.

Shortly after reading BtTB, I had a new contract opportunity come my way. I doubled my hourly rate.

[1] https://www.freshbooks.com/breaking-the-time-barrier

pmoriarty 14 hours ago 1 reply      
Does just receiving an annual performance bonus make your negotiating position stronger or weaker?

How long would you wait after the bonus to ask for a raise?

What if you don't want to stay at your company much longer? Should you still ask for a raise, and would you feel unethical for leaving soon after getting a raise?

z3t4 17 hours ago 2 replies      
Make sure you start high, then request a 5% raise every year! If you wait five years and request 25%, it's never gonna happen! If possible, try to get 2.5% every six months.
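For what it's worth, the compounding arithmetic behind this comment checks out (illustrative numbers; Ruby used as a quick calculator):

```ruby
# Compare small, frequent raises with one big ask after five years.
# Starting salary normalized to 100 (any currency).
start = 100.0

yearly_5pct  = start * 1.05**5    # 5% every year for five years
semiann_2_5  = start * 1.025**10  # 2.5% every six months for five years
single_25pct = start * 1.25       # one 25% request after five years

puts yearly_5pct.round(1)   # 127.6 - already beats the single 25% jump
puts semiann_2_5.round(1)   # 128.0 - semi-annual compounding does slightly better
puts single_25pct.round(1)  # 125.0
```

So the twice-a-year schedule edges out annual raises, and both beat waiting for one lump increase.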
jerguismi 6 hours ago 0 replies      
You have to take into account, that it is very difficult for the employer as well to evaluate the value of the techies to the company.
wallflower 17 hours ago 0 replies      
As long as you do not own what is being produced you will always be in a 'time for money' situation. You are trading your time for money (sometimes 5% more a year - or 15% more a year, if you strategically play chess with the job market and your skills niche).

I believe we are all much better off if we focus on building our own niche community/product (a.k.a. http://javascriptweekly.com - the number of subscribers is not as important as the quality of the subscribers).

azakai 15 hours ago 1 reply      
> Typically, in a mature company the salaries of the dev team are a rounding error on the total operation.

I don't think that's true at all. From small startups to large successful no-longer-startups, often dev salaries dominate spending, from what I usually hear.

Perhaps my experience is limited? Or perhaps the author means something specific by "mature company"?

montz1 14 hours ago 2 replies      
If you want to know what companies offer as base salaries, check this website I made that uses H1B wage data. The tech companies have a very wide range for negotiation at each level, and it's important to know that when you go into salary negotiations. Although this is H1B data, for top companies these are the same base salaries that US residents are making, and they are pretty damn high... Look at Google for example: http://salarytalk.org/search#%7B%22qcompanyName%22%3A%22goog...
zyxley 16 hours ago 1 reply      
I'm very happy to be with a company that has a standard policy of yearly 10-15% pay increases. We tend to do unexciting and sometimes just plain weird contract work for other companies, so my company has to pay us what we're worth to keep us around.
curiously 16 hours ago 2 replies      
A tech employer in Vancouver once told me that if an engineer tries to negotiate for a higher salary, that is a signal that they care only about money and that they wouldn't be a good hire, regardless of their experience and skillset. This is one of the "big" tech companies in Vancouver, BC. Never mind the hundreds of small sweatshops here.
mikekchar 7 hours ago 2 replies      
In something like 25 years in this industry I have seen this opinion time and time again. "I am not being paid what I'm worth". You can negotiate to higher levels of compensation, but there is more to life (and your job) than money. I have often seen people price themselves out of a job.

A normal high tech company runs an R&D budget that is less than 10% of earnings. The rest of the money goes to cost of sales, infrastructure (building, chairs, etc) and various other things. This means that on average your contribution needs to pull in at least 10x your cost in order for you to be seen as being worth it. There are also a lot of costs for hiring employees -- employee tax, insurance, benefits, etc, etc. So if you make $100K in salary, then your contribution has to bring in maybe $1.2 - $1.5 million.
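As a sketch of that back-of-envelope math (all numbers assumed, matching the figures in the comment):

```ruby
# Rough model: R&D gets <10% of earnings, and an employee costs more
# than their salary (employee tax, insurance, benefits - factor assumed).
salary          = 100_000.0
overhead_factor = 1.3    # assumed loading for tax/insurance/benefits
rd_share        = 0.10   # R&D budget as a share of earnings

fully_loaded  = salary * overhead_factor
earnings_need = fully_loaded / rd_share

puts fully_loaded.round   # 130000
puts earnings_need.round  # 1300000 - inside the $1.2-1.5M range cited
```

The exact multiplier depends on the assumed overhead and R&D share, but the order of magnitude is what the comment is pointing at.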

You may think "Oh those bastard sales people are making way too much and aren't providing any benefit", but you will find that the company will still budget less than 10% of earnings for R&D. Whether it is justified or not, quite a few of the people in the "management side" identify a lot better with sales and understand their value a lot more. Unless you think of a way to significantly increase earnings, then you are depleting the pool of cash for R&D when you ask for a raise.

"Not my problem," you think.

Except Jane down the corridor appears to be very nearly as productive as you are (whether it is true or not is completely beside the point because everything will be judged by its appearances). She makes something like 60% of what you make. She's a freaking bargain! You, on the other hand, bitch and moan that you can't make ends meet on $100K and that you are living out of garbage bins. Plus you see yourself as the saviour of the company and without you everything will just collapse. Managers think, "God, please don't make me talk to that guy again".

The order comes down from above -- either 1) Our competitors are kicking our ass and we need to downsize R&D OR 2) We need to ramp up explosively to hit the next big business wave, so we need more programmers!

How will we reduce our expenditures or hire more programmers with the same amount of money? Easy! We'll do away with those bitchy-moany prima donnas and hire more of the absolute bargains that never complain.

Here's a secret I've learned. Being seen as worth significantly more than you are paid means your boss always approaches you with a sense of gratitude. In fact, creating a sense of solidarity with management in this respect shows loyalty. While it is true that, in general, companies do not return such loyalty, individuals in management will tend to select a handful of people that they trust and "cannot do without". Those people will not be the guys that constantly threaten to leave for greener pastures, or that constantly complain that they aren't appreciated.

I have never negotiated salary. I have either taken what has been offered and then worked hard to become an integral member of the team or I have refused the job. I have left jobs that I didn't like, but I have never left to make more money. Nor have I ever threatened to do so. I probably get paid less than I might if I pushed hard, but I can tell you that I enjoy the privileges of being "that dependable guy" much more than any salary could provide.

beachstartup 16 hours ago 2 replies      
as a business owner/operator involved in everything from sales to hiring to vendor purchasing, here's the most important thing:

if you can't walk away from a negotiation, it's not really a negotiation. if you feel, at any point, compelled to stay even though your interests are not being met in a reasonable manner, you're going to get screwed - and it's your fault, not the other person's.

in a true negotiation both parties are attempting to find an optimum solution that solves for 2 sets of 'peer to peer' requirements. it's supposed to be a cooperative endeavor. if at any point it turns contentious, you back out immediately. the opportunity was never there. it was just an illusion. this is the hard thing for people to grasp.

unfortunately, for most people these are things you learn by doing. "not all that glitters is gold".

100timesthis 15 hours ago 0 replies      
Another angle: how much would the company lose if they had to recruit another person exactly like you? The headhunters get 20% of your salary the first year; then there's the cost in time and working hours to find a new employee and bring them up to speed, the risk that the new guy is not the right person, etc.
ddingus 14 hours ago 1 reply      
First, know things and people are worth what others will pay for them

As a floor, you need to fund a modest life, and that means retirement, healthcare, personal improvement, food, home, etc...

Add some small margin onto that for quality of life.

Second, get business-minded. It helps to take an interest in the company, and to know its financials and its goals. This helps you position your value in direct, meaningful terms.

I very strongly agree with being ready to walk when it comes time for salary discussions. If you can't walk, you will get the lowest comp, unless you have some relationship or other leverage to play on.

The bigger the increase, the more this matters.

The idea that you are as valuable as a replacement and training is false. This ignores you as a person, relationships you have, etc... this is often the framing, but do not be afraid to expand the discussion.

Set goals and justify them. This is something sales people do all the time. They are motivated by those goals and communicate them easily and consistently.

Ie: I need 200k this year to fund my travel and home investment plans.

A tech person may have identical goals, or maybe is wanting to build something, or send a kid to school.

Value these and frame your motivation to work in terms of your goals, company, love of the work, etc... Managers, and higher level people in the company understand and respect goal oriented people. Make sure your goals and the company align, or make basic sense and there are no conflicts.

This all speaks to the work being worth it, and it also speaks to reasonable expectations as opposed to just greed. Greed isn't bad, but clear, meaningful goals are easier to sell and for others to identify with. When others identify with your life purposes, they can value them and very easily see how you are inclined to stay and work for them. They also nicely dodge the "how much is enough?" type of questions.

Get your other offers and or secure relationships needed to know you can land on your feet should you need to make a change.

Be flexible. The company isn't seeing its goals play out all at once, and you won't either, but there should be a realistic path to get there.

All of this boils down to a "this is where I need to be and why it matters" conversation.

Shared and aligned goals are a great basis for a loyalty-type arrangement. People will work hard for others who take care of them, and seeing that play out is worth a lot.

Another advantage of goals is there sometimes is more than one way to get there besides cash. Nice to have options on the table.

If it goes well, great. If not, you have your fall back.

You may be able to contract too. Where a company really cannot pay you what you need to realize your goals, perhaps a more flexible arrangement can leave you empowered to do it your self.

This does not need to be a conflict of interest, particularly where you may have more skills not being used, or relationships where you can add value in atomic, easy to execute ways.

Another consideration is involvement with sales and marketing. If you can take some risk, you may find opportunities to capture nice rewards by being part of that process. This takes some people skills, but getting them is worth it.

Ask sales how valuable a tech person who can handle and understand the sales process is. They could be your most potent value advocates.

You help them close a big one, and it directly benefits them. You leaving will present an opportunity cost they will have zero problem justifying.

Of course there are spiffs and such potentially mixed up in this and it all depends on who you are. Taking some risk will differentiate you from other techs and that can be worth a lot.

The first time you walk it is hard. The first time you cultivate advocates is hard. The first time you take risk is hard, and the first time you get a nice increase is hard.

All worth it. Actually it all is as worth it as you think it is. And people count on those things being hard enough and not worth it enough to keep you inexpensive.

What is worth what? That is your primary question to answer. Sort your life goals out, value them, decide on risk and alternatives, and then proceed to have the dialog needed to get you there.

Once you start down this path, you do not stop. It becomes part of you and others will see that mindset and treat you accordingly.

geuis 14 hours ago 0 replies      
Very timely repost. Just had coffee with an old female colleague an hour ago about negotiating salary at a company she's joining. I remember reading this a few years ago and it's been helpful advice.
hnriot 11 hours ago 5 replies      
interesting to read all these from the engineer's perspective. from the manager's side of the table things are quite different. If an employee comes to me and asks for a raise, then I begin the process to replace them. We give reasonable raises and we pay fair market value; an engineer might make a little more elsewhere, but they'll be giving back their RSUs and the opportunity to work on really cool stuff. If, however, they want something else, then good luck to them - people are all different and it's a free country - but asking for a raise means they aren't happy, so they will leave anyway. either way, I begin looking.

when someone joins, unless they are at vp level they really don't have much negotiating opportunity, we make a decent offer and they either take it or leave it. we very seldom adjust the offer.

eoghan 18 hours ago 2 replies      
"If they can't pay you what you're worth, consider being paid in part in stock, but only if you truly believe in the product, the management, and the company as a whole."

Successful "techies" never work at companies they don't believe in.

It's quite telling that the author includes this "only if" clause.

Ask HN: My son wants to make side money programming
points by Vivtek  1 hour ago   25 comments top 21
underyx 1 hour ago 0 replies      
Hey, I work at a startup in Budapest (allmyles.com), I'll ask the team if we could teach him a thing or two by passing off some work. We mainly work with Python, and having him around (the office is next to Deák Ferenc tér) might prove fun for all of us. Remote works for us as well, of course.

Disclaimer: We're rather bootstrapped, so I can make no promises. (Jeez, we might not be able to afford a 15 year old programmer.) An even larger issue might be that I think it's illegal to give work to anyone under 16. I'll have to ask around and if you send me some contact info to bence.nagy@allmyles.com, I'll get back to you once we've got things figured out.

glxc 7 minutes ago 0 replies      
Learn programming fundamentals through TopCoder and HackerRank, which have programming puzzles and teach algorithm and data structure fundamentals

Then also get involved in an open-source project - just keep looking for interesting subjects until you find one you like. Programming is more about managing projects, and it will give you something to put on your resume and talk about, let you meet new people, and lead to jobs.

Everything you need is online. "Pick a job and become the person that does it" - Mad Men

nkuttler 27 minutes ago 0 replies      
It seems like almost everybody who replied is a web developer, and assumes your son wants to build websites. Let me suggest something different.

Figure out which part of programming he's interested in, why he likes it. Is he good in maths/physics, does he want to write games, websites, apps, etc.

If he's just interested in "programming" in general, get him a nice book on algorithms and suggest different languages he can use to implement them (something C-like, a scripting language, a functional one, a modern Lisp dialect).

Oh right, get him to use Linux or BSD, as that will give him access to a ton of free development tools. I also mention free because I don't think he should focus on making money at 15, but find something he enjoys learning. If necessary, you could provide incentives to learn this properly, instead of following possibly short-lived market trends.

Beltiras 41 minutes ago 0 replies      
Python. Fullstop.

Is he interested in games? Make him implement a roguelike, or study Panda3D, if that's where he leans.

Does he like to design UX? Use one of the wonderful web frameworks (my fav: Django, but also tornado, Flask, Pylons) and do some webdesign.

Useability applications for the Desktop? https://wiki.python.org/moin/GuiProgramming more frameworks than you can shake a stick at.

The best calling card you can have when looking for work is pointing to what you built. Rent a DO droplet and post the work there. Showcase cool stuff.

kamalisreddy 9 minutes ago 0 replies      
I suggest he start by creating profiles at odesk.com and freelancer.com. Initially he'll need to apply for smaller projects at significantly lower prices than the other bidders. He can build a good portfolio and then move on to bigger, better-paying clients.

He could start with web development initially. It's the fastest way to get going. Then move to other subjects like algorithms, machine learning, Big data.

Ask him to build some stuff which can solve some real world problems that he faces in every day life. If other people also face such problems, they will pay to use his tools.

Good Luck

toyg 1 hour ago 1 reply      
15 is late -- by mainstream reports, at that age you're either a billionaire already or you're never going to make it. </joking>

Back in my days, I'd have built websites for friends and some would turn into (badly) paid gigs. If he's got other interests (sports club etc) they could become engagements.

If really he wants to do websites, you'll have to start with JavaScript/jQuery, then graduate to server-side (node, PHP, Python/Django, Ruby/Rails, pick your poison).

Consider that the real dazzle these days is mobile, and it's not that much harder than making websites (or he can make mobile-optimized websites, which can be almost as cool). It might be easier to get engagements on that side at the moment, since most people/clubs/businesses already have a website but they likely don't have an app.

karterk 1 hour ago 0 replies      
I started along the same path 12 years ago, so I can share my experience.

* I started with PHP. My suggestion today is to start with Django (Rails is fine too, but Django has less magic, so things are a lot more explicit)

* Bootstrapping out of nowhere is difficult. He will have a lot to learn from the knowledge perspective. So, looking out for opportunities, negotiating payment etc. can be draining. This is where you can help so that he focuses on learning his craft first.

* It will be tempting to work for free. Advise him against that. He can work for a low fee, but getting paid puts the work in the right context.

* Stay away from portals like Elance and ODesk.

* Working as an intern at a company would really help him with the other meta aspects like planning, team collaboration etc. which are all important to pursue his long-term goals.

steedsofwar 24 minutes ago 0 replies      
If it's solely for the purpose of making money (as opposed to learning for the challenge/curiosity), then do some research: freelancer.com, odesk, etc. I'd check which technologies are in demand and go from there. It's also important to play to your strengths; for instance, I find web development very finicky and tend to excel at meatier things like transaction processing. hth
vayarajesh 41 minutes ago 0 replies      
If he is interested in web development I think Khan Academy is a good place to start (https://www.khanacademy.org/computing/computer-programming)

Great teaching lessons and lot of jobs around the world for the web

Thiz 46 minutes ago 0 replies      

The money is in mobile right now, and for the foreseeable future. Try Java for android too. In a couple of years he'll be making good money without leaving home.

Or anywhere in the world for that matter.

edtechdev 1 hour ago 0 replies      
There's contract work - like creating little websites or web apps for people. So, learn things like Wordpress, CSS, Javascript, image editing, to start.

If I were him, I would first ask around - family, local businesses, organizations, etc. to see if someone needs a website or internal database or something like that, and then learn what you need to learn to do it. Another option might be to find groups or small businesses or individuals that do coding and see if he can learn from them and help out.

What I wouldn't do is just try to learn programming for its own sake - such as taking a course or buying a book without any idea of how it might be useful. He'll just forget it and perhaps even decrease his interest in programming. Flip it around and find a project first, a reason to learn programming.

luch 1 hour ago 0 replies      
Generating money from programming is hard, really hard. I'm a professional developer (though a junior one) and I can't find ways to do it reliably.

If he's still interested in programming, here is my list of languages:

- Desktop : Python obviously. You can make little graphical interfaces, easy scripts, manipulate data in Excel or Word, even some remote automation since there are network libraries.

- Web : I would recommend against full-blown web frameworks like Django or Rails. Start small with simple static sites using HTML+CSS, then learn to build dynamic ones using PHP and Javascript.

I would also add that there are other ways to make money than programming: I know a 17 y.o. who rigged up a farm of Minecraft servers for his high school, and he's being paid by his classmates for the hosting.

chippy 55 minutes ago 0 replies      
How much does your son want to do programming? Does he do it for fun already?

What other interests does he have? Are there parts within these interests that could have relevance for programming? For example he might like bird watching, so perhaps a mobile app for birding might be good, or he might love platformer video games, etc etc.

In short, he should do what he is interested in, don't worry about the choice of technology - by far the most important thing is that there is some interest, passion and enthusiasm. The choice of technology is less important than the choice of what to do.

facepalm 1 hour ago 0 replies      
My personal opinion is that JavaScript is the most fun and the most versatile. For example if he is into making games, creating HTML5 games with JavaScript works very well.

Not sure what is best for generating money.

But for learning, starting with a personal project (like a game) tends to work well.

thejew 1 hour ago 0 replies      
I started at 15 (15 years ago). Although today I am a Python guy and generally a little anti-PHP, I think the ease of using PHP with web servers translates almost immediately for beginners. It will teach him to search for what else is out there. Plus, if he is going to be doing little church/synagogue websites, he'll start to grasp HTML/CSS/Javascript at the same time. Same way I started.
bobx11 1 hour ago 0 replies      
Maybe you guys should offer wordpress services for setting up websites. Then you would be:

1. working within a large piece of software (learning)

2. up and running fast and feeling good about making progress

3. able to use google to resolve more of your issues since stackoverflow and many blogs talk about the common beginner mistakes

4. have more customers up front (since money is the motivation) where many people are willing to pay for wordpress setup and management

davidw 1 hour ago 0 replies      
I'd recommend this book and the bootstrapping community in general:


He could probably do ok with Ruby on Rails, Django, Node.js or something like that.

jkot 34 minutes ago 0 replies      
Programming is a very competitive market. Python is probably a good choice for a first language. Together with a programming language, I would recommend learning some database, perhaps Cassandra, RethinkDB or SQLite.
willholloway 1 hour ago 1 reply      
A $450 Seiki 50" 4K display makes an excellent teaching tool. It's so large you can see code from a higher elevation, so to speak. Visualizing and "seeing" how a script flows becomes much easier. You can stand by the display and point to how a function call executes the code where the function is defined, and things like that.

This is the stack I would set up with, optimized for ease of use, elegance, and market demand:

Debian sid or Ubuntu

Tiling window manager


Python flask on the backend

Bootstrap on the front end.

A hacker that is comfortable with Linux and the command line, python, html, css and js can find work anywhere.

jacquesm 1 hour ago 1 reply      
KISS: Python, Ruby or PHP, Django, Rails or Yii.

If any of those take, the world will be his oyster.

gghootch 39 minutes ago 0 replies      
A lot of kids start with Wordpress customization or selling themes on ThemeForest.

Local MirageOS Development with Xen and Virtualbox
points by amirmc  51 minutes ago   discuss
Web scraping with Ruby
points by hecticjeff  4 hours ago   9 comments top 4
Doctor_Fegg 2 hours ago 1 reply      
I'd suggest going with mechanize from the off - not just, as the article says, "[when] the site you're scraping requires you to log in first, for those instances I recommend looking into mechanize".

Mechanize allows you to write clean, efficient scraper code without all the boilerplate. It's the nicest scraping solution I've yet encountered.

k__ 1 hour ago 2 replies      
Can anyone list some good resources about scraping, with gotchas etc.?
richardpetersen 2 hours ago 1 reply      
How do you get the script to save the json file?
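Not the article author, but a minimal way to do it with Ruby's standard library (the `products` array here is a hypothetical stand-in for whatever the scraper extracted):

```ruby
require 'json'

# Hypothetical scraped data - in the article this would come from Nokogiri.
products = [
  { name: 'Widget', price: 9.99 },
  { name: 'Gadget', price: 24.50 }
]

# Serialize to pretty-printed JSON and write it to disk.
File.write('products.json', JSON.pretty_generate(products))

# Round-trip check: read it back and parse it.
reloaded = JSON.parse(File.read('products.json'), symbolize_names: true)
puts reloaded.length  # 2
```

`JSON.generate` works too if you don't need the pretty indentation.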
mychaelangelo 2 hours ago 0 replies      
thanks for sharing this - great scraping intro for us newbies (I'm new to Ruby and RoR).
QBasic game programming
points by StylifyYourBlog  11 hours ago   24 comments top 12
petercooper 10 hours ago 1 reply      
It basically has zero value now, but perhaps my only claim to "fame" in the 90s was writing the first (at least popularly known) raycaster in QBasic which then spawned a ton of clones (the comp.lang.basic.misc community was strong then). It was absolutely hideous and naive but getting credited in far smarter people's larger creations as their inspiration was an eye opener for me :-) (And a feat I have not repeated, alas!)


husted 1 hour ago 0 replies      
I remember using BASIC during my demo coding days to find the optimum algorithms. When everything is running at 0.5fps it's easy to spot which implementation is the fastest. So after trying a few methods I would determine the fastest and implement it in asm.

I miss those days, that was fun.

klibertp 4 hours ago 2 replies      
IIRC the version bundled with DOS 6.22 was severely limited: it couldn't generate executables, and the built-in help covered only the IDE, with no help on the language at all. That's because MS was also selling a QBasic "pro" version. It was basically impossible to buy any software then in my country, so I was stuck with the normal QBasic for a few years, until I got an Internet connection and pirated a full version (it was still impossible to buy software here - no one bothered selling it at all until a bit later).

Still, bundling a quite capable IDE with an OS was a very nice practice, but it basically ended then, probably because MS also sold Visual Basic. Without QBasic I wouldn't have started programming at all - I wonder how it would be if I was starting right now. Is there any ubiquitous, simple language available basically everywhere and with a nice IDE by default? JavaScript almost fits the bill, but as a beginner who doesn't know any better you're likely to use Notepad, while with QBasic you had a real IDE from the beginning.

dhotson 7 hours ago 1 reply      
I managed to dig up one of my old 3d demo projects:

Source: https://gist.github.com/dhotson/686036fb771fbbca8c48

Screenshot: http://i.imgur.com/4fpaYtO.png

.. it's not as embarrassing as I expected. :)

kyriakos 7 hours ago 0 replies      
I'm wondering what impact the current state of programming will have on future programmers. Back when I was using QBasic there were very few distractions; as a self-taught programmer, if you wanted to build something you had to figure out how to do it yourself. No frameworks, no Google, no Stack Overflow noise, just a couple of libraries and information from a couple of books I had access to as a kid. I remember that on my first PC I knew the purpose of every single file on its 21MB hard disk; try doing that today.
DSMan195276 6 hours ago 1 reply      
Shout-out to QB64 - an attempt at a modern version/compiler for QBasic code, with updated features. I used to do a fair amount of programming in QB64, and while the design of the language hampers it as a general-purpose language, it's a fun language for learning and writing in.
neandrake 10 hours ago 1 reply      
In the late 90's there were several great QBasic communities (NeoZones was one I would frequent the most). I think these more than the fondness of the language/IDE were what kept me interested in learning software. Most of the communities seemed to die along with popularity of the language in the early 2000's.
aabajian 10 hours ago 1 reply      
This is fantastic. My first game was in GW-BASIC, but I quickly upgraded once I discovered QBasic on DOS 6.22. I also loved the MS-DOS Shell (http://en.wikipedia.org/wiki/DOS_Shell) found on the supplemental disk of v6.22. It felt like multitasking...but wasn't.
pjmlp 5 hours ago 2 replies      
Having started with ZX Spectrum BASIC (Timex 2068), followed by hexdumps, and moving into GW-BASIC/Turbo Basic and Assembly after getting my first PC, I never coded anything meaningful in QBasic.

For me, those were the days to start learning Turbo Pascal, and QBasic was used mainly to run nibbles and gorillas, when I got bored of a coding session.

Narishma 1 hour ago 0 replies      
Is it me or is he doing the input and rendering in the wrong order?
FrankenPC 9 hours ago 0 replies      
Great memories. It inspires me to locate an old DOS version of Turbo Pascal. TP was my absolute favorite IDE in the DOS days.
andrewguenther 6 hours ago 0 replies      
I learned QBasic from a Boy Scout magazine when I was in 4th grade. It was a letter jumble game and my dad helped me on our old Windows 3.1 machine. Never looked back.
Show HN: Adopted a dog, had no clue what meds she needed, did some research
points by youngj  22 hours ago   66 comments top 28
Amorymeltzer 20 hours ago 2 replies      
I just want to say that this is a beautiful example of the hacker ethos being put to use. The creator got into a new field, saw a hole in the available information, and put together a neat, useful resource. Just perfect.
r1ch 19 hours ago 1 reply      
I would recommend adding a warning for Hartz products - they are notoriously toxic and kill pets. http://www.hartzvictims.org/
k-mcgrady 1 hour ago 0 replies      
Interesting to read this thread to see that lots of people have to medicate their dogs. Is this a regional thing? I've had my dog 8 years and apart from the required injections it has never needed any medication.
dkhenry 16 hours ago 1 reply      
I don't want to be a downer, but I don't like this concept. I understand the desire to educate yourself about the medicine you need for your pet, but the best thing you can do is find a good vet that you trust to give you this advice. In fact, the mechanism whereby you allow the user to buy said medication from Amazon makes it seem like you're putting this list out as an alternative to proper medical care.

There is much more to taking care of a pet than reading the packages of some medications and then buying them from Amazon.

tallanvor 4 hours ago 0 replies      
If the place you adopted the dog from didn't give you information, you should talk to a vet about what parasites are common in your region and what treatment options there are. Many times vets will offer a couple of options for non-prescription treatment and prevention.

If you're in a region that doesn't have fleas and ticks, there's no reason to treat your dog for them. The same goes for many of the other parasites listed.

joelrunyon 2 hours ago 0 replies      
Do you have a template for this? I can think of 20 other areas where something like this would be useful. Nice work.
NiklasPersson 19 hours ago 0 replies      
Awesome. Problem found, solution created. See, you don't need to create ridiculously overfunded (excuse my language) bullshit products in order to "put a dent in the universe". Just try to provide a solution to a real problem. Thumbs up!
akshaykarthik 21 hours ago 5 replies      
Is there an algorithm that would find the minimum subset of these to cover every parasite/bug?
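Finding the true minimum is the classic set-cover problem, which is NP-hard in general, but the standard greedy heuristic stays within a ln(n) factor of optimal and is trivial to run over a table like this one. A sketch in Python, with made-up product/parasite data standing in for the site's real table:

```python
# Hypothetical coverage data for illustration; real products and
# parasites differ from these made-up names.
coverage = {
    "MedA": {"fleas", "ticks", "heartworm"},
    "MedB": {"fleas", "roundworm", "hookworm"},
    "MedC": {"ticks", "lice"},
    "MedD": {"heartworm", "roundworm", "hookworm", "whipworm"},
}

def greedy_set_cover(coverage):
    """Greedy approximation: repeatedly pick the product that covers
    the most still-uncovered parasites until everything is covered."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda m: len(coverage[m] & uncovered))
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

print(greedy_set_cover(coverage))
```

For a table of this size you could also brute-force every subset, since the number of products is small enough that exactness is affordable.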
clumsysmurf 17 hours ago 0 replies      
The NRDC has a listing of various products and safety concerns at:


I'm using NexGard (not listed; it was only approved in 2014) and it seems to be working fine so far, but since it's new there isn't much data on it. I preferred an oral over a topical so I didn't have to worry about topical applications transferring to the home, others, clothing, etc.

zumtar 20 hours ago 0 replies      
This is really great, thank you for doing this.

Could you please add Advantix, Scalibor, Kiltix and Foresto for us Europeans :)

We use most of the others here also, but those I have listed above are used alongside the ones you have listed.

madsushi 21 hours ago 4 replies      
Trifexis is awesome and what I use for my active dog (and we have never had flea problems). The problem is that the pill smells very, very strongly of mold. I have to smash up the pill and mix it into peanut butter for my dog to even look at it. I also administer it outside and use gloves, because it will have your house smelling of mold for days. With a smaller dog (thus smaller dose/pill), you could force it down, but a 30+ lb dog will have too big of a pill and won't eat it normally for any reason.
throwaway5752 18 hours ago 0 replies      
Heartgard and Frontline are usually all that anyone needs.
edmack 17 hours ago 0 replies      
Nice! It'd be helpful if one could check the column headers for the things you need to treat, and then have the list filtered down to what is effective against those :)
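The interaction described above is just a subset filter over product-to-parasite data. A minimal sketch, with hypothetical data in place of the site's real table:

```python
# Hypothetical product -> parasite coverage; the real table's data differs.
coverage = {
    "MedA": {"fleas", "ticks"},
    "MedB": {"fleas", "heartworm"},
    "MedC": {"ticks", "lice"},
}

def effective_against(coverage, needed):
    """Return the products that cover every parasite the user checked."""
    return [med for med, covers in coverage.items() if needed <= covers]

print(effective_against(coverage, {"fleas"}))           # ['MedA', 'MedB']
print(effective_against(coverage, {"fleas", "ticks"}))  # ['MedA']
```

The `needed <= covers` comparison is Python's subset test on sets, so a product qualifies only if it covers everything the user selected.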
nlh 18 hours ago 0 replies      
Very cool. Nerd question -- can you talk a bit about how you made it? Tech stack used, etc.? Always curious about that stuff, especially when on Show HN.
jakobegger 21 hours ago 1 reply      
Would be really awesome if there was some way to provide feedback on effectiveness. My experience with cat medicine showed that what it says on the package is not to be trusted (for example, all the "natural" stuff against fleas was completely useless).
JoshGlazebrook 21 hours ago 3 replies      
Interesting that nothing protects from everything. Is that just not possible or is that on purpose?
jat850 19 hours ago 0 replies      
In your research, did you explore other common immunizations like bordetella, parvo, rabies? (I realize these wouldn't fit into any overlapping things like you have here, I'm just curious if you might add it at some point.)
WolfieZero 18 hours ago 0 replies      
Awesome idea! Just a suggestion to the developer (if reading this), would be good to also have a list of what is okay to use on what breed/group (such as advocate for pastoral breeds).
kephra 21 hours ago 0 replies      
The main question is: in which areas/countries are the parasites resistant to those toxins? E.g. in Bremen neither Advantage nor Frontline works against fleas.

And I'm missing ARDAP by Shell in this list.

NDizzle 19 hours ago 7 replies      
While this is a good list, these medications are kind of snake oily.

I've had 5 dogs in my lifetime that have lived to age 15+. All I do is take them to the vet yearly and feed them good quality dry dog food.

I have lived everywhere from rural Arkansas to the East Bay in California, as far as exposure to various things goes. I had a half-wolf dog in AR that would routinely kill and eat armadillos, opossums, terrapins, and all kinds of small wildlife. He was probably exposed to all kinds of nasty things.

Never did I give him any of these dog preventative medications that seem to be so popular today.

fiatjaf 19 hours ago 1 reply      
Why not do something like that for humans?
GFK_of_xmaspast 21 hours ago 1 reply      
We just asked our vet.
SaulOfTheJungle 17 hours ago 0 replies      
Pretty neat! Care to share with us what type of tech you used in the backend?
fiatjaf 19 hours ago 0 replies      
I started clicking randomly, expecting it to change the Xs into Vs and vice-versa.
GigabyteCoin 20 hours ago 0 replies      
That dog will be alright. Great job you did there on the website.
seesomesense 13 hours ago 1 reply      
Would you do this to your child? See a vet.
marincounty 17 hours ago 0 replies      
A veterinarian with reasonable prices is greatly valued. So valuable that, with the right marketing, I feel people would leave their estates to your business; I would. That's all I really have to say to veterinarians, but I feel a lot of you need to ask for some business advice. I see a lot of people skipping trips to the vet because they just can't afford the high-priced boutique veterinary practice. I know people are inherently cheap, but I think most just want to be treated fairly. I also know what happens when vet hospitals offer too much for free. (The SFSPCA offered a "no kill" policy; people started to abandon their animals, and it ruined a great organization.)

I will pass this along: I've always had big dogs, a Bullmastiff and a mixed-breed American Bulldog/Pit. They look sturdy, but they are fragile. The purebred Bullmastiff was always at the veterinarian; she had multiple problems, from huge paws that attracted foxtails/grass seeds to entropion. I had a great income then, so going to the vet was no problem. I now have a low income, and thank goodness for the mixed breeds. They are still fragile, but don't need to go to the vet as often.

I still hear vets telling big-breed dog owners about the benefits of exercise. Yes, exercise the dog, but let them choose when and where. All my dogs were over 100 lbs, and when I exercised them too much their bodies fell apart. For the Bullmastiff, a walk around a small lake was too much on a summer day, even with a gallon of water. She just dropped halfway around; I sat with her until dusk, and then we just made it back to the car. My point is that they, especially the bulldog breeds, are fragile.
beachstartup 18 hours ago 0 replies      
this is cool. i adopted a shelter mutt about 2 years ago and i love him to death.

comment/feedback on the table: make the headers float with scroll, so that when you get to the bottom of the table, you can still see what the columns are.

Show HN: JSON-based standard for job posts
points by lukasm  1 hour ago   13 comments top 4
onion2k 53 minutes ago 1 reply      
"currency": "$"

That's quite ambiguous. A lot of countries use dollars. Why not use the ISO currency code standard[1]? Bonus feature - you could ask for payment in XAU (gold).

[1] http://en.wikipedia.org/wiki/ISO_4217

splitbrain 38 minutes ago 2 replies      
"20-01-2015" - seriously?
benbristow 20 minutes ago 1 reply      
Always relevant: http://xkcd.com/927/
jbob2000 48 minutes ago 1 reply      
Haha, HR can't use this!
How Amazon Tricks You into Thinking It Always Has the Lowest Prices
points by xmpir  23 hours ago   85 comments top 24
ecaron 20 hours ago 5 replies      
When I started working on https://trackif.com, I thought the premise was thin because prices couldn't fluctuate that much. I assumed everything gradually declined in price, and that it'd primarily be driven by store-A vs store-B price dropping.

Nope. Retailers are just gaming us 24/7. I've become very aware of all the different timeframes retailers offer post-purchase price-matches (published at http://blog.trackif.com/trackif-smart-shopping-guide-store-p... since I felt like I was hoarding knowledge.)

Have retailers always played games like this? Or it just a side-effect of sales moving online?

TheLoneWolfling 20 hours ago 2 replies      

(Amazon price tracking.) Very useful if/when you want to buy something and want to check historical prices. (You can also set email alerts when something drops to below a certain price.)

Edit: linkified. (Thanks, canvia!)

dominotw 20 hours ago 4 replies      
I buy from amazon for their predictable shipping and insanely awesome customer service.
GabrielF00 20 hours ago 2 replies      
They mention HDMI cables specifically. I just went into a Best Buy and asked for their cheapest HDMI cable. The salesman showed me one for $15. The Amazon basics cable is $5.49. If you've got Prime and you factor in the shipping costs of using another website, it's hard to beat Amazon's price.
peteretep 20 hours ago 2 replies      
I am willing to pay a significant premium to Amazon for the no-bullshit customer support. If my transaction doesn't delight me, I know they will make good on it.
Tarang 20 hours ago 0 replies      
It's not only Amazon that does this with loss-leader pricing; my local grocery store does it too, with milk and bread.

For me, what gives me the impression Amazon has the lowest prices is their nearly nonexistent profits. Whatever's up may not be the cheapest, but it's always difficult to find something cheaper elsewhere, even if it does exist.

DougWebb 18 hours ago 0 replies      
I'm sure Amazon is constantly adjusting their prices in order to maximize their sales and revenue; they even have some price automation tools as part of their inventory management system for people who sell their stuff through Amazon.

However I'm not sure these adjustments are meant to make people perceive that Amazon has the lowest prices. Instead it seems like they're meant to ensure that Amazon actually has the lowest prices on the most popular and high volume items. On those items they are pricing for high volume, while on the lower volume items they need a higher price to get an equivalent margin. That's what this looks like to me: maximizing margins across products with different sales volumes.

steven2012 9 hours ago 0 replies      
Surprisingly, most home improvement things are cheaper at Home Depot than on Amazon. I learned this the hard way. Also, Amazon routinely displays "original" prices that are much higher than other places', so that with the "discount" the price falls in the same range.
WizzleKake 21 hours ago 2 replies      
Amazon jacks the price around on a lot of the household items that I buy. There's one item that I last purchased for $11.94; I have seen it as high as $29 and some change. Right now it is $23.94.

I've wised up to this tactic and will buy extra when the price is low enough to make it a better deal than buying at the grocery store.

WalterBright 11 hours ago 1 reply      
This is all old, old news. Back in the 1970's, a friend of mine was shopping for a nice SLR camera. He knew which camera he wanted, and diligently researched ad after ad, finally settling on one with the cheapest price. We all piled into his car to go get it.

Sure enough, he bought the camera body dirt cheap. But he walked out of the store with a lens, filter, case, flash, film, and a few other accessories. Back home, he ruefully discovered that the total price he'd shelled out was higher! He didn't realize that the accessories were priced higher than the competition's. People simply are not price-sensitive to add-ons, and salesmen have known that for centuries.

Gillette is famous for pretty much giving away the razor and making money on the blades.

There's even a word for it: "loss leader".

All Amazon has done is automate it. Pretty much all retailers do it.

gdulli 18 hours ago 0 replies      
When I decided I didn't feel great about supporting Amazon any longer due to its reported treatment of its business partners, corporate employees, and warehouse employees, I started shopping around and was surprised to find it wasn't so hard to find deals just as good or better elsewhere.

Sometimes prices are just lower elsewhere, sometimes free shipping comes without a requirement to make a $35 order. (Or pay a high annual fee for free shipping that wouldn't amortize well for me.)

And sometimes Amazon still is the cheapest, but not by so much that it feels imperative to shop there if I have reasons not to.

zeeshanm 11 hours ago 1 reply      
I read on NPR a while ago that some guys found an arbitrage opportunity in book prices. One of them would track the most sought-after books, buy them when prices were low (usually around July/August), and then sell them back on Amazon when prices were high, around September and January. Makes sense.
WalterBright 11 hours ago 0 replies      
Another common sales technique is to have three models in a line: the stripper, the standard, and the deluxe. The stripper was barely functional, and its sole purpose was to have a cheap price to attract customers to the showroom. The deluxe had every silly feature the manufacturer could think of, like pinstriping on a dishwasher. It had a very high price. Its sole purpose was to 'frame' the price of the standard model and make it look like a bargain.

The standard model was the one the manufacturer expected to sell. Of course, the rare price-insensitive customer would buy the deluxe, and the salesman was happy to sell that and collect the large commission.

JoachimSchipper 21 hours ago 0 replies      
Pricing the most-seen items lower is not quite as nefarious as "trick" would suggest, IMHO - and part of it is probably just driven by various advantages of selling a lot of some particular product.
eurusd 4 hours ago 0 replies      
I also like the simple but effective Keepa (http://keepa.com) for comparing Amazon prices in different countries at the same time. In Europe, for example, home-theater amps are 50% cheaper in Germany than in France, while France is cheaper for something else and the UK is cheaper for tools and sometimes projectors (depending on FX rates).
kmfrk 20 hours ago 0 replies      
As has been said, this should be no surprise at all - especially if you've followed phenomena like the Harry Potter books, which got the same treatment.

Amazon underbids competitors on the short tail and makes it up on the long tail.

Amazon also has the benefit that nothing is technically an "upsell", since it's all horizontally in the same basket, so they can't be accused of selling you extra stuff the way other vendors might.

known 5 hours ago 0 replies      
I think Amazon is emulating http://en.wikipedia.org/wiki/S%26P_500
xenadu02 21 hours ago 1 reply      
This has been WalMart's strategy for decades so it shouldn't surprise anyone.
tmalsburg2 20 hours ago 0 replies      
> The startup wants to help Amazon competitors think about pricing in as sophisticated a way as Amazon does.

The catch is that if several big retailers apply the Amazon strategy, a self-reinforcing feedback loop will drive the prices for popular products to zero and the prices for less popular products to +inf. This will make popular products even more popular, which further strengthens the effect. The question that this startup has to answer is thus how they are going to keep the market from exploding and how they can benefit several clients at the same time.

kenjackson 19 hours ago 3 replies      
Lego pricing on Amazon is generally bad. Often much worse than what Lego sells the sets for. That is one area I'd love to see Amazon change.
DiabloD3 20 hours ago 2 replies      
I don't get why they use HDMI cables as their first example: we keep buying HDMI cables because they get busted, not because we need more.

AmazonBasics is currently the best cheap cable (replacing Monoprice's now that they aren't nearly as good as they used to be).

Also, why does the article call them HD cables? What's an HD cable? None of my ports say HD; they say HDMI.

And it doesn't even get into how Prime games S&H over the long term.

known 6 hours ago 0 replies      
You'll buy anything you think Amazon is losing money on.
milesf 10 hours ago 0 replies      
I use http://camelcamelcamel.com to track stuff I'm looking to buy. Here's a screenshot from my price tracking last year on a WD 6TB drive:


otterley 19 hours ago 0 replies      
The technical term is "loss leader." It's a venerable technique.
points by kingkilr  15 hours ago   53 comments top 15
ggreer 14 hours ago 3 replies      
The current ratings seem too simplistic and strict. I think a better rating system would be:

1. None. Not listening on https.

2. Bad. Invalid cert or broken cipher suites.

3. Ok. Valid cert and good cipher suites, but no redirection to https.

4. Good. Http redirects to https.

5. Great. Redirects to https and sets HSTS header.

6. Amazing. Included in browsers' HSTS preload lists.

It may make sense to change the criteria as sites improve, but that list seems sane today. I'd also recommend using letter grades (A+, A, B, C, D, F), but that might cause confusion with SSL Labs[1].

1. https://www.ssllabs.com/ssltest/
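The six levels proposed above reduce to a short decision chain once the underlying checks (cert validation, cipher audit, header inspection) have been run. A sketch, assuming those check results are supplied as booleans by some probing code elsewhere:

```python
def rate(https_listening, cert_valid, ciphers_good,
         redirects_to_https, hsts_header, hsts_preloaded):
    """Map the proposed six levels onto observed site properties.
    The checks themselves are assumed to be done elsewhere."""
    if not https_listening:
        return "None"
    if not (cert_valid and ciphers_good):
        return "Bad"
    if not redirects_to_https:
        return "Ok"
    if not hsts_header:
        return "Good"
    return "Amazing" if hsts_preloaded else "Great"

print(rate(True, True, True, True, True, False))  # Great
```

The ordering matters: each level assumes all the earlier conditions hold, exactly as in the numbered list.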

theVirginian 9 hours ago 0 replies      
I was looking forward to a smartwatch that somehow made use of https. Now I feel like an idiot.
kyhwana2 4 hours ago 1 reply      
christop 4 hours ago 0 replies      
I always find it slightly weird, when reading Snowden-related articles and looking at the NSA PDFs on Der Spiegel, that they don't use HTTPS (and even actively, permanently redirect to HTTP).
benguild 14 hours ago 1 reply      
I think this is a really good idea. I mean, today, to most people the measure of whether or not a site is secure is just whether or not the lock icon displays when they're browsing.

An actual public shaming of sites with bad security is probably all that's effective at this point.

rocky1138 13 hours ago 0 replies      
Is there a search engine which returns only results which themselves use https?
hughes 12 hours ago 0 replies      
I would love to see a list of financial institutions included. I checked www.bankofamerica.com and secure.bankofamerica.com on SSL Labs and found both to have identical (B-grade) security.
markbao 13 hours ago 0 replies      
I'm curious why this lists but few of the Alexa top 10, such as Google, Yahoo!, Facebook, Twitter, and others. The first two are mega-sites and only the root domain would count most likely, but social sites constitute a lot of communication. (Even better would be to say whether app connections are secure, such as knowing whether Snapchat connections are over TLS or not, though that's probably out of scope.)
BorisMelnik 8 hours ago 1 reply      
would also like to recommend my friend who runs a similar product (I have no affiliation):


slimetree 13 hours ago 5 replies      
For someone who doesn't get it, why do you need https on websites that just show you some text?
mnx 15 hours ago 0 replies      
>"If a verified TLS connection cannot be established or no page can be loaded over TLS, the site is given the Bad rating."

So, bad = none.

jMyles 13 hours ago 2 replies      
Is it protocol at this point to always redirect from HTTP to HTTPS? Is there an RFC for that?
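As far as I know the redirect itself is convention rather than an RFC requirement, but the closely related HSTS mechanism is specified in RFC 6797: the HTTP side issues a 301 to the HTTPS URL, and the HTTPS side sets a Strict-Transport-Security header so browsers skip the insecure hop on later visits. A sketch of the two pieces (the response shapes are illustrative, not tied to any particular server):

```python
def https_upgrade_response(host, path):
    """Build the 301 response a plain-HTTP listener would send.
    Per RFC 6797, the HSTS header itself must only be sent over HTTPS,
    so the insecure side just redirects."""
    return {
        "status": "301 Moved Permanently",
        "headers": {"Location": f"https://{host}{path}"},
    }

def hsts_header(max_age=31536000, include_subdomains=True):
    """Header the HTTPS side sets so browsers remember to use TLS."""
    value = f"max-age={max_age}"
    if include_subdomains:
        value += "; includeSubDomains"
    return {"Strict-Transport-Security": value}

print(https_upgrade_response("example.com", "/login"))
print(hsts_header())
```

A long `max-age` (a year here) is what the preload lists mentioned elsewhere in this thread generally require.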
IkmoIkmo 11 hours ago 0 replies      
Healthcare.gov being an example to the rest... go figure.
watchesfromch 13 hours ago 3 replies      
Forcing an HTTP to HTTPS redirect is really bad behaviour.
jspaetzel 14 hours ago 3 replies      
Shouldn't we be checking the pages that should actually be secure? E.g. Ubuntu is listed as bad - why not check their login page, https://login.launchpad.net/, instead of launchpad.net? Perhaps once https://letsencrypt.org/ becomes available it will be worth the extra effort to encrypt everything. In the interim it's most likely a waste of funds, especially for projects that operate on donations.

Edit: I was surprised to see the WSJ listed as Bad. Checking their login form (something that should be encrypted), the post goes to https://id.wsj.com, a secure page. I won't go through the entire list, but I expect most of the sites on it have a similar configuration.

This American Life: Batman
points by noobermin  19 hours ago   20 comments top 9
ridgeguy 10 hours ago 1 reply      
Decades ago in grad school, I was under my 1969 VW bus doing maintenance, with a set of combination wrenches at curbside. The ten-year-old blind kid who lived next door came out and wanted to know in detail what I was doing, so I talked him through getting ready to pull the engine to install a new clutch plate & throwout bearing.

I could see the nuts and bolts, and knew what wrench size I needed, but when I'd reach for it, I'd often pick the wrong one (13mm felt pretty much like 12mm, etc.). He asked me to say (only once) the wrench size each time I picked one off the curb. Thereafter, he told me what size I was about to select as soon as I touched it, making it scrape ever so slightly against the concrete. Saved me some time by telling me when I was about to pick up the wrong wrench.

I wasn't surprised, as I'd seen him on his bike in the neighborhood, doing the click echolocation described in This American Life. I was in a neurophysiology program at the time, and was totally impressed with his living demo of neuroplasticity and audio world mapping.

randlet 11 hours ago 1 reply      
"Running into a pole is a drag. But never being allowed to run into a pole is a disaster."

Wonderful quote.

teleclimber 15 hours ago 1 reply      
Sorry if this is off-topic with respect to the content of the episode, but does anybody know what kind of software NPR uses to create these transcripts? There is a very specific structure to the layout and I'm wondering what the data entry UI looks like, if it's a commercial package, or if they rolled their own. Thanks.
zarify 5 hours ago 0 replies      
As an educator the whole power of expectation thing probably was more interesting to me than things like the echolocation aspect. I mean yes, we know that students perform better when you expect more of them, but some of the aspects of how that translates into actual performance gains hadn't occurred to me before.
noobermin 13 hours ago 2 replies      
One of the things I was hoping to hear people comment on here is what Miller says towards the end: that perhaps Kish's aversion to physical closeness, perhaps "loving" in general, and a desire to be independent are mutually exclusive. While I personally would agree there is an argument that being a helicopter parent does not help build a sense of independence in someone, I'm really curious whether it's true that a desire for closeness to others in general is really the opposite of that.

I guess it makes some sense naively (independence or apartness vs. dependence or togetherness)? But the way I think about it is that a desire to be independent needs to be tempered with a desire to socialize and be close with others. Maybe they are orthonormal axes in a person's "personality space," but a healthy balance of both will give a person a good norm overall. This seems opposite to what they hint at toward the end: that these two qualities may not be orthogonal but, in fact, anti-parallel.

Perhaps Daniel's aversion helped him overcome the enormous odds against him: an entire culture that put that "blind" label on him and would have pulled him far into the more "dependent" side - I mean, that was the point of the whole rest of the episode. It was needed, given the extreme pressure he was under from this culture. Still, I hope that the many other people who do not have this extreme desire for independence need not be forced into it just so that the rest of us see them as equal. That certainly is not fair to them if they do not desire it.

I say all this because while I have a close love for a few close friends and family members, I usually like to be independent myself. However, I've learned as I've gotten older that sometimes you need to rely on others even when you think you won't, which isn't easy for me. A healthy balance seems better, as I've reluctantly accepted.

diminoten 14 hours ago 3 replies      
I am just a casual observer, but how did they narrow it down to the way the rats were touched? Couldn't it have just been the people "running" the experiments instead being more casual about their results? Maybe for a dumb rat you hit your stopwatch a second later or a second faster for a smart rat, for example.

How did they isolate the touch of the experimenters?

yitchelle 16 hours ago 1 reply      
Further details about batman at http://en.wikipedia.org/wiki/Daniel_Kish
mintplant 5 hours ago 1 reply      
For those who want a downloadable version, youtube-dl can pull it from SoundCloud:


niix 13 hours ago 0 replies      
I really enjoyed this episode (and pretty much everything This American Life does). The one part of the podcast that got me really thinking was how many blind people will naturally turn to clicking to help them navigate.
Hex Invaders
points by adamnemecek  11 hours ago   15 comments top 11
christop 11 minutes ago 0 replies      
For a moment I thought part of the tutorial was missing, but it's just that the "gameLegend.png" image shown at the end loads slowly due to being 1.8MB; it could be optimised down to 71kB.
kghose 9 hours ago 2 replies      
I kept saying "RGB, RGB" and got to level 4 before I got bored. I'm also color-blind, so I was thinking I was going to fail here, but it wasn't that bad, probably because I wasn't distinguishing shades and such.
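The game boils down to reading a hex triplet as three bytes and spotting the dominant channel, which is exactly the "RGB, RGB" trick. A small sketch of the decoding:

```python
def hex_to_rgb(code):
    """Split a '#RRGGBB' color into its red/green/blue byte values."""
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

def dominant_channel(code):
    """Compare the three bytes to name the channel that dominates."""
    r, g, b = hex_to_rgb(code)
    return "RGB"[max(range(3), key=[r, g, b].__getitem__)]

print(hex_to_rgb("#4080FF"))        # (64, 128, 255)
print(dominant_channel("#4080FF"))  # B
```

For close shades (the harder later levels) the dominant channel alone stops being enough and you have to compare the actual byte values.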
k_ 1 hour ago 0 replies      
I narrowly escaped death at level 9, and lost at level 10.

Could have completed level 10, maybe 11 with a little luck, without those lags (mostly due to all those things running on my computer atm, so basically it's mainly my fault) making the game ignore one click out of 3~4.

I'm a designer really used to RGB colors, btw. Nice & fun game, well polished!

tluyben2 1 hour ago 0 replies      
Hmmm. I was hoping for a game where you had to solve puzzles by writing assembler directly in hex (as many did in the 70s / 80s including me; I find z80 hex codes still easier to work with).
Lerc 9 hours ago 0 replies      
Arrgh, I'm not good under pressure. Got to level 7.

When I lost at level 8 it said "You have completed Level 7", which is technically true, but probably not what was intended.

jeorgun 9 hours ago 0 replies      
Very fun! Made it to level 8 before my colorblindness got the better of me.

It's unusably slow under Safari (fine under Chrome); any idea why?

useflyer 4 hours ago 0 replies      
I got to level 9; the difficulty escalates pretty quickly. I'm a designer, and I'm curious how the other members of my team will do.
jfarmer 8 hours ago 0 replies      
The title music is awesome. I could listen to it all on its own. The in-game music is tedious and I had to mute it after 5 seconds. :(
amadeusw 9 hours ago 0 replies      
I like it lots! Close shades of the same tint keep it challenging.
acomjean 10 hours ago 0 replies      
I like it. I'm awful at remembering hex colors, so this is fun and educational.
zirkonit 11 hours ago 1 reply      
A short and fun game =) Visually a bit busy, though.
Apple Software Quality Questions
points by ingve  14 hours ago   108 comments top 27
coldtea 9 hours ago 5 replies      
I'm considering staying with Yosemite and iOS 8, and haven't seen any significant breakage compared to previous versions.

If anything it's better than Mavericks. And Mail woes are 99% gone too.

Just to add another viewpoint, since only people with negative experiences tend to write.

Of course all software has bugs, but not everybody is bitten by all of them. Some are legitimate complaints. Others are from people who install every BS addon, haxie, etc. they find, have el-cheapo external peripherals, or blame third-party software issues on the OS maker.

(That said, I've had the "22 px sheet" bug, and it's the second point release already --I run the beta--, it should have been fixed by now, and I dislike how they abandoned Aperture).

20kleagues 10 hours ago 4 replies      
I bought my Mom an iPad over an android tablet telling her that everything will just work. After the iOS 8 update, I am shying away from answering her questions about bugs, and questioning my decision about the iPad. People who are non-tech proficient form the biggest consumer-base for Apple, and it is terrible that Apple is forgetting how it gained this loyal consumer-base in the first place - through reliable software which 'just works'. It only makes more business sense to go back to their original software quality even if it requires dumping regular releases, because they will start losing (probably already have) customers real soon if they don't.
rwbcxrz 10 hours ago 4 replies      
You don't even have to go into depth to find flaws in Yosemite like this one: http://apple.stackexchange.com/questions/152038/prevent-redr...

The flickering menubar icons drive me insane. It's proof that Apple is either unwilling or unable to commit enough resources to OS X to ensure that its quality is consistent with what users have come to expect.

rab_oof 10 hours ago 5 replies      
SJ used to be relentless about nitpicking to keep quality up. That's probably not happening consistently across all apps and platforms as much. Tim may need to appoint a Quality Czar who is detail-oriented, accepts no bull, and has "wrath of God" authority to make folks take them seriously.

Long-standing, time-wasting bugs I've noticed:

- Mdns broadcast disabling doesn't work.

- Swift playground in Xcode crashes regularly.

- Mobile Safari regularly crashes randomly on backspace in text areas.

- App Store installs corrupted apps but they don't show as corrupted until reboot, and then future downloads fail.

- Mail.app synchronously hangs the UI when processing new email notifications (probably not using a background queue).

coreyoconnor 9 hours ago 5 replies      
No question that Apple's quality has gone down. However, whose has gone up? Or better yet: who is actually building quality software systems?

Comparing only in the same problem domains as Apple: all of my Android devices have been rife with equivalently bad issues. Windows? Different quality issues, but just as bad. Google web systems? Same case. Better in some aspects, worse in others.

Perhaps I'm old and jaded. Still, it seems like we've reached a point in software development where building quality systems is not possible with existing methodologies. For some problems, while we are able to develop 90% solutions, the last 10% might as well be impossible.

The even more jaded part of me wonders: does it even matter?
jsz0 7 hours ago 2 replies      
Most of the articles I've read on Apple software quality seem like larger industry-wide issues to me.

For example, the iTunes issue mentioned in this article is a problem that every metadata/library-based media player has to deal with. If you let more than one app touch your audio files, then you're pretty much guaranteed to have problems. Different apps/services may not write or sort on the same tags. No one's fault exactly; it's just the way things are.

The example of dictionary/thesaurus lookup moving to a system-wide text service is an instance of a feature clearly being improved, but if the users aren't aware it changed, is that really an improvement? The entire industry sucks at user education. There's no good reason every major software developer shouldn't have hours and hours of free training/how-to videos available to help users cope with change.

As for GMail SMTP rejecting iWork file format attachments, that's the industry-wide problem of users being stuck between the best interests of various companies. Apple wants to change/improve the iWork format, but Google wants to protect users from files it can't scan. Again, no one is really at fault; it's just the way things are.
72deluxe 1 hour ago 1 reply      
Has anyone else found Safari under Yosemite and iOS 8 to be of "disappointing" quality? I know DNS is broken in Yosemite, but my wife and I find Safari to be extremely irritating - it just sits there at 20% of address bar progress after entering an address and pressing Enter.

EDIT: And another thing - Spotlight now takes a significant amount of time to get results. I notice a large difference between my personal i7 2012 MBP with Yosemite and the 2008 single-CPU (quad core) Xeon running Mavericks at work. Maybe it's the disk difference, but I sometimes wonder if Spotlight is doing anything as there is no search indication / activity indication.

vbezhenar 4 hours ago 0 replies      
I think that Apple is a victim of its decision to release a new OS version every year. Users expect a lot of changes from iOS n+1 or OS X m+1. You can't just fix all the bugs and release a new version. And constant feature improvement introduces new bugs, while deadline pressure won't allow them to release properly tested fixes for old bugs.

I believe that feature-wise OS X 10.10 and iOS 8 are quite nice. Apple really should adopt something like Intel's tick-tock strategy. Release iOS 8 with new features in 2014. Release iOS 8S with all bugs fixed in 2015. Release iOS 9 with new features in 2016, but allow customers to downgrade to iOS 8S if they want to, for at least a year. They'll have to support 2 iOS versions, but people will have a choice between new features and stability.

coldcode 11 hours ago 1 reply      
Jean-Louis was always pretty honest about stuff. The problem is Apple has too many things going on and only the big stuff gets attention. Take Xcode for example - please take it away before I shoot something.
planetjones 7 hours ago 1 reply      
I think Apple is facing the same problems a lot of other companies face - it's no longer green-field development, and the existing codebase means they're being weighed down by regressions. Maybe not enough automated tests. However, some bugs are basic errors really. Take the frequently visited icons in mobile Safari: they keep getting the wrong favicon - that's just shoddy programming and testing.
robbyt 9 hours ago 1 reply      
When I read reports of massive Yosemite bugs, I think that I am either lucky, or I'm just a really bad QA tester.

I've personally had very few problems on my 3 different Apple computers.

Flow 6 hours ago 2 replies      
I currently have two Macs. A late 2013 rMBP 13" which works flawlessly, and an iMac 27" with a 680MX (late 2012?).

The iMac has felt wonky, for lack of a better word. It doesn't hang or crash more than Mavericks did, but f.lux causes WindowServer to crash randomly, which logs all users out in a microsecond. I've reported this, of course.

Another thing that troubles me is the amount of rubbish logging done by the system. Have Console.app open for a while and see what nonsense it barfs out. How can it work at all with all those problems?

visarga 9 hours ago 0 replies      
I just rated Yosemite 1 star on the App Store, wiped everything, and installed Mavericks. It is so much better now. The last OS was just a shameless cloud infestation riddled with bugs.

I use Remote Screen for work, and EVERY time I disconnect, my MacBook Pro freezes completely for 1-3 minutes (no mouse, just a frozen desktop). Sometimes I need to hard reset it. Screen sharing used to work nicely in 10.9, 10.8, 10.7 and so on. Why was it necessary to mess with something good?

lelf 4 hours ago 1 reply      
If more developers knew what backward compatibility is, I'd be happily using Snow Leopard right now. The most frustrating part is there's nothing fundamentally new worth all the bugs and a constant envy for new hardware.
sudo-i 10 hours ago 0 replies      
I can't wait for the day when I can use WiFi and Bluetooth simultaneously without issues.

eg. https://discussions.apple.com/thread/4113552

b3tta 7 hours ago 1 reply      
IMHO currently the worst bugs are in discoveryd, which replaces mDNSResponder for Bonjour.

If you remove a service on OS X 10.10, its removal will be broadcast. But it doesn't stop there. No... most of the time the service will be published again after that, and after a second or so it will finally be removed. How the hell did this pass even the most basic QA checks?!

walterbell 6 hours ago 0 replies      
From a comment on the article, a roundup of writers on this topic, http://www.macobserver.com/tmo/article/mac-experts-weigh-in-...
lmg643 9 hours ago 0 replies      
I had an issue with Mac Mail and after the upgrade it just stopped synching my exchange files correctly and that was the end of it. Called apple support, very friendly guy, talked me through it, we tried a few fixes, there was a workaround but it wasn't the same again. Moved over to MS Outlook and haven't gone back. Kind of a shame.
blazespin 7 hours ago 0 replies      
Apple and Android are in a sack race to the goal line of Billions of dollars in profit. They're both stumbling a lot, but I think it's pretty clear that Apple is winning.
webwielder 10 hours ago 4 replies      
I continue to be flabbergasted that so many otherwise savvy observers believe that a random assortment of software annoyances constitutes a crisis at Apple. Articles like this could have been, and were, written at any time in the past fifteen years.
pbreit 9 hours ago 0 replies      
Classic JLG going off on apps when all anyone cares about are MacOS, iOS and iCloud.
programminggeek 8 hours ago 0 replies      
I don't think there is as much of a fear of someone important getting angry like Steve Jobs used to. Without that fear, they are more likely to ship bugs.
mrmondo 10 hours ago 2 replies      
Quite honestly Yosemite is the most buggy Apple product I've ever used - it has made working with OSX a chore. Apple has not fixed one of the bug reports I filed during the beta phase (which I'm not convinced has finished).
Animats 6 hours ago 0 replies      
The first rule of Apple software quality is that you don't talk about Apple software quality. All Apple software is, by definition, perfect.

Look at Rolex watches. They're a pure status symbol. They don't keep time all that well. Rolex doesn't even submit them for Swiss chronometer certification any more.

_pmf_ 7 hours ago 0 replies      
Maintaining software is not as fun as greenfield development, and Apple's strategy of assuming that their users should just throw away three year old devices seems not to work out any longer.
eridius 5 hours ago 1 reply      
Skimming this, I just wanted to reply to this one bit:

> Befuddled users found they couldn't send Pages 5 files through Gmail. It's now fixed, as the What's New in Pages screen proudly claims

> > Updated file format makes it easier to send documents via services like Gmail and Dropbox

> but how could such an obvious, non-esoteric bug escape Apple's attention in the first place?

And the answer, if I recall correctly as to what was going on, was that this wasn't a bug with Apple's software at all. It was a consequence of the file format actually being a package, meaning it was really a folder. Apple software all worked with packages just fine, and you'll find that if you tried to use Mail.app to send it, it would all Just Work. The issue is that Gmail and other such services never even considered the idea that a user might want to send a whole folder and did not have any way to support that.

So the "fix" was to change the file format to actually be a compressed archive of the package (I assume it was a zip file, but I don't know how to go back and check). This made it work with all of the stupid software out there that assumed users would only want to transfer individual files.

Sure, perhaps the Pages team could have foreseen this issue. But that doesn't make it a bug in their software, just a case of only prioritizing compatibility with other aspects of Apple's software ecosystem.

       cached 19 January 2015 14:02:05 GMT