hacker news with inline top comments    .. more ..    2 Aug 2015 News
How to destroy a special collections library with social media sarahwerner.net
43 points by pepys  3 hours ago   1 comment top
McElroy 46 minutes ago 0 replies      
Good talk. I enjoyed the fact that she put up both a voice recording and a text version with images. I listened and followed along.
Convox Launch a Private Cloud in Minutes convox.com
175 points by lox  7 hours ago   83 comments top 19
ddollar 6 hours ago 8 replies      
Hello! I'm part of the Convox core team and happy to answer any questions.

Convox is an app deployment platform that you can install into your own AWS account. It uses ECS, ELB, Kinesis, and many other great AWS services under the hood but automates it all away to give you a deployment experience even easier than Heroku.

Convox uses Docker under the hood so if you want to customize anything (distro, dependencies, etc) you can simply add a Dockerfile to your project.

Convox is entirely open source. Check it out at https://github.com/convox

To get started with our installer go to: http://docs.convox.com/docs/getting-started-with-convox

thobakr 29 minutes ago 0 replies      
It's just another toy for AWS kids, with "private cloud" as the buzzword.
mappu 5 hours ago 1 reply      
You have a "Convox, Inc" copyright on the website - is there a business / monetisation-plan behind this, or is it just the open source project?
kevinSuttle 4 hours ago 2 replies      
"The simplicity of Heroku. The power of AWS."

The privacy of "?". Was waiting for this part since it was included in the title.

dxhdr 4 hours ago 2 replies      
Cool product! This is fairly unrelated but since you've open-sourced your code (thanks!) I was just browsing around the 'cli' project and saw heaps of the following:

 if err != nil {
     stdcli.Error(err)
     return
 }
Is this idiomatic Go? It's just sprinkled everywhere and makes it hard to follow logic at a glance.

mazlix 6 hours ago 1 reply      
This looks amazing! Thanks for making/releasing this as open source.
intrasight 4 hours ago 1 reply      
Could someone familiar with Convox give us a brief answer to the question "how is this different from the myriad DevOps tools that Amazon provides for AWS?"
goddamnyouryan 6 hours ago 1 reply      
Super awesome. I can't wait to start porting some of my heroku apps over to aws using convox.
rhelsing 5 hours ago 1 reply      
Anathem reference?
mslate 6 hours ago 1 reply      
bvanvugt 5 hours ago 2 replies      
Is anyone using this in production? If so, at what scale?
rubiquity 3 hours ago 3 replies      
How is this a private cloud if it's running on AWS which is most certainly a public cloud? I don't mean to take away from the product but I find the messaging to be odd.
general_failure 3 hours ago 1 reply      
Funny, we have a similar tag line but quite different products.


(Cofounder of cloudron.io here)

badri 2 hours ago 2 replies      
Sounds very similar to tsuru:


doomspork 5 hours ago 1 reply      
This is really great! I'm interested in trying this out for our products in place of OpsWork.

The documentation says "by default provisions an Elastic Load Balancer and 3 t2.small instances". Is it possible to change the number/size of servers dynamically?

lsm 5 hours ago 1 reply      
When will you support other vendors?
rjurney 5 hours ago 1 reply      
So what does Convox, Inc. offer?
icpmacdo 6 hours ago 1 reply      
How is this different than Dokku?
SolarCity didn't invent the solar panel, but it invented something more important slate.com
26 points by prostoalex  3 hours ago   15 comments top 4
diafygi 32 minutes ago 0 replies      
Holy hell is this a puff piece. Don't get me wrong, I'm a huge fan of SolarCity, but this story is written as if they invented the solar boom that's happening now.

Three major factors contributed to making solar as viable as it is now, none of which Solar City invented (but it is executing on them better).

1. Germany basically subsidizing China (and others) to build the massive solar panel production pipeline it has now, which is producing ridiculously cheap panels [1].

2. Financial innovation to allow consumers and installers to pay for panels over time (PPA, leases, loans, etc.). These were originally pioneered by SunEdison[2], and the industry has been innovating on them since (including SolarCity with its MyPower Loan).

3. The California Solar Initiative[3] and Department of Energy's Sunshot Initiative[4]. Both were incredibly well designed to gradually wean solar off of subsidies and foster innovation through strategic grants, and they have worked incredibly well at teaching the solar industry to stand on its own two feet. We are well on our way to $1/watt[5].

SolarCity has executed incredibly well in this industry, so they definitely deserve tons of credit for installing so much solar. But it's a disservice to the rest of the companies, governments, and financiers to say they invented the recent growth. Everyone, including SolarCity, has come together to build a mature and exponentially growing industry that accounted for 1 out of every 78 new jobs last year.

[1]: http://qz.com/41166/how-germanys-energy-transformation-has-t...

[2]: http://archive.onearth.org/article/selling-the-sun

[3]: http://www.gosolarcalifornia.ca.gov/csi/index.php

[4]: http://energy.gov/eere/sunshot/sunshot-initiative

[5]: http://cleantechnica.com/2015/07/10/price-solar-hits-record-...

kumarm 53 minutes ago 1 reply      
From Wikipedia: "In 2008, SolarCity introduced a new solar lease option for homeowners that significantly reduces or eliminates the upfront cost of installing solar power."

Several companies were offering Solar lease in 2007.

The article seems to imply that SolarCity invented solar leasing, which is inaccurate.

pbreit 1 hour ago 3 replies      
The processes mentioned are still sorta basic, but I think it's always useful to point out that innovation extends beyond tech.

Also, I wonder where any of this (solar city, spacex and tesla) would be without government help?

aharonovich 1 hour ago 0 replies      
India and Bangladesh have begun the exchange of over 160 enclaves washingtonpost.com
167 points by CPLX  12 hours ago   38 comments top 13
phaemon 11 hours ago 3 replies      
Surely not as weird as the Bir Tawil border dispute between Egypt and Sudan. Both argue that it should be part of the other country!


plorkyeran 10 hours ago 3 replies      
Were the borders actually disputed, or just really weird? The article made it sound like while the borders were odd, both countries agreed on where they were.
JadeNB 10 hours ago 2 replies      
The article discusses enclaves extensively, and then refers at the end to "exclave residents". Wikipedia (https://en.wikipedia.org/wiki/Enclave_and_exclave) says:

> An enclave is any portion of a state that is entirely surrounded by the territory of one other state. An exclave is a portion of a state geographically separated from the main part by surrounding alien territory. Many enclaves are also exclaves.

If I understand this correctly, the WaPo article really means to refer specifically to exclaves, not just to enclaves, throughout.

TimFogarty 4 hours ago 0 replies      
For a great explanation of the enclaves and exclaves described in the article, you might like to check out this youtube video: https://www.youtube.com/watch?v=gtLxZiiuaXs

The visual aids definitely help.

js2 6 hours ago 1 reply      
Even within the US, state borders are still not a settled matter.


JadeNB 11 hours ago 0 replies      
The article mentions this as an example of a second-order enclave (whereas the Bangladesh-India enclave it discusses is third-order):

> There are a web of enclaves within enclaves in Baarle-Hertog, a Belgian municipality with pockets of Dutch sovereignty.

S4M 9 hours ago 3 replies      
I wonder what life was like for people living in one of those enclaves. Were there actually passport controls at the frontiers? My guess is that the Indians living there were actually living just as if they were citizens of Bangladesh, but I could be wrong.
leoc 9 hours ago 0 replies      
So this must be the Cooch Behar from the Threepenny Opera's "Kanonensong" https://www.youtube.com/watch?v=gi433VgJ5bc , right?
comrade1 8 hours ago 0 replies      
There's at least one piece of Germany inside of Switzerland: https://en.wikipedia.org/wiki/B%C3%BCsingen_am_Hochrhein

There are a few others: https://en.wikipedia.org/wiki/List_of_enclaves_and_exclaves

smcl 6 hours ago 0 replies      
Reminds me of Baarle-Nassau : https://en.wikipedia.org/wiki/Baarle-Nassau
geuis 11 hours ago 0 replies      
comrade1 9 hours ago 1 reply      
Visualizing Rust's type-system jadpole.github.io
62 points by adamnemecek  9 hours ago   1 comment top
demarq 4 minutes ago 0 replies      
Thanks, you just gave me a powerful way to think about a type system. And that final diabolical implementation was proof. Even before you told us how to interpret it I managed to use your set theory approach to break it down.
Disney's Practical Guide to Path Tracing [video] youtube.com
87 points by adamnemecek  12 hours ago   26 comments top 9
solidangle 9 minutes ago 0 replies      
The video was made for this article:


If you want to learn more about rendering then here is some more info:

Stanford CS348b course notes:


Cornell CS6630 course notes:


Eric Veach's Phd thesis:


Physically Based Rendering: From Theory to Implementation by Matt Pharr, Greg Humphreys and Wenzel Jakob (book and open source implementation of a state of the art renderer):


AndrewKemendo 6 hours ago 1 reply      

This reminds me very much of one of my favorite Disney videos that I showed my daughter long ago. It's this clip of four very talented cell animators out practicing their Art:


The whole thing is whimsical while also being very educational. Really glad that Disney is keeping this kind of stuff up.

billbail 5 hours ago 3 replies      
For optimised ray tracing you don't beam the light from the camera as that has the same chance to bounce to the sun through indirect illumination as a ray from the sun has if it is going to bounce to the camera.

What they are saying here is wrong or rather extremely simplified for a younger audience.

bhouston 6 hours ago 0 replies      
Source for this video with more information:


forrestthewoods 2 hours ago 1 reply      
Disney's global illumination tech is spectacular. The quality level since Monsters University is just leaps and bounds better than everything before it. Well, technically their short Partysaurus Rex was its debut. But Monsters University was the first feature length.
amelius 7 hours ago 0 replies      
I'm wondering what the target audience for that clip was.
rndn 8 hours ago 2 replies      
Awesome. It would be great if Disney/Pixar would make more educational videos like this one.

Off-topic question: Why is this video unlisted?

gillianseed 8 hours ago 1 reply      
Good technical explanation, but why on earth not use a 3D rendition rather than this incredibly flat 2D rendition?
ericjang 6 hours ago 0 replies      
Great video. I think this should be shown at the beginning of every introductory computer graphics course.

For a nice overview of Disney's proprietary Hyperion renderer that implements this light bundling technique (and a whole lot more), see : http://www.fxguide.com/featured/disneys-new-production-rende...

Haruki Murakami: The Moment I Became a Novelist lithub.com
78 points by fitzwatermellow  11 hours ago   23 comments top 7
evjan 6 hours ago 0 replies      
This epiphany he writes about sounds almost religious in nature. Did he really not consider writing before that? Does his memory fail him; is this a memory construct that makes for a good story but doesn't hint at what really happened? I've never experienced anything like this myself.

I often tell people, when asked about why I do what I do, that I knew I wanted to be a software developer at the age of 10. But it wasn't a sudden realisation. It came to me after getting an Amiga 500, playing games and then realising I could make my own games. I vividly remember a summer trip to my aunt's place in Norway where I brought a book on BASIC and devoured a tutorial on how to make a hotel booking system. I was utterly enthralled and thought maybe one day I can make hotel booking systems for a living. From then on there was little doubt I would do this.

Except of course when I turned 13 and got into playing rock music on my guitar. I spent the next 7 years feeling lost, because I really wanted to be a musician but the career opportunities seemed unfeasible. It wasn't until after a brief stint as a data entry clerk in the UK that I came to the conclusion that I needed to get my shit together and go to uni. So I studied Computer Science and here I am at the ripe age of 35 with 10 years of software development experience, thinking I always knew I wanted to do this. But I didn't, and writing this made me realise that.

everly 7 hours ago 2 replies      
If you've not read any of Murakami's fiction and are interested, check out "Hard-Boiled Wonderland and the End of the World". I think most HN readers would enjoy it.
howlingfantods 5 hours ago 5 replies      
Among my group of friends, I've found that everyone's favorite Murakami book is always the first or second they read. After you read a couple, they all start sounding the same. Another angsty young man searches for a girl with serious emotional issues, and there are some metaphor-heavy dream sequences and talking animals along the way. I haven't read his recent books, but I've read five books of his and it was really tiresome, repetitive reading after book 2-3.
oska 2 hours ago 1 reply      
Small anecdote: I was living in Taiwan in 1999 and found an English translation of Pinball, 1973 in a bookshop there. It was a Kodansha imprint, for English learners I think. Anyway, I'd already read a number of Murakami books by that time (Wild Sheep Chase, Dance, Dance, Dance and Wind-up Bird Chronicle), and enjoyed them so I snapped it up. It was an ok read but far from his best (which for me, still remains Wind-up Bird Chronicle).

Later, I researched that Kodansha edition on the net and realised that it had never really been made available in English speaking countries and was fairly rare. So I put it up for sale on eBay, probably in 2000 or 2001. Thought it might attract a bit of interest but was pleasantly surprised when it attracted a lot of interest and eventually sold for something like $300. Can't remember the exact final bid but it was around that. Was happy to sell it for that and still happy now. I doubt it's appreciated in value since then but I could be wrong.

Anyway, it appears from this article that Pinball, 1973 and Hear the Wind Sing are now finally being published in a generally distributed edition. I'm a little bit surprised because it was my understanding that Murakami thought they were early, weaker works and wasn't particularly keen for them to get new attention.

sacrilicious 2 hours ago 0 replies      
I've read this story before, and often think about how, if people don't have examples in life, these 'epiphanies' are hard to come by. Non-fictional heroes may get more attention with the access to knowledge we have nowadays, but there's still the problem of focusing in and having the possibility made real in our own minds.

My story: The pretty big theater I worked for had a crackerjack unix head running a windows/exchange environment like it was no big thing, all from an iBook G3. I was in the shower at some point in the late fall, and I thought to myself 'I'd like to have a job/get paid to "fix computers"'. I sent away for Apple's Tech Training, someone's husband was starting a consulting company, and 10 years later... well, I like what I do and am lucky to have had as many advances as I've had in my short career. Workstation-level sysadmin may not be highly regarded, but at least the book I wrote wasn't by hand.

curiousjorge 56 minutes ago 0 replies      
waspleg 7 hours ago 1 reply      
This is an interesting read. Thanks for sharing it.
Big Ball of Mud (1999) laputan.org
19 points by timmytokyo  5 hours ago   1 comment top
crdoconnor 1 hour ago 0 replies      
> A simple way to begin to control decline is to cordon off the blighted areas, and put an attractive façade around them. We call this strategy SWEEPING IT UNDER THE RUG. In more advanced cases, there may be no alternative but to tear everything down and start over.

I've found the most successful way of dealing with these code bases is definitely not this. It is to surround them with a body of end-to-end functional tests and then slowly refactor what is underneath, which mostly just involves doing this:

* De-duplicate wherever there is code duplication.

* Pull the different modules apart so that they are minimally dependent upon one another instead of tightly coupled.

* Replace re-invented wheels, once they are de-coupled, with higher-quality modules.

An unappreciated facet of dealing with big balls of mud is also that unit tests usually range from unhelpful to downright harmful. If you don't have self-contained, loosely coupled algorithmic modules to test, unit tests aren't helpful. They will almost inevitably mean tightly coupling your tests to architecture you know is bad, a mess of unnecessary mock objects and an inability to write tests for real bug scenarios that users report.

If the Panama Canal gets a rival economist.com
21 points by lando2319  8 hours ago   12 comments top 7
brianbreslin 2 hours ago 1 reply      
Disclaimer: I'm part Panamanian. That being said, I find this highly suspicious. A lot of my Nicaraguan friends are also highly doubtful this will ever take off. The big problems for Nicaragua with this are:

- Horrible ecological impact (fisheries in Caribbean are already [1]

- Many think this is an excuse for the politicians to get cushy land/resort deals along the proposed route and near the entrances to the canal [2]

- Some think it is really never going to materialize, but Chinese owned resorts will pop up on each end.

- Nicaragua doesn't have the infrastructure to undertake such a big engineering project (human talent, electrical, etc)

- The public hasn't had any say in this so far [2]

1. http://e360.yale.edu/feature/nicaragua_canal_a_giant_project...

2. http://www.csmonitor.com/World/Americas/Latin-America-Monito...

chadlung 2 hours ago 2 replies      
Looking at the proposed map of the canal how does this work when Lake Nicaragua [1] is freshwater? Even with locks I'm guessing salt water from the sea will slowly contaminate (seep) into the lake causing big changes for the ecosystem and local population.

[1] https://en.wikipedia.org/wiki/Lake_Nicaragua

prewett 1 hour ago 1 reply      
I'll admit to being skeptical, too, but I seem to recall reading that people thought the US was not going to be able to finish the original canal and that the whole project was foolhardy. I think the US was just more tenacious than France before it (also, people had learned about the role of mosquitoes in disease and sprayed the mosquitoes).

China has a very long history of massive building projects, so if Nicaragua is willing, it could happen. And the fact that the first canal faced similar sentiment makes me hesitant to dismiss this one.

lchengify 1 hour ago 0 replies      
Related NYTimes article from earlier this year [1]

[1] http://www.nytimes.com/2015/04/26/travel/26nicaragua-cover.h...

jakozaur 2 hours ago 0 replies      
Back-of-the-envelope calculations suggest that this is not a profitable investment. Even with optimistic assumptions:


JacobAldridge 1 hour ago 0 replies      
themeekforgotpw 4 hours ago 0 replies      
Like China's "One Belt, One Road"?
Not the retiring type: people still working in their 70s, 80s and 90s theguardian.com
48 points by yannis  12 hours ago   27 comments top 12
ulrikrasmussen 15 minutes ago 0 replies      
My grandfather constructed crossword puzzles for several magazines and newspapers, from 1955 until his death this year at the age of 90.

He was extremely knowledgeable, and kept a lucid mind until the very end. I don't doubt that his work was the main reason for this.

napsterbr 7 hours ago 0 replies      
My grandfather is an 89-year-old lawyer and you can find him every morning at his office, including weekends. He uses a typewriter and is actually very fast on it.

He is incredibly lucid and claims it's because of his work, and I agree.

Always working - or, in other words, exercising your brain - is a must to keep your memories and sanity.

kabdib 9 hours ago 2 replies      
My father-in-law retired at 75. He was hacking C for some embedded systems running silicon wafer equipment. Ten years later he's still pretty sharp.

Personally I see no reason why I should retire, as long as someone is happy to pay me for what I like to do I'll probably keep doing it.

bane 4 hours ago 0 replies      
My father is in his mid-70s and after a short stint in retirement went back to work about 5 years ago. He claims it's because of the money, but there's not a lot preventing my parents from selling their home and moving to a cheaper part of the country and living out their retirement years.

On the plus side, he was remarkably sharp and agile for a 70-year-old. But his legs are starting to fail and that's made him age very quickly.

Now I can see it being about money (and health care), he's going to need knee replacement surgery and months of rehab. Something my parents will struggle to afford.

He's lived a fascinating life, and is full of stories, but to me he's also a warning about the need to cultivate non-work interests and stash away enough money to enjoy a long retirement enjoying those interests.

sporkenfang 8 hours ago 3 replies      
As long as you like what you're doing and someone will employ you, right? Those folks probably have significant experience. Experience can't always trump creativity, but this is pretty cool. I'd not mind learning from a coworker in their eighties, you know?

<blanket statement warning> Silicon Valley's emphasis on young programmers and engineers in general is silly.

tluyben2 2 hours ago 1 reply      
Anecdotal alert. All people I know who went 'on pension' either started working again or died (at least 5 cases of high-profile managers getting heart attacks or committing suicide within a year after their pension). Most people, including my father and grandfather, really did not like pressureless sitting at home. You do the vacations and the reading and hobbies, but then, unless you are very dedicated (like a job), there is a hole. I believe this will become a society-wide thing quite soon. Actually dedicating yourself to do something (be it painting, coding, building, whatever) as a discipline is very hard and very underestimated by people who never needed it.
bbgm 2 hours ago 0 replies      
My father will be 70 next year and just went into semi-retirement (where semi-retirement = writing two books). I suspect if there hadn't been a coup in Mali, which resulted in a return to India, he would probably still be working full time. It's not about the money at this point. He's in good health, is sharp as ever, so doing something seems to make sense.
ScottBurson 9 hours ago 1 reply      
They'll have to pry my keyboard from my cold dead fingers.

(With apologies to John Wayne)

WalterBright 8 hours ago 1 reply      
I plan to work until my brain doesn't function anymore.
abalashov 4 hours ago 0 replies      
Kudos to those who go about it cheerfully, but I also fear it will be increasingly mandatory, as the top-heavy demographic crisis comes home to roost in many developed countries. By the time the median HN user is a "senior citizen", Social Security eligibility will start at what, 87?
a3voices 2 hours ago 1 reply      
The issue with not working is that if you have habitual activities during retirement (other than watching tv), those activities become indistinguishable from work. So you might as well just work. I'm only 28 but that's how I see it.
bettyx1138 4 hours ago 1 reply      
Should I Use Signed or Unsigned Ints? robertelder.org
61 points by dkarapetyan  11 hours ago   42 comments top 15
jerf 5 hours ago 2 replies      
As I learned from Haskell, the correct answer really ought to depend on the domain. I ought to be able to use an unsigned integer to represent things like length, for instance.

However, we've apparently collectively decided that we're always operating in the ring of integers mod 256/65,536/etc., when in the real world we only operate there in rare exceptional circumstances [1]. So instead of an exception being generated when we under- or overflow (which is almost always what we actually want to have happen), the numbers just happily march along, underflowing or overflowing their way to complete gibberish with nary a care in the world.

Consequently the answer is "signed unless you have a good reason to need unsigned", because at least then you are more likely to be able to detect an error condition. I'd like to be able to say "always" detect an error condition, but, alas, we threw that away decades ago. "Hooray" for efficiency over correctness!

[1]: Since this seems to come up whenever I fail to mention this, bear in mind that being able to name the exceptional circumstances does not make those circumstances any less exceptional. Grep for all arithmetic in your choice of program and the vast, vast majority are not deliberately using overflow behavior for some effect. Even in those programs that do use it for hashing or encryption or something you'll find the vast, vast bulk of arithmetic is not. The exceptions leap to mind precisely because, as exceptions, they are memorable.

kazinator 5 hours ago 2 replies      
Complete newbie programmer naivete, I'm afraid.

Unsigned integers have a big "cliff" immediately to the left of zero.

The behavior at that cliff is not undefined, but it is not well-defined either. For instance, subtracting one from 0U produces the value UINT_MAX. This value is implementation-defined. It's safe in that the machine won't catch on fire, but what good is that if the program isn't prepared to deal with the sudden jump to a large value?

Suppose that x and y are small values, in a small range confined reasonably close to zero. (Say, their decimal representation is at most three or four digits.) And suppose you know that x < y.

If x and y are signed, then you know that, for instance, x - 1 < y. If you have an expression like x < y + b in the program, you can happily change it algebraically to x - b < y if you know that overflow isn't taking place, which you often do if you have assurance that these are smallish values.

If they are unsigned, you cannot do this.

In the absence of overflow, which happens away from zero, signed integers behave like ordinary mathematical integers. Unsigned integers do not.

Check this out: downward counting loop:

 for (unsigned i = n - 1; i >= 0; i--) { /* Oops! Infinite! */ }
Change to signed, fixed! Hopefully as a result of a compiler warning that the loop guard expression is always true due to the type.

Even worse are mixtures of signed and unsigned operands in expressions; luckily, C compilers tend to have reasonably decent warnings about that.

Unsigned integers are a tool. They handle specific jobs. They are not suitable as the reach-for, all-purpose integer type.

ridiculous_fish 9 hours ago 3 replies      
This is good general advice.

A key difference is whether the type is meant to be an index (counting) or for arithmetic. Indexing with unsigned integers certainly has its pitfalls:

 while (idx >= 0) {
     foo(arr[idx]);
     idx--;
 }
But this is outweighed by the enormous and under-appreciated dangers of signed arithmetic!

Try writing C functions add, subtract, multiply, and divide that do anything on overflow except UB. It's trivial with unsigned, but wretched and horrible with signed types. And real software like PostgreSQL gets it wrong, with crashy consequences: http://kqueue.org/blog/2012/12/31/idiv-dos/#sql

modeless 7 hours ago 1 reply      
Unsigned overflow is defined but that doesn't necessarily make it any more expected when it happens. It can still ruin your day, and in fact it happens much more often than signed overflow, because it can happen with seemingly innocent subtraction of small values. After personally fixing several unsigned overflow bugs in the past few months I'm going to have to side with the Google style guide on this one.
dfbrown 7 hours ago 1 reply      
Despite the fact that signed overflow/underflow is undefined behavior I'm pretty sure that many more bugs have resulted from unsigned underflow than signed overflow or underflow. When working with integers you're usually working with numbers relatively close to zero so it's very easy to unintentionally cross that 0 barrier. With signed integers it is much more difficult to reach those overflow/underflow limits.
jhallenworld 2 hours ago 1 reply      
Use signed because pointer differences in C are signed:

 char *a, *b;
 size_t foo = a - b; /* generates a warning with -Wconversion */
This tells me that C's native integer type is ptrdiff_t, not size_t. (And I agree this is crazy: unsigned modular arithmetic would be fine, but they chose a signed result for pointer subtraction).

Why care about this? You should try to get a clean compile with -Wconversion, but also you should avoid adding casts all over the place (they hide potential problems). It's cleaner to wrap all instances of size_t with ptrdiff_t; you can even check for signed overflow in these cases if you are worried about it.

There is another reason: loop index code written to work properly for unsigned will work for signed, but the reverse is not true. This means you have to think about every loop if you intend to do some kind of global conversion to unsigned.

cautious_int 42 minutes ago 0 replies      
Assume sizeof(int) == sizeof(unsigned int) == 4

> g << h Well defined because 2147483648 can be represented in a 32 bit unsigned int

Even under those assumptions, it is implementation-defined whether unsigned int can hold the value 2^31. It is perfectly valid to have a UINT_MAX value of 2^31-1. In that case the code will cause undefined behavior.

The only guarantee for unsigned int is that its UINT_MAX value is at least 2^16-1, regardless of its bit size, and that it has at least as many value bits as a signed int.

For example C allows these ranges:

int: -2^31 , 2^31-1

unsigned int: 0 , 2^31-1

dietrichepp 8 hours ago 0 replies      
There are a lot of corner cases involved here. The corner cases for signed integers involve undefined behavior, the corner cases for unsigned integers involve overflow. Any time you mix them you get unsigned integers, which can give you big surprises.

For example, if z is unsigned, then both x and y will get converted to unsigned as well. This can cause surprises when you expected the LHS to be negative, but it's not, because the right side is unsigned, and that contaminates the left side.

 if (x < y * z) { ... }
This particular case gets caught by -Wall, but there are plenty of cases where unintended unsigned contagion doesn't get caught by the compiler. Of course, if you make x long, then y * z will be unsigned, then widened to long, which gives you different results depending on whether you are on 32-bit or on Windows. Using signed integers everywhere reduces the cognitive load here, although if you are paranoid, you need to do overflow checking, which is going to be a bear, and you might want to switch to a language with bigints or checked arithmetic.

As another point, in the following statement, the compiler is allowed to assume that the loop terminates:

 for (i = 0; i <= N; i++) { ... }
Yes, even if N is INT_MAX. The way I think of it, your use of "signed" means that you are communicating (better yet, promising) to the compiler that you believe overflow will not occur. In these cases, the compiler will usually "do the right thing" when optimizing loops like this, where it sometimes can't do that optimization for unsigned loop variables.

So I'm going to disagree as a point of style. Signed arithmetic avoids unintended consequences for comparisons and arithmetic, and enables better loop optimizations by the compiler. In my experience, this is usually correct, and it is rare that I actually want numbers to overflow and do something with them afterwards.

All said, if you don't quite get the subtleties of arithmetic in C (yes, it is subtle) then your C code is fairly likely to have errors, and no style guide is going to be a panacea.

olympus 9 hours ago 0 replies      
While the signed vs unsigned int question doesn't concern me much[1], I really appreciate this post because I discovered the One Page CPU. The author nerd-sniped me three paragraphs in. Now I want to try to implement this on an FPGA. I think my next few weekends will be taken up with getting a real world implementation working and getting some examples working. Thanks.

[1] If I was designing a system that could only have one type of int (but why?) I'd use unsigned ints and if I needed to represent negative numbers, I'd use two's complement which is fairly well behaved, much as the author points out. This is fairly common in the embedded world.

jschwartzi 9 hours ago 1 reply      
Seems like it boils down to using unsigned when you need to treat numbers as fields of bits, and signed when you need to do arithmetic.
0x0 8 hours ago 1 reply      
Funnily enough the bash crash in the comment still crashes bash on OSX 10.10.4:


pandaman 5 hours ago 1 reply      
For what it's worth, all the bugs I've seen related to integer representation (not just over/under, e.g. I've seen code casting a 32bit pointer to a 64bit signed integer) could have been fixed by changing int to unsigned and never the opposite. Of course, this could be just the effect of the vast majority of programmers choosing int as the default integer type.
bsder 2 hours ago 1 reply      
mmaunder 7 hours ago 0 replies      
Unsigned ints for storing IPv4 addresses FTW. Or conversely, storing IPs as signed ints will make you sad.
faragon 8 hours ago 1 reply      
No matter if you use signed or unsigned types, be sure you handle both underflow and overflow. In my experience, signed integers' main uses are integer arithmetic (e.g. accounting for things that can be negative or positive), pointer arithmetic, error codes, or multiplexing two different kinds of elements on the same variable (so a negative value has one meaning and a positive one another). Unsigned is for handling countable elements, with 0 as the minimum. The good point of using unsigned integers is that you have twice the available codes, because of the extra bit.

Example of counting backwards using unsigned types (underflow check: -1 is 111...111 in binary two's-complement representation, so (size_t)-1 is equivalent to the biggest unsigned number):

  size_t i = ss - 1, j = sso;
  for (; i != (size_t)-1; i--) {
      switch (s[i]) {
      case '"':  j -= 6; s_memcpy6(o + j, "&quot;"); continue;
      case '&':  j -= 5; s_memcpy5(o + j, "&amp;");  continue;
      case '\'': j -= 6; s_memcpy6(o + j, "&apos;"); continue;
      case '<':  j -= 4; s_memcpy4(o + j, "&lt;");   continue;
      case '>':  j -= 4; s_memcpy4(o + j, "&gt;");   continue;
      default: o[--j] = s[i]; continue;
      }
  }
Example of computing a percentage on unsigned integers (the same idea can be applied to signed) without losing precision or requiring a bigger data container (overflow checks):

  size_t s_size_t_pct(const size_t q, const size_t pct) {
      return q > 10000 ? (q / 100) * pct : (q * pct) / 100;
  }
Also, for cases where overflow could happen, you can handle it by doing something like this:

  sbool_t s_size_t_overflow(const size_t off, const size_t inc) {
      return inc > (S_SIZET_MAX - off) ? S_TRUE : S_FALSE;
  }
  size_t s_size_t_add(size_t a, size_t b, size_t val_if_saturated) {
      return s_size_t_overflow(a, b) ? val_if_saturated : a + b;
  }
  size_t s_size_t_mul(size_t a, size_t b, size_t val_if_saturated) {
      RETURN_IF(b == 0, 0);
      return a > SIZE_MAX / b ? val_if_saturated : a * b;
  }
P.S. examples taken from:

https://github.com/faragon/libsrt/blob/master/src/senc.c (underflow example)

https://github.com/faragon/libsrt/blob/master/src/scommon.h (overflow examples)

SciRate: an open source website to browse, save and comment arXiv articles scirate.com
20 points by Link-  6 hours ago   3 comments top
tvawnz 3 hours ago 1 reply      
This is something that I had been thinking would greatly benefit the arXiv for a long time.

Presently, when you do a scientific work, the article goes to a referee who then sends you back their comments which you account for before resubmitting. Sometimes you get an excellent referee who really knows his stuff and gives reasonable comments for improvements. Sometimes you get a guy who really just can't be bothered who gives minimal comments leading you to wonder if they've even read it. Sometimes you get an opposing group, which frequently leads to untenable comments and prompts submission to a different journal.

This is the only feedback you will ever get outside of your coauthors except the citation count. In my opinion it would be amazing to have some big named authors who have read your paper drop off advice, what they liked what they didn't like, etc. At most universities in most groups you do "journal club" once a week where you discuss others' papers and produce this exact feedback, but there's no forum to post it in, so it just stays in the journal club.

However, just as abuse on arXiv led to its transformation into an invitation-only site (I forget if you need an invite or just a university-sponsored email; see also vixra.org), the community on a site like this _should require your real identity_.

It could be devastating to a young researcher to have their work publicly shamed by an anonymous commenter who has it out for their research group. But if the comments are linked to real identities, I think the community will police itself... Although there are frequently unofficial "response to... " articles on the arXiv, they are publicly attached to other research groups, and you will sometimes see "response to response to ..." letters.

It's interesting to see this sort of 2010s thing popping up amongst our 1990s bastion websites like arXiv and ADS and such (see ResearchGate, the Facebook of scientists). But frequently they seem to fall victim to the same downfalls as their non-scientific counterparts ("cite" is the equivalent of "like" on ResearchGate, used to improve your "profile impact" metric, so you frequently get people acting needy about "citation requests" even though we already have metrics like Hirsch indices to measure scientific productivity in an objective way).

TL;DR: I worry that a comment-based website could host troll-like behavior, which could be especially harmful when the whole premise is people's professional work. This is one of the few places on the internet where I think real names must be required and institutional affiliation should be provided (as is the case with arXiv). As it is, I signed up with a BS name and email in 5 seconds and can immediately start trashing this paper on quantum physics that I know nothing about.

Ask HN: What was Usenet's ultimate demise?
71 points by jebblue  14 hours ago   78 comments top 28
cstross 12 hours ago 3 replies      
Old usenet-head here (on it regularly from 1991, first met it 1986) ...

First problem: there's no identity authentication mechanism in NNTP. So spam is a problem, forged moderation headers are a problem, general abuse is a problem. (A modern syndicated forum system with OAuth or some successor model would be a lot easier to ride herd on.)

Second problem: storage demands expand faster than the user base. Because it's a flood-fill store-and-forward system, each server node tries to replicate the entire feed. Consequently news admins tended to put a short expiry on posts in binary groups so they'd be deleted fairly promptly ... but if you do that, the lusers can't find what they're looking for so they ask their friends to repost the bloody things, ad nauseam.

Third problem: etiquette. Yeah, yeah, I am coming over all elitist here, but the original usenet mindset was exactly that. These days we're used to being overrun by everyone who can use a point-and-drool interface on their phone to look at Facebook, but back in September 1993 it was a real shock to the system when usenet was suddenly gatewayed onto AOL, I can tell you. Previously usenet more or less got along because the users were university staff and students (who could be held accountable to some extent) and computer industry folks. Thereafter, well, a lot of the worse aspects of 4chan and Reddit were pioneered on usenet. (Want to know why folks hero-worshipped Larry Wall before he wrote Perl? Because he wrote this thing called rn(1). Which had killfiles.) Anyway, a side-effect of this was that when web browsers began to show up, the response was to double down on the high-powered curses-based or pure command-line clients rather than to try and figure out how to put an easy-to-use interface on top of a news spool. Upshot: usenet clients remained rooted in the early 1990s at best.

These days much of the functionality of usenet (minus the binaries) is provided by Reddit. Usenet itself turned into a half-assed space-hogging brain dead file sharing network. And we know what ISPs think of space-hogging half-assed stuff that doesn't make them money and risks getting them sued.

stevewepay 2 hours ago 0 replies      
USENET has always been used for porn and piracy, since at least the early 90s. Of course, most of the great newsgroups were discussions-based, but probably most of the bandwidth was porn and piracy.

When I was in college, I remember someone on my floor had written a program in Pascal that automatically downloaded porn off USENET. He would leave his computer running all the time, connected to the college's internet connection via modem, and we would occasionally see a flash of a porn pic on his screen and ask "What was that?". This was before the days of integrated TCP/IP stacks in the OS, so if I remember correctly he had to dial in via modem and then use something called Slurp or something like that, I can't remember exactly now.

This continued all through the 90s. A bunch of my friends had Airnews accounts and downloaded mp3s and porn 24x7, during what we called the "Golden Age" of piracy, when Napster was starting up in 97 up until the early 2000s, when the bust hit.

At some point, the medium for discussions moved off of USENET and went to more user friendly places like email mailing lists, google groups, yahoo groups, reddit, etc. This left only piracy and porn on USENET, and I'm actually surprised that some ISPs still support USENET at this point.

DanBC 13 hours ago 2 replies      
Binary groups were huge and users expected them for free. And users would download huge amounts of stuff. So it's pretty much a cost sink, and ISPs who tried to start charging (for this service that had dramatically increased costs) were faced with vigorous campaigns. At some point it's easier to just cancel and tell dissatisfied customers to get a new ISP if they're unhappy.

The amount of groups distributing images of child sexual abuse created some risk (not every ISP is in the US) and things like stealth binary groups distributing porn put a bunch of people in oppressive regimes in tricky situations.


ISPs could have dropped binaries and only carried text groups. But this means putting up with groups of people who strongly held but conflicting opinions:

1) be a dumb pipe and provide everything

2) be a dumb pipe but filter spam with a Breidbart index of something or other.

3) make the news server operate to rules laid out in the ISP's ToS. (Young people may not realise but a lot of effort on the early Internet was spent on "what do we do if our users go on the Internet and start swearing?" Many ISPs had rules forbidding swearing. (At least, they did in Europe))

Then www forums sprung up and they had some advantages: avatars, mods, etc.

avifreedman 4 hours ago 3 replies      
(Background: I'm a Usenet user from the late 80's to the early noughties; I did outsourcing at netaxs as newsread.com, then ran readnews.com from 2004-2014.)

Usenet is still around but mostly for binaries. The market is pretty stable in size, dominated by a few large wholesale players.

My take on what happened with text groups is that the S/N ratio just went to hell. In the 90s the problem was spam, but in the 2000s the problem was too many loudmouths who wanted to hear themselves talk drowning out the useful experts.

Like some of the other folks commenting, I've been pissed as hell at the phpBB/vBulletin monstrosities. My original plan with readnews was to try to build a great web UI for discussion, but we got distracted by wholesale customers wanting service - and front-end is not my area of expertise.

For folks looking for something modern with promise, the news is good with discourse and a few others coming up. Would love to see something distributed, but if really distributed I suspect we'd see binaries and/or commercial spam and/or people with nothing interesting to say dominate - just like Usenet...

david-given 9 hours ago 4 replies      
Here is a more interesting question:

How would you reinvent Usenet?

What Usenet did well was that it was completely decentralised, had zero cost of engagement (despite 'hundreds, if not thousands of dollars'), and was everywhere.

What Usenet did badly was that there was a complete absence of identity management or access controls, which meant no accountability, which meant widespread abuse; and no intelligence about transmitting messages, which meant that every server had to have a copy of the entire distributed database, which meant it wouldn't scale.

It's a tough problem. You need some way to propagate good messages while penalising bad messages in an environment where you cannot algorithmically determine what good or bad is, or have a single unified view of all messages, all users, or even all servers. And how do you deal with bad actor servers? You know that somewhere, there's a Santor and Ciegel who are trying to game the system in order to spam everyone with the next Green Card Lottery...

jivardo_nucci 13 hours ago 2 replies      
"User interface woes" is my guess.

I found USENET and associated newsgroups to be better than the WWW, especially for discussions of software. I once even promoted the use of internal newsgroups w/in a corporate environment, where a history of topics (discussions, problems, and decisions) would have IMO proven extremely useful.

But the idea never got traction: people were unwilling to participate because newsreaders were too different from the browser and they'd had enough trouble learning to navigate the WWW. Once blogs and browser-based "newsgroups" and forums began showing up, the handwriting was on the wall. In the end, the WWW browser's low bar to entry ate USENET.

I still value the treasure trove of information stored in the archives. And some people still actively participate in USENET and other newsgroups, just as some still participate in IRC (Internet Relay Chat, which also is fading). I think these are valuable tools with a lot of greybeard expertise held in reserve.

There's a sort of Gresham's law of the Internet: "The browser drives out every other interface."

jsz0 41 minutes ago 0 replies      
For ISPs there simply wasn't enough customer usage of NNTP servers to justify their continued existence. 5 years ago when I was working at a mid-sized ISP only about 2% of our customers used our NNTP servers. We carried binary groups and offered pretty good retention/completion but by then even the pirates had mostly ditched NNTP for torrents. At the time we estimated that we had maybe about a dozen customers accessing the server for non-binary / piracy use.

Going back further to why NNTP became irrelevant for discussion I'd say it was a combination of difficult setup for the average user and the lack of good free NNTP clients. Early web forums could offer discussion for free without the difficulty / expense of a NNTP client. As NNTP groups became more insular the miserable trolls were able to take over and ruin it for everyone. Almost every group I was active in during the late 90s deteriorated in this way. Just one mentally ill and/or very lonely person posting 50+ times per day could very effectively destroy a group.

inyourtenement 8 hours ago 1 reply      
I skimmed the comments here, and never saw the real answer (to what I understand the question to be). Even though it was public knowledge, I had some extra insight from working for a large Usenet provider.

The New York Attorney General started a campaign against child porn groups on Usenet. In the end, his office identified a small number of groups they said were used for child porn -- I think it was less than 100 groups. Many ISP's jumped on the opportunity to stop paying for Usenet service.

In the 90's it was just assumed that an ISP service would include Usenet. With the growth of binaries groups, the quality of service declined. I remember retention would be a day or two, with about 50% completion. So, for most ISP's, the service was unusable, and only a small number of subscribers knew or cared about it. The others paid quite a bit for service from a third party, like my employer. I don't know why they didn't shut down service earlier, but once the NYAG campaign started, they could cancel Usenet, saving themselves money, and getting good press for fighting child porn.

randcraw 2 hours ago 0 replies      
N.Y. attorney general forces ISPs to curb Usenet access (2008)


Thank Andrew Cuomo.

JoshTriplett 12 hours ago 0 replies      
> It seems that the past 6 years or so saw most big ISP's dropping USENET support claiming mostly piracy concerns. Was it piracy or the fact that it's tough for the government to control what people say on USENET?

No conspiracy theories needed here.

Copyright infringement is one angle; the other is that it costs ISPs a huge amount of resources for something few people use.

Once upon a time, a single server could easily mirror all of USENET for all users of an ISP, and almost every user expected it, so they'd treat it as an essential part of the service. Now, it would take far more storage to do so, and almost nobody expects it, so why should an ISP provide it? It's easier to let people get USENET from a third-party service, and it'd be a better experience for the people who actually want it, too.

If an ISP has resources to burn and wants to make their technical users happy, they'd get far better results for more users if they provided things like local Linux distribution mirrors instead. Far more users would make use of that than USENET.

And if they want to make the vast majority of users happy, and save resources on their end in the process, they can provide local CDN nodes for YouTube, Netflix, and similar.

ised 1 hour ago 1 reply      
There is a lot of history and useful knowledge archived in Usenet. A lot of that content (e.g., the early UNIX newsgroups) puts today's forums and blogs to shame.

Google acquired Deja News (if Usenet is worthless, why?) and now all the archived Usenet messages are web-access only and fronted by Java and Javascript nonsense.

If the Usenet archives are no longer important or if everyone thinks Usenet is "dead", then why put these messages behind Javascript and try to prevent bulk downloads (which is how NNTP was designed to work)?

Animats 12 hours ago 1 reply      
Usenet isn't dead. I still use several Usenet groups via Thunderbird. Google Groups is a Usenet host/client, and many groups belong to both the Google and Usenet spaces. The Usenet interface is easier to use, has no ads, and doesn't require a Google account.
mwfunk 11 hours ago 1 reply      
It felt like Usenet died as a meaningful place for discussion in the mid-to-late '90s, for all the same reasons that most (or all?) electronic communities eventually die. Bad posters drive away good posters and encourage even worse posters, which eventually results in something akin to YouTube. Forum entropy for lack of a better term.

By the time most ISPs started dropping it, a vanishingly small percentage of most ISPs' users even knew what it was, and the binaries groups had turned it into a source of both cost and legal risk. The heavy users were people who incurred that cost and risk to the ISPs because they were using it for pirating software and porn. The icing on the cake would've been the fact that it's a terribly inefficient way to distribute those things and the ISPs have to store all that stuff locally on servers they own.

From an ISP's perspective, maintaining Usenet feeds became all downside and no upside.

Regarding government control, I would think that Usenet would've been far easier to monitor and censor than the web.

NoPiece 12 hours ago 2 replies      
The single biggest issue was spam. Being largely unmoderated, it became flooded with garbage as the reach of the Internet expanded. Conversation moved to web-based forums - which IMO had a worse UI in the early days - because there was more ability to moderate.
JoshTriplett 12 hours ago 0 replies      
Related: check out olduse.net for a real-time USENET feed on a 30-year delay (so it's currently showing the news of the day in 1985).
T-A 14 hours ago 0 replies      
I thought it was just limited interest. First web hosting prices came down so much that anyone could run a forum, then WordPress made free blogging with ancillary comments accessible to anyone with a browser. So the masses went to forums and blogs (and then Twitter and Instagram and YouTube and whatever chat app is popular this week) and only geeks who cared enough to find and install a newsreader were left.
gioele 12 hours ago 0 replies      
USENET (the network) may be dying but NNTP is still going strong as a better interface to mailing lists. See for example http://gmane.org/ or the new GNU Mailman 3 gateway.

I am now subscribed to maybe 2 mailing lists; the rest (two dozen) I read via gmane.

batou 13 hours ago 0 replies      
It's expensive to keep binary groups online (bandwidth) and the text groups are all SPAM these days.

Edit: Forgot to say that the tech is fine; a member of my family operates a usenet server over in Switzerland for our family. Works well for that sort of thing and avoids facebook etc.

tmpusenet 7 hours ago 0 replies      
When DejaNews made USENET searchable. You could actually nuke messages from DejaNews, but then Google bought DejaNews and suddenly every nuked message was made available again forever. Google killed USENET.
Marazan 10 hours ago 0 replies      
I imagine Binary groups being a great big cost sink would be the main thing.

It's sad, because the most barebones mid 1990's 3-panel Usenet client is still an infinitely better reading experience for discussion than all current web forums.

mrbill 11 hours ago 0 replies      
I wonder just how big a non-binaries feed is these days. A tiny engineering company I worked for in '98-99 had its own Usenet server with a no-bin feed going into a SPARCstation 2 (think 386-486-class x86, equivalent) and it kept up just fine.

A couple years earlier I'd been one of the senior admins at Texas.Net (now DataFoundry) and helped build out what eventually turned into GigaNews, which used multiple dual-proc Sun E450s.. I think they're still one of the "biggest" Usenet providers these days.

mattkrea 13 hours ago 0 replies      
I would definitely believe the reason being piracy. That's a massive portion of the bandwidth I'd almost guarantee it. It was a safe haven for a long while.
sergiotapia 9 hours ago 0 replies      
I wish I was old enough to use USENET when it was popular. The closest I got was using XDCC on IRC but even then, USENET sounds like such an interesting thing to use. I miss anonymity.
dfbrown 13 hours ago 1 reply      
How is it more difficult to control what people say on USENET than on the internet at large?
nickysielicki 9 hours ago 1 reply      
I still use usenet every day. There are, admittedly, only a few good groups left. But where there's a high barrier to entry there's a high reward. The discussion is of high quality. Higher than most mailing lists and reddit/HN, at least.

 sci.math, comp.misc, sci.electronics.*, comp.lang.*

davidgerard 9 hours ago 0 replies      
But hey, slrn finally reached version 1.0!
api 9 hours ago 0 replies      
Spam mostly, and other forms of abuse. But mostly spam.
obrero 13 hours ago 1 reply      
Usenet is a way for the ordinary person to be able to talk unfettered to other ordinary people - without a need for a central authority, without the approval and shilling of advertisers etc. So, after decades of the taxpayer funding R&D to create the Internet, when the Internet was handed over to corporations in the early 1990s, the question is not if such a resource was going to go away, but when.

News stories like this marked Usenet going away - http://www.cnet.com/news/riaa-tries-to-pull-plug-on-usenet-s...

It's a confluence of forces. The old Bell monopolies get a stranglehold on the last mile, and then wireless transmissions as well. They become so bold as to lobby to end net neutrality so they can pump more money from content providers with their monopoly. A vast infrastructure is being built to monitor what people say on the network (like the NSA's Utah Data Center) which makes the Stasi look like Inspector Clouseau, in a country quite different than the one whose Secretary of State said in the 1920's "Gentlemen do not read each other's mail". The RIAA/MPAA oligopolies are now busy trying to extend their 95-year copyright lease, which starts kicking in again in 2019, and they may try to shut Usenet down as well. After all, it's one of the rare mediums of distribution of content they don't control. I'm surprised the powers that be haven't cracked down on Internet Relay Chat yet, it's one of the last remnants of the old, distributed, decentralized, noncommercial Internet.

Exchange-traded funds have overtaken hedge funds as an investment vehicle economist.com
6 points by prostoalex  3 hours ago   6 comments top
pbreit 2 hours ago 1 reply      
And still, 99% of retail investors should have a core in Vanguard Retirement or Lifestrategy funds.
History's Worst Software Bugs (2005) wired.com
93 points by t-3-k  15 hours ago   27 comments top 8
scott_s 13 hours ago 0 replies      
The Morris worm is on that list; Robert Morris is a partner at Y Combinator. That incident is the source of the pg quote that would not stop kicking around in my mind through the latter half of grad school:

"The danger with grad school is that you don't see the scary part upfront. PhD programs start out as college part 2, with several years of classes. So by the time you face the horror of writing a dissertation, you're already several years in. If you quit now, you'll be a grad-school dropout, and you probably won't like that idea. When Robert got kicked out of grad school for writing the Internet worm of 1988, I envied him enormously for finding a way out without the stigma of failure."

From http://www.paulgraham.com/college.html

Morris is now also a tenured MIT professor, so things ended up okay for him.

lordnacho 14 hours ago 3 replies      
Seems to me this list needs to incorporate how easily these bugs could have been avoided/detected/fixed, rather than just how dire the consequences were. It doesn't say much about what people did to test their code. For instance, the first one in the list is something unit testing would have caught. Take the trajectory function, plug numbers in, see if it's correct.

Some of these things were a lot more obvious than others.

Race conditions, for example, can be really hard to find, but as long as you know it might happen (these days it's just about every system) you can take precautions for testing. If it's important, maybe hire someone with experience.

The AT&T network crash thing looks pretty unobvious to me. A network graph can have a huge number of topologies, so you can't really test them all. Machines might also be using different versions of software that don't interact nicely. Sounds like they took sensible precautions and were thus able to roll back. That's why "rollback" is a word.

There's a whole class of bugs where things work and then need to be upgraded. You think it will work, because there aren't many changes and stuff is qualitatively the same. Like the number overflow bug in the Ariane, or the buffer overflow in the finger daemon.

trsohmers 14 hours ago 0 replies      
Another thing which should be in this list (relating to floating point rounding error):

"On 25 February 1991, a loss of significance in a MIM-104 Patriot missile battery prevented it intercepting an incoming Scud missile in Dhahran, Saudi Arabia, contributing to the death of 28 soldiers from the U.S. Army's 14th Quartermaster Detachment."


luso_brazilian 14 hours ago 1 reply      
No mention of Y2K; mankind can thank the millions of man-hours employed (and royally paid for) to stamp out the majority of the occurrences of that bug.

It could really have been a game changer if it hadn't been fixed, and I don't really know what to expect in the wake of Y2K38, because it's out there, lurking in wait.

rer0tsaz 14 hours ago 3 replies      
> Programmers respond by attempting to stamp out the gets() function in working code, but they refuse to remove it from the C programming language's standard input/output library, where it remains to this day.

gets was deprecated in C99 and removed in C11.

TillE 14 hours ago 1 reply      
The title said "software", so I assumed they were going to exclude the infamous Pentium FPU bug. But no, there it is.

To me, the interesting thing about testing a CPU is that it's theoretically possible to comprehensively test all inputs and outputs, but the time required makes that totally impossible.

CookWithMe 13 hours ago 2 replies      
The Soviet Gas Pipeline explosion - if the whole CIA story is true at all - should not be labelled a bug... The code allegedly did exactly what its creator intended ;-)
spacehome 9 hours ago 1 reply      
The Magic of RPython kirbyfan64.github.io
66 points by williamtrask  13 hours ago   18 comments top 5
ahomescu1 8 hours ago 2 replies      
I used RPython on my own interpreter project a few years ago (stopped working on it around 2013). It's a very interesting approach to writing interpreters/JIT compilers, and produces very fast code, but developing RPython code was very painful for a few reasons (back then, maybe they got fixed in the mean time):

1) Huge compilation times, and compilation is non-incremental. Making even a small change to the source code causes it to be fully re-compiled, which on our project took 15-20 minutes (I can only imagine how painful this is on PyPy, which took me around 2 hours to build the one time I tried it). I think the root cause of this is the static analysis and type inference, which need to run again on the entire source code and proved to be really slow on a huge code base. This was painful for development: so much time was wasted waiting on the compiler.

2) My experience with error messages was not as positive as the OP's. Sometimes, I'd make a type error in the code and get a cryptic error message, and have to guess by myself what caused it. Perhaps things have improved since then (I see some new details in the errors in the article that weren't there when I used RPython).

orf 11 hours ago 0 replies      
RPython and PyPy are (IMO) the coolest Python projects out there. I made a little Brainfuck interpreter with it and it was super simple, but making small changes seemed to slow it down a lot. There is very little visibility into how the code is compiled - i.e. should I use a list or a tuple for some things? Can PyPy work out that a list is fixed-size? How should I write classes to make them as efficient as possible (e.g. are classes with two or so fields passed as structs)?
616c 8 hours ago 0 replies      
The coolest example of RPython in action, after PyPy of course (but goes without saying): Pixie, the lang and VM that is a native, high-speed Lisp.


masklinn 13 hours ago 3 replies      
Note that, as far as I know (the rpython/pypy team can confirm or deny this), RPython is not intended to be a general-purpose python-like language. For that you want cython or nim. RPython is a toolkit for building language VMs. I'm guessing that's why relatively little work has gone into error reporting.
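The "toolkit for building VMs" point is easiest to see from the shape of an RPython program. A minimal sketch, loosely modeled on the PyPy documentation's standalone-target examples (the `target`/`entry_point` convention is the translator's documented interface; everything else here is illustrative):

```python
# Minimal sketch of an RPython standalone target. Because RPython is a
# subset of Python, this also runs under plain CPython for testing;
# the translator imports the module and calls target() to find the
# function to compile.
import sys

def entry_point(argv):
    # RPython requires statically inferable types; this trivial
    # "interpreter" just counts its arguments and prints a message.
    print("got %d argument(s)" % (len(argv) - 1))
    return 0  # process exit code

def target(driver, args):
    # Returns (function to translate, argument annotation or None).
    return entry_point, None

if __name__ == "__main__":
    sys.exit(entry_point(sys.argv))
```

Running it under CPython first is the usual workflow, since (as noted above) a full translation can take many minutes.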
cschmidt 11 hours ago 3 replies      
I must say that RPython is not a very good name. I assumed it was a system for R and python integration. Like say the RPython package http://rpython.r-forge.r-project.org
The Birth of Standard Error (2013) spinellis.gr
72 points by nazri1  16 hours ago   10 comments top 4
gre 10 hours ago 0 replies      
pflanze 12 hours ago 1 reply      
I wasn't aware that Microsoft Windows supported a stderr filehandle separate from stdout. When I worked a little on it about 17 years ago, I thought it didn't have that (e.g. warnings from Perl were intermixed with redirected stdout or so). Did I misinterpret something or has the system been changed?

(Edit: that was longer ago than I first remembered; it was on Windows NT.)
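For what it's worth, on modern Windows (as on Unix) a child process's stdout and stderr are separate handles that the parent can capture independently; a quick cross-platform check, sketched in Python:

```python
# Sketch: stdout and stderr are distinct streams that a parent process
# (or a shell redirect) can capture separately.
import subprocess
import sys

code = "import sys; sys.stdout.write('out'); sys.stderr.write('err')"
r = subprocess.run([sys.executable, "-c", code],
                   capture_output=True, text=True)
print(repr(r.stdout))  # 'out'
print(repr(r.stderr))  # 'err'
```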

contingencies 11 hours ago 0 replies      
See also http://www.cs.princeton.edu/~bwk/202/ for further information about early work (reverse engineering!) at Bell Labs on typesetting machines, and http://haagens.com/oldtype.tpl.html for general phototypesetting history, featuring gems like: "There was a romantic tradition, in [the US] at least, of the drifter Typesetters, who were good enough at the craft to find work wherever they traveled. They'd work in one town until they wanted a change and then drift on. They had a reputation for being well read, occasionally hard drinking, strong union men who enjoyed an independence particularly rare in the 19th century."[0]

It's amazing how interlinked typesetting and computing are. Here we have a troff link, then there's the PDF (from postscript) and TeX world, keyboard layouts, telegrams, rotating drums and early mechanical cryptography, etc.

If anyone's interested in good collections on the history of printing, I can recommend both the Museum of Printing and Graphic Communication (Musée de l'imprimerie et de la communication graphique) in Lyon, France[1] and the National Technical Museum (Národní technické muzeum) in Prague, Czech Republic,[2] which also sports the best permanent exhibition on the history of photography I have ever seen (by a long shot). For those of you in California, there's also the International Printing Museum[3] in Carson (open 10-4PM Saturdays).

[0] Added to 'Hackers of History' section of my fortune clone @ https://github.com/globalcitizen/taoup

[1] http://www.imprimerie.lyon.fr/imprimerie/

[2] http://www.ntm.cz/en

[3] http://www.printmuseum.org/

kylebgorman 14 hours ago 2 replies      
But, where did you read standard error during the teletype era? Was it printed to a separate tape?
How TCP backlog works in Linux veithen.github.io
59 points by signa11  19 hours ago   6 comments top 4
Animats 8 hours ago 0 replies      
The article kind of misses the point. The reason for having a separate queue for connections in a SYN-RECEIVED state is to provide a defense against SYN flooding attacks.[1] An incoming SYN has a source IP address, but that may be faked. In a SYN flooding attack, large numbers of SYN packets with fake source addresses are sent. The connection will never reach ESTABLISHED, because the reply ACK goes to the fake source address, which didn't send the SYN and won't complete the handshake.

Early TCP implementations allocated all the resources for a connection, including the big buffers, when a SYN came in. SYN flooding attacks could tie up all of a server's connection resources until the connection attempt timed out after a minute or two. So now, TCP implementations have to have a separate pool of connection data for connections in SYN-RECEIVED state. There's no data at that stage, so buffers are not yet needed, and a minimum amount of state has to be kept until the 3-way handshake completes. Once the handshake completes, full connection resources are allocated and the connection goes to ESTABLISHED state.

This has nothing to do with behavior of established connections, or connection dropping.

[1] https://en.wikipedia.org/wiki/SYN_flood
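The split Animats describes is visible from userspace only indirectly: the backlog argument to `listen()` caps the queue of fully established, not-yet-accepted connections, while the SYN queue is sized separately by the kernel (e.g. the `tcp_max_syn_backlog` sysctl on Linux). A hedged sketch:

```python
# Sketch: the backlog passed to listen() bounds the accept queue
# (ESTABLISHED connections waiting for accept()); connections still in
# SYN-RECEIVED live in a separate kernel structure, as described above.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # ephemeral port
srv.listen(8)                # accept-queue limit (the kernel may round it up)

# A client that completes the 3-way handshake sits in the accept
# queue until the server calls accept().
cli = socket.create_connection(srv.getsockname())
conn, addr = srv.accept()
conn.close(); cli.close(); srv.close()
```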

ised 7 hours ago 0 replies      
"The solution suggested by Stevens... The problem with this is..."

I see no problem with it. But perhaps I am missing something.

"... an application is expected to tune the backlog..."

Two simple applications I use every day, called tcpserver/sslserver and tcpclient, meet this expectation.

See "-b" and "-c" switches.

Has the author looked at Stevens' own example?


bbrazil 9 hours ago 0 replies      
I ran into the overflow behaviour with our source repository provider, as they'd get hammered at the top of every minute by all the continuous integration servers and silently drop connections. The specific version of SSH we were running didn't send the client banner until it received the server banner, so the connection just hung for 2 hours on the client.

After much debugging and reading of kernel source this was all figured out, and the provider adjusted things on their end so this wouldn't happen.

Moral of the story: You probably should set tcp_abort_on_overflow to 1.
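For reference, `tcp_abort_on_overflow` is a Linux-only knob exposed under `/proc/sys/net/ipv4/`; set to 1, the kernel answers an accept-queue overflow with a RST instead of silently dropping the handshake's final ACK. A small, hedged helper to inspect it (returns None on systems without procfs sysctls):

```python
# Sketch: read a Linux TCP sysctl via procfs.
import os

def read_tcp_sysctl(name):
    path = "/proc/sys/net/ipv4/" + name
    if not os.path.exists(path):  # non-Linux systems lack these files
        return None
    with open(path) as f:
        return f.read().strip()

# '0' (default: drop the ACK) or '1' (send RST) on Linux, None elsewhere
print(read_tcp_sysctl("tcp_abort_on_overflow"))
```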

pests 10 hours ago 1 reply      
There was a recent video or article posted here discussing the poor interaction between Nagle's Algorithm/Delayed ACK/TCP slow-start and how it results in increased latency, especially for the first few packets.

From a first read it sounds like the decisions made in both BSD and Linux could also be adding to the latency problem for those first packets.

Have OSes checked how their TCP backlog implementation affects the various congestion control algorithms being used?
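The usual application-side mitigation for the Nagle/delayed-ACK interaction pests mentions is to disable Nagle's algorithm on latency-sensitive sockets; a sketch:

```python
# Sketch: setting TCP_NODELAY so small writes go out immediately
# instead of waiting to coalesce with later data -- the standard
# workaround for the Nagle/delayed-ACK stall.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
assert s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
s.close()
```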

Forty Minutes with a Fields Medallist t5eiitm.org
55 points by dpflan  14 hours ago   8 comments top 4
biot 11 hours ago 1 reply      
I can only infer from the picture of a person on the couch two-thirds of the way through the article that this is about "Prof. Manjul Bhargava": https://en.wikipedia.org/wiki/Manjul_Bhargava
mrcactu5 8 hours ago 1 reply      
Manjul makes it seem so simple. The odds of two numbers being relatively prime are 6/pi^2, so where's the circle? You can kind of see rotational symmetry if you draw the relatively prime pairs of numbers in the coordinate plane. However, that symmetry is far from perfect.


S4M 10 hours ago 0 replies      
That man is so down to Earth, very inspiring.

For those who wonder like I did why the probability of picking a square-free number is 6/Pi^2, it's explained on wikipedia: https://en.wikipedia.org/wiki/Square-free_integer#Distributi...
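The 6/Pi^2 density is easy to check numerically; a quick sketch counting square-free integers:

```python
# Sketch: the proportion of square-free integers up to N approaches
# 6/pi^2 = 1/zeta(2), per the Wikipedia link above.
import math

def is_squarefree(n):
    # n is square-free iff no d*d divides it for d >= 2
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

N = 100000
density = sum(1 for n in range(1, N + 1) if is_squarefree(n)) / N
print(round(density, 4), round(6 / math.pi ** 2, 4))  # both ~0.6079
```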

journeeman 11 hours ago 1 reply      
Wow, how inspiring! Thanks for the post.
Breaking Smart breakingsmart.com
188 points by lukasLansky  14 hours ago   44 comments top 10
vezzy-fnord 10 hours ago 4 replies      
This is a very syncretic fusion between computing, dialectical materialism, entrepreneurial laissez-faire idealism and a bombastic techno-optimism.

Unsurprisingly, it harbors plenty of confusion.

"Towards a Mass Flourishing" makes the outrageous claim that the hacker ethos is best embodied in Silicon Valley. In reality, SV is one of the most detached from the MIT hacker ethos, instead having its own entrepreneurial hacker culture that is markedly distinct.

The "Purists versus Pragmatists" essay romanticizes the release of Mosaic and gives little credit at all to Ted Nelson's ideas, who is shoved aside as a purist crank. It's a false dichotomy through and through.

"Agility and Illegibility" again romanticizes widespread access to personal computers as some entrepreneurial Randian vision, that of Bill Gates specifically.

The "Rough Consensus and Maximal Interestingness" essay misquotes Knuth and incorrectly attaches philosophical meanings to technical terms like dynamic binding and lazy evaluation. It further espouses the "direction of maximal interestingness" and grand visions in the post-dot com bust era, when in fact systems software research is becoming increasingly conservative compared to as recent as the 90s.

"Running Code and Perpetual Beta" presents the dogmas of "release early, release often" and constant chaotic flux in software as a natural result of great ideas, as opposed to being the result of a cascade of attention-deficit teenagers. Note that fault tolerance, stability and security are not mentioned once.

"Software as Subversion" equivocates "forking" as being a Git terminology that somehow reclaimed its negative stigma, when it is purely a GitHub redefinition. The author makes no distinction between a clone and a fork. Also a misrepresentation of OS/2's mismanagement to argue in favor of "worse is better" (ignoring all other great systems besides OS/2) and babble about how blockchains are pixie dust.

"The Principle of Generative Pluralism" sets up the false dichotomies of hardware-centric/software-centric and car-centric/smartphone-centric. I suppose it somewhat reflects the end user application programmer's understanding of hardware.

"A Tale of Two Computers" prematurely sets up mainframes as obsolete compared to distributed networked computers (they are not exclusive) and makes the error of ascribing a low-level property to an ephemeral, unimportant abstraction - it marvels at the hashtag when the core idea of networking has enabled the same for much longer, and will continue to.

"The Immortality of Bits" is one of the worst, and makes this claim: "Surprisingly, as a consequence of software eating the technology industry itself, the specifics of the hardware are not important in this evolution. Outside of the most demanding applications, data, code, and networking are all largely hardware-agnostic today." This reeks of an ignorant programmer, oblivious to just how much hardware design decisions control them and shape their view. In fact, this is a very dangerous view to propagate. Our hardware is in desperate need of being upgraded to handle things like capability-based addressing, high-level assembly and thread-level over instruction-level parallelism. This stupid "hardware doesn't matter" thinking will delay it. The essay also wrongly thinks containerization is a form of hardware virtualization. It further says the "sharing economy" will usurp everything, which is ridiculous.

"Tinkering versus Goals" again sets up tinkering for the sake of it as leading to disruption and innovation, and not churn and CADT.

The "Free as in Beer, and as in Speech" essay clumsily and classically gets the chronology and values of open source and free software wrong. Moreover, the footnote demonstrates a profound bias for the "open source" ideal of pragmatism. This is in spite of the fact that many of the consequentialist technical arguments for OSS like the "many eyes make all bugs shallow" argument have proven to be flawed, whereas free software making no claims of technical superiority and using ethical arguments has a much stronger, if less popular case.


Overall, I do not recommend this.

j_lev 4 hours ago 0 replies      
TIL "polyannish" [sic] is a word.

Agree Ribbonfarm peaked with the Gervais Principle essay. Agree with some of the criticisms here. Will add my own: the first three essays are somewhat accessible, but after that the author is talking to the echo chamber which is his regular blog audience.

antisugar 6 hours ago 0 replies      
I wrote up a very rough summary/set of notes. Please excuse all the errors in punctuation, spelling, and formatting. Thought it might be helpful for people who want to skim, as the whole thing is more-or-less a book.


phaemon 13 hours ago 5 replies      
The first essay on that list starts talking about "soft technologies" without defining what they are.

They don't match other definitions of "soft technologies" and I'm having difficulty figuring out what the definition is here that only includes writing, money and software (frankly, I suspect if anyone other than an American had written this, money wouldn't be on the list).

jordanpg 6 hours ago 2 replies      
Umm, who is Venkatesh Rao and why should I read 30 essays by him?
bluishgreen 1 hour ago 0 replies      
Holy negativity batman!

This is a great collection of essays; I'd say bedtime reading for the budding entrepreneur and/or VC.

Ignore (or misunderstand) at your own peril.

vonklaus 7 hours ago 2 replies      
So this is a blog that will write 1 article every 5 weeks and batch release them in 2017? I am all for thoughtful content, but binging isn't a concept that can be applied to blogs. This makes no sense.

Edit: I get what turned me off about this. It was the positioning as a radical new media concept and the convoluted, confusing explanation.

What do you call the development and research of a text based narrative which is catalogued for direct and total consumption online?

> an e-book. Got it now guys.

andrewtbham 12 hours ago 2 replies      
I sense this may coin several new phrases, much like "software is eating the world": breaking smart, Promethean mindset / pastoralist mindset.

Also lots of references to previous great insights: Alan Kay, Carlota Perez, Jeff Bezos,

alexashka 12 hours ago 2 replies      
PuffinBlue 8 hours ago 1 reply      
Ask HN: I'm looking for the self-proclaimed last private web hosting site
77 points by hellbanner  9 hours ago   23 comments top 16
jenkstom 7 hours ago 0 replies      
You are probably thinking of this article: http://www.dailydot.com/society/anonymous-website-challenge-... which links to http://voidnull.sdf.org.
cstigler 8 hours ago 0 replies      
You may be thinking of Silence is Defeat: http://silenceisdefeat.com/

"public access unix systems for free speech, established 2000"

They offer free shell accounts with 50MB storage space, HTTP access (http://silenceisdefeat.net/~username), SSH access, email, etc. Very cool place.

lavara 8 hours ago 1 reply      
Not sure if it's what you're looking for but: http://sdf.org/?join
spilk 8 hours ago 1 reply      
Are you talking about https://www.nearlyfreespeech.net/ ?
white-flame 7 hours ago 1 reply      
Host a .onion or .i2p site.

You'll need a 24/7 box somewhere, but it could even be behind a residential NAT with only outbound connections (at reduced performance compared to having a public port). You can easily & transparently move it from one physical place to another; your hash identifies it, not the physical routing location.

mtmail 8 hours ago 0 replies      
These guys don't store payment details and allow cash-in-the-mail. I used them for a domain name once. http://mediaon.com/About-Us.php#Payment
merah 7 hours ago 0 replies      
kordless 3 hours ago 0 replies      
I built something that starts instances with Bitcoin, if that helps any: https://www.stackmonkey.com/. You add ssh keys with a callback, so the site never sees who you are. The virtual appliances run on my HP Cloud account, so for now your instance would be there.

Will be adding container tech to it at some point.

jbuzbee 4 hours ago 0 replies      
How about nyx.net? They've been around a long, long time


ikeboy 3 hours ago 0 replies      
There's https://www.anonymousspeech.com, which I believe was used by the creator of bitcoin to register bitcoin.org
zimbatm 8 hours ago 0 replies      
skorlir 4 hours ago 0 replies      
There's also http://tilde.club, although they haven't been accepting new registrations for a little while
hurin 6 hours ago 1 reply      
There is a small number of VPS providers that accept Bitcoin (and require no identifying information) - probably this is your best bet.
jsprogrammer 7 hours ago 0 replies      
Always Connected: Generation Z, the Digitarians randyapuzzo.com
13 points by ardeay  6 hours ago   14 comments top 5
gregpilling 2 hours ago 2 replies      
I have a gaggle of children, and I have noticed that my 5 year old son thinks many devices have voice control, and he views this as normal.

My 11 year old son and 8 year old always think every screen has a touch interface, but they don't use voice.

To me, it seems that there is a break between those generations. The 5 year old voice-activates his tablet and Amazon Fire TV, and watches his parents voice-command their phones while driving - this is all totally normal to him, and you can see he will often try a voice command on a device unfamiliar to him.

The older boys think using a PC is novel, and look at me strangely when I tell them about DOS and Windows 3.1 //// oh well...

brianstorms 4 hours ago 1 reply      
With all due respect, what a bunch of b.s. Goes to show that always being connected maybe isn't all it's hyped up to be. From the sound of it, it doesn't appear to produce any lasting insights.

p.s. Whoever typed "exec" to run a DOS program?

p.p.s. "Software is a dead. Digitarians know apps, games, and web browsers." WTF?

EdSharkey 4 hours ago 3 replies      
Just got back from a family reunion and my relations' kids were glued to their tablets playing mindless games and watching YouTube unattended for 4 hour spells and more day after day. It was so depressing, all these boring consumer drones in the making.

My kids get 1 hour of screen time a day, max (until they decide to become coders, and then the cap is lifted. :)

oasisbob 1 hour ago 0 replies      
Can someone please remind me how demographers defend treating post-WWII generations as being 20 years in length when the evidence is very clear that real generations are much longer?


I wouldn't care so much if it weren't for having to endure another round of inane punditry.

zitterbewegung 4 hours ago 0 replies      
This generation is fighting the largest class warfare battle in history and guess what. There is only mutually assured destruction.
Show HN: Glow - Syntax highlighting for Clojure source code venanti.us
25 points by venantius  10 hours ago   1 comment top
pandler 1 hour ago 0 replies      
I'm unfamiliar with Clojure, but I rather liked Douglas Crockford's comment on code coloring (Lisp) based on scope rather than syntax.


Saturated Reconstruction of a Volume of Neocortex cell.com
27 points by Someone  11 hours ago   1 comment top
Frege's Concept Horse Paradox in the Simply-Typed λ-calculus dvt.name
16 points by dvt  9 hours ago   discuss
For Sympathetic Ear, More Chinese Turn to Smartphone Program nytimes.com
34 points by kordless  14 hours ago   3 comments top 2
the_af 8 hours ago 1 reply      
From the article:

> Xiaoice, whose name translates roughly to "Little Bing," after the Microsoft search engine, is a striking example of the advancements in artificial-intelligence software that mimics the human brain.

"Striking example"? "Artificial-intelligence"? "Mimics the human brain"? From the example chats, it seems to be just a chatbot, maybe a little more convincing than ELIZA. If there is a piece of news here, it's not about Xiaoice but about people feeling so lonely they can pretend a chatbot is their "friend". I realize this is standard journalist exaggeration, so I'll ask: is there any technical info on Xiaoice that explains how this chatbot stands out from the rest? Is this the current state of the art?

Gys 10 hours ago 0 replies      
A message from the FFmpeg project ffmpeg.org
59 points by lobster_johnson  4 hours ago   9 comments top 3
nickpsecurity 2 hours ago 0 replies      
Many details here for anyone that missed it:


Phlarp 3 hours ago 1 reply      
As someone who uses FFMPEG heavily in a professional capacity, I really wish the various community leaders and contributors could come together and build better tools for everyone-- as opposed to the current status quo where both sides seem to spend a majority of their time attacking or defending each other.

It's like a "great filter" in the growth of open source projects, so many get ripped apart by their own internal power struggles, but those few that can make it past these hurdles really do shine.

ageofwant 3 hours ago 2 replies      
A a satisfied drive-by user of ffmpeg over the years I was not aware of any major forks or ffmpeg derivatives. What are the points of contention between the different forks ?
Dolphin Progress Report: July 2015 dolphin-emu.org
55 points by madez  10 hours ago   2 comments top
Others 7 hours ago 1 reply      
I'm always eager to read these progress reports. It is amazing how interesting it is to read about the kind of bugs they face, and how they were diagnosed and fixed. Can anyone recommend other open source projects that do this sort of writeup?
Ask HN: What are your best soft skill resources?
109 points by acconrad  15 hours ago   51 comments top 30
JacobAldridge 13 hours ago 3 replies      
Always a risk to self-promote, but last month our team launched our Compass Platform - Behavioral Indicators and associated support to help business founders and leadership teams in private enterprise [1].

I've found these are more balanced, so work better than tools like DISC and Myers-Briggs, which tend to 'put people into a box' and therefore work against creating an inclusive environment.

The four soft skills these tools help me understand with my clients and prospective clients are Communication, Attitude to Risk, Role Preference (Entrepreneur Leader Manager), and Natural Pace (what pg calls Makers v Managers).

I've obviously done some training on these, and nowadays deliver (mostly internal company) training on them as well. The website and online tests are a great starting point - having an understanding of a potential client or recruit's Risk profile, for example, makes it so much easier to connect with them and explain a value proposition.

[1] http://www.shirlawscompass.com/

wwkeyboard 12 hours ago 1 reply      
You might try Toastmasters. Once you get past the first few "nervousness" lessons they focus on how to convey meaning with speech. Every chapter is different, so YMMV. Several of the speeches you have to prepare cover persuasion, motivation, and how to structure a speech so that everyone remembers exactly what you want them to remember. They teach through having you give a series of 5-7 minute speeches, but I've found the practice helpful beyond giving a formal talk.
jsonmez 17 minutes ago 0 replies      
I actually wrote a book, specifically to teach "soft skills" to software developers.

In fact, my whole life and business is dedicated to that aim.

Soft Skills: The Software Developer's Life Manual (http://simpleprogrammer.com/softskills)

selleck 10 hours ago 2 replies      
I just finished The Charisma Myth:


On audio book and immediately bought a physical copy. The book is filled with tips and tricks to increase your charisma that can be applied right away.

mettamage 4 hours ago 0 replies      

The book Search Inside Yourself by Chade-Meng Tan is the best book I know. It describes scientifically studied exercises of the mind that you can do by yourself. It will boost your empathy much higher than anything I've experienced (or read about on ScienceDaily). It explains the science too, and you can look it up. One thing though: reading the book is part 1; part 2 is performing the exercises. If you don't perform the exercises, then reading the book has little purpose.

In my experience, I got to amazing levels of empathy by doing these exercises. I felt like I had godlike skills. My intuition could immediately signal me if a woman liked me (first time ever in my life). Two months later I was in a relationship. I could spot feelings that my friends had that they were not aware of.

There are some caveats though, which go for any book that will be presented here. The moment I stopped practicing, my level dropped back to a bit above my baseline before I started. So most structural gains are hard to keep, which goes for any trained skill. Another downside is that everything you see has a bigger impact on you. So when you'd go to an action movie, you'd feel like you're right in it. When you look at a rose you feel like you're a rose, that sort of thing. When you drink one sip of alcohol you already feel its effect on your perception in very subtle ways (that might be a good thing though).

Some final thoughts: I believe books train deliberation, aka the slow system. Exercises, mental or physical, train the fast system. Check out Thinking, Fast and Slow by Daniel Kahneman. I believe it's a useful approximation of how thinking works. Also, while empathy is a big component in becoming better, it's not the only component.

colinbartlett 9 hours ago 1 reply      
How to Win Friends and Influence People by Dale Carnegie should be required reading for anyone who interacts with human beings.
josephmosby 11 hours ago 0 replies      
"Nonviolent Communication" by Marshall Rosenberg was very helpful for me. A great resource about choice of words and the internal assumptions we make when we communicate in certain ways. http://www.amazon.com/Nonviolent-Communication-A-Language-Li...
protomyth 9 hours ago 0 replies      
Do some volunteer work. It looks good on your resume, it does some good in the world, and it improves your soft skills because it takes you out of your groove and sticks you in a new, non-threatening situation (hopefully). Plus, most volunteer events have mentors who will teach you how to deal with people. Do something simple, do not go overboard, and listen to how the professionals there deal with people.
flarg 12 hours ago 0 replies      
What actually helped me was to get married - it's the best training in soft skills that you can get.
soham 8 hours ago 1 reply      
Best way, is to actually put yourself in a situation where you have to live the skills you're trying to get better at. i.e. I'd suggest you work a short (part-time) side stint in Sales or Customer support or Recruiting, especially under a seasoned manager and sincerely carry a quota. The pressure of actually closing that deal will improve you at a rate nothing else will.

Books and everything else will definitely help, but I'd treat them as supplemental resources. You don't get good at soccer by reading about it; you've got to play it. You don't get good at coding by reading about coding; you have to actually write code.

Not saying you meant that you only want to read in order to get better and nothing else, but just trying to draw attention that getting better at soft skills is also about actual practice, like anything else.

[Me: http://InterviewKickstart.com. We practice for tech interviews, and we get better at those too]

bane 4 hours ago 0 replies      
I urge technical people to explore non-technical subjects and general "well roundedness". I've gotten immense relaxation and satisfaction from community art classes, martial arts, yoga, etc. There's a powerful argument that technical work is inherently creative, but creative work without the technical is something else entirely.

I also urge technical people to study history, language, speech, public performance and public speech giving. All of those things give a sense of perspective, and abilities to confer with partners and customers on a level that most technical folks don't understand.

Public performance in particular enables one to overcome lots of fears and be able to talk in front of both crowds and executives. This capability is often rewarded in important ways that build one's career...and the only way to get good at it is to do it.

zhte415 5 hours ago 0 replies      
Harvard's Program on Negotiation.

'Any meeting, discussion, or human contact, is basically a negotiation.' is their stance, with a real emphasis on role play. Position vs. Interest, Group vs. 1-1, etc.

The role play exercises can be downloaded from http://www.pon.harvard.edu/store/

* Free for checking / testing, with detailed notes for the trainer / post-exercise
* Low price for use (around $3/copy licensed use - super low for what they bring)

What do they bring? Really accelerated understanding of behavior (yours and theirs) in any interaction you have. This is done via role-play and reflection, not a read-it-and-know-it resource, so download a few and play them with colleagues.

nphyte 14 hours ago 2 replies      
How to Win Friends and Influence People - Dale Carnegie.
mwilliamson 11 hours ago 0 replies      
Made to Stick, by Chip and Dan Heath [1]. They ask "how is it that certain ideas seem to stick in our minds better than others?", and give concrete advice on how to improve the stickiness of your own ideas. I've found it useful for avoiding forgettable business waffle that fails to change people's minds or behaviour. One of the many examples they give is of Nordstrom (a fashion retailer). They could have said "we want to delight our customers". Instead, they use stories of employees that embodied those principles: ironing a shirt for a customer who needed it that afternoon, refunding tyre chains even though Nordstrom doesn't sell tyre chains.

[1] http://www.amazon.co.uk/Made-Stick-ideas-others-unstuck-x/dp...

firepoet 7 hours ago 1 reply      
I have many many recommendations. Here are a few of what I've experienced and practiced:

https://www.neuroleadership.com/education/bbc/brain-based-co...
Training -- Brain-Based Conversation Skills

http://www.quietleadership.com/index
Book -- Quiet Leadership

http://www.centerforappreciativeinquiry.net/
Training -- Appreciative Inquiry

http://www.amazon.com/Appreciative-Inquiry-Positive-Revoluti...
Book -- Appreciative Inquiry: A Positive Revolution in Change

http://aliainstitute.org/
Authentic Leadership in Action Institute (Buddhist foundation)

http://shambhala.org/
The path of meditation focused on creating "enlightened society." Starts with where most of the issue is: your own mind.

walterbell 13 hours ago 1 reply      
1) Tactical Office Politics



"This guidance probably should have been Chapter 1 of our Politics 101 series. It's foundational. It's a HUGE problem for many professionals, particularly young and, dare we say it, naive professionals. So many young people say, 'I don't play politics.' The more savvy folks around them think, that's good, because this isn't a game you can play."

2) Improve human memory, reduce dependence on high-latency offboard storage (paper, web)


3) GTD for Hackers, https://gtdfh.branchable.com

4) The language of organizational models/patterns. The book "Key Management Models" has a good overview, http://www.google.com/search?q=key%20management%20models. The 3rd edition has 75 org models which help when designing the model du jour.

5) Richard Hamming, "You and Your Research", 1986, http://www.cs.virginia.edu/~robins/YouAndYourResearch.html

"One of the characteristics of successful scientists is having courage. Once you get your courage up and believe that you can do important problems, then you can. If you think you can't, almost surely you are not going to. Courage is one of the things that Shannon had supremely. You have only to think of his major theorem. He wants to create a method of coding, but he doesn't know what to do so he makes a random code. Then he is stuck. And then he asks the impossible question, `What would the average random code do?' He then proves that the average code is arbitrarily good, and that therefore there must be at least one good code. Who but a man of infinite courage could have dared to think those thoughts? That is the characteristic of great scientists; they have courage. They will go forward under incredible circumstances;"

thisjustinm 14 hours ago 1 reply      
I recommend "Soft Skills: A software developer's life manual" by John Sonmez


ndespres 13 hours ago 1 reply      
1. Career Tools and Manager Tools podcasts. Career advice, interviewing help, resume-building, team interactions, navigating office life/culture, salary negotiation, having your voice heard, and many other topics discussed in a friendly and approachable way. I listen every week. https://www.manager-tools.com/

2. "How to Talk So Kids Will Listen & Listen So Kids Will Talk" isn't just for talking to children. Some good advice in here that applies to talking with adults also. http://www.amazon.com/How-Talk-Kids-Will-Listen/dp/145166388...

rrecuero 10 hours ago 0 replies      
From my point of view, it boils down to communication and self awareness. Nonviolent Communication that was mentioned before is a great book.

Also, I found that the Pathwise Leadership Program (http://pathwisemanagement.com/) has helped me a great deal in knowing myself and finding out how to frame my communication in the best way possible.

sopooneo 3 hours ago 0 replies      
Not a resource, so much as a technique, but I've just been baffled to find myself moving slightly into management, and my new mantra, whenever I'm not absolutely positive what I should be saying is: listen.
dmourati 11 hours ago 0 replies      
For negotiation, I can recommend: Getting More,http://www.amazon.com/Getting-More-Negotiate-Succeed-Work/dp...

The author, Stuart Diamond, gives workshops at my company. I was able to attend one early in my tenure there, and the book and workshop helped me understand how to use negotiation to get what I want.

mrmrcoleman 8 hours ago 0 replies      
They're not soft skills. They're really fucking hard skills: https://vimeo.com/134601419
thadd 9 hours ago 0 replies      
I've found the best is a combination of being social and spending a good amount of time reading. If you're looking for books, take a look at The Great Books of the Western World.
antonp 11 hours ago 0 replies      
"The Art Of Charm" podcast has some great episodes about that stuff. Check out some of these: http://theartofcharm.com/best/
pjmorris 5 hours ago 0 replies      
Pretty much the entire Jerry Weinberg canon, but 'Becoming A Technical Leader', and 'Secrets of Consulting' are two excellent places to start.
danappleman 7 hours ago 0 replies      
There are many soft-skill and career courses on pluralsight http://www.pluralsight.com/search/?searchTerm=career

(yes, some of them are mine)

enraged_camel 11 hours ago 0 replies      
Toastmasters is a phenomenal resource for improving one's public speaking and improvisation skills.
nickpsecurity 1 hour ago 0 replies      
Dale Carnegie's book is the obvious choice. However, Lifetime Conversation Guide by Van Fleet had a ton of specific stuff tailored to different situations. Stumbled upon it in a thrift store and bought it because of its thoroughness along with giving me a few good ideas.


qznc 13 hours ago 0 replies      

Wiki-style collection of this stuff.

       cached 2 August 2015 07:02:04 GMT