hacker news with inline top comments    17 Aug 2015 News
GPS Trackers in Fake Elephant Tusks Reveal Ivory Smuggling Route npr.org
84 points by merah  4 hours ago   17 comments top 5
1
adamnemecek 3 hours ago 2 replies      
If you live in Washington state and would like to do something about preventing further poaching, please vote Yes on Initiative 1401, which will be on the state ballot this fall [0].

Furthermore, anyone can donate to, for example, the International Anti-Poaching Foundation[1][2], which fights these poachers. The founder, Damien Mander[3], is an Australian ex-spec-ops sniper who is using his military experience to train the park rangers, since they, unlike the poachers, tend to be poorly equipped and trained as well as understaffed.

There is also the David Sheldrick Wildlife Trust[4][5], which takes care of elephant and rhino orphans (most of them are orphans due to poaching). For $50 a year, you can become a sponsor of a particular orphan and they'll send you photos and updates about how your sponsored orphan is doing. I've been giving these out as gifts with good success. You can, for example, sponsor this little fella [6].

[0] http://saveanimalsfacingextinction.org/

[1] http://www.iapf.org/

[2] https://en.wikipedia.org/wiki/International_Anti-Poaching_Fo...

[3] http://en.wikipedia.org/wiki/Damien_Mander

[4] http://www.sheldrickwildlifetrust.org

[5] http://en.wikipedia.org/wiki/David_Sheldrick_Wildlife_Trust

[6] https://www.sheldrickwildlifetrust.org/asp/orphan_profile.as...

2
flashman 3 hours ago 0 replies      
Check out the National Geographic article with much more detail: http://www.nationalgeographic.com/tracking-ivory/article.htm...

There's an interactive map too: http://www.nationalgeographic.com/tracking-ivory/map.html

3
userbinator 2 hours ago 2 replies      
A GPS tracker becomes useless if it can't see any satellites; having worked with GPS before and seen how significantly even a thin layer of metal can attenuate the signal, I'm curious as to how they were able to make this work. It seems highly unlikely that someone smuggling a tusk would leave it in clear view of the sky.
4
JumpCrisscross 2 hours ago 1 reply      
Is there an ethical issue with flooding the market with replica ivory? It would at least raise transaction costs and risks for poachers and their supply chains.
5
barsonme 3 hours ago 3 replies      
While I read the article, I'm currently unable to listen to either the NPR or Nat Geo audio pieces.

Could anybody elaborate on how they pass off the fake tusks to the smugglers? Do they, for lack of a better word, install the tusks on the elephants and wait for the elephants to be poached or do they sell the tusks to the smugglers so the smugglers can flip the tusks?

The network is hostile cryptographyengineering.com
163 points by pmh  7 hours ago   39 comments top 8
1
x5n1 5 hours ago 5 replies      
> hostile to the core values of Western democracies.

It seems the governments have a very different idea of what those values are than the people do. Until those ideas are aligned, governments are out to get the people. There is no point in any of this, because ultimately, no matter what technical solutions you can come up with, force and law always trump them.

Perhaps at some point you could make the argument that we didn't explicitly know what the government does, and that's why it was doing it and getting away with it. That's no longer the case. We know exactly what the government does, we don't think it's right, and yet we can do nothing to stop it. So either we need to overhaul government, or accept the status quo and quit bitching about it or trying to create technical solutions to fix social problems.

If the government can mandate that networks spy on computers, it can mandate that manufacturers spy on users. As they are already doing this, fixing the network solves nothing. As for foreign adversaries spying on users, well, if you are not in the US, avoiding that is impossible, as most of your computing experience is under regulatory capture by the US government.

2
nly 30 minutes ago 0 replies      
> Anyone who has taken a network security class knows that the first rule of Internet security is that there is no Internet security.

True, but not a useful observation, because we're stuck with the core of what we have. I think it's more worrying at the moment that nobody can be bothered to even deploy what we do have: TLS, OCSP stapling, HSTS, HPKP, DNSSEC. This stuff isn't difficult to deploy at the individual level, especially for this crowd. You can make a difference.

> We don't encrypt nearly enough

Ironic from a security-conscious cryptographer and blogger who isn't protecting his readers or himself with TLS. OK, Matt isn't using WordPress, but many do, and I wonder how many of them ever log in to moderate, or edit a post, over networks they don't entirely trust? WordPress has a built-in file editor and stores its config file in the docroot by default, for crying out loud... if someone gets your admin session cookie you're toast. They're one patch away from your password, and your commenters' passwords and email addresses, if they trust you with such, and can plant as much malware on your site as they please.

> It's the metadata, stupid

Yet Matt Green and Troy Hunt both use Blogger, effectively allowing their readers' interests and comments to be further pervasively catalogued by Google.

I'm not saying these minor hypocrisies are even one millionth as grievous as failing to prevent the NSA from wiretapping the UN, or even terribly important at all, but damn it... there are things we can all do instead of just pining for a privacy utopia that isn't going to come. If you want privacy to be the norm, then protect everything in your power, aggressively, NOW, every way you know how.

3
krallja 2 hours ago 0 replies      
This blog post isn't served over HTTPS, either:

 Secure Connection Failed The connection to blog.cryptographyengineering.com was interrupted while the page was loading. The page you are trying to view cannot be shown because the authenticity of the received data could not be verified. Please contact the website owners to inform them of this problem.

4
panarky 5 hours ago 1 reply      
All networks are hostile, not just the internet or "external" network.

Google's BeyondCorp [1] initiative recognizes this and treats the internal network as untrusted.

Instead of trusting a privileged network or VPN, securely identify devices and users assuming untrusted networks.

[1] http://static.googleusercontent.com/media/research.google.co... [PDF]

5
windexh8er 4 hours ago 0 replies      
Such a simple thought: "the network is hostile". Yet when you consider the implications of that statement across the board you see stop-gap after stop-gap to fill the void. And as Green points out - the state of the state is bleak when it comes to the surveillance state.

His closing point is very open ended. But think about how "network security" sells products in today's landscape: if, as Green suggests, these new systems fulfilled the goal of not having to worry about the network because they are designed with an inherent zero-trust model, how does the landscape of "network security" change? If the data path no longer needs protections (firewall, IPS, URL filtering, etc.), then does the endpoint radically change? Do we all end up with a containerized laptop with a front-end NGFW/UTM/security blob that is locally routed to my guest operating system of choice? And are the general functions broken out into secure segments so that I can work and play while minimizing the risk of a malicious actor exfiltrating corporate data while I browse the questionable reaches of the Internet?

Thought provoking, although, as Green states, I don't see many moving the ball quickly (yet?).

6
wavefunction 44 minutes ago 0 replies      
Too true, all networks are hostile until you take them over.

Then they're friendly, to a point.

By that I mean: witness the fitness of network evolution in a hostile environment.

7
sekasi 5 hours ago 0 replies      
One glaringly obvious problem with this concept is that this very article requires some basic insight into security engineering, and even for people who are interested, it can be hard to digest.

How do we (blanket statement) try to address the overall level of understanding that people have around this topic and make them understand that the problem is real, serious and needs significant thought?

I've thought about this a fair bit. Think about your average non-super-technical co-worker. How do we get them to see the problem in a clear way, and how do we rally people around the problem? I don't know how to, but I try and fail and try again. It's a tough gig. I do have an enormous man crush on Matthew Green though.

8
Zigurd 5 hours ago 1 reply      
This should be completely clear to the people running mass-market internet communication and storage services. And yet none of them encrypt payloads.

Ephemeral keys and forward secrecy are a solved problem for real-time and near-real-time communication. Why don't we have a Hangouts or Skype or Yahoo Messenger that is secure against the state-actor threat?

At some point we have to assume the companies providing these services have been persuaded to sell us all out.

CPAN is 20 perl.org
96 points by neilbowers  6 hours ago   12 comments top 6
1
smonff 4 hours ago 0 replies      
CPAN is an awesome tool. First, it's a good way to avoid reinventing the wheel. Second, its companion, the CPAN Testers community platform, gives authors a damn simple way to test their modules on a huge variety of OSes and architectures. These tests are public and help people choose distributions according to their needs and quality requirements. Third, it encourages testing and uploading modules at early stages of development: it simplifies the development process by letting you retrieve test results and be sure the modules will be usable on various OSes, even quite old or uncommon ones. It's almost disappointing sometimes how, for any task you need to accomplish, there is already an existing module ready to be used.
2
Isamu 4 hours ago 1 reply      
A good time to acknowledge that CPAN was a very influential example for internet libraries that came afterward.

I got a lot out of it for years, time to say thanks! You guys in the Perl community were awesome and still are!

A shout out to CTAN too!

3
jph 5 hours ago 0 replies      
Congratulations! I recall when it debuted and it was a huge accomplishment. 20 years onward and it's still a huge accomplishment, especially testing on so many systems. Thank you to all the maintainers!
4
sparaker 3 hours ago 0 replies      
I think it was one of the first well-done package managers. It solved the problem of installing modules across platforms. Most of the time it works without a hiccup. For a Perl developer, CPAN is a friend.
5
stevebmark 5 hours ago 2 replies      
How old is metacpan, the site that makes cpan useful?
6
dinesh_babu 1 hour ago 0 replies      
Prepping for the Transfer of 25,000 Manuals textfiles.com
75 points by r721  5 hours ago   13 comments top 6
1
donpdonp 4 hours ago 0 replies      
Context/Background link: http://ascii.textfiles.com/archives/4683

tl;dr: A technical manual reseller in Finksburg, MD is throwing out 25,000 high-quality paper manuals from as far back as the 1930s this week and Jason Scott & friends are driving there with "$900 of banker's boxes" to save whatever they can.

2
nadams 3 hours ago 0 replies      
I know many people may be thinking "just throw it out". But you don't understand: you may be faced with a system that is documented in one of those manuals, after some previous genius decided, when they cleaned their office, to toss that manual.

Or you may come into possession of one of those systems and have no idea how to use it. If Google hadn't crawled the manuals for certain products, I would have thrown away a number of electronic paperweights.

3
e0 1 hour ago 1 reply      
Why does he say the Linear Book Scanner (https://code.google.com/p/linear-book-scanner/) destroys books? I thought the idea of the Linear Book Scanner was to automatically scan books without cutting the binding.

Anyhow, cool project.

4
monochromatic 3 hours ago 2 replies      
> Why are these even worth anything or worth keeping, tidy your life, lighten up, etc. Either you really understand why 80 years of manuals, instructions and engineering notes related to 20th century electronics are of value both historically, aesthetically and culturally, or you dont. To try to make the case would be a waste of time for both of us.

I'm not sure I do understand the motivation, but I don't think that I'm beyond understanding it. Is it that some of these systems are still in service? Is it just the history/archeology aspect?

5
pmorici 1 hour ago 0 replies      
Looks like the place is in Finksburg, Maryland, about 30 minutes from Baltimore and an hour from DC.
6
userbinator 2 hours ago 2 replies      
Winning 1kb intro released at Assembly 2015 p01.org
160 points by joeyspn  16 hours ago   39 comments top 13
1
adam12 7 hours ago 0 replies      
Here is the UNPACKED source.

function u() { requestAnimationFrame(u);

 for (g = p ? B.currentTime * 60 : (B = 'RIFFdataWAVEfmt ' + atob('EAAAAAEAAQAAeAAAAHgAAAEACAA') + 'data', b.style.background = 'radial-gradient(circle,#345,#000)', b.style.position = 'fixed', b.style.height = b.style.width = '100%', b.height = 720, h = b.style.left = b.style.top = A = f = C = 0, 6177); g > f; h *= f % 1 ? 1 : .995) for (s = Math.pow(Math.min(f / 5457, 1), 87) + Math.pow(1 - Math.min(f / 5457, 1), 8), f == [b.width = 1280, 1599, 2175, 2469, 2777, 3183, 3369, 3995, 4199, 4470, 4777, 5120][C] && (C++, h = 640), f += p ? (c.translate(640, 360 + h / 45 * Math.random()), c.rotate(A / 5457 - h / 5457), c.scale(1 + s * 8, 1 + s * 8), 1) : (B += T((1 + s * 8) * Math.random() + (1 - s) * (h / 45 * (f * (2 + C / 3 % 1) & 1) + (C > 3) * 8 * (f * (2 + (f / 8 & 3)) % 1)) | 1), 1 / 512), i = p.length; i;) y = p[i -= 7], x = p[i ^= 1], r = p[i + 4], l = p[i + 6], s = 2 * Math.random() + 1, t = s * 4, a = 122, 640 > r ? (640 > Math.abs(p[i ^= 1] += p[i + 2]) || (p[i + 2] *= -1), 640 > Math.abs(p[i ^= 1] += p[i + 2]) || (p[i + 2] *= -1), t = Math.random() > p[i + 7] || p[i + 7] == '22312131212313' [C] & h == 640, w = x - A, p[i + 2] || r * r / 3 > w * w && (t = s * (r - Math.abs(w)) / 45 + 2, a = 2 * Math.random() + 5, p.push(A, 0, s * Math.sin(a += 599), s * Math.sin(a - 11), s * t, C + s, 640, .995), s = 2 * Math.random() + 1, a = 2 * Math.random() + 5, p.push(A, 0, s * Math.sin(a += 599), s * Math.sin(a - 11), s * t, C + s, 640, .995), s = 2 * Math.random() + 1, a = 2 * Math.random() + 2, p.push(A, 0, s * Math.sin(a += 599), s * Math.sin(a - 11), s * t, C + s, 640, .995)), a = p[i + 2] * y / 45, l = p[i + 6] = t ? 640 : .9 * l, t = r) : A = p[i] ++, g > f || (s = r, c.beginPath(), c.lineTo(x + s * Math.sin(a += 599), y - s * Math.sin(a - 11)), s = t, c.lineTo(x + s * Math.sin(a += 599), y - s * Math.sin(a - 11)), c.lineTo(x + s * Math.sin(a += 599), y - s * Math.sin(a - 11)), c.shadowBlur = r, s = l, x = s * 2, a = p[i + 5], c.shadowColor = c.fillStyle = 'rgb(' + [x + s * Math.sin(a += 599) | 1, x + s * Math.sin(a += 599) | 1, x + s * Math.sin(a += 599) | 1] + ')', c.fill()); p ? c.fillText('BLCK4777', 90, 99) : (B = new Audio('data:Audio/WAV;base64,' + btoa(B))).play(p = [f = C = 0, 0, 0, 0, 180, 2, 0, 1, -360, 0, 0, 0, 99, 1, 0, 2, 360, 0, 0, 0, 99, 1, 0, 3, -2880, 0, 0, 0, 1280, 0, 1280, 0])
};

2
bane 7 hours ago 1 reply      
Anybody interested in a well-written history of the scene that really helps explain it as a cultural phenomenon should probably read the demoscene chapter in "The Future Was Here", a book about the Amiga. It's probably one of the best-written synopses of the scene I've ever read; it really places it well as both a technical and artistic movement and helps provide context for this kind of work.
3
myth_drannon 4 hours ago 0 replies      
4
segmondy 4 hours ago 0 replies      
And today I was reading that Facebook's mobile Messenger has 18,000 classes and is about 100 MB, and this doesn't take into account other dependencies it has on the OS. These demos are 100% standalone, no external libraries. Progress is a funny thing.
5
X-Istence 1 hour ago 0 replies      
This is one of my favourite demoscene productions: https://www.youtube.com/watch?v=auagm5UBTwY
6
mrspeaker 7 hours ago 0 replies      
p01 does so much good stuff (http://www.p01.org/releases/) - the fact that he won Assembly 2015 with a JavaScript demo is testament to his skills!
7
gfody 7 hours ago 0 replies      
8
btzll 7 hours ago 1 reply      
I wonder if the bad performance is due to the <1kb requirement.
9
tantalor 8 hours ago 4 replies      
Why "intro"? As if "demo" wasn't confusing enough.
10
aikah 8 hours ago 1 reply      
Crashed my browser (Chrome latest, Win 8.1), but there is a video capture linked.
11
alt_ 8 hours ago 7 replies      
Cute, but doesn't really compare to non-js stuff: https://www.youtube.com/watch?v=qQNIKOD6WnY
12
cloudsloth 8 hours ago 0 replies      
Stunning.
13
replete 7 hours ago 1 reply      
Beware of sudo on OS X rongarret.info
140 points by lisper  9 hours ago   55 comments top 7
1
JakaJancar 5 hours ago 7 replies      
Honest question:

Why does anyone still care about root escalation on workstations? When do we stop pretending our MacBooks are multi-user UNIX mainframes?

App sandbox to full user privilege escalation may be scary. But if someone can run arbitrary code as my user, then by all means, have root as well.

2
gruez 6 hours ago 4 replies      
>What this means is that if you use sudo to give yourself root privileges, your sudo authentication is not bound to the TTY in which you ran sudo. It applies to any process you (or malware running as you) start after authentication. The way Apple ships sudo it is, essentially, a giant privilege escalation vulnerability.

But even if you enable TTY tickets, a malicious process on your system can still elevate itself by patching the shell (in memory, using /proc/<pid>/mem) to inject commands alongside the original sudo command. For example:

User types:

 sudo apt-get update
shell executes:

 sudo bash -c "apt-get update; evil.sh"

3
spitfire 6 hours ago 3 replies      
Funny, I always thought this was supposed to be a feature. It remembered your authentication for a few minutes after using sudo. I assumed it was part of the OSX auth system and would forget if you locked the screen.
4
X-Istence 3 hours ago 0 replies      
And this is a good reason to lock sudo down to a single application that is allowed to be run. In my case I only allow su for my user. Now even if an attacker were to try and use sudo they would also have to know that they can only use su, and most automated attacks will fail.
5
brobinson 9 hours ago 4 replies      
Is there some kind of advantage to this option not being set by default?
6
code_sterling 2 hours ago 0 replies      
7
jmhobbs 5 hours ago 0 replies      
I bet you're fun at parties.
The Interactive Way to Go playgo.to
59 points by fjk  6 hours ago   9 comments top 3
1
Gys 5 hours ago 1 reply      
Go, the game... Not the programming language as I expected on HN...

But some of its ancient Japanese wisdom is timeless ;-)

'Basically, I don't provide answers to problems because you will eventually find them by yourself after retrying many times.'

2
joshbuddy 3 hours ago 1 reply      
Man, we need this for bridge
3
decafbad 3 hours ago 0 replies      
How DuckDuckGo Rode a Wave of Post-Snowden Anxiety to Massive Growth fastcompany.com
53 points by bootload  2 hours ago   28 comments top 8
1
GeorgeOrr 1 hour ago 4 replies      
Am I the only one who actualy finds DuckDuckGo's search results overall better?

The main reason, I believe, is that Google's personalized search seems to get in the way.

Add to that the ! commands in DDG, and the fact that they aren't eliminating things from search results in the same way Google is, and I find DDG the better choice for search as well as privacy.

YMMV of course, as the word "better" is so subjective, but I'd be interested if others have that experience as well.

2
quaffapint 6 minutes ago 0 replies      
While it might be just the opposite of the main purposes of DDG - ie privacy, is there a way to have it show local results?

Like when I search for general things in Google, it will show me stuff around my area first, whereas in DDG it's just generic and could be anywhere.

3
bane 1 hour ago 2 replies      
Like a few other comments here, I switched to ddg a few months ago and only rarely need to go back to google. In fact, it's kind of nice to have a few places to search now (google, bing, ddg), reminds me of the old internet search engine war days.

Unlike the article, I didn't do it for security reasons, I actually gave it a go and found the search results better for most of the things I search for. And if I don't find what I want I can !g and it'll take me right to google's results so I get a direct like-for-like comparison.

I also generally like how DDG presents the results better than google. Little details like checkboxes next to places I've already been are super helpful and I miss those little touches in google. There's tons of little touches like that all over DDG.

I fall back to Google for about 1 in 30 searches, and in maybe only half of those do I find anything anyway.

About the only thing I don't like about DDG is the image search; Google is far superior here still. And Google sometimes does a better job bringing up maps of places I'm searching for. But I don't see changing back anytime soon, and getting better personal security "for free" is a nice side effect.

It also brings into question how important google's personalized searches are for relevancy if DDG can provide equivalent results without needing that information.

4
hardwaresofton 1 hour ago 0 replies      
Good for them. I'm glad they stepped up to serve the sector of people who wanted increased security when searching the web. Most wouldn't try to build a competing search engine these days.

I've been using DDG since I heard about it.

5
Dramatize 1 hour ago 0 replies      
I switched to DDG as my default search engine a few months back. Most of the time it's good enough. If not, a simple !g brings up Google's results.
6
dmfdmf 1 hour ago 1 reply      
The title as written is a smear against Snowden.

"How DDG Rode a Wave of Post-Snowden Security Concerns to Massive Growth"

7
gggggggg 1 hour ago 3 replies      
I have not used DDG in a while. Do they still get results from Google?
8
omginternets 1 hour ago 1 reply      
The problem with DDG, as much as I respect their efforts and their official discourse, is that you really can't be sure they aren't tracking users based on some secret court order.

For this reason, I still have a very hard time trusting them in any meaningful sense.

Tic Tac Toe: Understanding The Minimax Algorithm (2013) neverstopbuilding.com
55 points by madflame991  8 hours ago   9 comments top 8
1
nemesisrobot 15 minutes ago 0 replies      
The Berkeley AI class on edX[0] covers this, and other related algorithms, like alpha-beta pruning. It's a very fun class; recommended for anyone interested in this sort of thing

[0]: https://www.edx.org/course/artificial-intelligence-uc-berkel...

2
xiaoma 6 hours ago 0 replies      
One of my friends made a strong chess engine in high school: http://www.tckerrigan.com/Chess/TSCP/

It's far, far more sophisticated than this and still only about 2k lines of C. The source is available on the site linked above.

3
kkl 1 hour ago 0 replies      
What great timing. I _just_ finished a Minimax algorithm to programmatically play the game of Rota. I did this as a fun project to teach myself Golang. If this article seems interesting to you, I would suggest doing something similar.
4
im3w1l 6 hours ago 0 replies      
If used for larger games, one cannot afford to examine the whole tree. The solution is to stop at some search depth and then estimate how good the position is with some heuristic.

One common optimization is alpha-beta pruning. It is applicable if you have already found a good move (when we use a heuristic we don't know for sure whether it will win or not, only whether it is good or not), and consider another candidate move. If the latter move has a strong counter, that means it will for sure be worse than the good move. We can then immediately discard the weak move from consideration; there is no need to find out just how weak it is. To make best use of this, one should check promising moves first, because then the bar for potential moves will be higher and more pruning can be done.
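
A minimal, untested Python sketch of depth-limited minimax with alpha-beta pruning; the game interface here (is_terminal, evaluate, legal_moves, apply) is made up for illustration, not taken from the article's Ruby:

  # Depth-limited minimax with alpha-beta pruning (illustrative sketch).
  # 'state' is assumed to expose is_terminal(), evaluate(), legal_moves(), apply(move).
  def alphabeta(state, depth, alpha, beta, maximizing):
      if depth == 0 or state.is_terminal():
          return state.evaluate()  # heuristic score from the maximizer's point of view
      if maximizing:
          best = float('-inf')
          for move in state.legal_moves():  # checking promising moves first prunes more
              best = max(best, alphabeta(state.apply(move), depth - 1, alpha, beta, False))
              alpha = max(alpha, best)
              if alpha >= beta:
                  break  # prune: this branch is already worse than an option found elsewhere
          return best
      else:
          best = float('inf')
          for move in state.legal_moves():
              best = min(best, alphabeta(state.apply(move), depth - 1, alpha, beta, True))
              beta = min(beta, best)
              if alpha >= beta:
                  break
          return best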

5
sparaker 2 hours ago 0 replies      
Had fun trying out your example. I really like this "You will note that who the player is doesn't matter. X or O is irrelevant, only who's turn it happens to be."
6
leetrout 7 hours ago 0 replies      
Cool writeup. I botched an implementation of negamax[0] for a coding interview for CMG a few years back.

Ultimately, with algorithms like minimax or negamax, if your scoring/winning algorithm is slow, that cost will compound on top of everything else. I didn't dig deep into OP's code, nor do I "know" Ruby, but a quick glance at the scoring/checking code showed a yield in use, so that's good :D

[0] https://en.wikipedia.org/wiki/Negamax

7
chaosfox 7 hours ago 1 reply      
>Basically the perfect player should play perfectly, but prolong the game as much as possible.

hmm no, "masters" will concede the game when they realize they lost, if conceding isn't an option I believe making a play that will end the game faster is the next best.

8
jrobertfox 6 hours ago 0 replies      
I'm glad this is useful for people. Thanks :)
Marvin Minsky's Homepage mit.edu
48 points by jessup  7 hours ago   29 comments top 5
1
chestervonwinch 1 hour ago 0 replies      
An interesting quote from his biography page next to his Perceptrons text:

> ... Many textbooks wrongly state that these limits apply only to networks with one or two layers, but it appears that those authors did not read or understand our book! For it is easy to show that virtually all our conclusions also apply to feedforward networks of any depth (with smaller, but still-exponential rates of coefficient-growth). Therefore, the popular rumor is wrong: that Back-Propagation remedies this, because no matter how fast such a machine can learn, it can't find solutions that don't exist. Another sign that technical standards in that field are too weak: I've seen no publications at all that report any patterns that order- or diameter-limited networks fail to learn, although such counterexamples are easy to make!

2
adamzerner 47 minutes ago 3 replies      
Not to complain, but I find it surprising that the web pages for such prominent academics/people are often so ugly. Why not pay some web designer to spruce it up? Or maybe have the department/school use a nice template? Seems worth it to me.
3
kleer001 4 hours ago 2 replies      
Why would you put your relatives' homepages and email addresses on your homepage? It strikes me as strange, but I'm sure there's a sensible reason.
4
westoncb 7 hours ago 1 reply      
Anyone have opinions on his "The Emotion Machine"? I'd read "The Society of Mind" as a teenager and it was hugely influential on me, but I read a few pages of The Emotion Machine last year, it felt very different, and I ended up not continuing.
5
jordigh 7 hours ago 6 replies      
Is Old Man Minsky the only one who still thinks that machines should do AI instead of machine learning? He certainly seemed to believe that in 2007. Who is still on his side?

http://snarkmarket.com/blog/snarkives/societyculture/old_man...

Anna Mikusheva refines time-series econometrics tools to improve forecasting mit.edu
59 points by user_235711  15 hours ago   11 comments top 3
1
animefan 6 hours ago 1 reply      
I did my PhD in econometrics. Reading this, it looks like a general-interest article profiling a professor who just got tenure. So if people are asking "how is this relevant to me", "how can I use this", or "should I learn more", the answers are most likely "it's not", "you can't", and "no". Not trying to be negative here; it's just that there are dozens of people getting tenure in econometrics every year at a similar level, all working on things that might seem to have some relevance to what HN readers are working on. From an HN reader's perspective, there is nothing unique or special that I see here.
2
grayclhn 6 hours ago 0 replies      
The lecture notes for Mikusheva's graduate time series class are available through OCW: http://ocw.mit.edu/courses/economics/14-384-time-series-anal...

The math behind her research is a bit daunting...the course notes aren't trivial, but there's more intuition presented there than if you try to dive into her publications.

Addendum: just to add, the course notes give a pretty good picture of the background knowledge necessary to be a time-series econometrician. If you're interested in the subject, check them out.

3
baldeagle 7 hours ago 2 replies      
Here is a link to her MIT page that contains her papers. I tried skimming one but couldn't follow it; however, based on the article, I'm more interested in econometrics and might pursue that study further.

http://economics.mit.edu/faculty/amikushe/papers

A classic cracking challenge 3564020356.org
40 points by simonjgreen  8 hours ago   9 comments top 4
1
0x0539 2 hours ago 1 reply      
It's a classic all right, but just a warning to everyone: being a classic, some of it does require going back in time a little, like dealing with a 16-bit binary.

If this type of stuff interests any of you but is too hard, give mine a try (0x0539.net). It is not intended to be a significant challenge; instead, all the stages focus on introducing some basic concept related to offensive security. It's mostly aimed at some young teens who have expressed an interest in learning that stuff, so the target is very introductory.

I update the site every so often with new sets of challenges and rotate through former sets if someone requests it. The current one I ran for a bit in 2013 and then brought back earlier this year, and I plan to cycle in a new binary-exploitation-focused one in December.

2
ishtu 5 hours ago 0 replies      
3
foobeef 5 hours ago 1 reply      
Wonder what happened to +Mal and the other +HCUers now. Probably still in the reversing business... Good ol' days.
4
83a 6 hours ago 1 reply      
Rust web framework, Iron ironframework.io
86 points by lding43  6 hours ago   48 comments top 5
1
tinco 5 hours ago 6 replies      
The first thought that should pop into your head is probably, why use this instead of language XYZ, where XYZ is Ruby/Java/Python/Go. It's actually a pretty interesting question because Rust is very much unlike those languages in the sense that it's a true systems programming language like C/C++ are.

No one would (should!) ever consider writing a webservice in C++, simply because it's unsafe to do so, it's much easier in other languages, and the performance downsides of those languages don't matter anyway (e.g. Ruby powers a bunch of high-throughput websites yet is notoriously ~200x slower than a systems language).

But here comes Rust: it takes away the unsafety, arguably being even safer than the managed languages in some respects, is (in most respects) easier to use than C++ (coming close to the managed languages in ease of use), yet has similar performance characteristics (theoretically identical) to C/C++.

There's a web gateway in our cluster that receives binary blobs over REST and puts them onto the message queue. In total not much over 100 lines of Ruby. We've thought about reducing the server load a bit by porting it to Go or some other more performant language. We probably wouldn't go for Rust, since Go is simply easier to learn (should a new person ever have to perform maintenance). Just the possibility of implementing it in Rust in roughly the same timespan and code complexity while having a theoretically optimal performance is very interesting.

2
jonreem 3 hours ago 1 reply      
Main author of iron here, on mobile but happy to answer any questions.

Here's a link to the first chapter of an iron tutorial I've been working on, which explains the "hello world" example in great detail and introduces some of irons core abstractions: https://github.com/iron/byexample/blob/master/chapters/hello...

3
hammerdr 5 hours ago 4 replies      
Warning: very little knowledge of Rust and Iron

Can someone explain why there is an extra Ok(...) in this? (I want to call it a function, but I'm not even familiar enough with Rust to be sure that it is a function).

Is it something that could be removed? Right now it just looks like boilerplate.

Edit: Thanks everyone! Ok is similar to a Right or Success of an Either/Result type.

4
steveklabnik 5 hours ago 1 reply      
Crates.io uses Rust as a backend, Ember on the front. It uses about 35MB of resident memory, and (other than some weird DNS issues that aren't the server's fault) is just super rock solid. (It doesn't use Iron, though.)

I'm still not sure the application tier isn't best served by something that's easier to prototype in, but if you already know Rust, the web stuff is shaping up pretty nicely.

We also got Hyper, Iron, and Nickel entries into the Techempower benchmarks, I'm really interested in seeing the eventual results.

5
ejcx 4 hours ago 5 replies      
Specters of a Civilization nybooks.com
35 points by whocansay  7 hours ago   1 comment top
1
rdtsc 6 hours ago 0 replies      
> In addition to the fatal scourge of measles and smallpox that decimated other Amerindian groups, the Selknam were singled out in the 1890s for a campaign of genocide: Romanian engineer Julius Popper paid bounties for Selknam heads and ears and organized hunting parties to clear them from the territory to make way for miners and ranchers.

Never heard of Popper before. Saw the reference to Romania and thought, oh great, another terrible thing about Romania. What have they done now...

Anyway here is the wikipedia entry on him:

https://en.wikipedia.org/wiki/Julius_Popper

Generative testing for JavaScript github.com
37 points by luu  8 hours ago   6 comments top
1
Dru89 5 hours ago 3 replies      
I'm a bit torn here. On the one hand, this sounds like it has a high probability of doing exactly what unit tests shouldn't do: pass on one run and fail on another.

On the other hand, the fact that it would fail at all would help you see that you have a bug. Something you might not have caught before.
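
For anyone who hasn't seen generative (property-based) testing before, the shape of a test looks roughly like this; the sketch below uses Python's hypothesis library purely as an illustration, not the JavaScript library linked here:

  from hypothesis import given
  from hypothesis import strategies as st

  # The framework generates many random inputs and shrinks any failure
  # down to a small counterexample, so a red run points at a concrete bug.
  @given(st.lists(st.integers()))
  def test_sorting_is_idempotent(xs):
      once = sorted(xs)
      assert sorted(once) == once

  @given(st.lists(st.integers()))
  def test_sorting_preserves_length(xs):
      assert len(sorted(xs)) == len(xs)

On the flakiness point, tools in this space typically let you pin the random seed and replay recorded failing examples, which keeps a failure reproducible once it has been found.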

MovieLens Non-Commercial, Personalized Movie Recommendation System movielens.org
40 points by wang42  12 hours ago   19 comments top 7
1
RodericDay 1 minute ago 0 replies      
I'm torn. On the one hand, I just spent a few minutes rating about 100 films, and the quality of the recommendations is so much better than Netflix's garbage that it's mindblowing. As in, we've spent half an hour browsing Netflix trying to find something to watch and settling for something meh, whereas here I have a full page of recs with 5 minutes input.

On the other hand, I don't know if I'm very impressed with anything yet; the recommendations seem obvious. It's more like a neat visual collection manager; I'm not sure I "feel" anything crazy going on in the background beyond what looking at IMDb best-of lists would produce. In other words, the value seems to be in the Netflix-ish "cover collection" display, unconstrained by the limits of their limited selection.

2
posborne 2 hours ago 0 replies      
Here's the dataset they use. I have used this as part of developing and testing the fitness of recommender systems in the past: http://grouplens.org/datasets/movielens/

This predates some of my more recent work with the grouplens database, but here is a parser I put together for the data awhile back: https://github.com/posborne/mlcollection/blob/master/mlcolle...
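
If you just want to poke at the data yourself, the classic 100k release is easy to load; a rough sketch, assuming the tab-separated u.data file from that download:

  import csv
  from collections import defaultdict

  def load_ratings(path="u.data"):
      # Each u.data row is: user_id <TAB> item_id <TAB> rating <TAB> timestamp
      ratings = defaultdict(dict)
      with open(path, newline="") as f:
          for user, item, rating, _ts in csv.reader(f, delimiter="\t"):
              ratings[int(user)][int(item)] = float(rating)
      return ratings

  # ratings = load_ratings()
  # print(len(ratings), "users and", sum(len(v) for v in ratings.values()), "ratings")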

3
MichaelGG 36 minutes ago 0 replies      
Proof that Netflix doesn't need all that effort to deliver a laggy, annoying UI. Seriously, the core functionality is essentially the same. MovieLens seems like it might replace Netflix for me: not-terrible recommendations and a decent UI. Though I've only been using it a few days, so we'll see, but it can hardly end up worse than Netflix, eh?
4
dmix 5 hours ago 2 replies      
This looks interesting, but I'm always sad to see consumer web apps still requiring you to sign up before you can try them out. They're missing out on so many potential users. 1) Sell the product, then 2) ask for the customer's info.
5
nobody_nowhere 7 hours ago 1 reply      
Oh wow, blast from the past. I remember rating hundreds of movies on this back when collaborative filtering was fresh and new...
6
pen2l 5 hours ago 4 replies      
Despite there being so many of these sites, most of them kind of fail for me.

For instance... last night I was wanting to watch a movie, but the kids needed to watch it too (as they can't be left unattended with me watching a movie in the other room :)).

So I went off on a search for a movie to watch... with a rating filter -- it needed to be either G or PG.

Nothing. No recommendation site, Netflix, or anything let me filter movies/shows this way. Yes, you have the 'Kids' section on Netflix, but most of the stuff there is deathly boring for adults, I was hoping for something good for both kids and adults.

I ended up not watching anything.

7
cryowaffle 5 hours ago 1 reply      
Neat, I worked on this project while at the U of M under the instruction of the late John Riedl.
Ceptre: A Language for Modeling Generative Interactive Systems [pdf] cmu.edu
33 points by mindcrime  9 hours ago   3 comments top 2
1
maxwelljoslyn 5 hours ago 1 reply      
As a linguistics student and amateur programmer I find linear logic to be very interesting since it gets used both in linguistics (associated with lexical-functional grammar) and, of course, math and computing.

I think it would be cool to parse the output tree of a Ceptre program and produce some kind of display (other than a visual tree, of course). Maybe set the stages to auto, run it a bunch of times, then produce some statistical analysis from the results: number of deaths per character, average speed at which a given character dies, and so on.

Anyone know if a Ceptre parser is available? They must have written one for this article.

2
smosher_ 4 hours ago 0 replies      
I'm a big fan of this work. For the PDF-shy, there's some excerpted text on LtU: http://lambda-the-ultimate.org/node/5216 but you'll miss some of the goodies, like the graph of a scenario from Romeo and Juliet, showing the actors act concurrently at different locations.

See also the github repo: https://github.com/chrisamaphone/interactive-lp/

There is an emphasis on using Ceptre for developing interactive fiction (text adventures/parser games.) From my perspective, a pain point in developing truly creative interactive fiction is the tools available tend to impose a world model on your work. I'm excited for Ceptre because it lets you write causal relationships directly and that makes starting from scratch a more realistic proposition.

1. With some work I think it could be (and should be) used in other kinds of games, such as sandbox games or anything that would benefit from a living world, even if only in part if the main story must be nailed down and deterministic.

A* Search kartikkukreja.wordpress.com
59 points by kartikkukreja  13 hours ago   15 comments top 9
1
ggambetta 10 hours ago 2 replies      
The A* algorithm is a really interesting one. It's conceptually quite simple if you approach it from the right angle, but most of the time I've seen it poorly explained, and as a consequence, people really struggling to "get it".

My little contribution is a very accessible series of articles that derives A* almost from scratch, in a way that is really easy to understand: http://gabrielgambetta.com/path1.html

2
octaveguin 24 minutes ago 0 replies      
One of the most-used approaches in industry to pathfinding in things like RTSes is what's called a flow field. It's a bit like a precalculated pathfinding map.

These are some of the most interesting for what you can accomplish (pathfinding a ton of units at once!).

A great resource for that is this blog:

https://howtorts.github.io/

It has working javascript examples and other related topics like flocking behavior.

3
c3534l 20 minutes ago 0 replies      
An example of the A* search algorithm in use: https://www.youtube.com/watch?v=DlkMs4ZHHr8
4
TheLoneWolfling 50 minutes ago 0 replies      
What I find interesting is that A*, DFS, BFS, Dijkstra's algorithm, JPS, etc. are all basically the same algorithm. The only real difference is the queuing priority. (BFS is FIFO, DFS is LIFO, Dijkstra is min-distance-from-start-first, A* is min-distance-from-start-plus-(ideally admissible)-heuristic-distance-to-end-first.)

Most CS courses I've seen teach them as entirely separate algorithms, which I find... frustrating. Or rather, unnecessarily complex.
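
To make that concrete, here's a rough, untested Python sketch of the shared skeleton; the neighbors, cost, and priority callables are placeholders, and swapping the priority (or the queue discipline) is what turns it into Dijkstra, A*, BFS, or DFS:

  import heapq, itertools

  def best_first_search(start, goal, neighbors, cost, priority):
      # priority(g, n) = g          -> Dijkstra
      # priority(g, n) = g + h(n)   -> A* (h an admissible heuristic)
      # a plain FIFO or LIFO frontier instead of the heap -> BFS or DFS
      tie = itertools.count()              # tie-breaker so nodes are never compared directly
      frontier = [(priority(0, start), next(tie), 0, start, [start])]
      best_g = {}
      while frontier:
          _, _, g, node, path = heapq.heappop(frontier)
          if node == goal:
              return path
          if best_g.get(node, float('inf')) <= g:
              continue                     # already expanded this node at least as cheaply
          best_g[node] = g
          for nxt in neighbors(node):
              ng = g + cost(node, nxt)
              heapq.heappush(frontier, (priority(ng, nxt), next(tie), ng, nxt, path + [nxt]))
      return None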

5
suby 9 hours ago 2 replies      
If you're interested in pathfinding, you might enjoy this video. http://www.gdcvault.com/play/1022094/JPS-Over-100x-Faster-th...

Taken from the description:

In 2011, Harabor and Grastien introduced Jump Point Search (JPS) that achieves up to a 10x speed improvement over A* on uniform cost grids. In the last year, an additional 10x speed improvement to JPS, called JPS+ was independently developed by Steve Rabin as well as Harabor and Grastien. This improved algorithm is over 100x faster than A* on maps with open areas and over 2x faster than A* on worst-case maps. This incredible speed-up is due to pre-computation, eliminating the recursion in JPS and focusing only on touching select relevant nodes during the search.

6
logane 9 hours ago 1 reply      
Amit Patel has a great series on A* pathfinding: http://theory.stanford.edu/~amitp/GameProgramming/AStarCompa...
7
justinhj 5 hours ago 0 replies      
I wrote a tutorial for this too, which includes C++ source code that's been used in some successful video games and thereby tested by millions of people: http://heyes-jones.com/astar.php
8
akeruu 9 hours ago 0 replies      
A nice way to visualize A* with different heuristics : https://qiao.github.io/PathFinding.js/visual. Also supports lots of other algorithms.
9
bpp 9 hours ago 0 replies      
This article was really helpful to me in understanding A*: http://www.briangrinstead.com/blog/astar-search-algorithm-in...
Search-Script-Scrape: Web scraping exercises in Python 3 for data journalists github.com
40 points by danso  10 hours ago   9 comments top 5
1
thuruv 34 minutes ago 0 replies      
Others might fail to understand that these are the tools, not the talents, needed to pursue their career.
2
Sven7 6 hours ago 1 reply      
If you are a young journalist being told "data journalism" is the must have resume bullet point for the future, here's some unsolicited advice from someone in techland who has worked with journalists.

1. Don't waste your time on this stuff if you have no interest/aptitude for it. I see people being pressured into it when it's not the right fit. The kind of people who will have success with this are the Nate Silvers of the world, who are really domain experts dabbling in journalism.

2. Being a journalist gives you access to data and access to experts. Bring the two together whenever you can. It takes time and skill to develop that access, and in most cases it's time better spent than learning Python. Matt Taibbi is a good example of this. He was able to make sense of something complex (the 2008 meltdown) by bringing the data and the experts together. No Python necessary.

3
gtrubetskoy 4 hours ago 1 reply      
Unless I'm missing something, the README doesn't mention that all the examples rely on "requests" (which is neither in the standard library nor Python 3 specific, so the title is a tad misleading): https://pypi.python.org/pypi/requests
4
alexcasalboni 5 hours ago 1 reply      
Many of those scripts will most likely fail within a few weeks, as their data extraction logic is way too simplistic and based on unstable, non-semantic HTML structures (e.g. doc.cssselect('small a')[0]).
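
A slightly more defensive version of that kind of extraction at least fails loudly when the layout changes instead of silently grabbing the wrong element; this is a generic sketch (hypothetical URL and selector) using requests, lxml, and cssselect:

  import requests
  import lxml.html

  def scrape_first(url, selector):
      # Fetch the page and refuse to guess if the selector stops matching.
      resp = requests.get(url, timeout=10)
      resp.raise_for_status()
      doc = lxml.html.fromstring(resp.content)
      matches = doc.cssselect(selector)
      if not matches:
          raise ValueError("selector %r matched nothing; the page layout may have changed" % selector)
      return matches[0].text_content().strip()

  # Hypothetical usage:
  # print(scrape_first("http://example.com/", "small a"))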
5
j4kp07 6 hours ago 1 reply      
Maybe I am being picky, but is traversing JSON files truly "web scraping"?
Linux-Insides: Interrupts and Interrupt Handling, Part 10 github.com
42 points by 0xAX  13 hours ago   1 comment top
1
tbrock 11 minutes ago 0 replies      
I just started reading the source + commentary for MIT's xv6 (a modern x86 adaptation of Unix V6) and I can't wait to use Linux-Insides to take what I learn and apply it to Linux.

Bravo to whoever 0xAX is, what a great resource!

Deep learning for assisting the process of music composition highnoongmt.wordpress.com
60 points by albertzeyer  13 hours ago   28 comments top 16
1
pierrec 11 hours ago 0 replies      
Well, this field is really exploding right now! I was curious about the performance and searched around a bit: in another other post, the author gives a slightly more detailed explanation of how the tunes are automatically turned into audio:

"I convert each ABC tune to MIDI, process it in python (with python-midi) to give a more human-like performance (including some musicians who lack good timing, and a sometimes over-active bodhran player who loves to have the last notes :), and then synthesize the parts with timidity, and finally mix it all together and add effects with sox."

https://highnoongmt.wordpress.com/2015/08/07/the-infinite-ir...
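
For the curious, a rough, untested sketch of what that kind of render chain might look like; abc2midi, timidity, and sox are the tools he names, but the exact flags, effects, and file names below are my assumptions, not taken from his script:

  import subprocess

  def render(abc_file):
      # ABC notation -> MIDI, MIDI -> audio, then effects/mixing.
      subprocess.check_call(["abc2midi", abc_file, "-o", "tune.mid"])
      subprocess.check_call(["timidity", "-Ow", "-o", "tune.wav", "tune.mid"])
      subprocess.check_call(["sox", "tune.wav", "tune_final.wav", "reverb"])

  # render("session.abc")  # hypothetical input file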

The generation of tunes by the RNN is pretty nice and definitely the trending topic, but I think I'm more impressed by the little performance script that he's put together. The output is quite pleasant and I'm curious about the code that generates the bodhran part. Hope this gets open-sourced!

(Off-topic to the guy who submitted this: thank you for making OpenLieroX and turning my university into a chaotic LAN party on many an occasion.)

2
bane 8 hours ago 1 reply      
I really appreciate the effort that went into the performance part of this work. There was a real effort to try and make it sound like a reasonable representation of humans playing...a little off beat, out of sync at times. Instead of just hammering the notes out like I hear with lots of these systems, it makes it listenable...I've had the endless trad on for 15 minutes now in the background.

I also like how the basic structure of the musical forms has mostly carried through the model, that seems to be a good "sniff test" if the model is producing reasonable output, if the musical structure makes sense as well as the notes. It makes it feel like there was a little bit of planning.

Great work.

3
Yenrabbit 7 hours ago 0 replies      
This is great! He is using deep learning as it should be used in regards to music: not as the sole generator of songs (no technique is quite up to that yet) but as a source of inspiration for a 'proper' musician, who can take its output and do cool things with it. As a bagpipe player, I can hear ideas for several new pieces among the output he posted! I see this in the same line as IBM's 'Chef Watson': great if a sufficiently skilled person is there to supervise :) Good work.
4
andkon 12 hours ago 1 reply      
Fiddle player for the last 18 years: can confirm that this is pretty much what most traditional music circles sound like (especially their endlessness).
5
archagon 4 hours ago 0 replies      
This is really interesting, but it also makes the failure points of computer generated music evident. Namely: it works best with simple, "jammy" music; it requires a large existing base of music to pull from; and it has no way of using compositional techniques for rhetorical effect. (Increasing/decreasing tension, "going somewhere", expressing emotions, etc.) In other words, it can't create innovative music or music that has something interesting to say. It can only recombine the past.

I like the idea of using these tools for educational purposes, however. Can we derive the "rules" for music of different cultures by feeding them through this kind of algorithm? If so, it would give us a fantastic insight into different musical traditions around the world, even if it couldn't write that music for us.

6
tgb29 7 hours ago 1 reply      
I've now read several posts on deep learning and music composition on this forum, and there appears to be good development on this front. If I had the ability to contribute, I would use these methods and theories to try to create new genres of music that blend existing/contrasting cultures together.

For example, Western pop/hip hop blended with Arabic/Middle Eastern tunes. How can Western artists use deep learning / pattern recognition models on a set of popular Middle Eastern tunes to create new, modern melodies that can be produced with Western lyrics, resulting in modern songs that are popular with both Western and Arabic demographics? The same concept can be applied in reverse, so that Arabic or Indian/Asian styles can be measured, modified, and released in Western markets.

So we have models that are creating melodies based on their training set, and we also have artists using these new melodies to produce new music that can reach broader or different markets. More money opportunities and more blending of cultures, which I think is good for society.

One could argue this capability already exists and the current top 40 hits are melodies scientifically created to be catchy to the ears of consumers, based on patterns in the previous months' popular top 40 hits. If so, then I would like to see this ability applied to bringing conflicting cultures together and creating new music markets.

7
tgb29 6 hours ago 0 replies      
I would also be interested in these methods being applied to lyrics, but I think this is too far off right now. The emotions and feelings that come from lyrics are different from the emotions and feelings that come from the melodies. Furthermore, the emotions and feelings that come from the special combination of certain lyrics over certain melodies are a whole separate challenge. I do think we will get there, but the linguistics barrier needs to be overcome or better understood.

Today I was with an English major and we talked about how these models could be developed to one day create unique novels for each individual. At first she tried to say there was a difference between the patterns found in music and the patterns found in, for example, romantic novels. But as our conversation progressed, we began to reach the conclusion that many, even most, novels do follow a set of patterns that can be identified and then possibly replicated to form new novels. This challenge is, for now I think, significantly greater because, for example, a character in a novel is fully dependent on the context of the words used to build the plot and theme. The emotion and meaning behind a character mean there is an infinite number of character and context combinations that can be used in a novel to convey that meaning. The character could be a male, a female, a robot, an animal, a plant, located on Mars or in China or in the South in 1950 or 2066 or 550 BC. As long as the meaning and theme are consistent, the character and context can be anything, and so for a program to write a successful novel that competes with Shakespeare or King, our deep learning models need to better understand linguistics and the meanings behind words.

8
dang 8 hours ago 0 replies      
This was posted twice. We kept this thread as the earlier of the two, but changed the URL to the more explanatory post. The other URL is http://www.eecs.qmul.ac.uk/~sturm/research/RNNIrishTrad/inde... and actually plays the music. The other HN thread was https://news.ycombinator.com/item?id=10069007, but we moved the comments here.
9
vonnik 3 hours ago 0 replies      
People interested in the history of computer-generated music should look into David Cope's experiments in musical intelligence: http://artsites.ucsc.edu/faculty/cope/experiments.htm
10
raverbashing 11 hours ago 2 replies      
This seems to be almost getting it to work in a musical sense. (The Irish songs seem to be simple enough for the NN to understand them back to front; also, a lot of 'similar' samples definitely helps.)

The NN seems to be able to assemble the repeat sections with different endings and produce a song with two distinct sections.

But they all seem to be in the same key and time signature.

11
scottlocklin 3 hours ago 0 replies      
You could use LZW or prefix trees for assisting music composition. Rather less mystification with those tools though.
12
kephra 11 hours ago 0 replies      
The generated snippets have a lot of pentatonic intervals. Of course pentatonics make music more harmonic, but we are used to more dissonance adding color. I cannot look at the sheet music, but I guess the generated music has less dissonance than original Irish fiddle.
13
maxki 8 hours ago 0 replies      
It has a lot of similarity with human-made trad music, but to my ear it sounds very different from music. Something is definitely missing. Inflatable dolls look like the real thing, somewhat, on the surface... still a long way to go for synthetic music to fool a musician's ear, IMHO...
14
tonetheman 11 hours ago 0 replies      
This type of stuff is super cool.
15
pierrec 10 hours ago 1 reply      
Submitting this twice simultaneously may be causing more confusion than anything. I didn't see this one at first and only commented on the other one (which seems to be magically higher on the front page):

https://news.ycombinator.com/item?id=10069007

16
monochromatic 9 hours ago 0 replies      
MergeShuffle: A Very Fast, Parallel Random Permutation Algorithm arxiv.org
58 points by user_235711  15 hours ago   8 comments top 3
1
blt 7 hours ago 1 reply      
Wait, so the authors' method is only a tiny fraction faster than the parallel version of the more general Rao-Sandelius method, and uses more random bits? But they claim it's "very fast" and "extremely fast"? Am I missing something?
2
dripton 8 hours ago 3 replies      
A fun read.

Does anyone know of a practical use for a very fast, parallelizable shuffle algorithm that uses few random bits? All the shuffling I've done has used small enough N that Fisher-Yates was just fine.

3
davidshepherd7 8 hours ago 0 replies      
Just in case the authors are reading the comments: the second paragraph of section 1.1 ends mid-sentence, and even worse is missing the closing parenthesis!
Octopus genome holds clues to uncanny intelligence nature.com
82 points by fitzwatermellow  15 hours ago   15 comments top 7
1
Calcite 8 hours ago 1 reply      
I've seen a few octopus while scuba diving in tropical waters. They are the most interesting species underwater. You can look at them for five minutes and it's absolutely fascinating. The way their skin changes texture and color is mesmerizing. It really feels like an alien life form because nothing else looks or behaves like that.
3
matwood 8 hours ago 0 replies      
Funny and informative video about the octopus.

https://www.youtube.com/watch?v=st8-EY71K84

5
fit2rule 8 hours ago 2 replies      
I am a huge fan of octopuses. I recall many a day spent in my youth on Australian beaches, looking through crack and crevice for these delightful - and sometimes dangerous - creatures. (Blue Ring Octopus: delightful, and deadly.)

I've seen them doing all sorts of things, and learned a few tricks for how to deal with them. One very important thing for an octopus lover to understand is that they are absolutely entranced by the bright and shiny - I have yet to meet an Australian octopus I couldn't entice out of its shelter with the flash of a gold coin .. just get the sun angle right, shine the coin in the hole, and out they come .. be prepared to leave the reef poor, because once that coin gets grabbed, it's all over! Somewhere in the holes of Yanchep, there's a small pile of coins .. I'm quite sure. (At least $5.)

I've seen octopus fishing! That is to say I've seen them using tools! I once spent hours watching a small reef specimen sitting in its little hole, one long tentacle stretched out, the carcass of a crayfish held lazily at the end, swaying in the current gently, tempting those stupid whitefish to come just a little closer for .. one last little feed on the crayfish before .. WHAM .. out comes another tentacle like light, to drag the stupid whitefish in .. a few minutes later, back comes the crayfish shell, and on it goes. I spent almost a whole day watching this process, it was fascinating .. and I dare say the little octopus even knew I was there and put on an extra special show for me - it really felt like it. (Okay, I'm projecting, but .. wow. A fishing octopus, using a lure!)

I hope one day we get a chance to understand these awesome creatures better - and it certainly seems like we're closer now than ever to understanding just how intelligent they can be.

There's an octopus I visit regularly at the local aquarium, here in middle-Europe .. it looks so sad. I like to talk to it when I visit, and I've gotten a response a few times .. out it comes, swimming from its little spot, to saunter all over the glass of its tank, looking me right in the eye. That is a delight, as sad as it is to see. I know how it feels, so far from the ocean, so that's why I like to tell it nice things.

Definitely my favourite creature. Anyone got octopus tales to tell? I'd love to hear them. I'd also love to hear from anyone who has managed to keep one in captivity - much as I understand it to be a cruel exercise, the idea of having a pet octopus appeals to me greatly. I'd never do it though - to take such a delightful being from its ocean is beyond my means.

6
ArtDev 7 hours ago 0 replies      
I may be pescatarian, but I still don't eat octopus. They are really smart!
7
DonHopkins 12 hours ago 1 reply      
What's almost as amazing as the octopus in that video are the fish swimming backwards at 1:58. That must be some kind of defensive behavior they've evolved to confuse the octopus.
Bash Pitfalls wooledge.org
89 points by monort  17 hours ago   35 comments top 9
1
monort 11 hours ago 3 replies      
If your bash script is not trivial, and target machines have Python, it's probably better to use the sh module:

https://amoffat.github.io/sh/
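For context, sh wraps subprocesses as Python callables. A rough sketch of the style (see the linked docs for the actual API; the calls below are an approximation, not a definitive reference):

  import sh

  print(sh.ls("-l", "/tmp"))                # runs: ls -l /tmp, captures stdout
  count = sh.wc(sh.ls("-1", "/tmp"), "-l")  # nesting pipes one command into the next
  print(str(count).strip())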

2
asa400 3 hours ago 0 replies      
Li Haoyi just gave a talk at Scala by the Bay entitled "Beyond Bash", which details the shell scripting environment he's been working on that is hosted in a reimplementation of the Scala REPL, called Ammonite.

Slides for the talk: http://tinyurl.com/beyondbash

Docs: http://lihaoyi.github.io/Ammonite/

I wasn't at the talk, but I downloaded it and have been playing around with it. It's really fun!

The path operations are all typed (ie, you can't combine relative and absolute paths in stupid ways), you get all of Scala to operate on files and the filesystem (if you know Scala, this is pretty huge), and it has a handy pipelining syntax that is effectively an extension of the shell `|` operator we all know and love: http://lihaoyi.github.io/Ammonite/#Extensions

There are other niceties built in as well, like syntax highlighting and pretty printing, that gave me the impression that the author really cares about the UX of the software. It's not all academic/pure, in fact it appears to be the kind of pragmatic, practical thing that I wish Scala was known for. I highly recommend giving it a shot, especially if you already know Scala. I definitely will be giving it some time in the coming weeks.

3
freddref 11 hours ago 1 reply      
Is there a script to check for these pitfalls? (And offer example solutions?)

Has anyone forked bash to remove or fix these pitfalls, while maintaining maximum "bashness"?

edit: http://www.shellcheck.net/

4
proactivesvcs 3 hours ago 0 replies      
A really helpful guide, particularly for someone just starting out on Linux such as myself. Hopefully I will not get into bad habits to begin with :-)

Having looked at my scripts I seem to have been pretty cautious already (Windows batch has already scarred me plenty), but I have shored up a few minor areas.

5
vezzy-fnord 9 hours ago 0 replies      
See also the Inferno shell: http://debu.gs/entries/inferno-part-1-shell

I've been playing around with the werc framework, 9base, plan9port and other Plan 9-derived tooling and have found rc shell to be rather pleasant compared to Bourne and Korn dialects.

6
hyperpape 9 hours ago 2 replies      
Referencing the other thread on shell scripting currently on the front page (https://news.ycombinator.com/item?id=10068668), some of these examples show ways in which bash is actually quite verbose:

 # POSIX
 for i in *.mp3; do
     [ -e "$i" ] || continue
     some_command "$i"
 done

 # HYPOTHETICAL SYNTAX 1
 map some_command *.mp3

 # HYPOTHETICAL SYNTAX 2
 some_command *.mp3
It's hard to imagine how to create simple syntax for operations like that while accommodating other syntactic requirements (strings without quotations, pipelines, etc.), but I dream about a shell language that lets me do things like the above.
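For comparison, here is roughly what the hypothetical `map some_command *.mp3` would have to do, spelled out in Python (some_command is just the placeholder from the examples above). Passing each filename as its own argv entry is what keeps spaces and newlines safe:

  import glob
  import subprocess

  for path in glob.glob("*.mp3"):
      # each match is one argument, so odd filenames can't be word-split
      subprocess.run(["some_command", path], check=True)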

7
jordigh 11 hours ago 2 replies      
And people think C++ is hard to do correctly...
8
joshbaptiste 12 hours ago 1 reply      
Everything I know of bash I learned from the Freenode IRC #bash channel, which has a very active bot that always points to this wiki. In so many scripts at work I see the dreaded

 for i in `ls`; do... 
Which only works because we hardly have any files with spaces, newlines, etc.

9
rando289 8 hours ago 0 replies      
Heim A real-time community platform github.com
67 points by tvvocold  13 hours ago   29 comments top 8
1
renke1 12 hours ago 7 replies      
I kind of like the idea of threaded chatting. Is there any other chat program that uses this idea?
2
willpearse 8 hours ago 1 reply      
I get a message saying it's not ready for Hacker News yet, so I can't even see what's going on :-(
3
Gracana 10 hours ago 1 reply      
I've been a user on the site for a few months now. There's a lot you can imagine would be difficult with threaded chat, but in practice it works pretty well, to the point that I generally miss having the ability to reply to a specific message on other chat platforms.
4
planetix 12 hours ago 1 reply      
Looks like the IRC on my other monitor..
5
fiatjaf 10 hours ago 0 replies      
I see a picture of a cat, a bad drawing of an orange and a not-handsome guy. Only that.
6
DoubleMalt 10 hours ago 1 reply      
But does it federate? (Honest question, I did not see any documentation for that.)
8
sergiotapia 10 hours ago 0 replies      
Looks like a more confusing version of IRC.
Ask HN: Why did literate programming not catch on?
57 points by dman  9 hours ago   47 comments top 20
1
chipsy 5 hours ago 2 replies      
It makes it harder to make changes. The story you start telling is not what you end up with later, after you've completed all the non-trivial features and major assumptions have fallen through. Going back and fixing the story as you go along is expensive. Writing the story after it's done is too late - the business value is in the product's shipped functionality, not in the development artifacts.

We have an alternate method of understanding how software developed, which is to look at revision control commits. This method falls more in line with the techniques of development and the genuine evolution of the codebase. Recall that revision control methods were still being improved well after Knuth wrote about literate programming, and the available systems, where people used them(and a lot of shops didn't), weren't nearly as fine-grained back in the 1980's.

Personal experience: I tried using "Leo", an outlining and literate programming text editor, for a project. Although the documentation capability was nice temporarily and gave me an additional method of structure and organization, the hooks messed around with some formatting and refactoring operations, and most of the time, the benefit wasn't clear. The time I spent on the documentation could have gone - from my current perspective - into making the code smaller and simpler. At the time, I didn't know how that would be possible, thus I focused on defending against the complexity by adding more.

A lot of our preconceptions about what makes code both look good and behave well are temporary. That makes it hard to come up with a sensible system of organization, as we'll put effort in to enumerate and categorize only to discover that it falls apart.

2
erikpukinskis 4 minutes ago 0 replies      
I've been trying to write a JavaScript framework in a literate way for a year or so, here's what I've found:

* I end up deleting most of the prose once the code starts getting good. Good code is sort of literate already.

* As others have said, when you're doing some code churn, it's difficult to maintain a narrative structure that makes sense.

* Existing frameworks, at least for web programming, encourage you to draw boundaries around individual homogeneous chunks of one type of code (router, view, migration) rather than human conceptual concerns (upload a file, browse my photos, etc). In order to do a good job explaining what a view does, you need to know what the controller is providing. In order to understand that you need to understand what the domain layer is doing. Frameworks just make it really hard to put, in one file, everything necessary to explain a concern.

I still believe in the idea, but I think for literate programming to work well it has to be done in an ecosystem where the APIs are all structured for literate programming, which doesn't really exist (yet).

3
daly 48 minutes ago 0 replies      
I've been using literate programming for 15 years on many projects. The largest and most visible one is Axiom (https://en.wikipedia.org/wiki/Axiom) which currently has many thousands of pages with embedded code.

I've talked to Knuth. He claims he could not have implemented MMIX without literate programming. Literate programming is really valuable but you only understand that once you really try it. I gave a talk on this subject at the WriteTheDocs conference: https://www.youtube.com/watch?v=Av0PQDVTP4A

You can write a literate program in any language, for instance, in HTML: http://axiom-developer.org/axiom-website/litprog.html

There are some "gold standard" literate programs:"Physically Based Rendering" by Pharr and Humphreys won an academy award. "Lisp in Small Pieces" contains a complete lisp implementation including the interpreter and compiler. The book "Implementing Elliptic Curve Cryptography" is another example.

Suppose your business depends on a program. Suppose your team leaves (they all do eventually). Suppose you need to change it... THAT's why you need literate programming. Nobody is going to be around to remember that the strange block of code is there to handle Palm Pilots.

Companies should hire language majors, make them Editor-in-Chief, and put them on every programming team. Nobody checks in code until there is at least a paragraph that explains WHY this code was written. Anybody can figure out WHAT it does but reverse-engineering WHY it does it can be hard.

Imagine a physics textbook that was "just the equations" without any surrounding text. That's the way we write code today. It is true that the equations are the essence but without the surrounding text they are quite opaque.

Imagine how easy it would be to hire someone. You give them the current version of the book, send them to Hawaii for two weeks, and when they return they can maintain and modify the system as well as the rest of the team.

Do yourself a favor, buy a copy of Physically Based Rendering. Consider it to be the standard of excellence that you expect from a professional programmer. Then decide to be a professional and hold yourself to that standard.

4
dthal 5 hours ago 1 reply      
It does survive, in a certain sense, in scientific programming and data science. Both iPython notebooks and Rmarkdown are a sort of literate programming, although with the emphasis on the text more than the code. In that setting, the executable artifact is not really more important than the explanation of why the code does what it does, so the extra overhead is justifiable.

Rmarkdown example: http://kbroman.org/knitr_knutshell/pages/Rmarkdown.html

iPython notebook example: http://nbviewer.ipython.org/github/empet/Math/blob/master/Do...

5
tumba 1 hour ago 0 replies      
Programming occurs in a wide variety of contexts, and different tools and workflows are optimal in different contexts. In the same way that I find interactive programming in Common Lisp optimal for exploratory work in a complicated or uncertain domain, I find literate programming to be optimal for work that requires rigor in a domain with little uncertainty and static requirements.

Literate programming hasn't "taken off" only in the sense that few people are performing the type of tranquil and rigorous work it was made for. Much of the (admittedly difficult) work being done by programmers today is in fact trivial. The difficulty comes from attempting to solve ill-specified or constantly changing requirements by glueing together a constantly changing set of frameworks and tools.

However, I would suggest that even in organizations whose primary responsibility is wrangling a messy soup of ill-defined requirements as fast as possible, there are often sub-problems amenable to literate programming, such as a library implementing a unique and non-trivial algorithm. In such cases, it can be worthwhile to carve out an island of tranquility, clear prose, and rigor, even if it means using slightly different tooling than the rest of the project.

6
mkozlows 6 hours ago 0 replies      
Others have already given the reasons about the downsides of comments. One other reason is: Most developers are terrible, terrible writers.

It's one thing to imagine a profession of literate programming as practiced by Donald Knuth; it's another thing entirely to imagine it as practiced by the kind of people who are actually writing code.

7
kazinator 17 minutes ago 0 replies      
TeX benefited from literate programming because it was written in a higher level assembly language.

(Also, it was written by someone who liked to write books, and if a book was about software, he wanted to integrate the writing of the book and the software.)

Better than literate programming is to write code that explains itself. Don't write two things, one of which executes and the other explains it; write an explanation which also executes.

In other words, don't desperately separate and literate; instead, lisperate!

8
nostrademons 9 hours ago 4 replies      
Because software changes so rapidly.

Literate programming is based on the idea that you should explain what software does and how it does it in detail so that a reader can follow along and learn. That works great for TeX, which hasn't changed significantly since 1982. It works less great for say, Google/Alphabet, which wasn't even the same company last week.

The general problem with documentation is that it gets out of date as the software evolves to fit new requirements. Most real-world products face new requirements on a weekly, sometimes hourly basis; as a result, most fast-growing startups have oral cultures where the way to learn about the software is to ask the last person who worked on it.

9
nickbauman 1 hour ago 0 replies      
Literate programming tries to make every line of code traceable to English (or some other natural language). It's as hard as writing the same program twice in two languages. Perhaps harder: one is for programming a modified lump of sand. Another is for programming humans. The latter is a lot harder to do well than the first.

Then the question becomes: which one is correct? Maybe it's just easier and cheaper to write in the language the lump of sand understands and be done with it.

10
mcphage 1 hour ago 0 replies      
This is just me, but when I read through the literate programming book, and the hundred or so pages of literate code resulted in a program that could be replaced with a line of bash script, I decided that writing the program succinctly was more important than writing a novel to accompany it.
11
klibertp 3 hours ago 1 reply      
There are two parts to literate programming. One is the style of commenting and structuring the code, which makes it easy for humans to follow. The other part is the tooling you use when you want to write literate programs, which includes what syntax you use for defining blocks and how do you tangle/weave your code.

There are many tools which support Literate Programming for many different languages. The usual reservation about additional tools applies: each member of a team needs to have it installed, build times get longer, time for a new programmer to start contributing gets longer and so on. It makes many people never even consider LP.

But, it's important to remember, that the tools you use are just an implementation detail. What's important is an idea that source code should be written to be read by humans, not only for computers to execute.

Sometimes there's a need to go over a piece of code with a colleague who doesn't know it. It happens a lot when a new programmer joins a project that has some code written already. This is because to understand the code you need to know the context it was written in. The problem is that programmers often don't document this context enough (or at all). This means that reading the code is essentially reverse-engineering someone's thought patterns. It's much more efficient just to sit next to the person and assist him in reading the code by providing a live, context-aware commentary.

LP is "just" that commentary written directly in the code. What's important is that you don't need any special tools to use this style in your comment if your language is flexible enough. Most dynamic languages nowadays are capable of supporting LP style. Don't take my word for it, see for yourself. You can just go and read some LP code. A couple of examples:

 http://underscorejs.org/docs/underscore.html
 http://coffeescript.org/documentation/docs/grammar.html
 http://backbonejs.org/docs/backbone.html
As you can see, it's plain JS (or CS). The pages were created with the "docco" tool, but you can read the source with most of the same benefits (excluding rendering of maths symbols, which docco doesn't support anyway).

To sum up: LP is not dead, it just changed its form a little. Many people adopted (or invented independently) this style of code structuring and commenting. Many real-life codebases are written in a literate style because it makes a real difference on how long it will take for someone new to grok the code. Such codebases use docstrings and other mechanisms available in the language to do exactly the same thing that previous LP implementations did.
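To make that concrete, here is a small illustration of the same docco/pycco style in Python rather than JS (the function and names are invented for the example; the tool renders the comments as prose beside the code):

  # ## Retrying a flaky call
  # We retry because the upstream service occasionally times out; the context a
  # future reader needs is *why* the loop exists, not *what* it does.
  def fetch_with_retry(fetch, attempts=3):
      last_error = None
      for _ in range(attempts):
          try:
              return fetch()
          except TimeoutError as err:  # only timeouts are worth retrying
              last_error = err
      raise last_error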

13
elcritch 6 hours ago 1 reply      
The only large project using literate programming that I am aware of is Axiom, a symbolic math system written in Lisp. From their documentation it sounds like it uses literate programming and would be a good resource.

However, a modern take on literate programming is catching on in scientific fields, based on Jupyter (IPython) notebooks. It allows running simulations and embedding results. It's fantastic for exploratory work. The main downside is that transitioning code from notebooks to longer-term libraries or full applications can be somewhat tricky. Here's a good write-up of notebooks and science: http://peterwittek.com/reproducible-research-literate-progra...

14
kyllo 2 hours ago 0 replies      
I've seen a lot of Literate Haskell in tutorials and course materials, but not really ever in production code. It's too cumbersome to work with when comments are the default and every line of code has to start with '>'.
15
gargarplex 2 hours ago 0 replies      
Whenever I'm on a team and I get the opportunity to do code reviews, I strongly encourage it to reduce the Bus Factor.
16
EliRivers 5 hours ago 3 replies      
My previous employer (a subdivision of a global top ten defence company) used literate programming.

The project I worked on was a decade-long piece for a consortium of defence departments from various countries. We wrote in Objective-C, targeting Windows and Linux. All code was written in a noweb-style markup, such that the top level of a code section would look something like this:

  <<Initialise hardware>>
  <<Establish networking>>

and so on, and each of those variously breaks out into smaller chunks:

  <<Fetch next data packet>>
  <<Decode data packet>>
  <<Store information from data packet>>
  <<Create new message based on new information>>

The layout of the chunks often ended up matching functions in the source code and other such code constructs, but that wasn't by design; the intention of the chunks was to tell a sensible story of design for the human to understand. Some groups of chunks would get commentary, discussing at a high level the design that they were meeting.

Ultimately, the actual code of a bottom-level chunk would be written with accompanying text commentary. Commentary, though, not like the kind of comments you put inside the code. These were sections of proper prose going above each chunk (at the bottom level, chunks were pretty small and modular). They would be more a discussion of the purpose of this section of the code, with some design (and sometimes diagrams) bundled with it. When the text was munged, a beautiful pdf document containing all the code and all the commentary laid out in a sensible order was created for humans to read, and the source code was also created for the compiler to eat. The only time anyone looked directly at the source code was to check that the munging was working properly, and when debugging; there was no point working directly on a source code file, of course, because the next time you munged the literate text the source code would be newly written from that.
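(For readers unfamiliar with the mechanics, a toy "tangle" pass in the spirit of noweb might look like the Python sketch below. This is only an illustration, not the tool described above; real tools also handle weaving, indentation, and cross-references.)

  import re

  def tangle(text, root="*"):
      # Collect named <<chunk>>= definitions; a lone "@" returns to documentation.
      chunks, name = {}, None
      for line in text.splitlines():
          m = re.match(r"<<(.+)>>=$", line.strip())
          if m:
              name = m.group(1)
              chunks.setdefault(name, [])
          elif line.strip() == "@":
              name = None
          elif name is not None:
              chunks[name].append(line)

      # Recursively expand <<chunk>> references into the final source.
      def expand(chunk_name):
          out = []
          for line in chunks.get(chunk_name, []):
              ref = re.match(r"\s*<<(.+)>>\s*$", line)
              out.extend(expand(ref.group(1)) if ref else [line])
          return out

      return "\n".join(expand(root))

Fed a file whose root chunk is <<*>>=, it emits the machine-readable source; the prose between chunks is simply skipped, since rendering that is the weave side's job.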

It worked. It worked well. But it demanded discipline. Code reviews were essential (and mandatory), but every code review was thus as much a design review as a code review, and the text and diagrams were being reviewed as much as the design; it wasn't enough to just write good code - the text had to make it easy for someone fresh to it to understand the design and layout of the code.

The chunks helped a lot. If you had a chunk you'd called <<Initialise hardware>>, that's all you'd put in it. There was no sneaking not-quite-relevant code in. The top-level design was easy to see in how the chunks were laid out. If you found that you couldn't quite fit what was needed into something, the design needed revisiting.

It forced us to keep things clean, modular and simple. It meant doing everything took longer the first time, but at the point of actually writing the code, the coder had a really good picture of exactly what it had to do and exactly where it fitted in to the grander scheme. There was little revisiting or rewriting, and usually the first version written was the last version written. It also made debugging a lot easier.

Over the four years I was working there, we made a number of deliveries to the customers for testing and integration, and as I recall they never found a single bug (which is not to say it was bug free, but they never did anything with it that we hadn't planned for and tested). The testing was likewise very solid and very thorough (tests were rightly based on the requirements and the interfaces as designed), but I like to think that the literate programming style enforced a high quality of code (and it certainly meant that the code did meet the design, which did meet the requirements).

Of course, we did have the massive advantage that the requirements were set clearly, in advance, and if they changed it was slowly and with plenty of warning. If you've not worked with requirements like that, you might be surprised just how solid you can make the code when you know before touching the keyboard for the first time exactly what the finished product is meant to do.

Why don't I see it elsewhere? I suspect lots of people have simply never considered coding in a literate style - never knew it existed.

It forces a change to how a lot of people code. Big design, up front. Many projects, especially small projects (by which I mean less than a year from initial ideas to having something in the hands of customers) in which the final product simply isn't known in advance (and thus any design is expected to change, a lot, quickly) are probably not suited - the extra drag literate programming would put on them would lengthen each iteration.

It required a lot of discipline, at lots of levels. It goes against the still popular narrative of some genius coder banging out something as fast as he can think it. Every change beyond the trivial has to be reviewed, and reviewed properly. All our reviews were done on the printed PDFs, marked up with pen. Front sheets stapled to them, listing code comments which the coder either dealt with or, in discussion, they agreed with the reviewer that the comment would be withdrawn. A really good days' work might be a half-dozen code reviews for some other coders, and touching your own keyboard only to print out the PDFs. Programmers who gathered a reputation for doing really good thorough reviews with good comments and the ability to critique people's code without offending anyone's precious sensibilities (we've all met them; people who seem to lose their sense of objectivity completely when it comes to their own code) were in demand, and it was a valued and recognised skill (being an ace at code reviews should be something we all want to put on our CVs, but I suspect a lot of employers basically never see it there) - I have definitely worked in some places in which, if a coder isn't typing, they're seen as not working, so management would have to be properly on board. I don't think literate programming is incompatible with the original agile manifesto, but I think it wouldn't survive in what that seems to have turned into.

17
a3n 9 hours ago 2 replies      
I always liked the idea, but it seemed too indirect to me. Software is hard enough as it is, without adding yet another hurdle to get from brain to .exe. IDE's are probably the best middle ground, as they "know" enough about your code to help you find the parts you want.

Besides, literate seems to go against the current view of overcommenting as an anti-pattern.

18
jostylr 2 hours ago 0 replies      
1) It ain't dead yet.

2) Tooling/syntax is a big part of it. I like literate programming, but most syntaxes for it turn me off. Some of it also seems very tied to particular languages. None of that helped for a concept that was always going to be a hard sell.

3) I wrote, and am refining, a version of literate programming: https://github.com/jostylr/literate-programming-lib

It uses markdown where headers are the section headers, code blocks in markdown are the code blocks to sew together, and _"header name" is the block syntax. It has a bunch of other features, but that's the core.

My hope is that this might eventually help this style to catch on in some quarters.

4) I am just a hobbyist programmer, but what I enjoy about it is the ability to organize my code in any fashion I like. In particular, I can keep chunks to a screen size. Brackets never go beyond a page. And I can stub out anything I like real easy.

Now in many programming languages, small chunks are easy to achieve in the terms of functions or objects or modules or even just variables. That is, one of the most important in-the-now useful parts of literate programming is implemented in a good-enough version, particularly with good naming conventions and decent commenting. And good enough is what keeps many from using optimal practices. Or rather, and this is important, optimal from one perspective, e.g., literate-programming can easily rub up against "no premature optimization".

On the other hand, I like not using functions for code management. I want functions to be used when a function is really needed. But that's just my preference. I also like being able to see the whole construct in the compiled code put in one place instead of having to trace it out through function calls. But I have never been big on debugging tools; if I was, this would probably be less of an issue.

5) Updating does tend to be a problem in that with the excitement of a new feature or a bug fix, it is real easy to leave bad documentation there. But that would be true of any documentation. Here at least one can quickly look at the code and see that it seems off.

6) One key feature that I like about my system is the management of multiple files, really, a kind of literate project management. I do not know if the other systems did that. This is a game changer for me. When I open a foreign code base, I have a hard time knowing where to start. In any particular file, I can see what is going on, but getting the global perspective is what is missing. Literate project management can tell you what all these files do, do pre and post compiling options (linting, concatenating, minimizing, testing, dev vs production branching, etc.), and allow a true global organization of where you want the code to be. You can have all code, styling, content for a widget all in one place. Or not. There are no constraints to your organization and that is awesome.

It is also a downside. Constraints are powerful tools and when you have a system that allows you to do anything, it can lead to severe issues, particularly (ironically), of communication. I could see teams benefiting greatly from this if they agree on organizational principles and if not, I can see this exacerbating the conflicts.

7) The hurdle to get over is "Is it going to make it quicker to get my code done?" And I don't think previous tools have done this. I am hoping my tool will do this for the web stack. That stack is a mess and we need organization for it. For other programming languages and tasks, I don't think this is as glaring a need. It often feels a lot like just reversing the comments, as literate CoffeeScript seems to be. In the hands of a master, such as Knuth, a literate program is a gem of wonder. But for most, including me, it is a different beast and may not be that pretty.

8) Programmers may have an irrational fear of writing. As a mathematician, I see many adults, even programmers, fear math for no reason whatsoever. The same may be true of prose, at least sufficiently so to discourage the desire to use this. Note that I think they could do it, but that they think they cannot. But I am an optimist.

19
hzhou321 1 hour ago 0 replies      
First of all, what do we think is literate programming?

* Is it just interspersing documentation with the code?

* or Is it an idea of writing code in the style of communications?

There is a subtle difference. The former is writing code first, then writing documentation to explain the code (without changing the code significantly). For the latter, one may still write code, but not just to get the computer working; rather, the code is written first and foremost to communicate (to humans).

In the first style, the program, stripping away the documentation, is pretty much working code as is. Yes, in many so-called literate programs the documentation is ready to be compiled into pretty web pages or PDFs, but it is just pretty documentation. In the second style, the code is, to a large extent, rearranged (to the computer, scrambled) due to the need to express ideas to humans. Often a complex parser program is needed to re-arrange the code into a form the computer accepts -- such is the case with Knuth's WEB.

Maybe I am over guessing, but I think many readers are only thinking in the first style (documentation+code) when they are commenting on literate programming.

And maybe my interpretation does not even fit Knuth's original idea of literate programming, but in my interpretation the ultimate literate program is just code, written in the style of written language and arranged like a book, readable as is by most literate readers (who possess the basic background -- knowledge of the problem domain and of how computers run, in addition to a basic vocabulary), and which, with a compiler, can be translated into a formal programming language or machine code and run directly by the computer. I find Knuth's example -- writing two versions of the program (the code and the documentation) -- a compromise due to the lack of compiler power, and impractical: who, after spending so much effort getting the code written, debugged, and barely working, still has the energy to write an article explaining it -- especially with no apparent readers in sight?

EDIT: In a high-level view, there is just one piece of code, but two groups of readers/listeners -- the human group and the machine group. In the first stage, computers are limited in power and parsers/compilers are dumb, so the code has to cater to the machines, and human readers have to stretch themselves to read the code (in a so-called programming language). In the next stage, everyday computers are powerful enough to take in more background knowledge (common vocabulary, idioms, styles, some common algorithms, scientific facts, international conventions, or individual/group/corporate conventions), and this stage will allow code to be written on some sort of middle ground. The final stage is of course AI that can understand most human communication. I think we are ready to enter the early second stage -- there are capabilities and early signs -- while most people's minds are still firmly stuck in the first stage.

20
Myrmornis 3 hours ago 0 replies      
Kernel Debugging with LLDB and VMware Fusion ddeville.me
47 points by ddeville  16 hours ago   1 comment top
1
mattbauer 7 hours ago 0 replies      
Just a few other notes to help make kernel development with VMware Fusion easier:

1. Add -zc and -zp to your boot args. It's not documented but it greatly helps catch zone allocation (OSMalloc/OSFree/buffer overrun) issues.

2. Use snapshots instead of rebooting. It's much faster to revert back to a snapshot than to reboot your virtual machine instance after a crash.

3. If you use a shared directory between your instance and host machine for moving your KEXTs/other code, make sure to MD5 the files before loading (a small checksum sketch follows after this list). It's very common for the instance to be using stale cached blocks.

4. While not necessary, I create a separate network interface that's host-only to debug on. I give it a static IP and add an entry to my host's hosts file. It makes debugging instances easier since I can connect by name, e.g. kdp-remote vm0.
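On point 3, one quick way to compare the KEXT on the host with what the guest actually sees (hypothetical paths; MD5 only because that's what the tip suggests -- any checksum would do). Run it on both sides and compare the digests:

  import hashlib
  import sys

  def md5sum(path, chunk_size=1 << 20):
      digest = hashlib.md5()
      with open(path, "rb") as f:
          for block in iter(lambda: f.read(chunk_size), b""):
              digest.update(block)
      return digest.hexdigest()

  if __name__ == "__main__":
      # e.g. python md5check.py shared/MyDriver.kext/Contents/MacOS/MyDriver
      print(md5sum(sys.argv[1]))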

Inside Amazon tbray.org
142 points by dochtman  7 hours ago   76 comments top 23
1
bcantrill 5 hours ago 2 replies      
I used to work with Tim at Sun, and I think very highly of him; I know that he's only being earnest here.

That said, there's a bunch of hard data that supports many of the assertions in the New York Times article -- to which I will add one from my own experience: Amazon routinely pursues its ex-employees for violating its non-compete.[1] I have already offered my own experience with respect to Amazon in this regard[2], so I won't rehash that here, but I offer this to show that Tim is engaged in bad science: as with busted software, it's not a particularly interesting data point that "Amazon works for me." Amazon routinely engages in acts that I personally find despicable; that some of its employees are comfortable or happy or perplexed that their company could engage in such acts doesn't negate those acts or in any way exonerate the company behind them.

[1] http://www.geekwire.com/2014/amazon-sues-employee-taking-goo...

[2] https://news.ycombinator.com/item?id=7975428

2
aturek 6 hours ago 1 reply      
I worked at Amazon for three years, 2010-2013. My experience was better than most of my coworkers', but I watched a lot of people including some friends go through exactly what is described in the NYTimes article. There's an old team that I'll see about once a year at somebody's house. People tell me that our conversations sound like we all have PTSD from our time there.

I have a friend still at Amazon, on a particular Kindle team. I got him an interview there, and felt a lot of guilt about that for awhile. I've been relieved (and quite surprised) that he's still happy there; the team he describes is quite different from any team I worked on or saw first-hand.

You can find a place in the company that isn't totally insane. But most of the many Amazon folks I keep in contact with are either looking outside for different jobs, or switching teams rapidly to try to find something better.

3
simonebrunozzi 3 hours ago 1 reply      
Disclaimer: I have worked at AWS for 6 years, from 2008 to 2014.

I like Tim's attitude here. However, the fact that he's writing about this on his blog is by itself a testament of how privileged his position is. For any other employee, especially if not VP-level but not limited to, blogging about this very hot issue would have meant some serious consequences. Amazon's PR is not exactly that friendly - although, one has to admit, Amazon's PR is also one of the most structured out there.

4
curiousfiddler 5 hours ago 1 reply      
I had an offer from Amazon after grad school. There was only one reason I did not accept it: I had heard first person stories from a couple of my friends who were working at Amazon. And it did not make any sense to me to join such a work environment. One of them left Amazon to join Google, and is thriving at Google, which works well for him and his company. While at Amazon, he was constantly cynical, negative and it felt that the work environment was affecting him personally. As a developer, I need to be happy and stress free to provide good quality, thoughtful solutions (I'm sure not everyone is like me, and many would certainly thrive at Amazon). I couldn't see myself doing that at Amazon.
5
Phlow 6 hours ago 1 reply      
I have a good friend who works there that I talk to frequently. It is most definitely still going on. He has called me many times stating that he is miserable, that the stack-ranking, secret-pact, politics and backstabbing are still alive and well. The Office Space scene where Peter describes that every day you see him is on the worst day of his life has been mentioned.
6
jpatokal 4 hours ago 0 replies      
For anybody else who missed the context, "Kantor and Streitfeld" refers to this NYT story:

http://www.nytimes.com/2015/08/16/technology/inside-amazon-w...

HN discussion: https://news.ycombinator.com/item?id=10065243

And my 2c: an acquaintance of mine who was a director at Amazon summarizes their corporate philosophy as "people are disposable". I've never seen any evidence to contradict this.

7
mempko 5 hours ago 0 replies      
I have a friend who interviewed at Amazon and now works for a startup. He told me one of the people interviewing him told him not to take the job as he looked behind his back to make sure nobody was watching. Ouch!
8
Havoc 5 hours ago 2 replies      
>There've been weekends when I haven't opened my work computer.

That doesn't sound all that great. I feel that if things are managed well employees shouldn't be working on weekends at all (barring a major crisis)

9
dantiberian 5 hours ago 2 replies      
> There've been weekends when I haven't opened my work computer.

This makes it sound like most weekends he does open his work computer to do work. I imagine this is pretty similar at any of the top tech firms though?

10
PherricOxide 6 hours ago 1 reply      
I work as an SDEII in marketplace, where I've been nearly two years. I also work in a satellite office (Arizona), so maybe that biases things, but I've never seen the environment that the Times article discusses. I work 40 hour weeks, never work weekends (besides oncall weeks), have had nothing but positive reinforcement from my managers, and get to work on software where my lines of code impact millions of people. I think at a company this big there are bound to be good managers and bad ones, and the Times article only points out the bad.
11
kungfooguru 4 hours ago 0 replies      
He is high level and in the Vancouver office.

Everyone I've known who worked on AWS in Seattle has sounded exactly like the NYTimes article (so no, it isn't just the e-commerce side), and I've never heard any positive feedback.

12
rdtsc 6 hours ago 3 replies      
> I've only been here for nine months.

It is a big place and some will have good experiences some will have bad. Of course, I am glad Tim likes it there. Another person I know really likes it there as well.

But I think overall I have heard more bad than good so far. And I don't think the stories from the original article were made up. Nor is everyone who blogged/wrote/shared over beers a Google PR shill. On the other hand I usually hear more good than bad experiences from Facebook, Google and Microsoft.

Also as another personal story, I interviewed there and had a laughably terrible experience. You'd think -- "Nah, this has got to be a joke" the whole time. ( You can read it here, sorry if you already saw it in yesterday's post https://news.ycombinator.com/item?id=10065631 )

13
bhouston 6 hours ago 0 replies      
I think that the experience of a highly desirable technology elite like Tim Bray in the premier, highly profitable AWS group is not representative of the standard employee in less profitable groups.
14
port98 5 hours ago 3 replies      
All these lists of what we have and haven't seen are merely anecdotal. That Tim Bray (and others in the last thread) has not seen it does not mean it never happens.

Conversely, while I have seen people cry and get a superficial 7-day PIP so the boss can rapid-fire someone litigation-free, I know that doesn't mean it happens everywhere. Likely all this has happened at MS/Goog/FB as well.

So is the culture toxicity level here above industry average? I think it is slightly, but it's not a huge deal if you're competent; those who are mediocre performers will be impacted most.

The "frugality" is annoying though. Total comp is comparatively low, we pay our own parking $250/mo (and you still might not have a spot!), no free snacks or even soda, and until last year, we had to steal a monitor from ex-employees just to have a 2nd.

15
d23 6 hours ago 2 replies      
It would be helpful for this entire conversation if we could get a data-driven answer for the question of how the work environment is at Amazon. Someone mentioned them having one of the lowest retention times for developers in the industry. If that's the case, it's damning, no matter how many anecdotes come along defending them. If that's not the case, then a few disgruntled employees with axes to grind shouldn't make a difference as to how we judge them.
16
mnglkhn2 2 hours ago 0 replies      
The culture described is by design one that fits an aggressive growth business model. The simplest way to describe it is that you do your time at Amazon the brand and then you downshift into a higher role at a second-tier company. The same goes, for instance, for NYC police: they work hard to build their professional chops and then they can write their own ticket anywhere in the country. It is hard on the employees and it takes young blood to do it. In the end, that's the bargain: Amazon imparts a little bit of its brand on your resume and then you have to know how to use it.
17
jasode 5 hours ago 0 replies      
>So what happened? I mean, how is it that The Times portrays a hell on earth, a culture that would drive me to quitting in about fifteen minutes?

There's a reason for the disconnect for some of the employees who don't see "hell on earth".

If you do meta-analysis of the mainstream media's coverage of Amazon, a few patterns emerge:

-- The NYT almost always has a negative bias about Amazon. Even before yesterday's article about Amazon being a "bruising workplace", they ran numerous articles about Amazon and book publishers renegotiating their contracts and the stories were always biased towards the publishers.

-- articles from Wall Street Journal and Harvard Business Review are not as negative. Unfortunately, they have smaller circulation numbers, and their articles are more heavily blocked behind paywalls. Their articles are not as widely accessible and therefore not as easy to share. As far as "dictating the narrative" and "shaping public perception" about Amazon employees, NYT drowns out WSJ & HBR.

-- the readership for NYT (more workers) is different from WSJ & HBR (more managers). Therefore, articles about sweatshop pressure at warehouses and white-collar burnout in the corporate offices will resonate with NYT readers. Even if the senior editors at NYT privately don't favor bashing Amazon repeatedly, one can't blame them for always running the stories because they know what their readers like. In the comments section, NYT readers continue the pile on of scorn. It's the same situation with NYT running negative articles about Marissa Mayer at Yahoo and her "unfair" personal baby daycare, eliminating remote workers, etc.

The negative effects at Amazon mentioned by NYT can be true while simultaneously, employees like Tim Bray don't experience it. However, NYT is not interested in running a story that interviews a bunch of "Tim Brays".

18
DannyBee 2 hours ago 0 replies      
Tim isn't at HQ, as he mentions. In fact, he's in a pretty remote office (Vancouver) that isn't all that large (though it's trying to grow).

It's not shocking it may have a different culture.

19
jacques_chester 4 hours ago 1 reply      
I work at a company where I got in minor strife on my first project for answering emails an hour after leaving work.

This year I took a three week vacation to the far side of the planet. Didn't check email for 3 weeks. Nobody minded.

I tell new employees working with me not to add work email to their phones. If they're at work they can use the machines; out of hours, they're not at work, so why are they checking email?

So something like "sometimes I have a weekend without work email" is frankly foreign to me.

Edit: "anathema" is thesaurus abuse.

20
princetontiger 3 hours ago 0 replies      
This is lame. He's not at HQ.
21
marincounty 3 hours ago 0 replies      
22
caiob 5 hours ago 5 replies      
23
GeorgeOrr 6 hours ago 2 replies      
Minds, Brains, and Programs (1980) [pdf] uh.edu
31 points by brudgers  12 hours ago   2 comments top 2
2
westoncb 6 hours ago 0 replies      
Here's a starting point for the non-philosophers: http://plato.stanford.edu/entries/intentionality/