hacker news with inline top comments    7 Oct 2013 News
Why the Tech Industry Needs to Deal With Its Ageism Problem laserfiche.com
91 points by slfisher  1 hour ago   47 comments top 22
steven777400 54 minutes ago 6 replies      
This seems to come up a lot, and I agree that age discrimination is an issue. However, there are some obvious reasons for it outside of the "we think older people are stupid" line. Younger people will often be willing to work more, for less pay, in more marginal conditions. (Witness the discussion a while back about whether ping pong tables and catered food were a perk or a red flag.)

Young people have a greater percentage of their experience on the most modern platforms and are unlikely to "write FORTRAN in any language" (JS, Ruby, etc.).

We've also discussed how homogeneity is valuable to an early startup. Having everyone be culturally similar may allow faster pivoting and interpersonal comprehension.

Finally, tech startups today are largely focused on the "exit": it's not about building and maintaining a product in the long term. Older employees have the planning, analysis, and maintenance experience to establish a product vision for the next decade. But the startup founder (by and large) doesn't want to think of a product over a decade; it's all about MVP, pivots, and fast exits.

Not that any of that is bad; it just doesn't fit well with the average older employee.

I'll admit when hiring I have a little bit of the opposite bias. Very young employees can sometimes be too aggressive about "what is this legacy garbage? we should rewrite it all in RoR and JS". Hey man, we're still maintaining COBOL apps here, slow your roll. It's all about long term planning and maintenance for us.

francoisdevlin 44 minutes ago 2 replies      
Personally, I think ageism in the industry is great. Let my competitors ignore the large pool of experienced engineers out there and hire some young kid who is "cheaper". While he's fixing his design for the tenth time, I'll pay that expensive old dinosaur to do something boring, like get it right the first time.
thoughtsimple 44 minutes ago 2 replies      
I wonder if ageism is really that much of a problem. I'm 51 and still working as a software developer but all of my contemporaries have moved on from development to something else.

I've definitely experienced ageism but I suspect that it is rarely institutionalized. Instead I think that people like me are relatively rare. I have no real interest in management or another career--I like being a developer. That means when I show up for a job interview, I don't exactly fit in since it is likely that few of the candidates are my age or older. Sometimes I can overcome this and other times I'm not really given a chance.

I mostly work as a contractor, so I have frequent job changes, and I haven't found it to be that much of a problem. It's just another hurdle that has to be overcome. In the group I'm working in now, I'm the oldest developer by at least 10 years.

peeters 42 minutes ago 0 replies      
In issues like this, it's important to interpret statistics correctly. For instance, the article cites this:

>Eight of the companies, the study said, had median employee age of 30 or younger. In comparison, the Times reported, the median age of the American worker was 42.3 years old.

Ageism is not the only explanation for this discrepancy. The software industry has exploded in growth in the last twenty years. Most other industries have not seen the same rate of growth. When a labor pool for an industry is strained, it sends a signal to young people to pursue a career in that field. As a result, the labor pool is filled by proportionally more young people.

Eventually the growth of an industry will cease. Then fewer young people will pursue it as a career path, and the median age of laborers in that field will rise.

carbon8 1 minute ago 0 replies      
Also: "The average and median age of U.S.-born tech founders [of companies that have more than $1 million in sales, twenty or more employees, and company branches with fifty or more employees] was thirty-nine when they started their companies. Twice as many were older than fifty as were younger than twenty-five."

zeidrich 10 minutes ago 0 replies      
The problem isn't that young employees have qualities that make them more appealing. That's just the reality of the situation.

The problem is when a candidate with better qualities gets passed over for a younger candidate because of prejudices drawn from that generalization.

Say you want a skilled programmer who will work for a certain wage and put in some overtime. You pitch the job to a fresh college grad and to a 35-year-old who just got downsized out of a job. When the interviews conclude, it's obvious that both are willing to take the wage you're offering, and that the 35-year-old is far more knowledgeable.

If you take the kid because you think the older guy might not be as willing as he claims to put in overtime, or because you think he might be too set in his ways, or because he might be too old to be a cultural fit... that's a problem.

People can hire young people because they're cheap. Especially startups, which maybe can't afford to pay for experience. Likewise, someone with a mortgage and kids might be less willing to look for a job with a risky business. So the average age might drop in those kinds of businesses, and that's OK.

The problem is when you see the effect and invent the cause. "More young people are in successful startups, that means avoid old people if you want to be successful".

Instead it could be just "frugal startups are more likely to succeed, so don't spend too much on your labour" in which case given two candidates willing to take the same wage, the one with the better skills should win, regardless of age.

salmonellaeater 48 minutes ago 0 replies      
While I agree with the sentiment, the statistics quoted don't support the author's thesis.

> The average age of a successful entrepreneur in high-growth industries such as computers, health care, and aerospace is 40.

> Twice as many successful entrepreneurs are over 50 as under 25.

> The highest rate of entrepreneurship in America has shifted to the 55-64 age group, with people over 55 almost twice as likely to found successful companies as those between 20 and 34. In fact, the 20-34 age bracket has the lowest rate of entrepreneurial activity.

If ten times as many people over 50 as people under 25 try to start businesses, then the statistics strongly favor the under-25s. These statistics don't mean anything unless you know the number of people in each age group who start businesses.
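The base-rate point above can be made concrete with a toy calculation. The attempt counts below are hypothetical numbers chosen for illustration, not figures from the article:

```cpp
#include <cassert>
#include <cmath>

// Success rate of a group: successes divided by attempts.
double success_rate(double successes, double attempts) {
    return successes / attempts;
}

// How many times more likely group A is to succeed than group B.
double rate_ratio(double succ_a, double att_a, double succ_b, double att_b) {
    return success_rate(succ_a, att_a) / success_rate(succ_b, att_b);
}
```

With made-up numbers, say 10,000 over-50 attempts yielding 200 successes versus 1,000 under-25 attempts yielding 100, the over-50 group has "twice as many" successful founders, yet `rate_ratio(100, 1000, 200, 10000)` is 5: the under-25s succeed five times as often per attempt.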

> 75% have more than six years of industry experience and 50% have more than 10 years when they create their startup.

What does this even mean? Years worked is not the same as having useful experience.

The article would be much stronger if it showed evidence that older workers were more productive, more likely to succeed at starting a business, or were in some other way undervalued by the current job market.

andyhmltn 37 minutes ago 0 replies      
Sadly enough, I've experienced this. At my last job, after I requested a pay rise, my boss just said no and followed up with 'well, you should think yourself lucky; your pay is pretty good for someone your age.' I quit about two weeks later.
ihsw 23 minutes ago 1 reply      
It's not an ageism problem; it's a loyalty and perception problem. Old people by their nature cling to institutions of loyalty, and start-ups are definitely not known for staying around for 3+ years.

Furthermore, start-ups implicitly prefer "just good enough" in order to get the company up and flying fast enough to accrue investor interest, whereas stability is the bottom line for older engineers -- and stable systems require far more time than some investors are comfortable with.

Even worse, there is the management aspect. Older people have very different management styles, and throwing caution to the wind is definitely not in their managerial toolset. If your company has older people in positions of power, it scares vast swaths of people away -- from eager investors looking to turn a quick buck to energetic young engineers looking for the autonomy to work without anyone second-guessing their decisions.

Finally, the saying "old is gold" applies quite well; however, that gold needs regular polish to keep shining.

jdminhbg 51 minutes ago 0 replies      
This is awful. What does the list of out-of-context statistics about entrepreneurship have to do with hiring tech workers? Who is the "Tech Industry", and how are "they" going to deal with an ageism problem the article defines as simply skewing statistically young?
nknighthb 33 minutes ago 0 replies      
> It seems the young engineers, growing up in an era of unlimited storage, didn't know how to tackle the problem.

Starting off with some ageism and general cluelessness of your own doesn't help. If they wanted a "young engineer" to generically optimize data storage, I would be at least half-competent, and could give them the names of several engineers far younger than 77 who would be excellent at such work in general.

What neither I nor virtually any other engineer, young or old, would know, and what the engineer who helped build the thing would, is anything about the specific equipment and code in use on Voyager. This is specialized knowledge, just like a lot of the knowledge young 2013 NASA engineers have.

It is just as beneficial to employ those engineers to modify the systems they built as it was to employ Lawrence Zottarelli to modify the system he built. Age is unrelated.

ChikkaChiChi 10 minutes ago 0 replies      
The bigger concern is that a prospective employee must show a willingness to adapt and evolve technologically, as opposed to feeling that whatever they already know is "good enough".

An unwillingness to train does exist among the more seasoned veteran demographic, but it's just as bad when you get a young developer who is evangelical about their particular niche.

ianstallings 25 minutes ago 1 reply      
I'm getting on the old side of the average, approaching 40. But in my experience I've never had a startup founder tell me that I or anyone else was too old, mainly because the industry really is a meritocracy and it's usually about what a person can do now. Fortunately for me, most juniors don't know all my tricks just yet.

I will say this though - I have to constantly step my game up to stay relevant. At 50 I will be expected to not only know programming very well, but the industry as a whole including analysis, planning, finances, management, etc.

badman_ting 9 minutes ago 0 replies      
The funny thing about the efforts to increase diversity among developers is that it's just increasing the inputs of the ageism meat grinder we're all headed into. So everyone besides white males gets to have a harder time joining the profession, only to get chewed up and spat out like the rest of us. Lucky them. (OK, I admit that's actually not funny at all)
carsongross 22 minutes ago 1 reply      
The reality for older developers is that you are going to have to:

1) Move to management to continue advancing

2) Find one of the very few large to midsize companies that has a long-term developer career track comparable to their executive track

3) Start your own thing and exploit the fact that you are more productive than younger developers, and control the purse strings

Large, mid-size, and even ostensibly tech companies still, for the most part, view developers as an R&D cost center. They look at cheap development resources, either young kids or foreign contractors, and think, "For the price of 1 developer, I could have 10!"

Now they have 2^10 problems.

Pxtl 29 minutes ago 1 reply      
So, at last count of criticisms our industry is ageist, sexist, able-ist and racist. Did I miss anything?
buckbova 17 minutes ago 0 replies      
Some of my experience hiring "older" engineers (I currently have over 10 years of experience in software engineering/development):

1. They have a family with responsibilities: sick kids, dance recitals, etc.

The bad:

Leads to less professional growth outside of work, more personal days, more sick days, and more personal phone calls during work.

The good:

Once comfortable, they rarely leave the job unless forced out.

2. They attempt to solve every problem with the same set of solutions. Few attempt to find new technologies or try different things.

The bad:

The implementation may not be the best one available that will set up the business for future success.

The good:

Solutions are generally predictable.

3. Lack of motivation to prove themselves.

The bad:

The project deadlines are just met. They do just what's asked of them and not more.

The good:

Deadlines are met. They don't try to tackle more than they are capable of by expanding the problem domain.

lettergram 28 minutes ago 0 replies      
I know that personally, I intend to create a company after I get some industry experience. I am a 21-year-old college student, so that means I'll be somewhere between 28 and 35 when I start a company, at which point I will probably hire younger for cheaper. Perhaps that is part of the reason those companies have such young workers. Based on the 20 or so programmers I've talked to, most either do consulting, start their own business, burn out, or change career paths by 30.

I could be totally off, but from what I know and have seen, this seems to be the trend.

BigChiefSmokem 6 minutes ago 0 replies      
The sweetest thing to see is an old master working alongside a young buck in tandem. You get the best of both worlds.

Anyone who dismisses either side of that equation is fooling themselves about the reality of engineering and team dynamics.

hawkharris 25 minutes ago 0 replies      
Great closing line: "Avoid ageism. It isn't rocket science. Even when it is."
Hossenffefer 38 minutes ago 0 replies      
I think this problem will take care of itself. It's a numbers game. Eventually all these twenty-somethings will be thirty-somethings, and with that they will need the long-term work relationships required for raising families and planning a pivot into the Golden Years. Statistically, the vast majority of them will have to work for money, and I very much doubt that people will just exit the game at 30. They will just have to adjust their expectations.
danso 28 minutes ago 1 reply      
Unfortunately, the list of companies with the oldest workers -- IBM, Oracle, Dell, HP, among them -- would evoke eyerolls from most young hip tech workers today. What's Apple's median age?

Of all the big young tech companies, I would think that Google would be the one to benefit most from older, experienced employees (besides the group of all-stars it already has, such as Norvig)...Google's business encroaches on a lot of other domains, and domain knowledge is something that (usually) gets better with age. Perhaps in 5 to 10 years, there will be a bigger group of 40+ yr old professionals with enough tech experience/savvy to be more obvious assets to tech startups.

Bad Indian Programmers srirangan.net
48 points by factorialboy  1 hour ago   28 comments top 18
smoyer 22 minutes ago 0 replies      
"They're not stupid ... they're demotivated"

Maybe ... or their incentives weren't aligned with yours? We used several different Indian outsourcing shops at various points, and if there was the slightest ambiguity in a specification (and how do you write one without any ambiguities?), they would (purposely?) do what you'd least expect. For a contract programming shop, this led to the most billable hours.

We later purchased a company that included an in-house division in Bangalore. Those programmers were employees of the same company I was, and therefore motivated by the same factors (successful projects meant more work; unsuccessful projects meant looking for another job). In general, the in-house Indian programmers were competent, aside from slightly inflated grades (a lead JavaEE programmer with 2 years of experience?).

So from my experience, there are some brilliant Indian programmers, and some worthless ones, with the vast majority falling in the middle ... just like here.

P.S. I'm in the US, but I imagine that "here" is valid for many values.

ishansharma 4 minutes ago 0 replies      
I'm an engineering student (IT), and I think there are two problems that cause this:

First, as the article says, people lack motivation.

In India, the decision to choose IT/CS does not come from the children; it is often influenced by family or friends. Tell anyone that this field pays well and they will be happy to join it without second thoughts. This happens with the majority of people.

Second is the education system. The article states:

  I don't blame the quality of education here. That's a common excuse. If a person is motivated, he'll surpass that constraint.
Well, motivation is one thing, but when you have non-programmers teaching programming courses, it becomes hard to surpass that constraint. Last semester, we were asked to build a project for a class. I was really motivated, as I had a side-project idea (a personal attendance tracker) and wanted to do it. When I presented it to the teacher, the response was this:

"Why are you building a 2 page project? Others are making big projects, expand it and make something big!"

I tried to explain that it would take time to nail down the UI and design, and that it was good enough for a single-person project. But the teacher just didn't understand. In the end, I made a "Learning Management System" using WordPress.

And in the final viva, the questions asked were these:

"What is SSL?""What does "collapse" button do?" (This is a standard WordPress button, just hides the menu)

Since the number of students was large, no time was given to explain or present the projects! This system of having such people as teachers kills any motivation one has. I can't speak for others, but for me, spending 6 hours in an environment like this and then staying motivated about programming is very, very hard!

Add to this that you need to be an expert in Physics, Chemistry, and Maths to get into the top institutes (IIT, NIT), even if you want a CSE/IT course. This filters out the majority of people with an interest in programming. I have been programming since 9th standard, but I wasn't good at Chemistry and Physics, so I couldn't go to a good institute.

Quality of education is a big factor. The education system is churning out engineers who are experts at cramming and lack any interest in programming.

ivanhoe 3 minutes ago 0 replies      
Well, we are all a bit xenophobic in our perception of the world around us; there is always a notion of "us" and "them" hardcoded into our brains. When one of "us" is bad, we think "this dude sucks", because we think of him or her as a person. When dealing with a foreigner, especially in an online, very impersonal way through a middleman, that person is always just one of "them". If he does a bad job, we almost always jump to generalize and conclude "these guys are stupid" (it works the same way with positive experiences, too). You never do that with "us", because you know it's not true; you know there are a lot of brilliant people around you. But you don't know "them" well enough to think outside the black-and-white picture of your very limited experience.
mrng 32 minutes ago 4 replies      
> So you want to hire somebody for less than ~ $20 per hour.

> And you expect the quality of $200 per hour experienced developer.

> Stop having crazy expectations.

Enough said.

unlimit 1 minute ago 0 replies      
I agree with the author on every point he has made, but I would like to add one more to his list: the good ones eventually become demotivated, because nobody gives a shit about good code in these outsourcing shops. Add to that the bad working conditions in some places. I work on a remote desktop/VMware desktop with the servers in the USA, and it is so slow that I get 4 hours of productive work out of the 8 hours that are billed. And because the estimate will not change, I start taking shortcuts. Don't blame me; I do what I have to do.

And writing good-quality code will not get you a good rating or a promotion. These firms make a profit when they have more headcount. A good, experienced coder becomes a liability, because he or she has to be paid more and therefore the firm makes less money off him or her. To make a profit, these firms need more fresh-out-of-college kids who can be paid peanuts.

neals 1 minute ago 0 replies      
I have been paying an Indian programmer $35 an hour. He writes great C++ code; I couldn't be happier. We have been raising his rate every now and then (we started at $20, and we can raise it to $50 over time).

There are two sides to this: if you don't get paid a lot, you are probably demotivated and, yes, you can choose to deliver below-average quality. However, that way it is very unlikely that you will ever get paid more or get better projects. Hence: you're stuck where you are.

If you, however, sacrifice a little, deliver great code, and put in the extra effort, you can probably negotiate your way up.

At least, that's how I got from being an inexperienced web developer to owning a business and having a great staff <3.

arunitc 21 minutes ago 1 reply      
I'm a programmer from India. Here it's more quantity vs. quality: we have a ton of programmers, but a really small number of good ones. Most of us come into this field not for the love of programming but for an "onsite opportunity".
lmm 19 minutes ago 0 replies      
So I'm happy to agree that outsourcing to bad programmers is done by bad managers. That doesn't mean they're not still bad programmers.

And, y'know, even the best organizations make mistakes sometimes. If a company tried outsourcing to India once, for good reasons, discovered the results were bad, and has sworn off ever doing it again, it could be that there's nothing wrong with the company; it just made a mistake. Less than that, in fact: it ran a worthwhile experiment and got a negative result.

VinzO 22 minutes ago 0 replies      
Over the years, I had several bad experiences outsourcing to big companies in India. But recently I had a discussion with an Indian colleague. She used to work for a company my previous employer outsourced to. She told me most of the workers there are either just out of school or bad programmers; the good programmers quickly find better-paid positions at other companies.

It seems the main issue is that decision-makers in the West are happy to find cheap outsourcing companies in India; their pick goes to the company with the lowest hourly rate. What they don't realize is that the good programmers move very quickly into better-paid positions elsewhere, leaving you to work with the less qualified people.

gesman 23 minutes ago 0 replies      
When you hire a cheap Indian/Pakistani programmer through some * lance *.com site, in many cases you'll unknowingly be paying a rock-bottom rate to a middleman who passes less than half of it on to the actual programmer.

Welcome to the cost "efficient" world of outsourcing!

drunkpotato 54 minutes ago 0 replies      
Interesting post, and I think very true. My takeaway: If you want quality {employees,contractors}, you have to pay them well. Other takeaway: Blaming others for failure is easier than introspection.
jmspring 17 minutes ago 0 replies      
One consistent thing I've seen on the outsourcing front is that there are those in the management chain (and thus the ones who sign off on outsourcing) who will insist on setting up some form of it -- first just the non-business-critical stuff; a little later, more and more gets outsourced. This can be fine and a good way to grow.

However, many of the times I have encountered this, there was a desire to do more outsourcing even though the quality of prior work didn't merit the expansion. More often than not, the one insisting on the outsourcing had some sort of personal relationship with the group the outsourced work was going to.

This happened with a couple of different regions/nationalities.

Why is this relevant to the article? It's not just about pay: specifically with outsourcing, you need to look at whether the decision-makers are listening to feedback about the quality of work produced or are insisting on steaming ahead regardless.

legedemon 5 minutes ago 0 replies      
I think you have covered most of the factors already but I wanted to recount my experience here to give everyone an idea of the kind of Indian programmers who get hired.

I used to work for an MNC that had an office in Bangalore, and I had good relations with most of the senior staff. After a while I joined a start-up (which became kind of successful) and my salary rose pretty quickly. Fast-forward 5 years: I get a call from the GM of my old business unit (who had become the VP by then), asking me to rejoin the company and offering me 20 Lpa (lakhs per annum) when I was already making 24 Lpa. He just couldn't reconcile himself to the fact that my salary had become higher than that of the people who had stayed at the same company and gone on on-site visits. I had to politely refuse.

wslh 5 minutes ago 0 replies      
Well, if you look at Google's top-management page, you can count more Indians than Americans. Probably (just speculation) there are more paths to becoming a developer in India, and more people, and that produces a wide range of skill levels.
DjangoReinhardt 11 minutes ago 0 replies      
> So you want to hire somebody for less than ~ $20 per hour.

> And you expect the quality of $200 per hour experienced developer.

> Stop having crazy expectations.

Replace the word "developer" with its equivalent in any 'talent'-related job and it still applies: radio, voiceovers, writing, editing, to mention a few.

If you can't value my time, I won't find enough motivation to value yours. You pay peanuts, you get monkeys. :/

bencollier49 10 minutes ago 0 replies      
When I've seen bad jobs done by Indian companies, it has generally been the result of poor project management by the client. Given the right milestones, targets, and contractual obligations, and no goalpost-shifting, the results get better.

That said, I heard tell of a piece of code where "if n < 20" was implemented as a series of 20 conditionals ("if n = 0", "if n = 1", and so on), which beggars belief.
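For concreteness, here is a sketch of the anti-pattern that anecdote describes, next to the one-line check presumably intended. The function names are made up, and the unrolled version is abbreviated to a few of its 20 cases:

```cpp
#include <cassert>

// The reported anti-pattern: a range check unrolled into per-value
// equality tests. Only a few of the 20 cases are shown; the real
// horror would spell out all of them.
bool less_than_20_unrolled(int n) {
    if (n == 0) return true;
    if (n == 1) return true;
    if (n == 2) return true;
    // ... 16 more conditionals, one per value ...
    if (n == 19) return true;
    return false;
}

// The straightforward version: one comparison. Note it also returns
// true for negative n, which the unrolled form silently does not --
// unrolling a range check can even change behavior, not just bloat it.
bool less_than_20(int n) {
    return n < 20;
}
```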

mguterl 22 minutes ago 2 replies      
Unfortunately, paying someone well does not necessarily mean you're going to end up with quality, either.
rjuyal 17 minutes ago 0 replies      
Totally agree with the post. I see lots of software professionals in this field who really don't want to code.
GnuPG 1.4.15 released gnupg.org
60 points by austengary  1 hour ago   4 comments top 2
chimeracoder 43 minutes ago 1 reply      
Anybody have any idea when we'll be able to use elliptic-curve encryption with GPG?

According to tptacek, that's in the works, but it seems it didn't make it into this release.

Why Don't More Men Pursue Female-Dominated Professions? freakonomics.com
28 points by tokenadult  40 minutes ago   9 comments top 6
zeidrich 2 minutes ago 0 replies      
I think it's generally for the same reasons that more women don't pursue male-dominated professions: there's an established culture, and not fitting in with that culture makes you seem like an oddity, or like you have ulterior motives.

Is that man teaching because he needs a job, or because he enjoys teaching? Or does he just want to prey on the girls?

Is that woman developing video games because she's a good programmer, or is she just desperate for male attention?

The biggest difference is that fewer people see this as a problem when it's affecting men. Culturally men are expected to bear their problems instead of lament them.

Tichy 4 minutes ago 0 replies      
I suspect there are two reasons why men would push for more women in IT:

- lower wages because of higher competition (the employer's incentive)

- more "attractive" workplace wink wink (the employee's perspective)

I know it's not politically correct to say that, but don't shoot the messenger.

I don't think men feel the need for more "female perspectives" in programming, especially as they wouldn't even know what those are supposed to be.

Women (feminists) push for more women in tech because they have seen some people get rich via tech jobs and feel left out. They don't push for more women garbage collectors, and so on.

lnanek2 5 minutes ago 0 replies      
Nursing might be cool: you get to help save people's lives and make a real difference, without as many barriers as becoming a doctor. Googling the average salary, though, it's a quarter of what I earn as a developer. So no way.
dgabriel 5 minutes ago 0 replies      
There are organizations devoted to getting men into traditionally female occupations. Maybe we don't hear about them as much because they're not tech-oriented?

jkscm 4 minutes ago 0 replies      
[citation needed]

I think this is the appropriate answer to these guys talking about this kind of topic.

skylan_q 14 minutes ago 2 replies      
Because there are never "not enough men" in some field, it's always "not enough women."
Show HN: Mummify - preserve web content, fight link rot. mummify.it
21 points by zek  39 minutes ago   13 comments top 9
gabemart 5 minutes ago 1 reply      
Most of the time, one will not know which of the pages one links to will disappear in the future. This service therefore only seems really useful if you use it with every link you make, and that would leave the biggest plan short on "mummies" by a couple of orders of magnitude, at least.
desireco42 19 minutes ago 0 replies      
I don't get the number of free/paid mummifications. It seems very low; space and bandwidth are abundant.

I think, with respect to the original developer, that this is more a feature than an app, and it would probably help if it were developed further to target a more specific problem/group.

Having said that, I wish you best.

contextual 11 minutes ago 0 replies      
This is an example of how great branding can help explain a product. Love the name, love the look, the copy is clever... but I don't like the low number of mummifications you get per plan.

I suggest adding more value or lowering the monthly price.

Overall, very cool.

junto 24 minutes ago 1 reply      
One point to note is that a DMCA takedown targeted at Mummify.it will remove the content just as it would from nytimes.com.

If I manually save that content to disk, then a DMCA takedown doesn't affect the copy stored on my local hard disk.

jboynyc 29 minutes ago 1 reply      
Interesting in light of this discussion: https://news.ycombinator.com/item?id=6504331

But to trust something like this to make a permanent copy of stuff I'm linking to, I'd need to know a bit more about them. Otherwise this is effectively like using a link shortener: a single point of failure.

itry 15 minutes ago 1 reply      
First I had to make an account.

Then it doesn't work. Stuck at "caching page".

I hate you.

Nux 24 minutes ago 0 replies      
Let's centralise the interwebs!

It seems like a bad idea, but they do have a point. Maybe when referencing a link also make a small note to one of these archive sites?

Pxtl 27 minutes ago 0 replies      
Things like this should be required when posting links in Stack Overflow and the like.
MWil 24 minutes ago 0 replies      
and if Mummify goes down...

Would it be a better option if the "permanents" were shared across p2p/BitTorrent, and every unique item had at least 10 copies distributed across the globe, maybe a max of 20? When one host goes down, a replacement just picks it up.

Core.Typed Adds an Optional Type System to Clojure infoq.com
46 points by austengary  1 hour ago   3 comments top
ambrosebs 1 hour ago 1 reply      
The information on what needs annotating isn't quite complete: loops and some other macros need annotations.

I'm probably responsible for the thinking that annotations are only needed for "top levels and function parameters". I usually forget about the other ones, but I think those two are the most significant.

Juce: An impressive one-man C++ cross-platform library juce.com
32 points by wslh  1 hour ago   22 comments top 7
shaggyfrog 50 minutes ago 4 replies      
A former client of mine considered Juce for a cross-platform project several years ago. It seems to be popular in the audio programming segment. The biggest drawback is the lack of native look-and-feel; Juce apps don't "feel" native, and that's usually enough to keep it an also-ran for most use cases.

I don't really see the big advantage of these cross-platform GUI frameworks, anyway. Trying to force a standard interface across diverse platforms means coming up with odd idioms or patterns to achieve a kind of artificial homogeneity. Better to concentrate effort on making the business logic cross-platform, IMO.

asveikau 12 minutes ago 2 replies      
Kind of weird how he wrote his own string class, as well as rewriting some other perfectly good standard library stuff.

His version of scoped_ptr seems to fake rvalue references without actually using them, but does so in a copy constructor. IMO this is a bit bonkers. If you're not going to use rvalue references, I think move semantics are better done from a method, not a copy constructor.

Still looks like a handy library for wrapping things that are otherwise not portable.

Edit: I was mainly basing that comment on looking at juce_core... There's a crapton of other stuff too. Impressive for the work of one person.

nicholassmith 46 minutes ago 1 reply      
Looks interesting. I come from a Qt background, so cross-platform toolkits always hold a bit of interest for me. It does seem like it's definitely for people building highly customised interfaces rather than those looking for OS-based look and feel, but there are some nice bits. I think I'll give it a whirl when I've got time for a side project.
wbond 17 minutes ago 1 reply      
I realize this may be slightly off-topic, but is there something like this that focuses on building cross-platform apps that have native UIs?

I've obviously seen wxWidgets, but I'd even be interested in commercial offerings.

hemmer 22 minutes ago 0 replies      
I've been using JUCE for a couple of years for VST development. I've found it to be a very well put together library for audio work (diving straight into Steinberg's SDK is pretty daunting for someone new to audio like myself), and a lot just works out of the box.

There is strong leadership from Jules on the direction of the library, which is generally a very good thing, though it does mean that sometimes there isn't much room to budge on controversial issues. Font rendering is one aspect that several people have battled with for a while; I've struggled to get good, crisp smaller fonts without resorting to freetype. Jules' argument seems to be that small fonts shouldn't be used, period, therefore the library won't render them well (I think there are technical as well as philosophical reasons for this, particularly on OSX). While I agree they should generally be avoided, there are certain situations where this isn't the case (reproducing an existing GUI for a client, fitting non-critical text in when screen real estate is at a premium, etc.).

Overall I would certainly recommend it for anyone starting out in audio development, but be prepared to fiddle around with fonts. I'm not so familiar with the non-audio parts of the library.

Bill_Dimm 38 minutes ago 0 replies      
According to the About JUCE page the license is GPL, so you need a commercial license if you want to use it in a non-GPL project. The pricing doesn't look too bad if the library is solid (399 for a single project, 699 for unlimited).

They really need a lighter color for the text on their website -- tough to read.

jbrooksuk 35 minutes ago 1 reply      
This makes me wish that the developer of Sublime Text would release his UI toolkit. It's completely cross platform too.
Dropplets - A simple database-less CMS dropplets.com
60 points by alwaysunday  2 hours ago   36 comments top 19
pioul 25 minutes ago 1 reply      
The concept and simplicity make me think of Ghost (http://ghost.org/features/): free, open source, near-minimalist, and self-hosted.

The only apparent advantage is that Dropplets doesn't require a database. Its landing page, however, is amazingly beautiful, clear, and to the point (though Ghost's "features" page is slick as well).

On a more technical note, click events seem to propagate up through the player on the landing page, closing it when toggling HD, for example.

xauronx 1 hour ago 0 replies      
The presentation is awesome. Super clean; I love the logo and the videos that let you know just what this thing is. The blurbs of text were enough to get me interested in watching the video. The videos were clean and to the point. Damn, for me this is what all landing pages for projects like this should be. I'm not sure if I'll use your product, but I'll certainly bookmark the page to learn from later.
reidrac 1 hour ago 0 replies      
"Dropplets is compatible with most server configurations [...]". Requires PHP (at least), I guess none of my servers has a common configuration :)

EDIT: the installation part in the README.md needs some extra info. Like it requires PHP and some file/directory permissions.

dombili 38 minutes ago 2 replies      
This looks promising. I'm gonna give this a try since I'm looking for a simple and lightweight CMS for my blog. Speaking of which, does anyone have any recommendations (apart from Jekyll)? I still haven't found what I'm looking for, and I'm on the verge of coding my own blog in plain HTML instead of installing something like Wordpress -- but that may be a pain in the ass to manage once I have more than a handful of posts on my host.

But who knows, maybe Dropplets is the one.

ohwp 30 minutes ago 1 reply      
A small rant: I think it is bad practice to hide content below the page height. I just wanted to close the website because I thought 'nothing to see here' before I accidentally scrolled and more content was revealed.
dkuntz2 34 minutes ago 0 replies      
Having tried it out before, I found the format used for blog posts incredibly arbitrary. It requires your Twitter username for every post, among a bunch of other things. Plus, instead of using key/value pairings with YAML or something similar, it requires that your headers be in a specific order.

It seems cool; it also seems like a little bit of work could be put into making it more accessible to people who didn't write it.

dubcanada 1 hour ago 1 reply      
I thought this had something to do with Drupal. I guess not :)
robbfitzsimmons 1 hour ago 2 replies      
Mods - I think this should be "CMS" (content management system) instead of "CRM" (customer relationship manager), as it's a blogging tool.
andyhmltn 41 minutes ago 0 replies      
Just a note: Chrome 24 on Ubuntu and the videos aren't autoplaying. I have to right click + play. Would be nice to show some controls :)
Cthulhu_ 1 hour ago 1 reply      
One could argue that in this case the server's file system acts as the database.
jc4p 1 hour ago 0 replies      
Quick link to demo: http://dropplets.com/demo/
Touche 55 minutes ago 1 reply      
Anyone looked at the code and see how this works? When you publish does it generate html or does it do it for each GET?
cl3m 26 minutes ago 0 replies      
It is powered by PHP. They probably feel ashamed, as they don't say it anywhere on the webpage ;-) Seems nice though!
yogo 1 hour ago 0 replies      
My only recommendation would be to use the new password API going forward, and password_compat for versions less than 5.5. I don't see a PHP version requirement in their docs, though.
testdrive5 48 minutes ago 1 reply      
Can someone please tell me how this is different from Jekyll? They both work based on the same concept if I'm correct?

Also, how about support for category pages? I mean, show a page full of posts from one particular category only? Possible??

aw3c2 1 hour ago 1 reply      
All I see is a link to a zip and a link to "http:///"?
fmitchell0 44 minutes ago 0 replies      
looks great! definitely will have to try this out.

it's a little discouraging, however, that the issue queue has so many pull requests, comments, etc. sitting without response.

i love the concept of simplicity and i'm sure the maintainers have a roadmap in mind. it'd be nice if that was communicated a bit so i can know how simple they plan to keep it.

potomak 1 hour ago 0 replies      
Nice design, next step, a mobile friendly CSS.
Scala: sharp and gets things cut fogus.me
15 points by SanderMak  43 minutes ago   3 comments top 3
mercurial 5 minutes ago 0 replies      
> I am a great programmer and I choose static type systems because I demand sharp tools.

> Of course by taking this position you run the risk of pushing a fallacious, authority-based angle. However, in the weak or strong case you're already pushing anecdotes around, so you may at least choose the one that is open for objective measurement.

Which part is open to "objective measurement"?

I don't know if I'm a good programmer, whatever that is, but I use static type systems when I want a system to work both today and tomorrow after I refactor it.

nathansobo 1 minute ago 0 replies      
I also found his talk fascinating, but the paper he referred to was a bit unapproachable for me. Any suggestions for resources that could help get me more comfortable with reading and understanding type calculi?
seanmcdirmid 9 minutes ago 0 replies      
The article is weird. Martin Odersky is an incredibly good programmer -- just dive into the source code of scalac. He was probably just being modest to make a point.
iOS 7 OmniGraffle Stencil njimedia.com
31 points by sailer  1 hour ago   11 comments top 6
ChikkaChiChi 2 minutes ago 0 replies      
The stencil is very well done, and easy to work with!
bambax 17 minutes ago 0 replies      
I can't read this site; (very) light grey on a white background is invisible.
kalleboo 1 hour ago 2 replies      
When I add it to the stencil palette in OmniGraffle 4, it ends up super tiny: http://imgb.mp/jk2.jpg

Maybe it's time to reconsider whether I use OmniGraffle often enough to pay $50 to upgrade to 6... I don't really need any new features, just fixes for all the bugs that have cropped up.

phinnaeus 1 hour ago 1 reply      
The little pointer arrows on the "Select | Select All" and "Select | Select All | Paste" tooltips are a different color from the body of the tooltip.

But it looks really nice overall.

jivid 1 hour ago 1 reply      
I'm a little confused by the "Main action" and "Cancel" buttons. Shouldn't the Cancel button have the red text and not the other way around?
srik 1 hour ago 1 reply      
That looks well done. What is the license on it?
The Internet is trolling the FBI's Bitcoin Wallet via public notes techcrunch.com
28 points by aelaguiz  1 hour ago   6 comments top 4
nwh 42 minutes ago 1 reply      
Notes that are only visible to people looking at a certain address on a certain website, and nowhere else. I truly do not think this is at all newsworthy.

"Public notes" are a construction of blockchain.info -- despite the confusing name, they are most certainly not part of the actual Bitcoin blockchain.

robert_foss 35 minutes ago 0 replies      
The top and bottom notes are a Bill Hicks quote, that's pretty classy as far as trolling goes.
RF7803081 37 minutes ago 1 reply      
The real troll is that people use bitcoins
Google Chromebook developer setup guide afaqdar.blogspot.com
30 points by afaqurk  2 hours ago   13 comments top 5
lnanek2 1 minute ago 0 replies      
Interesting that there are ways to keep Chrome OS around. Most developers I know using one just go pure Ubuntu and don't care at all about keeping Chrome OS as a dual boot option or whatever.
bluedino 59 minutes ago 2 replies      
What are the odds of programs like SublimeText being able to run on ARM in the near future? I'm sure Eclipse would be far too heavy.

Most of the 'Chromebook as a developer laptop' setups make it into a fancy SSH terminal -- which is fine. But I don't really see a 'terminal' as a 'developer laptop'. I understand you can still access the internals of Linux along with the shell and other languages, but it doesn't work for a lot of people when you can't use the same apps/devices as you can on your 'normal' machine.

A $999 XPS 13 or MacBook Air looks pretty inviting, even at 3X the price of a $299 Chromebook, when you realize how much productivity you gain when you don't have to dink around with the OS and aren't limited in your app selection.

pekk 1 hour ago 2 replies      
While this is probably very helpful to Chromebook owners, it doesn't make me want to use a Chromebook for development, or understand the choice.

If the hardware is particularly good (is it?), why doesn't someone make it easier to just straight-up run Linux from the hard drive? I mean no weird scripts, SD cards, etc. -- just a proper distro like Debian.

bestdayever 48 minutes ago 0 replies      
I think the thing that has always held me back from finally pulling the trigger on a Chromebook Pixel is the fact that you have to dismiss the "developer mode" popup every time you boot. I guess that's a bit of a nitpick, though, and it otherwise seems like a solid machine.
gum_ina_package 1 hour ago 3 replies      
If Google came out with a 13" Chromebook for around the same price, I'd buy that in a heartbeat. The 11" version is just too darn small for my beefy hands.
Time-lock crypto puzzles gwern.net
27 points by kiba  2 hours ago   6 comments top 4
im3w1l 3 minutes ago 0 replies      
I think I have a better scheme. Say you have a 10-bit keyspace or something, and then encrypt a very large number of times with random keys. You don't have to perform as much computation as your adversary. By the law of large numbers, the probability of solving all of the puzzles in a much shorter than expected time is low. And it is much less parallelizable than just one encryption with a random key.
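A rough sketch of the layered scheme described above. The toy SHA-256 stream cipher, the 4-byte marker used to recognize a correct key, and the layer count are all illustrative assumptions, not part of the comment; the point is only the asymmetry -- the creator does one encryption per layer, while the solver averages ~512 brute-force trials per layer and cannot parallelize across layers:

```python
import hashlib
import secrets

KEY_BITS = 10           # 1024 candidate keys per layer, as in the comment
MAGIC = b"LOCK"         # known marker so a brute-forcer can recognize the right key
LAYERS = 50             # arbitrary choice; the comment says "a very large number"

def keystream(key: int, length: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode, keyed by the small integer key.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key.to_bytes(4, "big") + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

def lock(data: bytes) -> bytes:
    # The creator does one encryption per layer and can forget each key immediately.
    for _ in range(LAYERS):
        key = secrets.randbelow(1 << KEY_BITS)
        data = xor(MAGIC + data, keystream(key, len(data) + len(MAGIC)))
    return data

def unlock(data: bytes) -> bytes:
    # The solver must brute-force each layer in turn: ~512 trials on average
    # per layer, and a layer can't be started until the one above it is stripped.
    for _ in range(LAYERS):
        for key in range(1 << KEY_BITS):
            plain = xor(data, keystream(key, len(data)))
            if plain.startswith(MAGIC):
                data = plain[len(MAGIC):]
                break
    return data

locked = lock(b"attack at dawn")
print(unlock(locked))  # b'attack at dawn'
```

Because the total solve time is the sum of many independent geometric-ish searches, it concentrates around its mean, which is the commenter's law-of-large-numbers argument.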
earlz 39 minutes ago 1 reply      
Possible variant on this scheme to make it harder to estimate the amount of time required: don't use a fixed number of hash iterations. Instead, use a bitcoin-ish scheme like: "the key to this file is given by hashing 'xxxx' until the hash's bottom 8 bits are 0"
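A toy version of this variant might look like the following sketch. The hash choice, the low-bits test, and the 8-bit difficulty are illustrative assumptions; the key property is that the number of iterations is geometrically distributed with mean 2^difficulty, so the solve time is genuinely hard to predict:

```python
import hashlib
import os

def make_puzzle(difficulty_bits: int = 8):
    """Build a puzzle whose key is the first value in a SHA-256 hash chain
    with its low difficulty_bits bits all zero."""
    seed = os.urandom(16)
    mask = (1 << difficulty_bits) - 1
    h = hashlib.sha256(seed).digest()
    steps = 1
    while int.from_bytes(h, "big") & mask != 0:
        h = hashlib.sha256(h).digest()
        steps += 1
    return seed, h, steps

def solve(seed: bytes, difficulty_bits: int = 8) -> bytes:
    # The solver must rebuild the same chain -- inherently sequential work.
    mask = (1 << difficulty_bits) - 1
    h = hashlib.sha256(seed).digest()
    while int.from_bytes(h, "big") & mask != 0:
        h = hashlib.sha256(h).digest()
    return h

seed, key, steps = make_puzzle(8)
assert solve(seed) == key
```

Note that, unlike Bitcoin mining, there is no nonce to parallelize over here: each hash depends on the previous one, so extra hardware doesn't help much.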
pbaehr 32 minutes ago 0 replies      
This is interesting in its own right, but the Assange use case doesn't really make sense to me. Wikileaks doesn't want the encryption to be broken after a certain amount of time, they want it broken based on the condition of assassination.
pacofvf 40 minutes ago 1 reply      
Maybe it's just because it's Monday, but besides the Julian Assange example and the time capsule, I can't think of another use case for a time-lock crypto puzzle. Anyone?
Age of Internet Empires: One Map With Each Country's Favorite Website theatlantic.com
51 points by jeanbebe  4 hours ago   40 comments top 13
kijin 2 hours ago 0 replies      
> Baidu dominates China, though its spill-over popularity into neighboring countries makes the researchers doubt whether data from those countries is accurate.

The data for Korea clearly isn't accurate. Baidu is almost unheard of in this country. Not many young people can even read Chinese. Naver, a local company, dominates 70% of the Korean search market and a significant portion of the social networking scene as well. Its anti-competitive behavior is currently a hot topic in Korea.

I suppose the anomaly is due to the fact that the authors used Alexa (mentioned in the bottom right of the second image). Hardly anyone in Korea has the Alexa toolbar installed. People here, like elsewhere, pollute their PCs with all sorts of other toolbars, but rarely Alexa. The language barrier probably plays a part. I wouldn't be surprised if those who do have Alexa (usually foreigners) tend to have ties to a certain neighboring country with a very large population.

TranceMan 3 hours ago 2 replies      
The article is a little light on the details of which data was used and how it was compiled - are the majority of Google hits due to Google being a person's default URL when they open their web browser?
krosaen 1 hour ago 1 reply      
> Among the 50 countries that have Facebook listed as the most visited website, 36 of them have Google as the second most visited, and the remaining 14 countries list YouTube (currently owned by Google).

Does that mean they aren't counting all Google properties together as Google? What would the map look like if they did?

contingencies 0 minutes ago 0 replies      
Does this support "lower education levels equals greater facebook use"?
Systemic33 3 hours ago 3 replies      
Interesting that the most popular site in Kazakhstan is a Russian site (Mail.ru).

Another interesting thing to measure would be the most popular national website, i.e. for Denmark, the most popular .dk domain, and then represented in size per population.

EDIT: I'd hypothesize that the top website would be whatever bank has the most customers, or websites for government functions.

pmelendez 3 hours ago 1 reply      
No wonder Google is trying to push Google+ so hard. Although I don't know which is worse: a world dominated by the New Google, or by the always-full-of-controversy Facebook.
raamdev 3 hours ago 4 replies      
Yahoo's prevalence in Japan surprised me the most.
jeanbebe 4 hours ago 2 replies      
Google's domination is still kind of amazing.

Also, there should be a new version of RISK: The Game of World Domination. It should feature tech companies as the attacking hordes.

LanceH 1 hour ago 0 replies      
Interesting that facebook does better on islands that aren't Oceania.
spindritf 3 hours ago 1 reply      
Facebook dominance seems to correlate with poverty.
jeanbebe 2 hours ago 3 replies      
#1 People still install toolbars, that's crazy to me.

#2 Google has so many popular products (search, Gmail, YouTube, maps) that it makes sense that they're that big. It's equivalent to a person having a bank account with $1bil in it: just leaving that money in the account and raking in interest, you continue to get bigger simply by existing. In Google's case, there isn't strong enough competition to stop them from "being" and gaining more share based on their prior efforts.

#3 Could a new US-based search engine compete with Google? Or are they just so big that the task is a fool's errand?

prawn 2 hours ago 0 replies      
Facebook has always been at war with Eastasia, err, Google.
Show HN: Dpadd.com (Goodreads for games) is now open to all dpadd.com
6 points by claytoncorreia  16 minutes ago   discuss
Attorney General (NY) hits AirBnB with subpoena for user data nydailynews.com
141 points by donohoe  4 hours ago   146 comments top 15
jval 4 hours ago 4 replies      
Right... so who were Attorney General Eric Schneiderman's campaign contributors last election?


At number 4 with 1.58% of total campaign moneys, East 103rd Street Realty. Parent corporation: Glenwood Real Estate Corp.


"Luxury Apartment Rentals in New York City"

Obviously the fact that people are renting their apartments is not something that buildings are capable of managing for themselves and has become a matter of great importance to the entire state of New York.

dbags 4 hours ago 7 replies      
Good. I know that Chesky is trying to pretend that this is about people who occasionally share their homes, but that's bullshit. There are a lot of people who are stuck living next to illegal, untaxed hotels because one of their neighbors AirBNBs their place full time.

I know that a lot of people on this site think that if you add the words 'on the internet' you should be exempt from all regulation and taxation, but that's just not how the world works.

I hope that the people who've been profiting from the lack of enforcement are forced to play on a level playing field.

Disclaimer: my experience with NYC AirBNBs has been incredibly negative, including people listing with fake names, revealing that they'd given fake addresses at the last minute (when it was already too late to change plans), showing deceptive photos, and giving false descriptions.

king_jester 3 hours ago 0 replies      
It is important to note that it is a big deal to NYC to ensure that residential spaces do not become de facto hotel rooms that avoid the taxes and regulation associated with running a hotel. NYC has been experiencing tremendous rent increases over the last decade, and housing costs are becoming more unaffordable. Any residential space that gets turned into a pseudo-hotel generating income at the expense of being a livable space for an NYC resident only makes this situation worse.
camus 11 minutes ago 0 replies      
To all the people who think Airbnb is great: why should landlords take tenants at all, just so the tenants can sublet?

Landlords will end up posting offers on Airbnb directly; that's what will happen "en masse" in the future.

And you'll need to pay the Airbnb premium to rent anything.

After all, if I'm a landlord and ask normal tenants for $2000 a month, I can just go on Airbnb and ask for $200 or more a night, so I only need to rent it for 10 nights a month for it to be profitable...

Landlords aren't stupid; we'll eventually end up in that situation on a large scale.
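The break-even arithmetic in the comment above checks out (the $2000 rent and $200 nightly rate are the commenter's hypothetical figures, not market data):

```python
monthly_rent = 2000    # hypothetical long-term rent, in dollars
nightly_rate = 200     # hypothetical short-stay rate for the same unit

# Nights per month that short-stay bookings must fill to match the long-term rent.
break_even_nights = monthly_rent / nightly_rate
print(break_even_nights)  # 10.0
```

At typical urban occupancy rates well above 10 nights a month, the short-stay route would indeed come out ahead under these assumptions.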

wehadfun 51 minutes ago 0 replies      
I own a vacation rental and list on AirBnB. AirBnB should pay the taxes. AirBnB is not a simple directory of houses/apartments/tents for rent; they are a "reseller", or maybe "re-renter", of them. For example, you may charge $100 a night, but AirBnB sells your property for something like $125 a night. Is the host supposed to pay taxes on the whole $125? Really, the guest is not even renting from the host; the guest is renting from AirBnB. The guest pays AirBnB, not the host. When a guest pays, AirBnB keeps the money until the guest arrives, and pays you at its discretion. If there is a problem and the host wants some of the guest's security deposit, the host has to ask AirBnB for it, and AirBnB determines how much the host gets. A host can't even email the guest or get their phone number until AirBnB allows it. As a host you are a supplier to AirBnB, and suppliers in general do not have to deal with these taxes: if you buy a Coke from a gas station, the gas station is responsible for paying local taxes, not The Coca-Cola Company.
kfk 4 hours ago 4 replies      
The thing is, you have laws and tax laws and, however unjust they might seem, a State has to enforce them, full stop.

User data, however, is different. I don't know; there is a thin line between enforcing the law and failing to respect citizens' freedom and privacy.

The housing argument, instead, is just ludicrous. Let's put all the hotels in the city out in the suburbs then.

beedogs 4 hours ago 1 reply      
This was pretty much bound to happen. Next they'll go after each host for all sorts of taxes and penalties.
argumentum 2 hours ago 3 replies      
There seems to be some confusion about the tax issue here .. many commenters seem to think that AirBnB hosts are avoiding tax on their income via AirBnB. That's simply not true; in fact, AirBnB reports this income to the IRS, and you are required to fill out a tax form to continue hosting.

The issue in question is a tiny (percentage-wise) hotel tax. In my view, AirBnB is not resisting this tax for its own sake, but rather because it risks classifying hosts as hotel operators.

AirBnB is a wonderful service, and yes it has flaws, but NYC is caving to the demands of political interests who know how to play the lobbying game.

testing12341234 3 hours ago 3 replies      
Does anyone know if an out-of-state ecommerce site like Amazon is required to give user data in this manner? Focusing on the state tax issue, it would appear that these two systems are similar in that the taxpayer is required to declare and remit taxes (but often doesn't).

(obviously the other issues about whether it's even legal to rent out in the first place is another matter)

danielweber 3 hours ago 4 replies      
Is it normal for nydailynews to have editorializing right in the subheadline?

"State is concerned about hotel occupancy taxes and possible evictions by greedy building owners."

Greedy is a value judgment with big negative connotations.

lotsofcows 3 hours ago 1 reply      
"A drunk European"? Nice, I do like a bit of casual racism.
sneak 3 hours ago 1 reply      
Yes, they just characterized property owners seeking fair market value from tenants as "robbing the city" (direct quote).

Thanks, guys.

danso 3 hours ago 1 reply      
I wonder how much of this could've been avoided had Airbnb built tax collection into the system. Yes, the pain of doing it by jurisdiction is one of those unscalable tasks, but you only have to run through it once (i.e. look up the state tax laws) every few years, or hire a single lawyer/accountant to do it full time.

I did an Airbnb stay in Rome last year... the gov't there has been trying to better enforce its tax laws, even to a comedic degree (you, the customer, can get in trouble for walking out of a gelato shop without a receipt). My host made very sure I signed the right paperwork after my stay... and I'm guessing that as long as the state gets its share, it has less incentive to crack down.

olleicua 1 hour ago 0 replies      
This is completely unreasonable. The NSA doing this in secret is one thing but now they aren't even trying to hide it.
wahsd 55 minutes ago 1 reply      
This is so damn disgusting!

But it also reveals, and people should realize, how much of our society is actually a facade. This is not liberty or freedom, to do with your apartment or home as you wish. Next thing you know, the government is going to subpoena Craigslist to hunt down sales taxes for the banged-up table you sold?

This is subversion, perversion, and corruption of public resources and policy to ensure private profits and gains, because corporations really don't compete in America; they simply rig a system that makes it look like competition. Our economy is fraud, just like that Western town at Disney World is a fraud.

Next up, you have to have a tracking device in your car to make sure you pay your taxes for being a DD and they buy your food and soda in exchange.

FastMail's servers are in the US: what this means for you fastmail.fm
233 points by masnick  11 hours ago   152 comments top 24
nullc 11 hours ago 5 replies      
> There are of course other avenues available to obtain your data. Our colocation providers could be compelled to give physical access to our servers. Network capturing devices could be installed. And in the worst case an attacker could simply force their way into the datacentre and physically remove our servers.

> These are not things we can protect against directly but again, we can make it extremely difficult for these things to occur by using strong encryption and careful systems monitoring. Were anything like this ever to happen we would be talking about it very publically. Such an action would not remain secret for long.

> Ultimately though, our opinion is that these kinds of attacks are no different to any other hacking attempt. We can and will do everything in our power to make getting unauthorised access to your data as difficult and expensive as possible, but no online service provider can guarantee that it will never happen.

This kind of frank disclosure should be highly rewarded. I provided similar frank disclosure text (elsewhere) only to have it whitewashed.

When everyone is underplaying the real limitations it's impossible for people to choose alternative tradeoffs "Why should I use this slightly harder to use crypto thing when foo is already secure?" because the risks have been misrepresented. Underplaying the limitations also removes the incentives to invent better protection "Doesn't foo already have perfect security?".

westicle 9 hours ago 2 replies      
> Australia does not have any equivalent to the US National Security Letter, so we cannot be forced to do something without being allowed to disclose it.

This is not true. The Australian Crime Commission has some of the most extensive secret coercive powers in the Western world.


I would suggest that either:

a) Fastmail is aware of this and is covertly spreading the word that it might be compromised; or

b) Fastmail needs better lawyers.

andrewfong 11 hours ago 3 replies      
Note the obvious caveat though:

"There are of course other avenues available to obtain your data. Our colocation providers could be compelled to give physical access to our servers. Network capturing devices could be installed. And in the worst case an attacker could simply force their way into the datacentre and physically remove our servers."

As the colocation providers are based in the U.S., they would be subject to the National Security Letters. FastMail claims this is no different from any other hacking attempt. But in a normal hacking attempt, colocation providers would be free to explain to FastMail the extent of any hacking on their end. Moreover, hackers typically do not have physical access to any data. Even with encryption, physical access opens up a lot of attack vectors that most sysadmins don't anticipate.

rdl 10 hours ago 0 replies      
The personal location of the operators is probably the #1 most important security risk; location of customers, location of servers, and country of incorporation are also important.

It's much easier to compel operators to do something (through legal threats or potentially physical threats) than it is to do any active modifications to a complex system, undetectably. Passive ubiquitous monitoring is a concern because it's passive and thus hard to detect -- it's highly unlikely TAO can go after a large number of well-defended systems without getting caught. Obviously they'd be likely to hide their actions behind HACKED BY CHINESEEEE or something, but even then, it's relatively rare to have a complete penetration of a large site in a way which isn't end-user affecting, and rarer still for the site not to publicize it.

That said, if I wanted to compromise Fastmail, I'd either compromise a staffer or some of their administrative systems to impersonate staff.

sschueller 11 hours ago 4 replies      
The US government will just take their server. They don't care if you go out of business.

Look at what they did to megaupload.com.

robn_fastmail 10 hours ago 4 replies      
Hi, FastMail employee and author of (most of) that blog post here.

Just so we're clear, the point of this post was not that we think the rules don't apply to us. Instead, we're trying to make clear what our position on these things is. The topic of this thread is a sensationalist sound-bite, nothing more.

I'm not going to go over the points again here because I'm pretty sure we said it all in the post (but ask questions if you like, I'll be here all week!).

The most important point to take away from this post is that your privacy is your responsibility. We're trying to provide you with as much information as we can to help you determine your own exposure, and to let you know what we will work to protect and where we can't help. It's up to you to determine if our service is right for you. No tricks, and no hard feelings if you'd rather take your business somewhere else!

bad_user 9 hours ago 2 replies      
I found this article brutally honest. What they are saying is that (1) NSA snooping is more expensive for the NSA as they can't engage in blanket surveillance on all of their users, while keeping them silent, but on the other hand (2) you can't expect and shouldn't assume privacy, because if the NSA wants to listen on your traffic, they will.

This in combination with FastMail being acquired by its former employees, coupled with their investment in CardDAV and CalDAV, makes me really excited about them. I was actually looking for a good replacement to Google Apps and FastMail might be it. It's still a little expensive though, compared to Google Apps, I hope they'll bring those prices down just a little.

workhere-io 5 hours ago 1 reply      
There's one question they haven't answered: Why do they even need to have their servers in the US? Their blog post admits that there's a big chance that the US is spying on their customers. Given the fact that FastMail is a Norwegian/Australian company, why don't they just move their servers to e.g. Norway?

I realize that even if the servers were in Norway, an email from a FastMail user to a gmail.com account would still be read by the NSA (because it would pass through American servers), but email sent from FastMail to other email hosts in relatively safe countries would not be read by the NSA.

brongondwana 10 hours ago 0 replies      
Hello inflammatory headline.

That's a very small part of a lot of what we have to say, most of which is:

* we can't be compelled (under current laws) to install blanket monitoring on our users

* we can't be compelled to keep quiet about penetration that we notice

* there are always risks, including the risk that any random group knows unpublished security flaws in the systems that we use

We have written some things about techniques we use to reduce those risks (physically separate internal network rather than VLANS on a single router for example) - these help protect against both government AND non-government threats. But we can't make those risks go away entirely.

What we're saying is - the physical presence in the USA only changes one low-probability/high-visibility threat, which is direct tampering with our servers.

Regardless of the physical location of servers, we would still comply with legally valid requests made through the Australian Government.

It is our belief and hope that this process is difficult enough to mean that US agencies only ask for data when they have good cause rather than "fishing" - but still easier than taking our servers and shutting us down, with all the fallout that would cause.

MichaelGG 11 hours ago 1 reply      
The only real benefit I see here is that your IP won't be easily revealed. That is, given a FastMail account, the FBI, for example, cannot quickly get your login IP, like they can with Outlook or Gmail. So, for just low-level anti-surveillance, SSL to FastMail might suffice instead of using Tor with Gmail.

Unless you're using PGP or S/MIME, SMTP is still most often unencrypted.

rdl 5 hours ago 0 replies      
As far as I know, Australian law is common law and would allow a judge to seal a warrant. So, FastMail's assertion that there is nothing like an NSL under which they couldn't disclose a search is incorrect. I'm sure it is just lack of awareness, rather than intentional deception.

(IANAL, IANAA, but I am pretty sure I am correct on this point.)

CurtMonash 10 hours ago 0 replies      
The persuasive part of this is disclosure. It's a promise to be open about any breaches, plus an observation that the US lacks the legal clout to stop the promise from being kept.
topbanana 8 hours ago 1 reply      
They don't need to seize the server. SMTP is plaintext and on a well known port number. I'm sure the NSA have a record of every email sent through the US in the last few years.
Quai 8 hours ago 1 reply      
I know that my word doesn't mean much, but I have had the chance to talk to several of the guys working at Fastmail during their years at Opera Software. They are -serious- about mail and they are -serious- about privacy.

Next time I'm out shopping for email services, I will give my money to them! (And, to give something back for all the Tim Tams brongondwana brought with him to Norway every time he was on a visit ;) )

iSnow 9 hours ago 3 replies      
Since the Silk Road bust we know that US law enforcement is able to convince or force colocation providers to provide them with an image of a server. After that, pretty much any communication can be considered open to the NSA. I am not surprised that he does not clearly mention this.

So FM should move their servers out of the US even if that's inconvenient.

Maximal 6 hours ago 1 reply      
As Australia is a member of the five eyes group, I do not see any added protection from FM being incorporated there rather than in the USA.

This is why I use an email service in Norway (runbox.com), which, as far as I know, does not share information by default.

traeblain 11 hours ago 2 replies      
So they are saying that they can never get an NSL to turn over information, but where are these servers? Who has the keys to the door of the server room?

So maybe they don't get the NSL, but the people/group/company that is handling the servers might. This seems disingenuous. I could be wrong, but it feels like they are making claims that will dupe people into their service because they feel safe.

dutchbrit 6 hours ago 0 replies      
Or the US could just go to the Datacenter and force them to give access.
frank_boyd 10 hours ago 1 reply      
> our primary servers are located in the US

Why would you do that, especially when you're not even a US company?

bckrasnow 7 hours ago 0 replies      
Transparency takes precedence over everything else in this post; that's the thing you haven't seen US companies doing at all.


smegel 7 hours ago 0 replies      
Now swear in blood you weren't under any kind of nondisclosure order when you wrote that.
616c 9 hours ago 1 reply      
Thank you, Fastmail. This is why I pay for you.
phy6 10 hours ago 1 reply      
If I was going to set up a honeypot for evil-doers/dissidents, this is the message I would spread.
tweeeyjg 11 hours ago 1 reply      
This is a joke right? How much were they paid by the NSA to write this post?
US now produces more oil and gas than Russia and Saudi Arabia marketwatch.com
63 points by ck2  4 hours ago   58 comments top 8
joshuahedlund 2 hours ago 5 replies      
Commenters bemoaning the US running out of resources sooner or ruining the environment by increasing usage should note that US oil demand has essentially peaked[1] and will likely drop in the coming years due to a variety of factors, including demographics and the increasing viability of alternative energy sources (solar power + Tesla et al.). In this context, the US gas/oil energy boom is a wonderful thing, a last hurrah of cheap energy to supply the economy before the alternatives are finally ready to compete. Furthermore, it's not even bad for the environment in the short term; thanks largely to the natural gas boom replacing the even dirtier coal, US CO2 emissions are at their lowest in 20 years.[2] I admit I'm probably a little too optimistic (Edit: fracking may be less than "wonderful"), but even correcting for my bias I strongly suspect it's not nearly as bad as many seem to think.

[1] http://www.eia.gov/dnav/pet/hist/LeafHandler.ashx?n=PET&s=WR...

[2] http://content.usatoday.com/communities/ondeadline/post/2012...

coldcode 2 hours ago 0 replies      
But it costs more to produce here. Most oil wells in Texas for example are now fairly low volume and require a lot of expense to pump. The Saudi wells are so high volume they spend almost nothing to get the oil out. Also the Saudis generally control the output to keep the price high. If they wanted to they could flood the world with oil.

Gas is a different story but currently the price is pretty low compared to oil.

alphydan 1 hour ago 0 replies      
So the US produces 11 mbpd (million barrels per day) of liquids (oil, gas-to-liquids, ethanol, biodiesel). However, it consumes about 19 mbpd [0]. Unfortunately this is just a blip, because higher prices have made fracking (a 20-30 year old technology) economical, but the typical decline rates of these fields are extremely high (produces loads the first year, dies out very very quickly thereafter) [1]. In the longer view (5-10 yrs) it's a bump in a long, unstoppable and ever more expensive decline [2].

[0] http://www.eia.gov/forecasts/steo/report/us_oil.cfm

[1] http://www.theoildrum.com/node/9506

[2] http://ourfiniteworld.com/2013/10/02/our-oil-problems-are-no...

triplesec 3 hours ago 2 replies      
Great. So now the US'll just run out relatively sooner than they would have done compared with those guys. WTG short-term thinking with finite resources. Humans are idiots.
devx 2 hours ago 1 reply      
Can we end oil subsidies yet? Or have the oil companies not made enough trillions of dollars in profit since they started receiving subsidies a century ago?

I'm all for subsidies to accelerate an emerging market or technology, especially if it's "the future", but for 5-10 years at most, until it becomes mature enough to handle itself. Subsidizing highly profitable companies for a century is beyond stupid, because it also means those companies can be a lot more wasteful, knowing the taxpayers will cover the difference.

I'd ask for an end to Middle East oil-wars, too, but that seems even less likely to happen, so I'll happily take the ending of subsidies for now.

e13tra 3 hours ago 3 replies      
Looks like the new arms race is about digging your own environmental grave.
ck2 4 hours ago 2 replies      
What I want to know is: since we only consume a fraction of it, where is all that money going?

All that profit doesn't seem to be staying inside the US.

known 1 hour ago 0 replies      
Will US stop importing oil and gas?
Javascript apps can be fully crawlable prerender.io
128 points by beernutz  9 hours ago   87 comments top 24
dwwoelfel 8 hours ago 6 replies      
This is a great approach, but detecting the user-agent is the wrong way to decide if you should pre-render the page. If you include the following meta tag in the header:

   <meta content="!" name="fragment">
then Google will request the page with the "_escaped_fragment_" query param. That's when you should serve the pre-rendered version of the page.

Google has documentation on this here: https://developers.google.com/webmasters/ajax-crawling/docs/... and we've been using this method at https://circleci.com for the past year.

Waiting for google to request the page with _escaped_fragment_ should also prevent you from getting penalized for slow load times or showing googlebot different content.
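The detection step dwwoelfel describes can be sketched in a few lines. `shouldServeSnapshot` is a hypothetical helper, not code from the post or from any library mentioned here:

```javascript
// Decide whether to serve a pre-rendered snapshot. Under Google's
// AJAX-crawling scheme, crawlers that honor the fragment meta tag
// re-request the page with an _escaped_fragment_ query parameter,
// so we key off that parameter instead of sniffing the user agent.
function shouldServeSnapshot(requestUrl) {
  // Base URL is only needed to parse relative request paths.
  const params = new URL(requestUrl, 'http://example.com').searchParams;
  return params.has('_escaped_fragment_');
}
```

Crawler requests like `/app?_escaped_fragment_=` would then get the snapshot, while everyone else gets the normal JavaScript app.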

timr 8 hours ago 3 replies      
Don't do this.

Rendering different content based on user agent is tempting the webspam gods. Rendering nothing but a big gob of javascript to non-googlebot user agents is a recipe to get the banhammer dropped on your head.

You're either gambling that Google is smart enough to know that your particular big gob of javascript isn't cloaking keyword spam (in which case you should just depend on their JS evaluation, since you already are, implicitly), or you're gambling that they won't bust you even though your site looks like a classic keyword stuffer.

_lex 9 hours ago 2 replies      
This will get you penalized for having a website that takes forever to load. This is what happens:

Googlebot requests page -> your webapp detects googlebot -> you call remote service and request that they crawl your website -> they request the page from you -> you return the regular page, with js that modifies its look and feel -> the remote service returns the final html and css to your webapp -> your webapp returns the final html and css to Googlebot. That's gonna be just murder on your load times.

If this must be done, for static pages it should be done by Grunt at build time, not by a remote service. For dynamic content, it's best to do the phantomjs rendering locally, on an hourly (or so) schedule, since it doesn't really matter if googlebot has the latest version of your content.

Or perhaps I'm mistaken and the node module actually calls the service hourly or so and caches results in the app, so it doesn't actually call the service during googlebot crawls. If that's the case, I take back my objections, but I'd recommend updating the website to say as much.
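The caching idea in the last paragraph can be sketched like this; `render` stands in for whatever actually produces the snapshot (e.g. a local phantomjs run), and the TTL is arbitrary:

```javascript
// Cache pre-rendered snapshots so a crawler request never waits on a
// live render. Snapshots are regenerated lazily once they are older
// than ttlMs; `now` is injectable to make the behavior testable.
function makeSnapshotCache(render, ttlMs) {
  const cache = new Map(); // path -> { html, at }
  return function getSnapshot(path, now = Date.now()) {
    const hit = cache.get(path);
    if (hit && now - hit.at < ttlMs) return hit.html; // still fresh
    const html = render(path); // stale or missing: render and store
    cache.set(path, { html, at: now });
    return html;
  };
}
```

With this shape the slow render happens at most once per TTL window per path, rather than once per crawl.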

Isofarro 7 hours ago 2 replies      
An entire project written to simulate progressive enhancement (badly). One that only works for specified whitelisted User-Agents, instead of being based on capability.

I'm also not understanding the use-case for this project. Every time the topic of "Web Apps", "JavaScript Apps", or "Single page web apps" comes up, evangelists point out that they are applications (or skyscrapers), not just fancy decorators for website content.

So exactly what is this project delivering as fallback content? A server-generated website?

This project just seems pointlessly backwards. Simulating a feature that the JavaScript framework has already deliberately broken. One that introduces a server-side dependency on a project deliberately chosen not to have a server-side framework.

This just looks like a waste of effort, when building the JavaScript application properly the first time, with progressive enhancement, covers this exact use-case, and far, far more use-cases.

The time would have been better spent fixing these evidently broken JavaScript frameworks - Angular, Ember, Backbone. Or at least fixing the tutorial documentation to explain how to build Web things properly. (This stuff isn't difficult, it just requires discipline.)

I call hokum on people saying there's a difference between Websites and Web apps (or the plethora of terms used to obfuscate that: Single-page apps, JavaScript apps). This project proves that these are just Websites, built improperly, and this is the fudge that tries to repair that for Googlebot.

ewillbefull 9 hours ago 1 reply      
Wouldn't the pre-render based on user agent be penalized, because Google doesn't like being shown pages that differ from what non-Googlebot user agents see?
wldlyinaccurate 8 hours ago 1 reply      
If you are able to "pre-render" a JavaScript app like this, then you should be serving users the pre-rendered version and then enhancing it with JavaScript after onload.

JavaScript-only apps are a blight on the web. All it takes is a bad SSL cert, or your CDN going down, and your pages become useless to the end-user.

franze 7 hours ago 0 replies      
hi, my 2 cents

> Javascript apps can be fully crawlable

yes, and i think it's cool that you try to provide a solution as a service for this.

but as with every technology, there are some tradeoffs

a) serving google a different response based on the user-agent is the definition of cloaking (it's not misleading or malicious cloaking, but it's cloaking nonetheless)

b) you hardcode a dependency to a third party server - you have no control over - into your app (and from the sample code on the page, there is no fallback available if this server is down)

c) there are latency/web-performance issue i.e.: for a first time request by a search engine the roundtrip would look like so:

[googlebot GET for page -> googlebot detected -> app GET to prerender.io -> prerender.io GET to page -> app delivers page -> prerender.io returns page to app -> app returns page to googlebot]

this will always be slower than

[googlebot GET for page -> app returns page to googlebot]

so basically the prerender.io approach creates some issues. that said, we don't have - yet - another "no trade-off" solution

the "make ajax crawlable" approach basically allows - non malicious, non misleading - cloaking: https://developers.google.com/webmasters/ajax-crawling/docs/...

(sorry google, but ?_escaped_fragment_= was really one of your most stupid specs ever, even worse than "nofollow")

so if you target "?_escaped_fragment_=" in the GET request, and not the user-agent cloaking a.k.a. sending different responses is ok

but: it creates a double googlebot crawl issue i.e.:

[googlebot GET for http://www.example.com/test -> googlebot parses the HTML and finds <meta name="fragment" content="!"> in the HTML -> googlebot pushes http://www.example.com/test?_escaped_fragment_= into its "stuff to crawl" queue (a.k.a. discovery queue) -> googlebot crawls http://www.example.com/test?_escaped_fragment_= -> the server gets the GET request (or, if you use a prerender.io-style service, the whole roundtrip to the prerender.io site starts)]

this is a no-go if you have a big site with hundreds of thousands to millions of pages.

and there is another much, much bigger issue:

showing JS clients and "other only-partially-JS clients" (google parses some JS) different responses just does not work in the long run.

why? if there is no direct feedback, then there is no direct feedback!

non-responsive mobile sites currently offer an overall poor user experience. why? because all the guys working on the site sit in front of their fat office desktops. no feedback equals crap in the long run.

and it's worse for "for robots only" views, because people just don't have to live with the crap their server spits out, as they always just see the fancy JS versions. since the hashbang ajax-crawlable spec came out i have consulted some clients on this question; everyone who chose the _escaped_fragment_ road anyway regretted it later on. even if the first iteration works, 1000 rollouts later it doesn't - if there is no direct feedback, then there is no direct feedback.

conclusion: if you have a big site and want to do big-scale (lots of pages) SEO, you are stuck with landing pages and delivering HTML + content via the server + progressive enhancement for functionality, until the day google gets its act together.

and for first-view web performance i recommend the progressive enhancement approach anyway.

eonil 5 hours ago 2 replies      
Static rendering of dynamic content? I don't think this makes sense.

If it's pre-rendered, it's missing something. If it has all the data at first, then it's not dynamic.

Pre-rendered (static) javascript app (dynamic)...? Hmm... I don't see anything more than something like GWT in JS instead of Java?

pzxc 5 hours ago 0 replies      
A better way is to do a hybrid single/multipage app as described here:


It's a multipage app that uses ajax to function as a singlepage app. From the user's point of view it's a singlepage app, but it's accessible from any of the URLs that it pushStates to, so it's the best of both worlds. It's fully crawlable because it functions as a multipage app, but it's got the speed of a singlepage app (if your browser supports pushState).
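A browser-side sketch of that hybrid, assuming links opted in via a hypothetical `data-pjax` attribute and a content container chosen by the caller (the modern fetch API is used for brevity; the 2013-era equivalent would be XHR). Browsers without pushState simply fall back to normal full-page navigation:

```javascript
// Intercept clicks on opted-in links, fetch the target page over AJAX,
// swap its HTML into the container, and pushState the real URL so the
// address bar always shows a crawlable, shareable URL.
function enableHybridNavigation(containerSelector) {
  if (!window.history || !window.history.pushState) return; // multi-page fallback
  document.addEventListener('click', function (e) {
    const link = e.target.closest ? e.target.closest('a[data-pjax]') : null;
    if (!link) return; // ordinary link: let the browser navigate
    e.preventDefault();
    fetch(link.href, { headers: { 'X-Requested-With': 'XMLHttpRequest' } })
      .then(function (r) { return r.text(); })
      .then(function (html) {
        document.querySelector(containerSelector).innerHTML = html;
        window.history.pushState({}, '', link.href);
      });
  });
}
```

The server renders full pages for every URL (crawlers and first loads), while capable browsers get the single-page feel on subsequent navigation.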

bfirsh 4 hours ago 0 replies      
This is a similar thing, but is far faster because it uses Zombie instead of Phantom: https://github.com/bfirsh/otter
anonymous 9 hours ago 1 reply      
I was under the impression that Googlebot already executes javascript on pages.

A more interesting idea would be if you do this for every user - prerender the page and send them the result, so they don't have to do the first, heavy js execution themselves. I know it sounds a bit retarded at first - you're basically using javascript as a server-side page renderer, but think about this: You can choose to prerender or not to prerender based on user agent string -- do it for people on mobile phones, but not for desktop users. You can write your entire site with just client-side page generation with javascript and let it run client-side at first, then switch to server-side prerendering once you have better hardware.

gkoberger 8 hours ago 1 reply      
I can see a lot of issues with this (slow, displaying different content to Google can get you penalized, etc)... but this is a really clever hack.

Google is less important (they already execute JS); however, it's good for sites like Facebook (which doesn't execute JS when you share a link).

acqq 3 hours ago 2 replies      
I surf with JavaScript turned off and I see just a blank page. If it's "crawlable" I certainly expect it to be visible to me without turning JavaScript on.
RoboTeddy 9 hours ago 2 replies      
This looks similar to Meteor's "spiderable" package


ivanhoe 7 hours ago 0 replies      
Still the main problems is not solved: you risk getting penalized for serving a different content to the googlebot
selvakn 1 hour ago 0 replies      
Shameless plug:https://github.com/selvakn/rack_phantom

Similar idea, but implemented without a server for rendering, with a phantomjs process. And only for rails/rack app.

beernutz 9 hours ago 0 replies      
I have been looking for something like this for a long time. Seems very straight forward.

I have not tested it yet, but I wonder if the speed of render will penalize you in the google results. Seems like a separate machine with a good CPU might be worthwhile if you are going to run this.

gorm 5 hours ago 1 reply      
Very cool, but something I don't get:

- Try to go to prerender.io, press "Install It -> Ruby on rails". Now it loads the ruby on rails example.

- Then go all the way down and change to "Prerendered content". Pressing "Install It -> Ruby on rails" doesn't do anything now.

Shouldn't it render the same content? "Add the middleware gem to your Gemfile..." and so on.

chadscira 7 hours ago 0 replies      
I recently needed to do this for google, but i wanted the rendering time and delivery of the page to be under 500ms, so i hacked up something that works with expressjs


It uses phantomjs but removes all the styles initially so the rendering time is much faster. (my ember app was averaging 70ms to render, but i prefetch the page data)

radq 7 hours ago 0 replies      
I believe Bustle.com does something similar to this. There was a talk about it in the Ember NYC August meetup.


tjmehta 8 hours ago 0 replies      
I tried using phantomjs in the past to server-side render a complex backbone application for SEO, and it was taking over 15 seconds to return a response (which is bad for SEO).

Looking at prerender's source I didn't see any caching mechanism.

What kind of load times have you seen rendering your apps?

Have there been recent significant improvements in phantomjs's performance?

commanderj 5 hours ago 1 reply      
Making JS heavy sites crawlable is also possible with libraries like https://github.com/minddust/jquery-pjaxr and https://github.com/defunkt/jquery-pjax . Plus the push state has the advantage of "real" urls.
t0 9 hours ago 1 reply      
Why hasn't Google implemented this yet? Their current solution isn't good enough (https://developers.google.com/webmasters/ajax-crawling/).
ateevchopra 7 hours ago 0 replies      
This is a really great idea! I mean, now data in apps made in JS can be searched. My question is: can we add "Search with Google" to our javascript app then?
The 2013 Nobel Prize in Physiology or Medicine nobelprize.org
48 points by subsystem  6 hours ago   15 comments top 4
apl 4 hours ago 0 replies      
Worthy choice, especially Suedhof. His work is a seminal contribution to our understanding of one of the most basic principles governing information processing in virtually all nervous systems: the chemical transmission of a signal from one neurone to the other.

His review "The Synaptic Vesicle Cycle"[1] in Annual Reviews offers a somewhat accessible look at the critical bits.

[1] Use scholar.google.com if you want to find "liberated" PDFs.

sandipc 1 hour ago 0 replies      
The prize for Schekman and Rothman has been a long time coming - well deserved! For anyone unfamiliar, worth reading their groundbreaking Cell papers from the early 1980s. (or look at any cell biology textbook!)
X4 5 hours ago 9 replies      
It leaves a dull taste to hear that a once-prestigious prize has lost all of its meaning, when people like Obama [1] win a Nobel Prize and Putin [2] get suggested for another.


[1] http://en.wikipedia.org/wiki/2009_Nobel_Peace_Prize

[2] http://www.haaretz.com/news/middle-east/1.550147

Stencil: a Grunt templating plugin for generating static HTML files howareyou.com
3 points by dawson  15 minutes ago   discuss
SideProjectors: marketplace to buy and sell side projects sideprojectors.com
43 points by martinml  4 hours ago   12 comments top 6
toblender 1 hour ago 0 replies      
You might want to use CloudFlare for the site. It's not really loading.
imechura 1 hour ago 0 replies      
I really like the layout of your site and it's a great idea.

Did you start from a template or from scratch? If you used a template can you point me to it?

joshdance 42 minutes ago 0 replies      
Site seems to be down due to HN. But seems like a cool idea.
davewasthere 4 hours ago 1 reply      
My initial thought with the domain name was: "That's a rather specific thing to be selling... Side Projectors... Like creating a website selling just Dvorak Keyboards..."
zeckalpha 2 hours ago 2 replies      
How does one calculate value for a side project?
AGresvig 4 hours ago 1 reply      
You gotta juice up your hosting! Site is suffering under heavy load it seems.
Node.js and the new web front-end nczonline.net
3 points by thisisblurry  17 minutes ago   discuss
Archaeology: The milk revolution nature.com
91 points by yread  9 hours ago   21 comments top 11
nkoren 6 hours ago 1 reply      
Fascinating true(?) story about how lactose intolerance changed the course of history, as recorded in the ancient sagas[1]:

When the Vikings established their colony in Vinland, they wished to establish peaceable relations with the Native Americans. They invited the local chiefs to a party at their longhouse, in which they served an amazing new drink -- milk -- which the Americans had never seen before. The following morning, suffering from intense abdominal pains, the natives accused the vikings of trying to poison them, and promptly laid siege to the colony until the Vikings packed up and buggered off.

But for this incident, it's entirely possible that the Vikings might have established a durable colony in the Americas, leading to contact between the old and new world 500 years earlier.

1: http://www.ancientworlds.net/aw/Places/District/1009811

chime 7 hours ago 0 replies      
The map of Lactase Hotspots is pretty interesting. My family comes from the State of Gujarat in India, one of the darker (70%+) regions in the map. The "White Revolution"[1] was started in the 70s by independent dairy farmers in the district our village belongs to. The Gujarati and neighboring Rajasthani diet has always included milk and byproducts as a significant ingredient. The 3m+ milk producers supplying milk to the dairy cooperative Amul [2] have made India the largest producer of milk and milk products in the world. I highly doubt this would've happened if the region was 30%-40% dark like the rest of India.

Similar to how presidential elections are impacted by a 100 million year old coastline [3], this small genetic mutation has affected the entire world's economy, especially agriculture-based industries. Once we enter the post-natural-selection era where we can select the DNA of our offspring, I wonder how different will the long-term future be.

[1] http://en.wikipedia.org/wiki/White_Revolution_(India)

[2] http://en.wikipedia.org/wiki/Amul

[3] http://deepseanews.com/2012/06/how-presidential-elections-ar...

forgottenpaswrd 7 hours ago 2 replies      
"But lactase persistence also took root in sunny Spain, casting vitamin D's role into doubt."

Well, Spain is not entirely sunny. Certainly the north of Spain is not. You can see a darker line in the map that divides Spain. This is divided by a set of mountains called "Cordillera Cantábrica" that makes the north way more rainy than the rest of Spain.

There are places in Spain where it rains more than in the north, but only in a very small period of the year. Most of the year is sunny there (the part that is not the north).

VLM 3 hours ago 0 replies      
Just last night I was reading "The Horse, the Wheel, and Language: How Bronze-Age Riders from the Eurasian Steppes Shaped the Modern World" by David Anthony and I was around the part of the book describing the lactose mutation and its spread across the world etc. Pre-historic time is pretty complicated, and interesting.

It's a fairly academic book, oriented more toward the spread of Proto-Indo-European language and related topics. And I got the pointer to that book from a podcast-delivered lecture series, "WS3710 History of Iran to the Safavid Period", a tolerable recording (tolerable from a technical standpoint; OK to listen to, but not going to win any awards for audio engineering). It's a recording of a class at Columbia from 2008.

My interpretation of the book and lecture series is people kept livestock for quite a long time before some mutant gained the ability to drink milk, which given the herd of meat animals meant they gained a lot of nutrition compared to the non-mutants, which is a huge survival gain.

I've found I enjoy university lectures much more now that I don't need to take midterms and write papers, so that's pretty much all I listen to.

petercooper 6 hours ago 0 replies      
I became lactose intolerant at the age of 30 with all the described accompanying 'issues'. It took 6 months to work it out though as it was so sudden and I'd consumed milk frequently up until then. Curiously, though, both of my daughters are lactose intolerant, so it seems something weird gene-wise was going on there..
jdmitch 8 hours ago 0 replies      
The article ignored the emergence of lactase persistence in West Africa and Saudi Arabia (where two other hotspots on the map are) but there is more on that in this study:


amalag 4 hours ago 1 reply      
What a load of bullshit. It does not even mention pasteurization, and neither do most of the uninformed comments on this page. As a comment rightly notes:

Raw milk contains lactase producing bacteria, so anyone consuming milk in its raw form would be able to digest it without any digestion problems. Only pasteurized milk is lactase free as heating destroys the bacteria that produce the enzyme. Most milk historically would have been consumed raw so an adaptation to produce lactase would have been unnecessary and would not provide a significant competitive advantage.

samuel 5 hours ago 0 replies      
Project LeChe, great name!! Leche is the spanish word for milk.
contingencies 7 hours ago 0 replies      
There's some pretty vicious comments on that article!

Personally I am mostly interested in Asian history. We know that South Asian and Tibeto-Burman people have been milk and cheese consumers for a long time, as have the Central Asian peoples including the Mongols, who are said to have actually preferred liquid foods to solid ones.

These days, I know first hand that a lot of people in China are getting stuck in to milk products for the first time. How can this be, if they should keel over in pain and indigestive flatulence? The only person I've ever seen mass-produce cheese in an apartment was a Burmese friend in China, who I'm sure wasn't after the lactose for its apparent usefulness in diluting heroin! Wikipedia states: Some studies indicate that environmental factorsmore specifically, the consumption of lactosemay "play a more important role than genetic factors in the etio-pathogenesis of milk intolerance" ... ie. the intolerance notion is largely bullshit and people can adapt to lactose. That seems to fit the observations.

Anyway, interesting to ponder... I went and polished off a block of New Zealand cheddar to celebrate. (Saving Roquefort for a salad tomorrow, then it's off to Indonesia where cheese is no doubt harder to find!)

bausson 8 hours ago 1 reply      
I think I've already seen that link around here. Still a very good read for those who didn't.

I love how this little plus provides, in the long run, an overwhelming advantage.

DiabloD3 3 hours ago 1 reply      
As a reminder, milk isn't paleo.
Time to hand over the reins before Capistrano costs me my youth? groups.google.com
438 points by codebeaker  22 hours ago   218 comments top 20
patio11 20 hours ago 7 replies      
Thanks for creating software which has been an immense service to the community, and which I rely on quite a bit.

Tangent mode on:

Somebody really, really needs to write the How To Deploy Rails Without Losing Your Sanity handbook. I will buy a copy. It will sell thousands of them.

A lot of the problems with people's interactions with Capistrano are environment/ops problems which have known solutions that work, but which rely on people having a great understanding of arcane trivia which is spread across conference presentations, blog posts, commit messages, and the practical experience of the best Rails teams. Unless you're prepared for an archaeological expedition every time you start a new Rails project, you're going to do something wrong. You should see the bubblegum and duct tape which I came up with, and it mostly works, but I know it is bubblegum and duct tape.


Non-deterministic deploys of code from (usually) un-tagged source control

I feel lucky in that I was mentored by an engineer who decided to teach me, one day, Why We Tag Shit. But for the Why We Tag Shit discussion, I would be like almost every other intermediate Rails engineer, and be totally ignorant of why that was a best practice until lack of it bit me in the keister, at which point the server is down and one has to rearchitect major parts of the deployment workflow to do things the right way. Why We Tag Shit is only about a 500 word discussion, but it's one piece of organic knowledge of the hundreds you need to do things right, and it is (to the best of my knowledge) not covered in docs/QuickStarts/etc because that seems to be out of the purview of the framework proper (I guess?).

I'm sure that I'm ignorant of several of the hundreds of pieces of things one needs to do to do deployment right, as evidenced by my fear every time I execute my deploy scripts. I, and I must assume many other companies, am willing to pay for an option which gets me to a non-bubblegum and duct tape outcome.

Seriously, folks: there is a product here.

forsaken 21 hours ago 9 replies      
I just wanted to point out how poisonous our community is. It's something that I've been struggling with for a long time, and trying to slowly change.

The fact that people read this article, and don't feel the need to mention his fear of releasing software just shows how broken things are. It shouldn't be an accepted fact of open source that if you release new code that might be backwards incompatible, you get vitriol for it.

His quote:

... but I'm too cowardly to release it and make it mainstream, as I'm afraid it'll destroy whatever good will for open source I have left when the flood of support questions inevitably comes in, followed by all the people who are unhappy with what I've built and feel obliged to tell me how bad I am at software.

bretthopper 21 hours ago 3 replies      
Some unsolicited advice from someone who's never run an open source project as popular as Capistrano:

* Ditch v2 ASAP (seems like you've already decided on this). It's pretty obvious you aren't motivated to work on that codebase anymore. I've looked at v3 and it's much better thanks to relying on Rake tasks.

* Be selfish. It's your project so if you think v3 is the way to go forward, go with it and who cares what the "community" thinks.

* Seems like you already have a few people helping out, so continue and maybe make a formal "core" team. There's nothing wrong with taking a step back from the heavy coding yourself. But I believe that Capistrano would be better with your guidance than without it.

codebeaker: There was no mention of Harrow in that post. Are you still working on that? I'd assume that if you were you'd continue work on Capistrano since it's based on it.

codebeaker 22 hours ago 18 replies      
I'm the OP of the mailing list post, and have maintained Capistrano for the last 5 years. I'm passionate about providing great open source tools, my business and reputation are built on Capistrano and I don't want to give it up, but it's destroying me.
AlexMuir 21 hours ago 1 reply      
My first thought was "I owe this guy; Capistrano is the main reason why I have spent ~$100 per year on VPS servers and not $100 per MONTH on Heroku et al."

I'd suggest Lee runs a Kickstarter type thing and I'd very happily throw in $100. But I don't think he will because it doesn't seem quite right.

So here's a (wild and completely off the cuff) startup idea - a pre-emptive Kickstarter. Someone creates the project "Lee Hambley, continue working on Capistrano." and we all pledge into the pot. If Lee agrees to do it, he gets the money. If not, we don't pay anything.

gnufied 21 hours ago 2 replies      
For a really long time, Capistrano v2 moved forward exclusively via pull requests, with next to no new development, while Lee worked on v3 on a separate branch, which looks like a rewrite.

As a result, various releases of v2 were buggy. Agreed, Capistrano is a hard application to test, but its test coverage is plainly woeful.

About 6 months back, when the 2.4.12 release was broken (https://github.com/capistrano/capistrano/issues/434), I suggested removing the asset pre-compilation stuff from Capistrano. Capistrano is a general-purpose tool; at the company where I work we use it for deploying Java, PHP, Ruby, and all sorts of stuff. I don't understand why it should have poorly tested asset pre-compilation things built in.

I don't know what made Lee work on a rewrite. I can only imagine how difficult it must have been for him to work on something so big singlehandedly while running a company.

His last point, about using RVM, rbenv, etc. in production, is very valid. I don't know why people do that. Does it make things easier? Aren't people aware of something like https://launchpad.net/~brightbox/+archive/ruby-ng ?

joevandyk 19 hours ago 0 replies      
Really looking forward to Docker being 1.0.

What you want to do is build a single package of everything your application needs (which includes the application code and all dependencies -- libc and up), then copy that package to the production servers.

It shouldn't matter if your application server has Ruby 1.9.3 and you need 2.0.

It shouldn't matter if the last deploy of your app needs Nokogiri compiled against libxml 2.8 and you now need 2.9.

It shouldn't matter if you are running 5 different apps with 5 completely different sets of dependencies on the same machine.

It shouldn't matter if you need to use the asset pipeline.

It shouldn't matter if github or rubygems drops out half-way through the deploy process.

All the production server should get is a single package of all that your application needs, then a 'restart application' command.

Docker should be able to handle all this simply.

alrs 21 hours ago 4 replies      
As always, it bears repeating: rvm/rbenv don't belong in production. They exist to allow developers on Macbooks to sync their version of Ruby with whatever is packaged in the Linux distro or BSD variant that runs in production.

If I had a Mac I'd skip the ad-hoc Ruby environment switchers and skip straight to Vagrant.

ChikkaChiChi 2 hours ago 0 replies      
As much as this is an open invitation to rail on the RoR community, I think this problem is a lot more indicative of the brave new software culture as a whole, both open source and (independent) commercial.

If your tool sees any sort of uptake, it is suddenly no longer yours. The community expects you not only to continue modifying the base code to improve functionality, but also to adhere to a sort of backwards compatibility so that everything they know and love about your baby never changes.

I can't imagine how much more taxing this would be once the tools you built become an integral part of other teams' workflows. The burden and stress of keeping "the world" afloat would cause many a sleepless night even for people of strong constitution.

ealexhudson 22 hours ago 0 replies      
A good decision - get out while things are still positive. Not enough people are brave enough to step down at the right time (or even when it's obvious it's already the wrong time).
AhtiK 21 hours ago 2 replies      
Does anyone know what's wrong with the Rails asset pipeline that is mentioned in the post as one of the issues?
kawsper 21 hours ago 0 replies      
I am a bit sad that he feels this way about it.

I have used Capistrano a lot, I built my "default" setup, compiled it into a gem, and released it here: https://github.com/kaspergrubbe/simple-capistrano-unicorn and moved on with my life as a developer.

I know of at least two bigger organizations that depend on Capistrano (and my gem) for their deploys. I feel like Capistrano is the way to go if you manage your own servers, and need to deploy to them.

Capistrano started my Rails experience, and I am very grateful for the work put into it. But I never wrote in and said "Thank you" or "Great job"; maybe we need to be more vocal toward the people who put in time and energy to build the software that we use so much.

joeblau 21 hours ago 0 replies      
It's sad to see an open source project become overwhelming. On one hand the project is open source, so hopefully someone else can pick up the torch. We saw this happen in the node.js community, and node's been moving along. On the other hand, based on what Lee is saying, it looks like the situation is pretty bleak. I'm not a Rails user, but I feel like most of the "hot startups" in San Francisco run a Ruby stack. As an observer looking into the community and platform through this post, I never realized how many challenges there were in that development environment.
tomdefi 19 hours ago 1 reply      
For anyone interested in an overview of Capistrano v3, I wrote an introduction last week - https://medium.com/p/ba896a142ac
grandalf 19 hours ago 0 replies      
Check out Fabric as a much faster alternative to Capistrano. Combined with cuisine.py, it's a simple and powerful alternative to chef-solo.
chrismealy 21 hours ago 7 replies      
I love ruby and rails, but yeah, I'd switch to any framework in any language that made deployment stress-free. Except php.
joaomsa 12 hours ago 0 replies      
Capistrano really has saved us multiple times, sad that a vocal part of the community tends to exhibit such behavior.

At our company, we develop multiple RoR apps and we've run into many of these issues (mostly related to the asset pipeline), yet none of them were actual problems with Capistrano. Since it's the bridge between so many things, I can imagine why it's easy for it to become cannon fodder.

We've tried to standardize many of our recipes such as local asset precompilation into a single cohesive gem (https://github.com/innvent/matross). That has saved us the trouble of debugging the same issues over and over when they inevitably pop up across applications.

yannk 19 hours ago 0 replies      
"Whilst I believe strongly in Capistrano as a general purpose tool [...] I do think the future of software deployment is in small, containerised VMs and so-called PaaS, as what we're all doing right now has to end, some time."

Kudos. It takes a lot of courage to admit your baby is not going to fulfill the future you had initially imagined.

machbio 19 hours ago 0 replies      
Thanks for the awesome software. I just started learning about Capistrano recently, and I'm just amazed by how simple it is.

I believe you when you say that PaaS is where things are going; the only reason I use Heroku and dokku (from Docker) is its easy deployment, and for no other reason than deployment.

stevewilhelm 19 hours ago 0 replies      
Check out 'Deploying Ruby Applications to AWS Elastic Beanstalk with Git' [1]

[1] http://ruby.awsblog.com/post/Tx2AK2MFX0QHRIO/Deploying-Ruby-...

FBI struggles to seize 600,000 Bitcoins from alleged Silk Road founder theguardian.com
100 points by spindritf  4 hours ago   129 comments top 27
eterm 3 hours ago 1 reply      
"At current exchange rates, that represents slightly more than 5% of all bitcoins in circulation"

Why would the exchange rate affect what percentage of all bitcoins 600k represents? This is terribly confused thinking from the journalist:

600k bitcoins, that's "almost $80m".

$80m! But at current exchange rates that's "just over 5%!".

When the actual numbers of bitcoins in his wallet and bitcoins in circulation are sitting right there...
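
eterm's point is that the fraction needs only the two counts; a quick sketch (the circulation figure below is my rough assumption for October 2013, not a number from the article):

```python
# What fraction of all bitcoins is 600k? No exchange rate required.
coins_seized = 600_000
coins_in_circulation = 11_800_000  # rough October 2013 figure (assumption)

share = coins_seized / coins_in_circulation * 100
print(f"{share:.1f}%")  # a bit over 5%, independent of the dollar price
```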

300bps 2 hours ago 7 replies      
If Ross Ulbricht is really the guy who ran this site, why in the world was he doing his work from San Francisco? His alleged job was the pinnacle of "work anywhere". He took so many precautions to keep his identity secret that he must've known that his activities would at some point gather focused attention from the authorities.

Living in San Francisco allowed the feds to pick him up on their lunch hour. Even just hopping the border to Mexico would've required them to get international cooperation and extradite him.

He'd likely be a free man if he were in Croatia or Kazakhstan.

appleflaxen 3 hours ago 3 replies      
Right after the DPR bust there was a lot of discussion about how it was a death knell for bitcoin, but this is exactly why it's such a revolutionary type of technology: if DPR had any currency in a paper wallet, he could simply print it out, put it in a safe deposit box, and pick it up once he is out of prison.

For a currency to be so secure that a state cannot seize it from a citizen is unprecedented.

It will be fascinating to see how this plays out.

kybernetyk 3 hours ago 2 replies      
> Even if the FBI is not able to transfer the money, merely having possession of the wallet file itself is enough to prevent the coins being spent.

Yes ... if no backups exist.

Ellipsis753 1 hour ago 2 replies      
"Even if the FBI is not able to transfer the money, merely having possession of the wallet file itself is enough to prevent the coins being spent."

Isn't this totally false? He could have easily made backups of the wallet and even given copies of it to others. I'd expect there's a whole bunch of ways that these bitcoins could still get spent?

rayiner 2 hours ago 0 replies      
The asset seizure issue with regard to bitcoin is interesting, but not unprecedented. A lot of the same issues that come up here come up in the more traditional context of seizing assets stashed in Bermuda, etc. You have Bermudan banks that won't allow access to assets without consent from the owner, and the question is whether, given that, the owner can be forced to sign the consent form or whatever.
yason 2 hours ago 3 replies      
This is interesting as a more generic case: before, criminals had to stash their money in a hiding place in case they got caught and sent to prison. Now they can convert the money into bitcoins and simply arrange that they can't unlock the wallet. Things that help:

- you can make an indefinite number of copies of your locked wallet, so a TLA basically can't confiscate it

- you can protect your wallet with a secret (passphrase and/or a key) so that they can't unlock it

- you can distribute the secret among several people using one of the secret sharing protocols

- or hide it steganographically in an ordinary file while the TLA in question still can't prove it's there
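
As a toy illustration of the third point, here is the simplest n-of-n XOR split, where every share is required to recover the passphrase (a real deployment would use a threshold scheme such as Shamir's, which the comment alludes to; this sketch only shows the idea):

```python
import os

def split_secret(secret, n):
    # n-1 uniformly random shares, plus one final share that XORs back to the secret
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares):
    # XOR of all shares recovers the secret; any proper subset is pure noise
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

passphrase = b"wallet passphrase"
shares = split_secret(passphrase, 3)
assert combine(shares) == passphrase  # all three together reconstruct it
```

Distribute the shares among several people and no single holder (or confiscator) learns anything about the passphrase.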

danielweber 2 hours ago 0 replies      
Even if the FBI is not able to transfer the money, merely having possession of the wallet file itself is enough to prevent the coins being spent.

Ugh, no. Having the file means you can transfer out the bitcoins. Anyone having the file can transfer out the bitcoins, so the FBI securing that wallet doesn't lock down those bitcoins.

The FBI cannot properly "seize" the bitcoins unless they use the wallet to transfer the coins to a fresh address they make and control. And I'm not sure that traditional seizure laws allow that, because AFAIK we've never had this scenario before.

whyleyc 3 hours ago 1 reply      
It's surely only a matter of time in this situation before the FBI start using rubber-hose cryptanalysis [1]

[1] Obligatory XKCD http://xkcd.com/538/

nwh 3 hours ago 2 replies      
I can't imagine they would let a password get in the way of this. Even with an encrypted wallet, the balances of the contained addresses are available for viewing, for convenience's sake.

The reference client (it's not mentioned which wallet software he used) uses hundreds of thousands of rounds of key stretching, enough that even on a GPU you're only getting a few attempts per second at the key. Might irritate them enough to break out a good-sized farm.
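
The effect of key stretching is easy to demonstrate with PBKDF2, used here purely as a stand-in (the reference client's wallet encryption uses its own OpenSSL-based derivation, and the round count below is an arbitrary assumption, not the client's actual setting):

```python
import hashlib
import os
import time

passphrase = b"correct horse battery staple"
salt = os.urandom(16)
rounds = 200_000  # illustrative; real wallets tune rounds to hit a target delay

start = time.perf_counter()
key = hashlib.pbkdf2_hmac("sha512", passphrase, salt, rounds)
elapsed = time.perf_counter() - start

# An attacker pays this same per-guess cost, so stretching multiplies
# brute-force work by roughly `rounds` compared with a single hash.
print(f"one guess: {elapsed:.3f}s, derived key: {len(key)} bytes")
```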

27182818284 3 hours ago 3 replies      
I've had a question that I'm curious to hear the community's reaction on: doesn't the FBI wanting to seize the bitcoins do more to legitimize the currency than anything thus far? Isn't it tantamount to saying "Yes, this is like dollars, francs, or another legitimate currency that we need to seize."

I'm wondering if that conversation has come up among management at FBI and what the outcome has been.

SeanDav 2 hours ago 1 reply      
I don't understand why possession of the wallet file means the FBI have seized his bitcoins (even if they can't access them yet)?

Surely you can make copies of your wallet and keep them in various secure locations?

Synaesthesia 3 hours ago 1 reply      
Good encryption is still uncrackable, unless you induce the owner to give up his passphrase.
gedrap 2 hours ago 2 replies      
What is rather shocking is the kind of mistakes he made.

Not subtle ones, but plain simple ones, like using his personal email to register on forums, promote SR, and recruit people.

Using Stack Overflow with his personal email, again... Yes, these are not solid evidence on their own, but they made the FBI's job of getting the guy much easier. You might say it's easy for me to point out those mistakes, but they are so basic, and it's not as if he was running some Nigerian-type scam; he should have been way more careful.

Or keeping messages about the 'murder-for-hire'. Yeah, it's rather obvious that no one got hired, but good luck explaining those negotiation tactics to the judge. Plus he mentions $80k for another 'murder-for-hire'. I'm really curious how the thing with this 'murder' will end up. I mean, he could have simply deleted them just in case. It's not Facebook, where it would stay forever...

lettergram 2 hours ago 1 reply      
The author doesn't really seem to understand what a Bitcoin wallet is. First, I HIGHLY doubt that 600,000 bitcoins would be in one wallet. More likely, the bitcoins would be spread between several thousand wallets. Second, even if the FBI was in possession of "the wallet", couldn't Ulbricht just access his backups elsewhere?
llamataboot 2 hours ago 1 reply      
What's overwhelmingly clear from this article is that either the feds, or at least the people they are sending to talk to the media, have really no idea how bitcoin works.

Saying "The Bureau is in a position equivalent to having seized a safe belonging to a suspect with no idea of the combination and no hope of forcing it open any other way." is completely incorrect.

rdl 2 hours ago 0 replies      
The big question is whether turning over the passphrase would be "testimonial". I could see it being argued that it's a foregone conclusion he owns these bitcoins -- enough evidence to tie him to Silk Road, and evidence in the block chain showing those coins came from Silk Road.

(This is relevant due to the fifth amendment to the constitution. In many cases, turning over a password or combination is considered self incrimination and thus cannot be compelled by the state.)

bsullivan01 2 hours ago 0 replies      
after shuffling Ross from horrible prison to horrible prison and making his prison life miserable on purpose...

So Ross, how about you tell us the password and instead of 14 life sentences you get only 20 years?

the_watcher 54 minutes ago 0 replies      
How can the FBI "seize" his wallet as if it were a safe? Isn't the point of Bitcoin that it is digital, hence he would be able to access it if he could get to a computer? Or is there a way to physically control the wallet so that it can't be accessed elsewhere?
joshdance 38 minutes ago 1 reply      
Will having 5% of Bitcoin out of circulation cause any economic problems with the currency?
rarw 1 hour ago 0 replies      
The more interesting question regarding this seizure is whether the FBI can compel the creators of Bitcoin to assist in decrypting what they have just seized. For example, there are a number of statutes that require those operating communications networks to maintain the ability for the government to access them regardless of the encryption or other security features being used. I don't know the corresponding banking law as well, but it would not surprise me if the same laws that require banks to comply with seizing funds, blocking wire transfers, and tracking where money goes in the course of a criminal investigation could kick in here.

Certainly something like this would have a big effect on the BitCoin market. I'm interested to see what happens in the future.

seniorsassycat 58 minutes ago 0 replies      
This article makes it sound like all of DPR's bitcoins are in one wallet. If 5% of all bitcoins were being funneled into one bitcoin wallet couldn't people have discovered that wallet by analyzing all BTC transactions?
noarchy 2 hours ago 0 replies      
If the FBI cannot manage to successfully steal these Bitcoins, then it might be quite the advertisement for the overall security of Bitcoins, no?
scottcanoni 1 hour ago 0 replies      
Here are the bitcoins they were able to seize: http://blockchain.info/address/1F1tAaz5x1HUXrCNLbtMDqcw6o5GN...
azsromej 2 hours ago 0 replies      
a walk on the lighter side: the bitcoin is inside the computer http://www.youtube.com/watch?v=zQGX3J6DAGw
TausAmmer 3 hours ago 0 replies      
Biggest thieves on the block, what else can be said.
cphoton 1 hour ago 1 reply      
Can't they just use an old version of bitcoin and use the timing attack to find the passphrase?

EDIT: I mean exploiting this bug: https://github.com/bitcoin/bitcoin/issues/2838

Machine Learning with scikit-learn amueller.github.io
49 points by derpapst  7 hours ago   12 comments top 4
blauwbilgorgel 3 hours ago 1 reply      
Andreas Mueller is one of the core devs of scikit-learn.

He is active on Kaggle.com too.

For more practical ML projects see: https://github.com/amueller

ColinWright 7 hours ago 2 replies      
I cannot tell you how much I hate these drip-feed presentations. There isn't even an indication of how long it goes for. The early stuff is obvious (for me) - how many times do I have to click to get to the interesting bits?

There might be some great stuff here, but many of your potential audience will never find out, because they'll give up.

sjtgraham 3 hours ago 2 replies      
I know what I'm doing tonight. Great idea including sample data to play with in the library! Is that the MNIST data set?
tucson 1 hour ago 1 reply      
I'd like to know more about slide 6:


Why the [Classification][100K sample?] checkpoint?

And I'd like more info in general about this whole cheat sheet.

Why Java Now Rocks More Than Ever: Part 1 The Java Compiler zeroturnaround.com
88 points by mustapha  4 hours ago   106 comments top 17
jonstewart 3 hours ago 5 replies      
Beefs with this article:

1. Runtime-linking _is_ dynamic linking, and it's a PITA that Java doesn't have an option for static linking, especially given the inherent fragility of the CLASSPATH.

2. clang and gcc have compatible command-line option syntax.

3. The options example he lists for gcc is a pure strawman. Maybe they are necessary to compile that particular source file, but it is not necessary to use all these options in the average case.

4. The article title says "now more than ever", but there's really nothing about recent developments here. This article could have been written ten years ago.

This really just seems like crappy linkbait.

rsynnott 2 hours ago 4 replies      
> All this is only worsened by the black box nature of the optimization level switches.

JVM switches like -XX:CMSInitiatingOccupancyFraction=70 -XX:SurvivorRatio=2 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:NewSize=2048m -XX:MaxNewSize=2048m (taken from real recommended settings for an open-source project) are, of course, not opaque-to-the-average-user at all :)

petercooper 2 hours ago 0 replies      
I'd argue it's more the JVM and its ecosystem that rocks more than ever. Sure, Java is getting better, but it's the JVM and its ecosystem that seem to be impressing people and bringing fresh blood in nowadays.
kamaal 59 minutes ago 3 replies      
One of the things I see with Java today is how difficult things have gotten with the language. It's next to impossible to deal with any large Java project without an IDE. The verbosity of the code is mind-boggling. Often method calls are 4-6 layers deep, which in itself makes them very difficult to remember or even implement, even if you read the documentation well.

The resulting code is massive walls of text, 90% of it machine-generated through Eclipse. This is an indication that the language's idioms are unable to support the complexity of current application programming trends, and you have to wrestle with them heavily to squeeze out usable programming logic.

That doesn't end there. Perl is older than Java, yet I see that Perl can support a lot of idioms far, far better than Java can with its bulky frameworks.

It's just that the language is beginning to show its age.

The only reason to use Java these days is basically the availability of super-low-cost devs, legacy code, tooling, etc. In other words, the advantages any tool accrues with age.

sgt 2 hours ago 10 replies      
Java is rapidly becoming more viable as an alternative to all of these "trendy" languages these days, e.g. Ruby, Python, Go.

What I mean by that is simply that a lot of developers are prejudiced against Java because historically it was slow and had tedious development feedback cycles.

We use Java extensively (and Java EE 6) in an agile IT business and it is truly an asset. I encourage others to look in the direction of Java.

pekk 1 hour ago 3 replies      
Why Big Band Music Rocks More Than Ever: Part 1 - You Can Dance To It

There are still fans of Big Band music!

Java fans (or maybe I should say, employers) who want to keep exhuming this dead horse might be well served to emphasize "you can get a job" and "it's enterprise" and "JIT makes Java faster than Assembly" as they have been doing for decades, rather than tarting it up (a Java logo with an electric guitar, seriously Dad?) and comparing it to languages which are even more ancient. Java is closing in on 20 years old...

RivieraKid 28 minutes ago 1 reply      
Java as a language is not that great, but the alternatives are usually even worse:

- Python: slow, dynamic typing

- Scala: slow compilation, very complex syntax, worse tooling

- C#: MS-centric (but other than that C# is superior to Java)

JasonFruit 2 hours ago 0 replies      
This article didn't impress me, but from the author's bio sketch I was led to look up the eigenharp, which did.
moron4hire 2 hours ago 1 reply      
I rather thought that "DLL Hell" was a solved problem. I haven't had to worry about DLLs on my Windows box in years, basically since Win2k and XP.
jgalt212 2 hours ago 1 reply      
All this is only worsened by the black box nature of the optimization level switches.

Doesn't the JIT do all sorts of crazy stuff that's not always reproducible and thus hard to profile? I know this is the case with V8, but it's probably also the case with Java. I don't know much about compilers, but the author seems to know even less.

alkonaut 1 hour ago 1 reply      
I don't know if the whole Oracle/Sun debacle is to blame for the complete stagnation of the JVM and the Java language, but the development pace is laughable these days. We're still waiting for proper lambda expressions after all these years. We still have a runtime without generics. And so on.

The irony of the article is that ZeroTurnaround not only makes money from the Java community, but does so by selling tools that work around shortcomings of the JVM. So not only is it in their interest to have a large dev community on the JVM; if you are a bit mean, you could say it's in their interest to have a crap JVM :)

bcl 2 hours ago 1 reply      
Oh, shucks. I was expecting to see an article on compiling Java to machine code, not bytecode.
topbanana 1 hour ago 0 replies      
A more interesting question is how much other languages rock in comparison to Java.

(As I was in the middle of this reply, the Java Auto-Updater popped up and stole my focus!)

nness 2 hours ago 1 reply      
Argh. You can't use Open Sans Light as a body font...
waynecochran 1 hour ago 0 replies      
Note that the debugger (jdb) isn't listed -- for a good reason. I never understood why jdb is so poor, given that it's such a critical tool.
hakcermani 2 hours ago 1 reply      
Like a university professor dusting off some old notes for a 'new' lecture. Students, meanwhile, are doing crazy exciting stuff out there.
duedl0r 2 hours ago 0 replies      
this title makes me giggle.. ok, I know...I'm too lazy to even open this link..
RSA-210 factored mersenneforum.org
122 points by Mithrandir  14 hours ago   33 comments top 8
randomwalker 14 hours ago 1 reply      
To clarify, this is not a new factoring record for products of two primes. RSA-768, a 232-digit number (22 digits longer) was factored in 2009, and that record still stands. http://en.wikipedia.org/wiki/RSA_numbers#RSA-768

The algorithm used here is GNFS (general number field sieve), which is the same algorithm that's been used for about two decades. In other words, this has no impact on the security of RSA.

More information: http://en.wikipedia.org/wiki/Integer_factorization_records

simonster 14 hours ago 1 reply      
"RSA-210" is the official name for this challenge, but it is a bit of a misnomer. The modulus has 210 decimal digits but about 210 * log2(10) ≈ 697 bits.

EDIT: Wikipedia article says 696 bits.
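
The digit-to-bit conversion is a one-liner; a quick sketch (the product is only an estimate of the bit length, which is why it slightly overshoots the actual 696-bit modulus):

```python
import math

digits = 210
approx_bits = digits * math.log2(10)  # log2(10) ≈ 3.32 bits per decimal digit
print(f"{approx_bits:.1f}")  # ≈ 697.6; the actual RSA-210 modulus has 696 bits
```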

Mithrandir 14 hours ago 0 replies      
In case you've never heard of the challenge: https://en.wikipedia.org/wiki/RSA_Factoring_Challenge
tgb 3 hours ago 0 replies      
I was curious how the cash prizes [1] for this challenge compare to playing the lottery. Consider the next smallest one left: RSA-768. By my very rough estimates [2], $1 worth of computing time on a typical desktop gives you a probability of 10^-58 of factoring the number by picking random divisors smaller than its square root.

[1] https://en.wikipedia.org/wiki/RSA_Factoring_Challenge
[2] https://www.google.com/#q=%241+%2F+(%240.09+%2F+kwh)+%2F+100...

gus_massa 13 hours ago 1 reply      
From the logs:

> Mon Sep 23 11:09:41 2013 commencing Lanczos iteration (32 threads)

> Mon Sep 23 11:09:41 2013 memory use: 26956.9 MB

> [...]

> Thu Sep 26 07:17:57 2013 elapsed time 51:56:44

I'm not sure that I'm understanding all the details. Does this mean that they factored a 210-digit number in 52 hours on a single machine?

DanBC 4 hours ago 0 replies      
What hardware is used for the first round of sieving? Is it "just" distributed computing on normal machines, or are they using special FPGA farms?
mrcactu5 3 hours ago 2 replies      
How did they know it was the product of two primes in the first place?
arange 14 hours ago 3 replies      
For those of us who don't understand, what does this mean? Is RSA less secure now?
How do Nobel laureates spend their prize money? phys.org
25 points by m_class  6 hours ago   13 comments top 3
Maxious 4 hours ago 3 replies      
> Since the Reagan tax reforms of the mid-1980s, the United States, the only country to tax the [Nobel] prize, has taken another 40 percent or so off the top. (http://www.cnbc.com/id/49341627)

Really? Oh yes it's right there under "Pulitzer, Nobel, and similar prizes." http://www.irs.gov/publications/p525/ar02.html#d0e8326

Stay classy America.

VLM 4 hours ago 1 reply      
The article missed the famous quote along the lines of spending it all in Amsterdam.

The winners buying real estate need to look out for maintenance costs and taxes... better expect to spend 5% of the cost of the house, or more, annually, on taxes, utilities, and upkeep. So if the prize is more than perhaps 10 times your annual income, you're eventually going to be in a world of hurt. I've had some relatives end up land-poor and it's not a pretty sight. Here's 5 million dollars of lakefront property. Whoops, he doesn't make $500K/yr.
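
The carrying-cost arithmetic in that example, as a quick sketch (the 5% rate and the figures are the comment's own estimates, not sourced data):

```python
property_value = 5_000_000  # the lakefront property from the example
carry_rate = 0.05           # taxes, utilities, upkeep (the comment's estimate)

annual_cost = property_value * carry_rate
print(f"${annual_cost:,.0f}/yr")  # $250,000/yr against a sub-$500K income
```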

mmgutz 1 hour ago 0 replies      
Pretty sad really. I'm a boxing fan. Floyd Mayweather made over 90 million dollars in his last fight. Nobel winner - 1.25 million. Where are our priorities?
       cached 7 October 2013 16:02:01 GMT