hacker news with inline top comments - 17 Jun 2015
The Final ES2015 (ES6) Draft ecmascript.org
38 points by kolodny  6 hours ago   discuss
The 100-year-old scientist who pushed the FDA to ban artificial trans fat washingtonpost.com
68 points by Hooke  5 hours ago   25 comments top 8
1
bad_user 16 minutes ago 0 replies      
After decades of artificial trans fats being advertised as healthier than fats of animal origin, with restaurant chains being forced to switch to trans fats, I find this to be quite funny. And then people wonder about the French paradox ... well, it's because you've been fed lies.
2
jrapdx3 3 hours ago 1 reply      
Inspiring to read about anyone taking on the FDA to improve our food supply and the nation's health. Especially impressive considering that we're talking about a person 100 years old.

Goes to show age is not necessarily a barrier to productivity and creativity. Maybe it's fair to describe Dr. Kummerow as a kind of "geek" in his field. It follows that by all means we should honor old geeks; it doesn't serve our interests very well to discard them at 50, which I'm told is a current trend.

Article is a nice juxtaposition too, since just a few hours ago there was mention here of an article about diet and obesity. No doubt consuming substantial quantities of artificial trans-fats contributes to the health consequences of obesity if not obesity itself.

3
rm_-rf_slash 7 minutes ago 0 replies      
I suppose if I want good health it's worth listening to the guy who made it to 100.
4
reinhardt1053 3 hours ago 0 replies      
His book is worth a read: "Cholesterol is Not the Culprit: A Guide to Preventing Heart Disease"

http://www.amazon.com/Cholesterol-Not-Culprit-Preventing-Dis...

5
spiritplumber 3 hours ago 3 replies      
Well if he's 100 years old and still active enough to do lobbying, he probably is doing something right diet-wise...

(Did you know that the guy who invented LSD died at 102?)

6
nekopa 3 hours ago 2 replies      
What do you guys think about the fact that the current head of the FDA (edit: I am referring to M. Taylor, deputy commissioner of food and vet.) has spent a lot of his career working with Monsanto?

On one hand, we want someone running the FDA who has experience with the food industry. (Just as we would probably prefer having a manager who has spent time in the trenches coding).

On the other (tin foil hat) hand, has the food industry placed a 'friend' in charge of the people that should be regulating them?

7
huherto 4 hours ago 4 replies      
What about the trans fat already in our arteries ? Is it going to stay in our arteries for ever?
8
Daishiman 2 hours ago 1 reply      
Dude looks like he's 30 years younger.
An Insider's Guide to Shenzhen Manufacturing makezine.com
55 points by sohkamyung  4 hours ago   12 comments top 6
1
cherls 2 hours ago 1 reply      
I was in Shenzhen a few weeks ago. It's indeed a wonderful city.

Originally the massive manufacturing industry was accelerated by the large amount of people from rural areas flocking into the city due to reform and opportunities. The increase in population resulted in a large unskilled workforce and thus very cheap manual labour and large factory assembly lines.

It's a lot better now given a much more competitive market. Salaries are higher and quality of living is better than the rest of the country. A lot of people became very rich in Shenzhen within the last decade.

2
asymmetric 1 hour ago 2 replies      
Interesting article, but I get the impression it was downplaying environmental and social concerns quite a lot. A lot of "promising signs" but in the meantime you get heavy metals in your food, and workers' conditions are just bearable. It's probably better than most other parts of China but still, the situation seems to be quite bad.
3
option_greek 2 hours ago 0 replies      
It's funny how "focusing on core competencies" usually reduces the set of core competencies gradually till all that's left is management.
4
primigenus 3 hours ago 0 replies      
Highway1 (http://highway1.io) is doing a great job of helping startups figure out how to get their manufacturing strategy set up. The advice and insight the two companies I worked with in their most recent cohort (Spinn Coffee and Game of Drones) received was absolutely essential to getting off the ground, so I'm expecting to see more of this kind of incubator/accelerator in the future.
5
hundunpao 1 hour ago 1 reply      
I wish we would not need Shenzhen.
6
unexistance 3 hours ago 0 replies      
good read, love the part on how to choose among the factories by meeting directly with their boss(es)...

If this is the norm for other types of factories in China, I don't mind settling there too

Ask HN: Company got acquired, new contract seems oppressive
84 points by ExhibitAClause2  6 hours ago   127 comments top 25
1
Animats 3 hours ago 2 replies      
As others pointed out, you need an hour with a labor lawyer.

Fish and Richardson, the law firm, says "Employees: Non compete agreements - don't sign them."[1]

It's often effective to take the contract, cross out and initial sections you and your lawyer consider overreaching, sign that, and turn it in. Then the company has to argue with you paragraph by paragraph, tying up their legal counsel, if they really want those terms. Also, there are special legal provisions about requiring a new employment contract from existing employees.

I went through this years ago with a very big company, refused certain clauses, and after some huffing and puffing, they gave in. This was important, because I did work for a startup on the side and got stock.

[1] http://www.fr.com/files/Uploads/Documents/Dos-and-Don%27ts-o...

2
functional_test 6 hours ago 5 replies      
Find an employment attorney. Pay that person for an hour or two to read the contract for you. They will be able to offer much better advice than HN.
3
grabeh 1 hour ago 1 reply      
When you say acquisition, I'm assuming you mean an asset acquisition rather than a share purchase? I only ask because, technically, if it's the latter, the contracting entity won't have changed and, depending on the State/country laws, there is no change in the employment relationship.

It's a different matter for an asset acquisition but generally, if you're performing the same role, in certain jurisdictions your existing contract terms have to be respected (this is the case in Europe at least, I would assume that in the US, the position is more flexible though).

You'd like to think your new employer is reasonable and would at least consider feedback/amendments from you in the first instance. At worst they can reject the proposed amendments and then you will have to decide to accept or look elsewhere, but at least you would have given it a go.

Contracts obviously seek to impose clarity on a relationship and so I have some sympathy with a company attempting to create a completely black and white position (if it's not carved out, it's ours). If you are concerned about this approach and want more flexibility then you could revise it so that any work done in your private time and unrelated to anything work-related is yours. This comes with its own pitfalls in some ways - it's difficult to nail down with clarity where the dividing line is, which in part explains the company's desire for a black and white approach.

In terms of ownership of previous IP, it would depend on the wording of the agreement: they might just be looking for an assignment of future IP developed whilst working for them, or they might want an assignment of past IP not expressly referenced in the agreement. The latter would be rather draconian but that's not to say the company wouldn't request it!

I'd be glad to give the contract a read on an informal basis, if you want.

4
chrisbennet 4 minutes ago 0 replies      
If an employer wants you to sign a new employment contract, wouldn't that imply they were re-negotiating your employment?

"I assume from this new employment contract that we're renegotiating my employment. Let's discuss my new salary..."

5
pvg 5 hours ago 4 replies      
This paperwork is standard. The company wants to protect itself against a scenario in which you, after being steeped in its business, come up with some way to do it better/more efficiently/cheaper but claim the idea came to you while you were at home in the shower and thus they have no rights to it. Your out is the form that lists the 'inventions' you might have that you want excluded from this clause. You can always add to it later, too, should you come up with something that is unrelated to your employer's business that you want to work on yourself as long as you and the employer can agree it doesn't interfere with your full-time duties as an employee. Usually, all of this is a formality - just keep the paperwork up to date when needed. I don't think your new employer is trying to screw you.
6
jacquesm 1 hour ago 0 replies      
Talk to a lawyer. On top of that: the company being acquired does not technically (normally) force you to sign a new contract, they should honour the terms of your old one. But depending on where you're employed they might easily find some grounds to throw you out if you don't sign it so a lawyer should be your first stop. And not a lawyer in any way shape or form associated with the company, make sure they are really on your side (with very large companies especially in smaller towns it can be quite hard to find a lawyer that has not been in some way employed by the company before or that is not in a partnership that has dealings with the company).

The real question is how much do you need this job? What is the state of mind of your co-workers about this subject?

Good luck!

7
hinkley 2 hours ago 0 replies      
Do not, under any circumstances, go to your bosses looking for advice on this.

In many cases the C-level employees of the old company have bonuses tied up in retaining a certain fraction of the original employee team for the term of their incentive package, and they lose out on part of their payout if they don't.

Not to say your bosses are going to steer you wrong, but it's very likely that it's now a conflict of interests for them to weigh in.

8
olefoo 5 hours ago 1 reply      
Remember that this contract is a negotiable agreement.

You can strike clauses and file an amended agreement, they can refuse to accept such things; but you are not obligated to sign unless they are compensating you adequately for what you are giving up. Approach this as an equal; decide what _you_ are willing to put up with. Nobody on this forum can tell you what you can and cannot live with.

Do figure out your BATNA at this time.

9
patio11 5 hours ago 1 reply      
> Are there any "uber for lawyers" services online?

c.f. Lawdingo (YC 13), which is Uber for lawyers. No relation; never pulled the trigger on actually using it.

Incidentally, my last employment contract had a similar clause in it. After consulting with my bosses, who thought it was the usual boilerplate and didn't really expect a young engineer to have meaningful IP, we came up with a list which looked like:

1) Bingo Card Creator [the only IP I was really worried about]
2) Various contributions to the OSS projects listed in Appendix A [these days I'd literally just print a listing of all repos in Github]
3) Miscellaneous computer programs, inventions, and documents which exist on physical or electronic media as of $DATE and are impractical to list -- $COMPANY acknowledges this disclosure is adequately specific for its purposes

10
borski 5 hours ago 1 reply      
This is standard practice in most default employment contracts, including literally every single one I've signed as an employee. It's in our employer contract too, and we've all signed it.

I usually include, as one of the disclosed items, something along the lines of "other open source or business ideas I may come up with or have come up with on personal time and while using personal, non-company, property."

One of our employees did this too, and I took no issue with it. My guess is if you have an employer suing you for infringement based on work you did there, you have burned a bridge and have much bigger problems than just this lawsuit.

Edit: IANAL, this is not legal advice, etc.

11
retrogradeorbit 4 hours ago 1 reply      
Strike the clause. You are in a negotiation. They are going to structure the contract to be filled with things they'd love to have. Most people just sign. But there are clauses that are love-to-haves, not must-haves. Maybe this is one of them. If it's one of the must-haves, they'll let you know by saying they can't accept the contract with that clause struck.
12
geophile 6 hours ago 0 replies      
Talk to a lawyer.

FWIW: While this clause may be oppressive, it is not uncommon. It has been in (almost?) every employee agreement I've signed. I always filled in the addendum to exclude ideas I had previous to the job, that I wanted to pursue on my own.

13
chvid 5 hours ago 1 reply      
Talk to a labor union.
14
spacecowboy_lon 1 hour ago 0 replies      
This is pretty standard for, I assume, the USA. And I would avoid "Uber for lawyers"; you need a real lawyer who specializes in labour law.

As my mate Patrick, who is a senior industrial relations specialist and a lawyer, said: you don't want the guy who did the papers for buying your house advising you.

15
kzhahou 2 hours ago 1 reply      
> Assuming I were to sign and return without enumerating any specifics THEY WOULD OWN the IP to anything I've done previous to this?

ianal, but fwiw here's an interesting tidbit I've picked up from lawyers in the past, when in a similar situation: they don't necessarily think of it as "we will own your IP." Instead it's "we will CLAIM to own your IP." The point being that it's not some absolute uncontestable ownership. You're always free to claim ownership yourself, despite anything stated in writing.

Anyway, I thought it was interesting because my non-lawyer brain thinks in terms of things I own and don't own, end of story. But the legal department thinks in terms of arguing ownership and resolving disputes in front of a judge.

16
bsder 5 hours ago 0 replies      
Get a lawyer to look this over.

LegalShield (https://www.legalshield.com/) is effectively a multi-level marketing scheme, but the product is actually sound. It has helped a couple of friends of mine with both contract and criminal defense issues.

LegalShield is also very useful if you happen to suffer from "driving while brown/black" as they can be called 24/7.

17
egocodedinsol 5 hours ago 1 reply      
Assuming you do get a 'reasonably priced' lawyer in this situation, how confident can you be in the answer? Put another way, if @ExhibitAClause2 were to end up in court how much would the outcome depend on the quality of lawyer arsenal at his disposal versus BigCo?

I ask because there are countless scenarios when an attorney clears something and an expensive lawsuit still occurs, e.g. patent trolls.

18
19
jucaloma 2 hours ago 0 replies      
Well, that's the typical state of things. Come to the hackerdojo here in Mt View CA; we have a resident lawyer that can probably help you with any questions.
20
jucaloma 2 hours ago 0 replies      
yeah, come to the hackerdojo if u are here in the Peninsula. We have a resident lawyer here.
21
Silhouette 5 hours ago 0 replies      
Standard disclaimer applies: You need a lawyer qualified in your jurisdiction to check your contract. Personally I always recommend this for any employment contract. The cost of one decent lawyer for an hour vs. the potential risk that an employer sneaked something irrevocable and completely disproportionate in? It's not even close.

That said, I once had exactly the described problem: post-acquisition, new company wants to adjust a lot of contractual wording on things like IP heavily in their favour, at a software business where many of the staff are also creative outside work in one way or another. Most of my colleagues didn't realise the implications of the proposed IP clauses and in particular the potential impact on their time outside office hours until these dangers were pointed out, but many strongly disliked the new terms once awareness was raised.

Without getting into details I possibly shouldn't, let's just say that what the acquiring company's lawyers or HR people would like to happen will probably be outweighed by a significant proportion of staff from the acquired company refusing to sign the oppressive deal and threatening to walk. If you can reach critical mass, management is likely to step in and do what they have to so they can protect the new investment and CTA. In the end, the wording of the relevant sections in our new contracts was identical to the corresponding sections in our old contracts.

Incidentally, probably one of the biggest mistakes of my professional career was sticking around for too long after I already knew what kind of business the new employer was from their initial behaviour. With hindsight, I should have given them a fair chance once they'd backed down -- a few months, perhaps -- but then having confirmed that the new corporate culture was similarly unwelcome in many other respects I should have started looking long before I actually did. YMMV.

22
leriksen 6 hours ago 0 replies      
maybe just print out "ls -alR /" and say "these are all my pre-existing inventions and ideas, and supporting software"
23
uberweb 4 hours ago 0 replies      
This sounds like a plot lifted straight from Silicon Valley.
24
ninjakeyboard 5 hours ago 0 replies      
I was cut after an aq - at least you're still there :)
25
cvs268 3 hours ago 0 replies      
Relax. Take a break. Watch Mad Men. Specifically this episode http://www.imdb.com/title/tt1484414/?ref_=ttep_ep13
Y Combinator growth equity fund? sec.gov
261 points by kamilszybalski  13 hours ago   61 comments top 18
1
xenophon 12 hours ago 1 reply      
Another relevant article hinting at this development: http://www.businessinsider.com/y-combinator-raising-money-fo...

The impetus behind a growth equity fund, according to the article, would be to provide "long-term capital that allows startups to continue to operate in beta [sic, I assume -- they probably mean privately] without having to go public."

I can see why this approach would make sense for optimistic investors who are familiar with the impatience of public market investors with the kind of moonshot, long-term investments that are game-changing but don't pay off during next quarter's earnings call.

That's one charitable interpretation of this decision, if it's true -- Y Combinator wants to counteract the abundance of hedge fund money pouring into this space (with attendant expectations of a near-term public liquidity event) with strategic capital and a longer time horizon.

2
softdev12 12 hours ago 5 replies      
It would be interesting if Y Combinator attempted to convert to an entity that was able to publicly list on a stock exchange and sell shares to the average investor - much like private equity shops Blackstone and Carlyle have done by going public.

https://www.wsws.org/en/articles/2007/06/blac-j25.html

http://www.carlyle.com/news-room/news-release-archive/carlyl...

If there is a bubble valuation in the public-to-private market, YC could potentially arbitrage the valuation difference into cash for its LPs.

3
datashovel 11 hours ago 4 replies      
My question is, long term how does SV not become just another large bureaucratic / corrupt power center like D.C. or Manhattan? Today it feels good to see those who deserve it get rewarded, but I imagine that's how people in NY felt about Manhattan when it was a fraction of what it is today. Same of course with D.C. shortly after (for example) the American Revolution.
4
rdlecler1 2 hours ago 1 reply      
YC: you probably want to run this by your legal counsel. By posting your open-ended 506(b) filing for a proposed growth equity fund on Hacker News (which you own), you are engaging in general solicitation and advertising, which requires a 506(c) filing. You don't want to end up like Goldman Sachs when they tried to offer the Facebook pre-IPO fund to their private wealth clients. The SEC shut them down.
5
staunch 11 hours ago 1 reply      
Most of the money to be made in VC is by doubling down on successful investments. YC has foregone billions by not doing this. Competing with later stage VCs may incline them to compete with YC, which would be a very great thing for the world.
7
mwilkison 12 hours ago 3 replies      
Is this a fund for follow-on investments in YC startups? In the past YC has indicated they dislike follow-on investments by accelerators since it sends a negative signal re: the startups they decide not to invest in.
8
gtirloni 11 hours ago 4 replies      
I've heard that Delaware is 1) a safe haven for litigation and 2) most forward-thinking in terms of business-related bureaucracy. Is that true? What makes West Coast companies incorporate so far from SV?
9
sharemywin 11 hours ago 0 replies      
YC has been talking more about working on bigger, more ambitious projects/companies, so maybe it's a way to fund things that won't get funded by traditional VCs.
10
late2part 11 hours ago 1 reply      
My Uber driver today was bemoaning how hard it is to get a job in the DC area with his name, Mohammed. I said that wasn't right, but there's no reason he couldn't go by Michael or Moe. It's interesting to see that on this form, the "Related Person"'s last name is "YC CONTINUITY MANAGEMENT I, LLC". I suppose it's not terribly remarkable, but it goes to show that many of these forms have ambiguous meanings that are wide to receive.
11
phantom_oracle 11 hours ago 3 replies      
They sure do take a playbook from the innovation they try to foster and do new things (although it isn't necessarily "new" in the sense that other funds like this don't exist).

In the business of business-acceleration, I guess this makes YC the McKinsey or GS?

Thing is, they can't keep stretching the payout to investors.

Even a moonshot (as a business) needs to experience a liquidity event of some sort, so they're either inflating the so-called bubble with this or...

They're playing dirty with some of their first-to-market companies by helping them grow and stay cheap enough until they emerge as monopolies (-redacted- AirBnB come to mind mostly).

Edit: to my surprise, Uber isn't a YC company, edit made.

Edit 2: I am checking a list of YC companies and other big ones I see that have potential are:

- Disqus

- Heroku (exited so doesn't count)

- MixPanel

- Olark

- Embedly

- HomeJoy

- Stripe (of course!)

- Codecademy

- Firebase

I stopped at Summer 2011, but some of these are now so ubiquitous on the internet, that it makes you wonder...

12
hkarthik 7 hours ago 0 replies      
If I had to guess, YCombinator is starting to diversify its funding strategies for early stage startups as it starts to diversify the types of startups that it invests in.

The same funding terms simply won't work for an e-commerce shop selling Jellyfish compared to one trying to commercialize nuclear power. This new type of fund probably allows them to fund the latter startups in a more appropriate way.

13
jaydub 8 hours ago 0 replies      
Interesting that Kleiner Perkins is moving downstream http://www.nytimes.com/2015/06/17/business/dealbook/kleiner-...

Related?

14
jackgavigan 12 hours ago 0 replies      
It could be that they've invested all the money from their last fund and are just putting a new fund in place to continue making accelerator investments.
15
andy_ppp 12 hours ago 0 replies      
"Pooled Investment Fund Interests"[1] is checked which mean it's a fund for shares in multiple companies? Not sure if they are going full on VC?

Anyway I'll buy some :-)

[1] More info: https://www.moneyadviceservice.org.uk/en/articles/what-are-p...

16
pbiggar 10 hours ago 1 reply      
`Does the Issuer intend this offering to last more than one year? No`

Does this mean that they're only offering entry into the fund in the next year, or that the money will all be distributed over the next year?

If the latter, this would imply that this is a single investment vehicle. Though the wording does imply the former, I would think.

17
marincounty 9 hours ago 0 replies      
So why not start up a pooled venture capital fund? It will be one risky fund, but they all seem risky?

I am eagerly awaiting the day Janet Yellen raises interest rates! There's too much free money being given out, and it's not going to the poor, or middle class.(I thought stricter banking regulations were good after the crash, but boy was I wrong!)

These investment entities (hedge, venture, etc.) have too much Monopoly money to throw around. Why shouldn't Y Combinator get in on the Party? Actually, they're late to the Party? 'Let's get the best loans, and while we are at it snag the reluctant Retail Investor whose 2008 wounds are starting to close, and just might give up their bloody wad of cash sitting in that horrid CD?'

18
adoming3 12 hours ago 0 replies      
"Just shut up and take my money" - every investor
How to receive a million packets per second cloudflare.com
495 points by _jomo  19 hours ago   73 comments top 13
1
danpalmer 24 minutes ago 0 replies      
> Last week during a casual conversation I overheard a colleague saying: "The Linux network stack is slow! You can't expect it to do more than 50 thousand packets per second per core!"

> They both have two six core 2GHz Xeon processors. With hyperthreading (HT) enabled that counts to 24 processors on each box.

24 * 50,000 = 1,200,000

> we had shown that it is technically possible to receive 1Mpps on a Linux machine

So the original proposition was correct.

2
adekok 13 hours ago 2 replies      
Nice, except recvmmsg() is broken.

http://man7.org/linux/man-pages/man2/recvmmsg.2.html

 The timeout argument does not work as intended. The timeout is checked only after the receipt of each datagram, so that if up to vlen-1 datagrams are received before the timeout expires, but then no further datagrams are received, the call will block forever.
Which makes it useless for any application that wants to service data in a short time frame. The only way around it is to use a "self clocking" method. If you want to receive packets at least every 10ms, set a 10ms timeout... and then be sure to send yourself a packet every 10ms.
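To make that workaround concrete, here is a rough, self-contained C sketch (not code from the article or from this commenter; error handling is omitted, the port, batch size, and 10ms heartbeat interval are made-up numbers, and it needs -lpthread to link). A second thread sends the socket an empty datagram every 10ms so that recvmmsg()'s per-datagram timeout check always fires:

  #define _GNU_SOURCE                    /* for recvmmsg() */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <pthread.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>
  #include <time.h>
  #include <unistd.h>

  #define PORT  9000                     /* illustrative port the receiver binds to */
  #define BATCH 64                       /* datagrams asked for per recvmmsg() call */

  /* "Self clocking": send ourselves an empty datagram every 10ms so that
   * recvmmsg() always has something to receive and therefore actually
   * gets to check its timeout instead of blocking forever.             */
  static void *heartbeat(void *arg)
  {
      (void)arg;
      int s = socket(AF_INET, SOCK_DGRAM, 0);
      struct sockaddr_in to;
      memset(&to, 0, sizeof to);
      to.sin_family = AF_INET;
      to.sin_port   = htons(PORT);
      inet_pton(AF_INET, "127.0.0.1", &to.sin_addr);
      for (;;) {
          sendto(s, "", 0, 0, (struct sockaddr *)&to, sizeof to);
          usleep(10 * 1000);             /* one empty datagram every 10ms */
      }
      return NULL;
  }

  int main(void)
  {
      int s = socket(AF_INET, SOCK_DGRAM, 0);
      struct sockaddr_in addr;
      memset(&addr, 0, sizeof addr);
      addr.sin_family      = AF_INET;
      addr.sin_port        = htons(PORT);
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      bind(s, (struct sockaddr *)&addr, sizeof addr);

      pthread_t t;
      pthread_create(&t, NULL, heartbeat, NULL);

      static char bufs[BATCH][2048];
      struct mmsghdr msgs[BATCH];
      struct iovec   iov[BATCH];
      memset(msgs, 0, sizeof msgs);
      for (int i = 0; i < BATCH; i++) {
          iov[i].iov_base            = bufs[i];
          iov[i].iov_len             = sizeof bufs[i];
          msgs[i].msg_hdr.msg_iov    = &iov[i];
          msgs[i].msg_hdr.msg_iovlen = 1;
      }

      for (;;) {
          /* This timeout is only checked after each received datagram --
           * hence the heartbeat thread, which guarantees one every 10ms. */
          struct timespec timeout = { 0, 10 * 1000 * 1000 };
          int n = recvmmsg(s, msgs, BATCH, 0, &timeout);
          if (n > 0)
              printf("received %d datagrams in one call\n", n);
      }
  }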

I've done similar tests with UDP applications. It's possible to get 500K pps on a multi-core system with a test application that isn't too complex, or uses too many tricks. The problem is that the system spends 80% to 90% of its time in the kernel doing IO. So you have no time left to run your application.

Another alternative is pcap and PF_RING, as seen here: https://github.com/robertdavidgraham/robdns

That might be useful. Previous discussion on robdns: https://news.ycombinator.com/item?id=8802425

3
edude03 18 hours ago 6 replies      
Hmm, I might be missing something here, but don't most high performance network applications skip the kernel for this exact reason? (IE http://highscalability.com/blog/2014/2/13/snabb-switch-skip-...)

Makes me wonder how often bypassing the kernel is used in production networked applications.

4
shin_lao 15 hours ago 1 reply      
It's an interesting post.

If you really want to squeeze out all the performance of your network card, what you should use is something like DPDK.

http://dpdk.org/

5
jedberg 18 hours ago 14 replies      
A joke answer and a serious question:

A: "Use BSD"

Q: Why is there such a strong focus on trying to get Linux network performance when (I think) everyone agrees BSD is better at networking? What does Linux offer beyond the network that BSD doesn't when it comes to applications that demand the fastest networks?

ps. I think the markdown filter is broken, I can't make a literal asterisk with a backslash. Anyone know how HN lets you make an inline asterisk?

6
chx 16 hours ago 3 replies      
Perhaps because I am not really a low level programmer, it strikes me as odd that "receive packet" is a call. I would expect to pass a function pointer to the driver and be called with the packet address every time one has arrived.
7
brobinson 9 hours ago 0 replies      
Why even have netfilter ("iptables") loaded in the kernel at all? Won't those two rules still have to be evaluated for each packet even if the rules are saying not to do anything?

There are additional things at play here, too, including what the NIC driver's strategy for interrupt generation is and how interrupts are balanced across the available cores, whether there are cores dedicated to interrupt handling and otherwise isolated from the I/O scheduler, various sysctl settings, etc.

There's further gains here if you want to get really into it.

8
netman 15 hours ago 0 replies      
The Automattic guys did some testing a few years ago with better results on SolarFlare. I wonder where their testing ultimately ended up. https://wpneteng.wordpress.com/2013/12/21/10g-nic-testing/
9
zurn 16 hours ago 1 reply      
Where does the funny 50 kpps per core idea in the lead-in come from? This would mean falling far short of 1 gigE line rate with 1500 byte packets! This is trivially disproven by the everyday experience of anyone who's run scp over his home LAN or a crossover cable?
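As a rough sanity check on the line-rate point (my own arithmetic, ignoring Ethernet framing overhead):

  1,000,000,000 bit/s / (1500 bytes * 8 bit/byte) ≈ 83,000 packets/s
  50,000 packets/s * 1500 bytes * 8 bit/byte = 600,000,000 bit/s = 600 Mbit/s

So a single core capped at 50 kpps would indeed top out around 600 Mbit/s with 1500-byte packets, short of gigabit line rate.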
10
bitL 14 hours ago 0 replies      
Excellent article! Thanks for sharing! I am glad to learn something new today! ;-)
11
samstave 16 hours ago 0 replies      
I was curious to see how many pps our servers are handling...

We have an app server that currently handles 40K concurrent users per node. I get ~63K pps only:

TX eth0: 42780 pkts/s RX eth0: 64676 pkts/s

TX eth0: 41570 pkts/s RX eth0: 63401 pkts/s

TX eth0: 41867 pkts/s RX eth0: 63697 pkts/s

TX eth0: 41585 pkts/s RX eth0: 63187 pkts/s

TX eth0: 40408 pkts/s RX eth0: 61912 pkts/s

TX eth0: 41445 pkts/s RX eth0: 63299 pkts/s

TX eth0: 41119 pkts/s RX eth0: 63186 pkts/s

TX eth0: 41502 pkts/s RX eth0: 63153 pkts/s

TX eth0: 40465 pkts/s RX eth0: 62118 pkts/s

TX eth0: 42105 pkts/s RX eth0: 63986 pkts/s

But this is utilizing 7 of 8 cores on each node... with CPU util very low.

12
known 2 hours ago 0 replies      
man ethtool
13
floridaguy01 8 hours ago 0 replies      
You know what is cooler than 1 million packets per second? 1 billion packets per second.
Microscopic footage of a needle moving across the grooves of a record dangerousminds.net
287 points by batbomb  16 hours ago   49 comments top 16
1
bkraz 3 hours ago 0 replies      
I am stoked to see my work on the front page of HN! Let me know if you have any questions. I've lurked on this forum for a long time, but rarely post.
2
nate_meurer 15 hours ago 5 replies      
Wow! I never knew the needle moved side-to-side! I always assumed phonograph needles moved up and down.

Edit: Ben explains this in the video; there are actually two axes of movement, sort of diagonal to the plane of the disk, each of which encodes one channel of a two-channel stereo recording. I'm sure many LP fans already know this, but it's a revelation to me.

BTW, Ben Krasnow is a heck of a guy. A true polymath and a generous teacher.

3
daniel-levin 2 hours ago 0 replies      
The YouTube channel [1] where this video comes from is a treasure trove for the intellectually curious, and it's one of my favourite things on the internet. The guy behind it, Ben Krasnow, is an engineer at Google. From explaining and demonstrating (reverse) spherification to encoding information in fucking fire and picking it up with his oscilloscope, this channel will interest and delight most folks who enjoy HN for hours.

[1] https://www.youtube.com/channel/UCivA7_KLKWo43tFcCkFvydw

4
xigency 14 hours ago 2 replies      
The only thing is, the needle isn't moving here. This is an animation of what a needle would look like moving across a record, but taken in stop-motion style. I wonder if the use of an electron microscope is really needed? The grooves in a record are hardly small enough to escape visible light...
5
jokr004 12 hours ago 0 replies      
I highly recommend everyone check out other videos on Ben Krasnow's youtube channel [0]. I've been following him for about a year now, really awesome stuff!

[0] https://www.youtube.com/user/bkraz333

6
afandian 1 hour ago 1 reply      
Why was an electron microscope necessary? Surely at this scale a conventional light microscope would have done the job, wouldn't have needed all these workarounds, and could have recorded live footage of a needle actually playing a record?
7
kitd 1 hour ago 0 replies      
A bit OT, but this reminds me of an inspirational mathematics teacher I had, who loved to present the subject as a series of applied problem-solving exercises, rather than the usual learn-by-rote.

One of his problems was: if a 33 1/3 rpm record is 12 inches in diameter and plays for 25 minutes, how wide is the groove?

I had a clear visualisation in my head of the problem, including the groove cut into the vinyl. Watching this is like listening to my teacher speaking to me again.
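For anyone who wants to try the teacher's puzzle, a rough back-of-the-envelope answer (my own working, with the simplifying and unrealistic assumption that the groove spirals across the full 6-inch radius):

  33 1/3 rev/min * 25 min ≈ 833 revolutions
  6 inches / 833 turns ≈ 0.007 inches per turn (about 0.18 mm)

So each turn of the groove, including the land between turns, comes out to well under a hundredth of an inch.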

8
vinkelhake 15 hours ago 1 reply      
This is just blogspam. Link directly to the video instead. There are a lot of videos worth watching on Ben's channel.
9
amelius 15 hours ago 1 reply      
> You would think that if you have an electron microscope and a record player, you're most of the way there to being able to record close-up footage of a needle traversing the grooves of a long-player record.

Actually if you would have only an electron microscope, you could play the track without even needing the record player.

10
Panoramix 14 hours ago 2 replies      
This guy is a real jedi. Did he build his own Ag evaporator?
11
ianphughes 14 hours ago 1 reply      
Am I the only one who could listen to him narrate just about anything for hours at length?
12
JabavuAdams 9 hours ago 0 replies      
Funny, I was just watching this last night. I'm in awe of the Applied Science guy, and am really grateful for the information he's shared.

Having said that, I was shocked to see his thermite BBQ video. I don't know whether the people in that video realize how close they came to being maimed.

13
InclinedPlane 14 hours ago 0 replies      
The "root post" is just the youtube video itself.
14
udev 14 hours ago 1 reply      
The guy's diction is flawless.
15
agumonkey 13 hours ago 0 replies      
First time I get to see one of these legendary vinyl videos ... Thanks.
16
bluedino 13 hours ago 0 replies      
Next up - laser hitting a compact disc.
Optimizing an Important Atom Primitive atom.io
273 points by mrbogle  15 hours ago   83 comments top 18
1
jerf 15 hours ago 7 replies      
You know, it's funny how it's 2015 and we're just dripping with raw power on our developer machines, yet, open a few hundred kilobytes of text and accidentally invoke a handful of O(n^2) algorithms and blammo, there goes all your power. Sobering.

Edit: We need a type system which makes O(n^2) algorithms illegal. (Yes... I know what I just dialed up. You can't see it, but I'm giving a very big ol' evil grin.)

2
drewm1980 16 minutes ago 1 reply      
I really, really, don't get the whole "implement everything using web technologies" thing. As an outsider from that dev ecosystem it looks like the youtube videos you see of people implementing electronic circuits in Minecraft.
3
martanne 13 hours ago 1 reply      
A piece table[0] solves this rather elegantly. Since it is a persistent data structure, a mark can be represented as a pointer into an underlying buffer. If the corresponding text is deleted, marks are updated automatically, since the pointer is no longer reachable from the piece chain. Lookup is linear[1] (or logarithmic if you store pieces in a balanced search tree) in the number of pieces i.e. non-consecutive editing operations.

[0] https://github.com/martanne/vis#text-management-using-a-piec...

[1] https://github.com/martanne/vis/blob/master/text.c#L1152
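To make the idea concrete, here is a heavily simplified C sketch of that scheme; it is not vis's actual code, and the names and the singly linked piece chain are just illustrative:

  #include <stddef.h>

  typedef struct Piece {
      const char   *data;    /* points into the original file or the append-only add buffer */
      size_t        len;
      struct Piece *next;
  } Piece;

  typedef struct {
      Piece *head;           /* the current chain of pieces; edits build new chains */
  } Text;

  typedef const char *Mark;  /* a mark is just an address inside one of the buffers */

  /* Translate a mark back into a byte offset by walking the (few) pieces
   * and summing their lengths -- linear in the number of pieces, as the
   * comment says.  Returns (size_t)-1 if no piece in the current chain
   * covers that address any more, i.e. the marked text was deleted.     */
  size_t mark_to_offset(const Text *txt, Mark m)
  {
      size_t off = 0;
      for (const Piece *p = txt->head; p; p = p->next) {
          if (m >= p->data && m < p->data + p->len)
              return off + (size_t)(m - p->data);
          off += p->len;
      }
      return (size_t)-1;
  }

A mark survives edits untouched because it never stores an offset; translating it back into an offset is the linear (or logarithmic, with a balanced tree of pieces) walk described above.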

4
dunstad 15 hours ago 2 replies      
I tried out Atom a few weeks ago. I loved the UI! Absolutely fantastic, beautiful, nothing but praise there.

But I had so many issues with stability, and really missed small but important features that were present in my other editors. I also found that most of the plugins worked either poorly or sporadically.

In the end, I decided that it was not worth either using Atom or spending time contributing to it when I have some "pretty close" solutions today. Definitely looking forward to the 1.0 version though, and hats off to all those spending their time contributing to it. I'm sure it's going to become something great!

5
twic 14 hours ago 0 replies      
Didn't the Xanadu project solve this problem in 1972?

https://en.wikipedia.org/wiki/Enfilade_%28Xanadu%29

Solve it, keep it secret, and then fail to properly write about it to this day.

6
Veedrac 14 hours ago 1 reply      
I actually just retried Atom yesterday. Aside from the normal complaints (it's sloooww, undo doesn't affect markers or selections), one thing that struck me is that markers can't be zero-width. Well, they can but they won't show up. I'm wondering if this is related to the technique mentioned here - it's certainly been a pain to work around. Sublime Text even has multiple options for this (DRAW_EMPTY and DRAW_EMPTY_AS_OVERWRITE).

That said, I'm loving the API design. Coming from Sublime Text, it's a massive upgrade. The ability to embed literally anything a web browser can render in a well-designed framework is mindblowing.

7
Erwin 13 hours ago 0 replies      
If you thought this was an interesting article, here's the obligatory link to just about the only book on crafting a text editor, "Craft of Text Editing": http://www.finseth.com/craft/
8
octref 13 hours ago 0 replies      
Recently I learned all contributors will receive a gift for Atom pre-1.0, and when I asked a Github staff member when I would receive the gift (I'm moving this summer) he mentioned it would be sent out in early July. I guess we can expect a pre-1.0 before August.

One of the main remaining pieces of functionality to be implemented is good support for large files. Looking at this issue [1], it seems the Atom team is making some progress but there are still some problems to be tackled.

In 0.208.0 (released 7 days ago) they mentioned in the changelog: "Atom now opens files larger than 2MB with syntax highlighting, soft wrap, and folds disabled. We'll work on raising the limits with these features enabled moving forward." A little bit disappointed at the progress, as you could open large files with these features disabled a long time ago through the package "view-tail-large-files".

Just updated to 0.209.0 and using ember.js (1.9 MB) to test. Editing/scrolling has some delays but it's better than previous versions.

Good luck Atom team!

[1]: https://github.com/atom/atom/issues/307#event-325455529

9
asQuirreL 1 hour ago 0 replies      
Hmmm... So the article seems to suggest that for every insertion of a character, a log time lookup is made. Is that really the case? If so, why is the leaf node that the cursor is in not saved? If you were to use a B+-tree implementation then you would already have access to neighbour pointers for rebalancing purposes, making the majority of incremental changes very cheap (constant time). This is just a thought, there may be good reasons why it's not possible.
10
revelation 15 hours ago 1 reply      
Yet, the onKeyDown handler still takes 50ms. Are you kidding me? You can push a billion tris in that time.
11
msoad 14 hours ago 1 reply      
This kind of knowledge and experience has existed on the Microsoft campus for years thanks to the Visual Studio team. That's why Code is much more efficient. I only wish it was open source so I could totally move away from Sublime Text.
12
ohitsdom 15 hours ago 0 replies      
Appreciate the candidness of the team writing about their naive approach. Definitely would have been a simpler fix to just search the currently visible text, but I'm glad they fixed the root issue to make markers more efficient for all.
13
alexchamberlain 15 hours ago 0 replies      
What is the data structure used for the text itself? A rope? The markers could be stored as offsets to the substrings themselves.
14
z3t4 2 hours ago 0 replies      
One thing I love about vanilla JS is that you can both set and get with the same property. I wonder if having both setters and getters is enforced by CoffeeScript or a design decision of the Atom team!?
15
romaniv 15 hours ago 1 reply      
This reminds me of how I re-implemented nested sets in relational databases as spans in a "coordinate" system.

 | Root |
 | Node | Node |
 | Node | Node |
I stored only "X" and "Y" coordinates for every node, so you had to read "next" node in a row to get current node's "size".

It was a bit more human-readable when looking at the data. More importantly, it reduced (on average) the number of nodes I needed to update on insert compared to nested set and gave an easy way of retrieving immediate children. But you still had to "move over" all the nodes "right" of the one you're inserting.

The structure in the article looks eerily similar. I wonder whether it's somehow possible to apply GitHub's optimization to this "coordinate" based schema and make it relative without messing up the benefits of column indexing. Hm...
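For the in-memory case, here is one hedged C sketch of what "making it relative" can look like (illustrative only; this is not Atom's actual implementation and not a relational schema): each marker node stores its offset relative to its parent in a position-ordered tree, so an insertion shifts every later marker while touching only one root-to-leaf path.

  /* A marker tree ordered by text position; each node stores its offset
   * relative to its parent instead of an absolute position.            */
  typedef struct Node {
      long rel;                   /* offset relative to the parent node */
      struct Node *left, *right;
  } Node;

  /* After inserting k characters at position p, every marker at or after p
   * must move by k.  `base` is the absolute position of n's parent before
   * the edit; `pending` is the shift n has already inherited from changes
   * made to its ancestors' rel fields.  Only one root-to-leaf path is
   * visited, so the cost is O(tree depth), not O(number of markers).     */
  static void shift_markers(Node *n, long base, long p, long k, long pending)
  {
      if (!n) return;
      long pos  = base + n->rel;          /* n's position before the edit */
      long want = (pos >= p) ? k : 0;     /* how far n itself should move */
      n->rel += want - pending;           /* children are relative to n,  */
                                          /* so this shifts n's subtree   */
      if (pos >= p)
          shift_markers(n->left, pos, p, k, want);  /* right subtree is   */
                                                    /* all >= p: done     */
      else
          shift_markers(n->right, pos, p, k, want); /* left subtree is    */
                                                    /* all < p: done      */
  }

  /* Usage: shift_markers(root, 0, insert_position, inserted_length, 0); */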

16
imslavko 15 hours ago 1 reply      
Vim also has a similar optimization: when a file changes, Vim only runs syntax highlighter on a visible part of the text + some buffer in both directions.
17
caiob 15 hours ago 4 replies      
Does it open files >2mb yet? My terminal vim does.
18
baldfat 15 hours ago 4 replies      
Atom is still a hog on my main programming machine. It makes it unusable for me still.

It is an OLD i3 Dell desktop from 6 years ago.

Introducing Empire: A Self-Hosted PaaS Built on Docker and Amazon ECS remind.com
174 points by streeter  10 hours ago   45 comments top 11
1
justinsb 9 hours ago 1 reply      
This looks great: simple yet powerful. I'm working a lot with Kubernetes, and you don't actually need to run an overlay network on AWS (or GCE). On AWS, there's some VPC magic that surprised me when I first saw it! But I believe that's beside the point; it's not about ECS vs Kubernetes, it is about what we can build on top.

In particular, I think the idea of embedding a Procfile in a Docker image is really clever; it neatly solves the problem of how to distribute the metadata about how to run an image.

2
fosk 8 hours ago 2 replies      
This is neat. You might want to check out KONG (https://github.com/Mashape/kong) instead of putting a plain nginx in front of the containers/microservices. It is built on top of nginx too, but it provides all the extra functionality like rate-limiting and authentication via plugins.
3
bgentry 3 hours ago 0 replies      
Really cool stuff. Seems like you found a good way to hand off most of the hard stuff to AWS and only do a few key things yourselves to make the experience better. As such I think Empire has the potential to be a viable option for many companies, which is something I rarely say about a PaaS project :)
4
rymohr 8 hours ago 0 replies      
Thank you, this looks awesome! As someone who still hasn't embraced docker due to all the orchestration / discovery madness I really appreciate such an elegant solution. I love and run everything on AWS so building on top of ECS is just another selling point.
5
mixmastamyk 4 hours ago 0 replies      
Congrats, not everyone can create a simple elegant platform and write about it in such an accessible manner. I suppose you're standing on the shoulders of giants, but still.

This is the level of engineering/communication I always shoot for, and which (somewhat disappointingly) is rare where I've worked.

6
nickpsecurity 7 hours ago 0 replies      
This work has plenty about it that was interesting. The best part to me was their answer to "why not feature X?" They said they prefer to build upon the most mature and stable technologies along with naming a few. Too many teams end up losing competitiveness by wasting precious hours debugging the latest and greatest thing that isn't quite reliable yet. Their choice is wiser and might get attention of more risk-conscious users.
7
sagivo 8 hours ago 3 replies      
personally i use dokku (https://github.com/progrium/dokku). i would be happy to see one standard "heroku-like" PaaS since i feel too many people are trying to tackle the same problem.
8
phantom_oracle 8 hours ago 0 replies      
http://www.openshift.org/

Just putting this out there in case anyone is looking for an alternate open-source PaaS.

I've never personally used it before (self-hosted), but it may be something that someone out there is looking for.

9
stephenr 8 hours ago 3 replies      
Does this really classify as "self hosted" if it's heavily dependent on AWS?
10
jordanthoms 8 hours ago 1 reply      
How do you handle running one-off tasks (consoles, migrations etc) on this setup? This is something most of these systems seem to ignore...
11
floridaguy01 8 hours ago 5 replies      
AWS is silly expensive. Why didn't you build this on top of DigitalOcean? DigitalOcean is so awesome right now. They don't even charge for bandwidth overages.
List of blocked aircraft on Flightradar24 (2013) flightradar24.com
14 points by ce4  2 hours ago   18 comments top 4
1
luso_brazilian 6 minutes ago 0 replies      
The position of the site admin [1] is:

>> Quote Originally Posted by Mike

>> We will not publish any information in the FAQ just to keep this as low below "radar" as possible. Every exclusion request is handled manually and we will not comment on or publish this process in public.

>> Quote Originally Posted by Mike

>> FR24 should work with authorities, and not against them. I think we will close this thread as there is nothing more to discuss. This is not about democracy or censorship, but about keeping our hobby alive without authorities enforcing new laws in order to limit ADS-B usage.

>> UPDATE. Maybe I was not clear when I posted this before. There is a discussion on FAA meetings about encrypting the ADS-B signal. By angering the authorities, we will only speed up this process.

This is very similar to the policy of many sites, to enforce "self censorship" by accepting polite take-down requests in order to avoid harsher consequences.

This is reasonable from the perspective of the site owner. Sometimes an acceptable loss is worth it to keep the most freedom intact.

On the other hand there is a chilling effect on speech, the fear it causes in other potential site and service creators, who could give up instead of creating for fear of this kind of public drama.

[1] http://forum.flightradar24.com/threads/5217-Blocked-flights?...

2
jMyles 1 hour ago 3 replies      
I require some context to fully understand what's going on here.

I imagine "blocked" means that the flights in question are not displayed? Despite a user presumably uploading the metadata for the flight in question as part of their feed?

And thus, can't someone still see these flights with their directly-connected SDR device?

And thus, can't someone just start a new FR24 service that doesn't censor these data?

> There is a discussion on FAA meetings about encrypting the ADS-B signal. By angering the authorities, we will only speed up this process.

Holy hell, for real? At this point, there are vanishingly few laws with which I agree even in principle, but "if you are going to fly a 375-ton hunk of metal through the air, you need to clearly broadcast flight details in plain text" is a pretty damn reasonable one.

What possible justification can there be for targeting services like FR24?

3
pjc50 42 minutes ago 1 reply      
Open flight information has in the past been used to track both the world's biggest illegal arms dealer Viktor Bout, and also the CIA rendition-to-torture flights.
4
gadders 48 minutes ago 0 replies      
My key takeaway - Holy shit! The Crossfit people have a plane!!
Building Street Fighter II in Ruby [video] nikolay.rocks
122 points by MadRabbit  10 hours ago   26 comments top 13
1
mmanfrin 10 hours ago 1 reply      
It is really fun to see games coded in a language you know (and that is not used for gamedev) -- a completely new way of programming/thinking.

There is another great video similar to this of Tom Dalling coding a flappy bird clone using Gosu:

https://www.youtube.com/watch?v=QtIlyU2Br3o

2
keyle 8 hours ago 1 reply      
Very cool Nikolay. I also dig the git-wayback machine briefly showcased.

https://github.com/MadRabbit/git-wayback-machine

3
minhtran 36 minutes ago 0 replies      
Do you have a plan to add the code for when 2 players hit each other?
4
BilalBudhani 3 hours ago 1 reply      
I used to be a die hard fan of this game in my early teenage days and now Ruby is my most favorite. It would be a lot of fun building this game and connecting to some of my old memories. Thanks Nikolay.
5
hellbanner 6 hours ago 0 replies      
Cool stuff! Coding attack sequences, combos and reading commands like quarter-circles is a fun exercise, too. You have to balance between precision and ease of use.

Interesting strategies arise when this glitches:

http://wiki.shoryuken.com/E._Honda_(ST)#Stored_Oicho (Street fighter 2)
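For anyone curious what that command-reading can look like, here is a tiny sketch (in C rather than the video's Ruby; the direction encoding, 16-frame history, and 12-frame window are invented for illustration) of detecting a quarter-circle-forward plus punch while tolerating sloppy intermediate frames:

  #include <stdbool.h>

  enum Dir { NEUTRAL, DOWN, DOWN_FWD, FWD };

  #define HISTORY 16            /* directional inputs remembered, newest last */
  #define WINDOW  12            /* frames allowed to complete the motion      */

  /* history[i] = direction held on frame i (index 0 is the oldest frame). */
  bool qcf_detected(const enum Dir history[HISTORY], bool punch_pressed)
  {
      if (!punch_pressed) return false;
      const enum Dir motion[] = { DOWN, DOWN_FWD, FWD };
      int need = 0;                              /* next motion step wanted   */
      for (int i = HISTORY - WINDOW; i < HISTORY; i++)
          if (history[i] == motion[need] && ++need == 3)
              return true;                       /* down, down-fwd, fwd seen  */
      return false;                              /* in order within the window */
  }

The WINDOW constant is the precision/ease-of-use dial mentioned above: widen it and motions become easier to land, but also easier to trigger by accident.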

6
ivan_ah 5 hours ago 2 replies      
Very cool and readable.

For something even cooler (but less readable) here is SF alpha, in js: https://github.com/gamedev8/js-sfa (very faithful to the SF alpha game mechanics)

7
Rainymood 2 hours ago 0 replies      
Awesome! Please finish this up! I have always been so intrigued by game design, especially fighting games.
8
lanebrain 1 hour ago 0 replies      
Very cool! Well done Nikolay. Cheers!
9
whistlerbrk 8 hours ago 0 replies      
Fantastic tutorial, I really like his presentation style of progressively walking through the commits. Great work.
10
amorphid 9 hours ago 1 reply      
Hey Nikolay, now write it in Bash!
11
angeloxlr8 8 hours ago 0 replies      
this is very awesome, big fan of the presentation technique
12
untog 7 hours ago 1 reply      
Nah, needs to be written in CSS or I'll never be impressed.
13
andrewdon 8 hours ago 1 reply      
the link is dead
Redundancy vs. dependencies: which is worse? (2008) yosefk.com
5 points by ripitrust  2 hours ago   discuss
Coinbase Launches Instant Exchange coinbase.com
6 points by markmassie  58 minutes ago   discuss
Linode turns 12, transitions from Xen to KVM linode.com
139 points by alexforster  13 hours ago   36 comments top 12
1
ksec 7 hours ago 1 reply      
Linode is great; however, there are three things I would really love to see.

Object Storage - which LiquidWeb, RackSpace, and AWS already have, and which many other hosting companies are providing.

Memory Optimized Plan - Everything is moving in-memory, but most don't need 20 cores for 96GB of memory. There should be a low-CPU-count plan with 128GB+ and maybe up to 512GB (or higher).

CDN - Please resell a decent CDN or even make your own, so we can get everything in one place.

2
alexforster 10 hours ago 1 reply      
There's also a new Singapore datacenter that launched recentlyhttps://blog.linode.com/2015/04/27/hello-singapore/
3
joeyh 8 hours ago 1 reply      
I hope this will make it much easier to run your own (or your distro's own) kernel on Linode

While possible currently (and I do), it requires some pv-grub configuration, and IIRC recent distro kernels don't work with Linode's pv-grub version, and so quite complex pv-grub chaining is needed.

WRT security, I'm much more concerned with getting prompt kernel upgrades from my distro or rolled by hand, when there are network exploitable bugs, than with hypervisor bugs that might allow the small group who share the physical hardware to do something naughty.

4
mwcampbell 9 hours ago 3 replies      
Anyone know why the performance difference is so dramatic? My guess was that the difference would go the other way -- that Xen would be more efficient, because it was designed for paravirtualization rather than hardware emulation, and the guest kernels had to be modified to accommodate it.
5
btrask 11 hours ago 2 replies      
No mention of security? Xen isn't perfect, but according to the Qubes team it's the best we've got.

> We still believe Xen is currently the most secure hypervisor available, mostly because of its unique architecture features, that are lacking in any other product we are aware of.

https://raw.githubusercontent.com/QubesOS/qubes-secpack/mast...

6
orthecreedence 10 hours ago 0 replies      
Is KVM burstable? From what I know about Xen (very little), at least in Linode's case, CPUs were not burstable. I always thought of this as a feature because, while it's nice that if I need a little extra juice I can have it, I don't want my neighbors parking on my lawn every time they have a party. I'd rather have predictability over performance is my point. Is this still the case?
7
diminish 1 hour ago 0 replies      
Here is the type of technology I love:

Essentially, our KVM upgrade means you get a much faster server just by clicking a button..

8
baudehlo 7 hours ago 1 reply      
Are they really going to keep the same number of hosts per server, given they can now get more out of a server? It would be great if they will, but I have doubts.
9
Veratyr 11 hours ago 0 replies      
10
aladine 7 hours ago 0 replies      
Great service. I absolutely love that. Customer support is fast and informative.
11
vfclists 12 hours ago 3 replies      
What is the difference between paravirtualized KVM and fully virtualized KVM? I thought KVM has always been fully virtualized which is why it is capable of running any OS.

Is KVM paravirtualization a new feature?

12
drzaiusapelord 7 hours ago 0 replies      
> The kernel build time dropped from 573 to 363 seconds. That's 1.6x faster.

Wow, that's quite a nice upgrade, especially considering the price.

Beating Node.js with Tcl pietersz.co.uk
26 points by blacksqr  6 hours ago   4 comments top 2
1
amelius 31 minutes ago 0 replies      
If it really matters that much how many connections per second you can handle, perhaps Node is not the right tool for the job. Try Go instead, which also runs on multiple cores more naturally.
2
onion2k 36 minutes ago 1 reply      
So some Node written by a self-confessed Node novice, based on a 'hello world' example, and running on an old(ish) version of Node is actually quite similar in performance terms compared to TCL written by someone who knows how to write a TCL server from scratch. That is not a reasonable comparison. Get someone with some Node experience to write the Node server, then compare them.
Academic publishers reap huge profits as libraries go broke cbc.ca
96 points by benbreen  10 hours ago   23 comments top 9
1
christudor 21 minutes ago 0 replies      
The greatest barrier to a change in the way publishing works is the fact that tenure still depends on publication in high-impact journals--and high-impact journals are resolutely not Open Access.

Regardless of how difficult it would be to get academics to change how they do things[1], it's also a problem that we can't agree on an acceptable and sustainable Open Access model.

The 'green' option sets an embargo (usually 6 months), during which time universities (etc.) must pay to view the article. After that, however, the journal article becomes freely available.

The 'gold' option asks academics to pay a small fee (couple of hundred) to have their article published--what's known as Article Processing Charges.

The problem with the 'green' option is that in disciplines like science, medicine and technology, the first six months after publication probably encapsulate 90% of the article's value--after which point it has been replaced by something else--which means people would probably continue to pay for these things anyway.

The problem with 'gold' is that (a) you start publishing stuff based on who has the ability to pay, rather than academic merit, and (b) it would make academic publishing the only industry in the world where the supplier is paying the purchaser/buyer.

[1] A joke about Oxford Uni goes like this: Q. How many Oxford academics does it take to change a lightbulb? A. Change!?

2
robertwalsh0 6 hours ago 1 reply      
I think as time goes on we'll find that journals are able to do more and more of the work themselves in such a way that disrupts the monopoly as we know it today. Journals and the academic community already perform the review process themselves (with little reward) and are finding that the "typesetting" and "dissemination" value-adds from publishers are things that they can do on their own as well.

On a large scale, initiatives like PLOS One (http://www.plosone.org/) are a great example of this. On a smaller scale, journals like Sociological Science (http://www.sociologicalscience.com/) & (http://www.gsb.stanford.edu/stanford-gsb-experience/news-his...) are also being successful managing the entire toolchain themselves.

While things seem dire now, I'm confident they'll get better. Scholars with status are increasingly throwing their weight behind Open Access initiatives. Tim Gowers, whom droque mentioned in this thread, is evidence of this.

Disclaimer: I co-founded a startup in the space (http://www.scholasticahq.com) and Sociological Science uses our platform for managing their peer review process.

3
slashnull 6 hours ago 4 replies      
> Traditionally, most journals were published by non-profit scientific societies. But when journals shifted from print to online digital formats, those societies couldn't afford the cost of the equipment needed to make the switch. Instead, they sold their journals to large, for-profit publishers

wait, what

I can't figure out how it would be possible that shifting from physical paper to online hosting could be more expensive

4
effie 8 hours ago 2 replies      
The researchers themselves should take more action: plainly refuse to cooperate with publishing companies and establish an independent publication system on universities' or grant agencies' websites. Sadly, most researchers, with exceptions, seem not to care that they help perpetuate this absurd money-redirection scheme, which hurts society and its benefit from publicly funded research. This is probably also because, like the publishers, they benefit from the scheme as well - a publication in the right journal is "the way" to make a researcher's career, gain higher social status and earn more money. The most successful then end up on the editorial boards of those journals :(
5
droque 8 hours ago 2 replies      
Tim Gowers (and several other mathematicians) called to boycott academic publishers with The Cost of Knowledge (http://thecostofknowledge.com/), giving largely similar reasons. It had some success if I recall correctly.
6
Pinatubo 10 hours ago 0 replies      
Preston McAfee (head economist at Microsoft) and Ted Bergstrom (an economist at UCSB) have been studying this for years:

http://www.econ.ucsb.edu/~tedb/Journals/jpricing.html

7
aaron695 1 hour ago 0 replies      
They forget to mention that entire buildings worth tens of millions are now free of journals for other uses.

Far fewer staff are needed to curate the journals.

And there are the millions saved in researchers' time by getting them online.

Libraries are much better off. They just could be even more so.

8
akshat_h 10 hours ago 2 replies      
In our class, we have discussed repeatedly that the best way to make money would be to start a new journal with the word "International" in it. The problem with some open access journals is that they accept anything. Apart from computer science (and physics, as mentioned in the article), where most papers can be found as open access, universities have no alternative to buying content - and even if open access were to become the norm from now on, the large amount of historical content would still have to be paid for.
9
drpgq 7 hours ago 0 replies      
Are there fields where this is way more of a problem (I'm guessing medicine)? As someone in computer vision, you are often dealing with the IEEE for conferences and journals, which has never struck me as that bad, although for a non-profit, and as a member, I would prefer they were a little more open. It is nice that CVPR has been open access for a couple of years.
DuckDuckGo on CNBC: Weve grown 600% since NSA surveillance news broke technical.ly
3 points by wnm  1 hour ago   discuss
Read-only deploy keys github.com
133 points by bado  14 hours ago   47 comments top 8
1
mianos 8 hours ago 4 replies      
Now we wait another five years for the ability to share deploy keys across repositories. If you have more than one project in your CI-deployable app (for example a couple of internal Python libraries), you can't use the same deploy key. Their suggestion: "don't use modules, package everything in one application or use a full key". Now that deploy keys can be R/O (fantastic), this limitation is doubly annoying.
2
pwenzel 13 hours ago 1 reply      
Read-only deploy keys are also a feature of Bitbucket: https://confluence.atlassian.com/display/BITBUCKET/Use+deplo...
3
MatthewWilkes 13 hours ago 8 replies      
Deploy keys weren't read-only already? Seriously?
4
codyps 9 hours ago 0 replies      
Now we just need branch-restricted keys & keys that aren't allowed to force push (both of these would make me feel a lot better about using certain 3rd-party automation in combination with my GitHub repos).

Not that I really expect that to happen anytime soon; I believe others have been asking for the above for quite some time.

5
datajeroen 12 hours ago 0 replies      
I asked this question 5 years ago on SO: http://stackoverflow.com/questions/2868432/github-readonly-a.... Glad it got addressed.
6
andmarios 13 hours ago 1 reply      
I always thought that deploy keys were read-only. I can't understand why one would need a special interface to add a read-write key that is the same as any other key you add manually.

IIRC only the owner can create deploy keys, so it wasn't a feature aimed at teams either.

7
jtchang 12 hours ago 0 replies      
Deploy keys could only exist in one repo at a time. And I think a lot of people thought they were read only.
8
renke1 13 hours ago 1 reply      
I would love to see auth tokens that provide read-only access to select repositories. I find SSH keys much harder to use in a Docker-based deployment.
Raspberry Pi Official Case raspberrypi.org
151 points by ingve  16 hours ago   83 comments top 20
1
jokoon 1 hour ago 3 replies      
I just want a thick, 6 or 7 inch laptop with a foldable keyboard.

I don't know why nobody is making such a pocket "laptop". It would be really great for typing code. I can't really type properly with a touchscreen; it can be sluggish, and maybe I need to get used to it, but typing characters like []{}(); etc. is not really practical.

Maybe because there's nothing other than Android for something between a laptop and a smartphone. One might want a minimal OS, but one that could do a little more than Android.

2
crimsonalucard 13 hours ago 20 replies      
My phone should be doing all of this by now. Technically it's more powerful than the Pi (and has a better case than this), just not as good from a usability/hackability standpoint.

I wonder what's stopping someone from producing an all-in-one device: desktop/phone/hobbyist microcontroller.

Startup idea?

3
juliangoldsmith 14 hours ago 0 replies      
Still waiting on that DSI screen [0], which they announced over a year ago, but never actually started producing.

[0] http://raspi.tv/2014/raspberry-pi-official-7-inch-dsi-protot...

4
shabble 8 hours ago 0 replies      
I quite like the ModMyPi Modular Case[1] for being nicely built, and having things like optional locking covers for the SD card and USB/HDMI ports, which can be set up so they're only removable after you open the whole case. Or you can trim the tabs and they are just aesthetic/strain relief.

The best part is a bunch of stackable spacer-plates between case and lid that let you put various stack-ups of expansion boards and still get the lid on.

My biggest complaints would be the lack of good mechanical drawings (although that's true of I think every single case I've looked at), and that the slots for LCD/camera ribbons are default-open, when they could have been a snap-away seam or something since I'm not convinced most Pis have them attached anyway.

This new one looks interesting, but the curvy bits will be a nightmare to put UI controls (buttons, etc.) in, and (probably) can't easily be swapped for laser-cut replacements. Maybe 3D-printed ones? Although I don't know how durable a 1-2mm thick panel would be with most hobby technologies, or how long they'd take to make.

[1] https://www.modmypi.com/raspberry-pi/cases/modmypi-single-co...

5
BillTheCat 13 hours ago 0 replies      
I cut some holes in the box it came in and am using that as a case. It's not pretty but it sits at the back of a shelf so no one sees it anyway.

I feel it embodies the hacker mentality of the pi.

6
teekert 1 hour ago 0 replies      
Very nice, most cases really don't take into account the fact that you want the GPIO easily accessible while the Pi is in a case.

But I'm still very much waiting on this touchscreen: http://techcrunch.com/2014/10/21/pi-pads/ It was announced for end of 2014. Does anybody have any news on this?

7
jstsch 11 hours ago 1 reply      
Nice story, and the new case looks cool. Will get one. Just wanted to chime in that I've been hacking a bit with the r-pi again the last few weekends and that the recent Raspbian distro is really quite polished, especially compared to where we came from back in 2012.

Had some sensor/GPIO fun (1-wire temperature probe, HTTP POST to the web, SQLite and some Google Charts) with a random WiFi dongle, working quite nicely straight out of the box. Same with another Pi (classic Model B), a 1 USD AliExpress webcam, a humidity sensor, some quick scripts and another random WiFi dongle.

Great fun and highly recommended for a weekend of (home automation) hacking.
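For anyone tempted to try the same, here's a rough Python sketch of the 1-wire temperature part (the 28-* sensor glob, the example.com endpoint and the use of the requests library are illustrative assumptions, not details from the comment above):

    import glob
    import time

    import requests  # any HTTP client would do; assumed installed

    def read_temp_c():
        # The w1-gpio / w1-therm kernel modules expose each DS18B20-style probe
        # under /sys/bus/w1/devices/28-*/w1_slave.
        device_file = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
        with open(device_file) as f:
            crc_line, data_line = f.read().splitlines()
        if not crc_line.strip().endswith("YES"):   # driver's CRC check failed
            return None
        return int(data_line.split("t=")[-1]) / 1000.0  # value is in millidegrees C

    while True:
        temp = read_temp_c()
        if temp is not None:
            # Hypothetical endpoint; swap in your own logger or SQLite insert.
            requests.post("http://example.com/log", data={"temp_c": temp})
        time.sleep(60)

From there, writing the readings into SQLite and graphing them with Google Charts is mostly plumbing.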

8
soggypenny 12 hours ago 0 replies      
I absolutely love it when companies publish their product development stories. Being forced to switch injection molding suppliers mid-production is often a nightmare scenario for product developers, and it's great that they were able to salvage their tooling. I'd love to learn more about what exactly was wrong and how they were able to fix it.
9
aaggarwal 14 hours ago 0 replies      
For just about $8, this is awesome. I wonder if they will open-source its design files for 3D printers.
10
totallynotcool 6 hours ago 0 replies      
I am a huge fan of this[1] case. I like the single power input for powering the Pi and a USB hub. I like the room inside the case for adding things: HDD, GPIO pins. I just wish this design was more common... and cheaper.

1. https://www.amazon.com/gp/aw/d/B00JNXERM2/ref=aw_wl_ov_dp_1_...

11
mschuster91 13 hours ago 4 replies      
A case? Seriously, what?

There's an unused display connector on millions of Pis, and the firmware stuff for CSI is not open so you can't connect $random_camera_chip to your Pi.

That's where the priorities should be, imho.

12
sjs382 14 hours ago 0 replies      
It doesn't seem like they would stack very well, which might be an issue for some users. It looks pretty, though!
13
richerlariviere 16 hours ago 0 replies      
Quite cool, but I prefer to build a custom Lego case, which brings a DIY feeling. :)
14
schappim 12 hours ago 0 replies      
Technical Specifications:

- Official enclosure from the Raspberry Pi Foundation

- 5 part enclosure

- Dimensions: 96mm x 70mm x 25mm

- Raspberry coloured enclosure with White removable lid and sides

- Removable lid is provided for easy access to the camera and display ports. This removable lid will also support access to an attached Raspberry Pi HAT device.

- Removable GPIO side is provided for easy access to the 40-pin GPIO port (and attaching a ribbon cable).

- Compatible with the Raspberry Pi 2 Model B and Raspberry Pi Model B+

Source: http://raspberry.piaustralia.com.au/products/raspberry-pi-of...

15
shabble 8 hours ago 1 reply      
Those copper blocks about halfway down look a whole lot like sinker EDM[1] electrodes, which makes me question the accompanying quote: "instead you have to use magic electrolysis (like they taught you at school)". Spark erosion isn't really electrolysis in any way I can think of, and is sufficiently awesome in its own right that it deserves more attention :)

[1] https://en.wikipedia.org/wiki/Electrical_discharge_machining...

16
shabble 7 hours ago 0 replies      
Injection moulding is one of those ubiquitous technologies that appears deceptively simple (heat plastic; squish into mould; eject), but in reality is amazingly complex.

The number one fact people seem to know about it is "Oh, the mould tooling is really really expensive", which is kind of true, but doesn't really tell the full story.

The issue is that the mould is operating at very high pressures, rapidly temperature cycling, and still requires very high accuracy. Plus you need it to survive for the life of your production run.

On a large scale, this means using hardened steel[1] milled or EDM'd from a solid block. The texture of the inside of the mould cavity directly determines the surface finish of your parts, so it needs to be polished mirror smooth, even though it's probably not flat.

Then it gets even more fancy. The moulds usually need channels bored through them as close as possible to the cavity, which will allow coolant to be pumped through to set the plastic faster so it can be ejected. Some complex shapes also have internally embedded heating elements to keep the plastic liquified for long enough to reach where it needs to be.

Then you have the ejector mechanisms, usually some pins driven pneumatically to push the solid moulding out of the fixture at the end of the cycle. They need to retract to precisely the right depth during moulding otherwise you end up with those little dimples characteristic of IM parts.

And it gets crazier still: some parts will have embedded metal or other plastic parts such as bearings or threaded screw inserts. These get inserted each cycle by a robotic fixture when the mould is open. It then closes up and the plastic is injected around them. Doing this with multiple types or colours of plastic is the 'double-shot' technique that lets you put rubberised grips or other embedded features into things.

Oh, and time is money, so each part is ejected as soon as possible to start on the next cycle, so everything has to be incredibly delicately choreographed, but left long enough that you get a decent yield of useable parts.

When I learned about all of that (and probably a whole lot more I don't know), it makes sense how expensive the whole thing is upfront. There's also the costs incurred by your factory in shutting down production to change out the tooling to run your job.

One of the coolest things I've seen online is Kenneth Maxon's home-made injection moulding rig[3] (although just about all the other things he does are pretty astounding too. I'd claim it's only technically home-made because he lives there ;p)

[1] Protomold[2] get away with doing it a lot cheaper because they mill moulds out of aluminium, which is much less durable but entirely acceptable for short production runs.

[2] http://www.protolabs.com/injection-molding/fundamentals-of-m...

[3] http://www.users.qwest.net/~kmaxon/page/side/mold_mach_137.h... [4]

[4] WARNING: serious 90's webdesign, and sadly a lot of broken images.

17
SloopJon 14 hours ago 0 replies      
Had been looking at some fancier ones with wood or aluminum accents on Amazon, but cheap is good. I almost picked up a cheesy case at Micro Center today, but now I think I'll go with this cheesy one instead. Priced right at less than $10, and the removable lid is a nice touch.
18
4ad 12 hours ago 2 replies      
Non-stackable case?!?!?

Why, just why did they think that a curved top was more important than the ability to stack more devices together, or to make it easier to fit in a simple and precise slot?

19
Ellipsis753 13 hours ago 0 replies      
The site is down for me. Here's a cached version: https://webcache.googleusercontent.com/search?q=cache:MXZRQ7...
20
MrBlue 12 hours ago 0 replies      
No power button?
Anonymouth: Authorship anonymization framework drexel.edu
8 points by programmernews3  4 hours ago   discuss
EFF and Eight Other Privacy Orgs Back Out of NTIA Face Recognition Talks eff.org
118 points by jdp23  14 hours ago   62 comments top 6
1
ChuckMcM 11 hours ago 3 replies      
Hmm, an alternative strategy might be to start an EFF program to photograph every law enforcement officer and add them to a facial recognition database. Have volunteers take pictures of people showing up at the police academy, people in uniform doing their job, and people going into and out of employee-only entrances to law enforcement facilities. They can make this database available for DefCon 'spot the fed' sessions and for other interested parties.

That will certainly keep the conversation going, although in a less civilized way. Perhaps it could bring enough pressure to bear on the NTIA to get them to reconsider basic privacy safeguards.

2
mVChr 12 hours ago 3 replies      
> Communities such as San Diego, California are using mobile biometric readers to take pictures of people on the street or in their homes and immediately identify them and enroll them in face recognition databases.

In their homes? I'm pretty sure this is illegal. Or is it not if they are visible from a public area?

3
arca_vorago 13 hours ago 6 replies      
This brings up something I've been wondering about for a while: the legality of masks and other devices for identity protection. Supposedly, due to protests and anon activity, there has been an increasing push to make publicly wearing masks illegal, but I think it's a slippery slope to go down. If in a few years' time I have to assume that whenever I'm in public my face is being fed into recognition systems, why should I not have the right to wear a mask or something similar?

Of course the knee-jerk argument is that wearing masks (both digitally and offline) creates a different personality which is more willing to engage in illegal activity, but for me, a staunch Constitutionalist, that still doesn't justify making it illegal.

This is, at bottom, about the reduction of anonymity in the real world as much as in the digital one.

One more tool of control in the belt of the oligarchy.

4
rlvesco7 9 hours ago 0 replies      
Who are the companies that are pushing this? The article, surprisingly, doesn't say.
5
kefka 13 hours ago 4 replies      
Yes, and?

With OpenCV, I already have face detection, face recognition, eye detection, cascade creation (for detection of any feature I wish), a retinal model in conjunction with retinal scanning via webcam, and other power tools in that bag.

And it's all open source. It's easy enough, I made it myself: https://github.com/jwcrawley/uWHo
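For a sense of how little code the detection step takes, here is a minimal Python sketch using OpenCV's stock Haar cascade (it assumes a recent opencv-python build that exposes cv2.data.haarcascades and a webcam at index 0; it is not taken from the uWHo project):

    import cv2

    # Stock frontal-face Haar cascade shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # first attached webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            # Draw a box around each detected face.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()

Recognition (matching a detected face to a known person) needs a trained model on top of this, but the detection half really is that accessible.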

6
stox 6 hours ago 0 replies      
Actually, you can change the dimensions of your face. It is very painful, and I do not recommend it to anyone voluntarily.
A New Theory of Distraction newyorker.com
6 points by danboarder  2 hours ago   2 comments top 2
1
nyc_cyn 1 minute ago 0 replies      
A helpful tool to combat distraction: http://focusr.co
2
danboarder 16 minutes ago 0 replies      
I find a type of creative freedom in allowing myself to be 'distracted' and interested in many things -- this is the opposite of focus and intentional attentiveness, but I think giving oneself intentional time for distraction is a valuable creative tool, leading to new ideas.

The article touches on this, concluding with a brilliant observation that I think is spot-on:

"... distraction is scary for another, complementary reason: the tremendous value that weve come to place on attending. The modern world valorizes few things more than attention. It demands that we pay attention at school and at work; it punishes parents for being inattentive; it urges us to be mindful about money, food, and fitness; it celebrates people who command others attention. As individuals, we derive a great deal of meaning from the products of sustained attention and concentrationfrom the projects weve completed, the relationships weve maintained, the commitments weve upheld, the skills weve mastered. Life often seems to be about paying attentionand the general trend seems to be toward an ever more attentive way of life. Behind the crisis of distraction, in short, there is what amounts to a crisis of attention: the more valuable and in demand attention becomes, the more problematic even innocuous distractions seem to be."

and then this insight toward the end:

"as I read Crawfords solemn prescriptions for the elimination of distraction, it occurred to me that we might have everything backward. What if, in fact, were not very good at being distracted? What if we actually dont value distraction enough? It may be that, with our mobile games and Twitter feeds and YouTube playlists, weve allowed distraction to become predictable and repetitive, manageable and organized, dull and boringtoo much, in short, like work."

Clojure Repl in Excel github.com
63 points by sea6ear  10 hours ago   13 comments top 8
1
escherize 7 hours ago 0 replies      
If you need to produce some spreadsheets from Clojure, there's a wonderful library by Tom Faulhaber that lets you define a spreadsheet as a template, then simply turn Clojure data structures into an .xlsx file.

He gave a talk on it at Clojure West 2015 [1].

[1] https://www.youtube.com/watch?v=qnJs79W0BDo

2
JadeNB 8 hours ago 0 replies      
This may be a naïve question, but, if it is possible to install without administrator privileges (which we should be trying to get everyone accustomed to doing anyway), then why is it set up to install with them by default?
3
mraison 5 hours ago 0 replies      
Is there any chance to have this working with the OSX version of Excel?

I don't know much about the Excel environment and ClojureCLR so I don't know if it's feasible at all.

4
sytringy05 7 hours ago 0 replies      
Wow. whamtet must have had a very particular itch that needed scratching....
5
detaro 8 hours ago 0 replies      
Very cool!

kinda funny: Visual Studio project. Excel on Windows. "__MACOSX"-folder in .zip-file ;)

6
reilly3000 9 hours ago 0 replies      
I was just lamenting about this not existing and how useful it would be. Woot!
7
mikerichards 6 hours ago 0 replies      
Good to see ClojureCLR getting some cool usage. If only Visual Studio had some awesome integration like the Intellij Clojure plugin Cursive.
8
hacker_9 9 hours ago 2 replies      
Cool concept. Still don't think it's worth learning the lisp syntax for though.
Becoming a contractor programmer in the UK github.com
193 points by medwezys  19 hours ago   145 comments top 26
1
monkeyprojects 17 hours ago 4 replies      
As a contractor of many years' standing (I started contracting when the internet appeared in 1994), I started reading the article hoping it was correct and finished it with my fingers covering my eyes in fear. Sadly, I don't have time to correct the many misconceptions and total inaccuracies it contains.

It's a shame really, as good articles are hard to find and I'm sure many Americans would find the differences between the UK and US markets very interesting...

However, if you really want to be a contractor, https://www.ipse.co.uk/advice/articles/starting-out has a lot of advice on starting out.

http://www.contractoruk.com/first_timers/ also has a lot of advice, although as a new starter I would avoid the general part of their forum...

And there is a reason why people use limited companies. You can work as a self-employed person, but many clients stop when HMRC come knocking, while agencies have been legally barred from employing people as self-employed since the 1970s...

2
ed_blackburn 1 hour ago 0 replies      
What this does not touch on is: be prepared to be a grease monkey, roll your sleeves up and get your hands dirty, and most importantly learn how to make recommendations without taking it personally if they're ignored for seemingly irrational reasons. Organisational dysfunction is the norm, not the exception.

Broadly speaking, I've seen most of my work fall into these categories:

a) Help! We need someone competent to aid us in a murky project.
b) We are a dysfunctional organisation who require transient developers to put up with our modus operandi.
c) We need your experience and expertise for a gap in our project.

I've found (c) is best, but (b) pays best, though it can be stressful if you're passionate about quality, engineering practices or process; and (a) is often relatively short term but can garner kudos and create better opportunities.

Most companies don't hire contractors because they're doing swimmingly. Often it's because they have some degree of dysfunction. For example, large institutions in the City regularly operate as a parody of The Mythical Man-Month. Expect Waterfall, PMO, silos of BA, Dev, QA; manual UAT; cookie-cutter templates for everything. Expect most business interaction to be via a PM and scrums to be lengthy, torturous ordeals. (This is why companies like ThoughtWorks do so well, and why I expect some serious disruption in the coming years from startups targeting City companies.)

Expect people to ask for your advice and for you to mentor less experienced developers. Do not expect your advice to be implemented - or rather, expect it to be watered down with compromise by non-technical councils.

I really like contracting. I enjoy the flexibility, variety and the challenges. I enjoy the people and the skills I learn, and now that my network has expanded and I have earned a reasonable reputation, I enjoy the better projects.

I second the sentiment about joining IPSE and hiring a decent accountancy. Don't worry about their portals or how shabby a website may look; pick them based on their competency.

Bite the bullet. Go for it!

3
jackgavigan 18 hours ago 4 replies      
> The fixed rate VAT does not let you reclaim VAT, but you pay a lower rate than you charge your clients. So if you're a developer contractor you'll be adding 20% VAT to your client invoices, but only pay 14.5% to HMRC, keeping the remaining 5.5% to the business.

This is not correct. The way the flat rate VAT scheme works is you add VAT to your bill, then pay 14.5% of your "flat rate turnover" to HMRC. Flat rate turnover includes the VAT.

For example, let's say you did £100 worth of work for a client. You invoice them for £100 + 20% VAT = £120 in total. You must then pay 14.5% of £120 to HMRC (i.e. £17.40).
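A few lines of Python make the difference from the article's wording explicit (a sketch using the figures above and the 14.5% flat rate quoted there):

    # Flat Rate Scheme arithmetic (sketch, using the figures quoted above).
    net_invoice = 100.00                       # work billed to the client, ex VAT
    vat_charged = net_invoice * 0.20           # 20.00 added to the invoice
    gross_invoice = net_invoice + vat_charged  # 120.00 = "flat rate turnover"

    paid_to_hmrc = gross_invoice * 0.145       # 17.40 - 14.5% of the *gross*
    kept_by_business = vat_charged - paid_to_hmrc

    print(round(paid_to_hmrc, 2), round(kept_by_business, 2))  # 17.4 2.6

So the business keeps £2.60 of the £20 VAT it charged, not the £5.50 that a naive reading of "keeping the remaining 5.5%" suggests.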

4
rossriley 18 hours ago 3 replies      
In terms of the recruiters section, whilst I appreciate you may not have had a good experience, this is probably the way a lot of people get contracting work, at least until you have a fairly big network of contacts.

I'd be more interested in knowing what agencies do have a good reputation / good developer experience along with the note to beware of the cowboys.

5
Nursie 18 hours ago 6 replies      
Why bother with a third party company registration service? You can do it yourself pretty easily through companies house.

Do companiesmadesimple have any value-add?

While I do share your cynicism about recruiters, I have also got most of my contracts through them. There are bad recruiters (vague job descriptions, never call you back) and there are good ones (We need someone here, you fit the bill, can we arrange a time to talk to my client?).

Learning to swallow your distaste and listen to them as if they were worthwhile human beings is a useful skill. Some (few) of them actually are.

(--edit-- good guide in general! I don't just want to harp on the negatives!)

6
hunglee2 17 hours ago 1 reply      
Good to see recruiters getting some love in the comments here. If there remains a place for 3rd-party recruiters, it's in the contract market. A few reasons:

Payments processing - a lot of employers will not want to deal with processing payments for contractors. Indeed, it's often the main reason why they might go for contract vs permanent resource in the first place. I've seen situations where contractor and employer have discovered each other, only for the employer to then ask the contractor to 'go through agency X'. No one likes admin load, and we'd all get rid of it if we could.

Reduce assessment load - job opportunities cycle much more frequently on the contract market - typically 3-6 months. This means a lot more time involved in opportunity sourcing / vetting, potentially a hugely time consuming exercise. A good recruiter will be able to filter these opportunities for you, and only get you the most suitable gigs

Reduce downtime - going without agencies entirely means relying on your own market gravity as a developer of renown to secure job opportunities. This is do-able for high profile developers, of course, especially those who live in metropolitan areas and are prominent on the open source / community / events scene. However, if you work on proprietary software, have heavy family obligations and live outside of a big city, you're probably going to find agents very useful indeed.

Salary / rate negotiations - they are going to take their 15-20%. But they may end up earning you more by negotiating hard with the end employer. Certainly an inexperienced contractor is at risk of being exploited, but that's true whatever type of contract you sign. A good relationship with a trusted agent can really help you make more on your rate, especially if you are not naturally comfortable negotiating.

And I say all this as a maker of a tech hiring platform that doesn't allow 3rd party recruiters on it. They have their place - just a smaller one than they currently occupy.

Great article in all other areas

7
gadders 18 hours ago 1 reply      
I'd also mention joining IPSE (http://www.ipse.co.uk/), which is a trade organisation for freelancers. They provide various benefits including IR35 insurance that covers investigations.

Freeagent is a good online accounts package as well, but you need to find an accountant that uses it.

8
laverick 7 hours ago 1 reply      
Finding a good accountant isn't easy, but I would recommend looking for something better than Crunch. They were the default option "just pick Crunch", but I had to do a lot of tax research myself and recommend favorable approaches to their accountants on certain non-standard things. I also felt their first line of support is spread very thin and they put a lot of bookkeeping work on me. Then a lot of services cost extra each month, making their total pricing uncompetitive. In the end I got everything done and they did their best to rectify issues, but it was a big burden on my time and significant amounts of stress.

Maybe I would have had this issue with any accountants as a first time contractor, but I know it could have been better and I would have happily paid more to make it so. Do yourself a favor and pay extra to get an accountant that handles more things on your behalf (bookkeeping, VAT filing, etc) preferably with a simple non-proprietary interface.

I haven't tried these two services but if I go back to a UK accountant they'd be at the top of my list.

http://www.3wisebears.co.uk/ (contractor focused)
http://www.proactive.uk.net (more startup focused, very helpful via phone)

9
kybernetyk 15 hours ago 1 reply      
>A service like www.companiesmadesimple.com (aff.) will make the incorporation process easy. It usually takes up to three business hours.

Yeah, better to do it yourself. I'm from Germany and I set up an Ltd directly with Companies House myself. It's really (I mean really really) straightforward and they even accept PayPal to pay the £15 fee. It took me ~20 minutes and the company was incorporated the next day.

Those formation companies are usually just an unnecessary middleman.

10
Wintamute 15 hours ago 0 replies      
Uh oh, if there's one thing us Brits like to do, it's wrangle over the finer points of tax, employment and contractual law. This thread is going to get long, involved and possibly slightly testy :)
11
celticninja 18 hours ago 0 replies      
One thing I would say is that I have had good fortune with recruiters; providing they are decent, they can ensure zero downtime between contracts, and if financial stability is a concern then this should not be overlooked. One thing I have found is that finding a good recruiter and sticking with them has been more useful than calling a handful of agencies and using the shotgun approach.

I would note that none have ever told me the name of a client prior to me agreeing to be represented by them, and I have been introduced to a client company who then asked me to work directly for them and bypass the recruiter (and any fees the client company would be paying to them). I refused, as it was a single contract for 6 months and burning bridges with anyone isn't worth it for that sort of duration. Plus, the recruiter had got me the interview within 7 days of me getting in touch with them.

YMMV, just my 2 cents.

12
martinald 15 hours ago 1 reply      
Do not ever be a sole trader. It means you are personally liable for anything that goes wrong.

Say you end up with a dreadful contract and everything goes wrong and the client sues. If you're a ltd company you can dissolve the business and pay them out of whatever assets your company has.

If you're a sole trader you're liable for all the costs. Until you're personally bankrupt. Sure, there may be clauses in the contract etc, but if someone is mean enough they can make your life very difficult and they will exploit the fact you are a sole trader.

13
kaolinite 14 hours ago 1 reply      
I recently started working as a contractor (just for a few months to bring some cash in so I can continue to work on my business).

As I have a business already, I was considering doing the contracting under the business to reduce liability. Does anyone know whether this will increase the amount of tax I have to pay? I figured that I would have to pay corporation tax on any money I bring into the company as well as personal tax on any money that I pay myself as a salary. Is that the case?

14
Keats 8 hours ago 0 replies      
I read that book http://www.amazon.co.uk/Contractors-Handbook-Expert-Guide-Fr... when I started contracting last year and thought it covered quite a bit.

Another thing I'd recommend is finding an accountant that uses FreeAgent or something similar rather than their homegrown system. Also, if you use an accountant, they will do the company registration for you.

15
kifler 18 hours ago 4 replies      
This looks incredibly helpful - I wish someone would do one for the US/Canada
16
flog 4 hours ago 0 replies      
17
asherkin 17 hours ago 0 replies      
> As far as I understand, it is important to have a contract that allows you to:

> [...]

> If the client insists on including the clauses above, be ready to move on.

These lines appear to contradict themselves, am I missing something?

18
tkyjonathan 13 hours ago 1 reply      
You guys need to learn some accounting. Don't be afraid of book-keeping and be keen on any rule that helps you save on taxes. Overall, a limited company can save you around 20% on taxes over a salaried worker. That alone makes it difficult to go back to full-time employment.
19
new299 18 hours ago 3 replies      
It's probably worth noting that you only need to register for VAT if you're expecting to make more than £82K in VATable sales per year.

If your contracting is mostly remote and outside the UK, you don't need to register or charge VAT (as I understand it).

20
boothead 18 hours ago 5 replies      
Good advice, especially the bit about Crunch. The first time I contracted I let my accountant talk me into using their horrible spreadsheet. I couldn't make myself use it, it was that bad, and my accounts were a mess. Now my accountant (Nimble Jack) bundles FreeAgent (which, when I looked into it, was better than Crunch and has an API) and keeping on top of things is much easier.

One additional thing: if you're married, make sure your spouse is a director of the company as well (especially if they're not working). You can be hugely tax efficient in this manner and can extract nearly £80k per year from the company with no personal tax to pay (there will still be corporation tax to pay on profits).

21
ticksoft 17 hours ago 3 replies      
I notice that business bank accounts are always mentioned in these sorts of lists. To me they seem like an extra overhead for no reason. You essentially have to ask a bank's permission and pay them just so they can accept your payments? So weird.

I could understand it if you had a business with employees and you sold products on a daily basis with loads of transactions, or if you plan to get into debt... but for invoicing someone every month? Personal account seems fine, and I'm sure people have several of them already.

22
stefek99 13 hours ago 1 reply      
I would do all the formalities after landing the first contract.

It takes like 15 minutes online to establish a limited company.

And when you buy insurance - how do you know which one? My current contract requires me to have £10 million employer liability insurance (I don't hire anyone) but I have to have it anyway.

I guess I'll add some comments in this thread... (if time allows)

23
zimpenfish 18 hours ago 3 replies      
My advice would be to use an umbrella company and let them deal with all the tax hassles - they have an entire staff purely for this. You will take home slightly less (no opportunity for tax "optimisation", but I disagree with that on principle anyway) but then also have no exposure to HMRC coming after you 6 years later saying "Where's our £10,000?" (has happened to several people I know.)

(edit for spelling)

24
thruflo 10 hours ago 0 replies      
Tax.

You invoice £4k. It gets paid into your bank. Congratulations. You just earned £3k.

25
M8 17 hours ago 2 replies      
The salary ceiling is very low in the UK.
26
ForHackernews 16 hours ago 2 replies      
I've heard that in the UK, it's much more advantageous to work as a contractor than a full-time employee. Is there any truth to that? What advice would HN offer to a US-based developer looking to move to the UK?
Emacs IPython Notebook tkf.github.io
72 points by sea6ear  14 hours ago   7 comments top 2
1
eeZi 34 minutes ago 0 replies      
Augmented Reality Software Test Bed htmlfusion.com
8 points by jessev  7 hours ago   1 comment top
1
jessev 7 hours ago 0 replies      
For a video demo check out https://youtu.be/psW_osCT2nw
How Apples Transcendent Chihuahua Killed the Revolution longreads.com
10 points by bootload  10 hours ago   2 comments top 2
1
swombat 25 minutes ago 0 replies      
Sorry to be so critical, but this article really feels like it's trying too hard to make some elaborate point. It seems kind of like a long bitching session about modern technology that somehow manages to blame everything on Apple (because that's a popular target, I guess). Or maybe it changes target halfway through - I don't know, after about 5 minutes of reading I started skimming.

I think the eventual point is that we are so busy with Apple's gadgets that we don't have the time to consider whether the world is going the way we want.

If so, again that's a cheap shot at Apple, which is hardly the main driver or beneficiary of modern society's obsession with vacuous and constant entertainment over substance.

Apple didn't kill the revolution (if the revolution has indeed been killed). Society killed the revolution. We're heading for a Brave New World type society of sated indifference, and most people seem to be ok with that (even though a few vigorously disagree). That's hardly something to lay at Apple's feet.

2
cmsj 18 minutes ago 0 replies      
Well that was pretty miserable.

I sort of feel like the author has entirely surrendered to the ennui he describes, but he doesn't have to. He doesn't have to tend to his Vine profile, he doesn't have to accept the implicit delegation of tasks to him by email.

Particularly in the social media areas, the claimed obligation is really nothing more than vanity. It doesn't matter in the slightest if I am popular on Instagram - to attempt such a thing would only be an exercise in self-gratification.

I also have a fundamental problem with claims of planned obsolescence (all of the devices that run last year's Apple OS upgrades will run this year's. Talk to some Apple engineers about how much time they spend trying to make things work for users on older devices - this is done not for evil reasons, it's done because they care).

Looking back and panning the original iPhone as being crude and slow seems somewhat unfair given the vast increases in hardware performance that have happened since. Yes, the iPhone was pushing the hardware limits in 2007, and yes it was a primitive product compared to what we have now, but all phones back then were slow - the difference was that the others were ugly and ill-conceived, as well as being slow.

It seems very strange to me to claim that the purpose of the iPhone was to teach us how to accommodate treating a tiny device carefully. The only way to make a networked, general-purpose computer fit in your pocket, is to make it the size of your hand, which means it's small, its components are small, its case is packed tight with hardware, and its input surface is small. If the author feels this can be fixed, he stands to make a considerable amount of money, presumably by inventing holographic UIs, or direct brain interfaces. Otherwise, I will continue to think that the purpose of the iPhone was to put a computer in my pocket. That it is fragile and needs to be used precisely, is a necessary compromise for its form factor.

Is it possible to unwittingly make yourself a slave to the technology? Of course, but it's possible to unwittingly make yourself a slave to almost anything. I think that is the key failing of this piece, it seeks to place the technology at the centre of the argument, with Apple standing above us, herding us into digital stables. Instead, we are at the centre of the argument. We control how obligated we feel towards any ephemeral, abstract collection of bytes.

So, delete your Facebook profile and go for a hike. Or, don't. Either way, own your choice and never submit to ennui. You chose, not someone/something else :)

Apples Bitcode Telegraphs Future CPU Plans medium.com
154 points by 127001brewer  18 hours ago   82 comments top 13
1
drfuchs 14 hours ago 3 replies      
I managed to ask Chris Lattner this very question at WWDC (during a moment when he wasn't surrounded by adoring crowds). "So, you're signaling a new CPU architecture?" But, "No; think more along the lines of 'adding a new multiply instruction'. By the time you're in Bitcode, you're already fairly architecture-specific" says he. My hopes for a return to big-endian are dashed. [Quotes are approximate.]
2
monocasa 16 hours ago 7 replies      
I thought that just being LLVM bitcode wasn't enough to guarantee portability, as the author assumes it is.

There are ABI-specific pieces that are still not abstracted in the bitcode, like struct packing rules.

3
pilif 3 hours ago 1 reply      
I really hope this bitcode feature isn't going to cause a lot of trouble for app developers. Up until now, the app you built on your machine (and tested on your devices) was the app you submitted and the app running on your customers' machines.

In the future, when your app crashes on customers' machines and doesn't on yours, how are you going to debug it, much less explain it to Apple and have them fix the issue for you?

This is especially scary when you consider the turnaround time of ~2 weeks before your new build becomes available in the app store for you to test.

4
nickpsecurity 7 hours ago 0 replies      
This is a smart move. It's essentially what the System/38 (later AS/400 & IBM i) did. They had an ISA or microcode layer that all apps were compiled to. Then, that was compiled onto whatever hardware they ran on. When IBM switched to POWER processors, they just modified that low layer to compile to POWER. That let them run the old apps without recompiling their original source. They used this strategy over and over for decades. Always one to keep in mind.

Going further, I think a team could get interesting results combining this with design-by-contract, typed assembly, and certified compilation. Much like verification condition generators, the compilation process would keep a set of conditions that should be true regardless of what form the code is in. By the time it gets to the lower level, those conditions & the data types can be used in the final compile to real architecture. It would preserve a context for doing safe/secure optimizations, transforms, and integration without the original source.

5
exelius 15 hours ago 1 reply      
How does dynamic linking work in a scheme like this? Would any pre-compiled libraries need to be distributed as Bitcode as well?

Due to the concerns above, IMO Bitcode is less about compatibility and more about app thinning. It's pretty easy to go from Bitcode to 4 different variants of ARM, but another thing entirely to go from Bitcode to both x86 and ARM. Currently, developers have to ship binaries compiled for multiple architectures, which increases app sizes. I suspect Apple is just building a workflow that creates a device-specific version of each app, and having developers compile to Bitcode simplifies app submission.

6
Ruud-v-A 13 hours ago 0 replies      
Microsoft has been doing a similar thing with [.NET Native](https://msdn.microsoft.com/en-us/vstudio/dotnetnative.aspx) for a while now, though MSIL is much higher level than LLVM IR. With .NET Native, you _can_ submit your app once and run on ARM and x86, 32-bit or 64-bit.
7
Corrado 2 hours ago 1 reply      
When I read something like this my mind wanders and I imagine a future MacBook Pro that, instead of a single Intel processor, contains many ARM processors. 10-15 ARM processors acting as one could offer a whole lot of performance when you needed it (use all of them), and a whole lot of energy saving when you didn't (use 1 of them). With the current trend of multi-core CPUs, I see this as the ultimate form of the architecture.

Now, whether Apple will do something like this or not is anyone's guess, but it's nice to dream of the possibilities. :)

8
stcredzero 15 hours ago 3 replies      
With Bitcode, Apple could change OS X into something like the old TAOS JIT-based OS. Except for a small kernel, all TAOS executables were intermediate representation files. This IR could be translated to real machine code at the same speed as disk access, and resulted in code running at 80-90% native speed on most platforms.

With software like that, Apple could become independent of any particular software architecture.

(TAOS dates from the 90's and is hard to google, but is mentioned in some papers. And yes, the JIT translator could do that even on 90's machines.)

9
chuckcode 13 hours ago 1 reply      
Note that GCC considers its monolithic design a feature, to encourage companies to contribute code back, rather than a painful lesson to be learned from...

http://gcc.gnu.org/ml/gcc/2004-12/msg00888.html
https://gcc.gnu.org/ml/gcc/2007-11/msg00460.html

10
legulere 12 hours ago 0 replies      
I wonder how Bitcode will play with profile-guided optimization. Will you also provide PGO information to Apple, or will they generate it?
11
jarjoura 11 hours ago 0 replies      
My suspicion is that bitcode allows the App Store team to provide Watch/iOS-specific binaries for an individual device. Right now the solution is to create fat binaries that eat precious space. The Watch, being even more constrained, can use all the help it can get.
12
serve_yay 14 hours ago 0 replies      
Yeah, and their patent applications telegraph the iMac with the fiber-optic shell that's been just around the corner for 10 years.
13
gojomo 8 hours ago 0 replies      
I wonder if this could also be a way to protect compiler-tech and silicon trade secrets, even after they're widely used in the field? Perhaps only Apple ever compiles the final, deployed versions of apps.
Gabriel's Horn wikipedia.org
97 points by wz1000  20 hours ago   26 comments top 5
1
dubin 15 hours ago 4 replies      
Also check out the Menger sponge, which has infinite surface area and zero volume: http://en.m.wikipedia.org/wiki/Menger_sponge
2
jveld 9 hours ago 4 replies      
I've thought it was paradoxical that an infinitely long curve could enclose a finite area ever since I first took calculus. For example, the integral of 1/x^2 from 1 to infinity is just 1, even though the region stretches on forever (1/x itself is the borderline case: its antiderivative is ln|x|, so that integral diverges).

I wonder why it takes three dimensions before people start getting upset.
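For reference, the horn is the surface of revolution of y = 1/x (for x >= 1) about the x-axis, and the textbook computation (a sketch in LaTeX) is exactly that contrast: the volume integrand is 1/x^2, while the surface-area integrand is bounded below by 1/x:

    V = \pi \int_1^\infty \frac{1}{x^2}\,dx = \pi,
    \qquad
    A = 2\pi \int_1^\infty \frac{1}{x}\sqrt{1 + \frac{1}{x^4}}\,dx
      \ge 2\pi \int_1^\infty \frac{dx}{x} = \infty .

So a finite amount of paint fills the horn, yet no finite amount covers its surface -- the same tension as the 1D example above, pushed up a dimension.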

3
mgob 7 hours ago 0 replies      
I always describe this to "non-mathy" people when they ask what could possibly be fascinating/beautiful/etc about math. I'd like to think I've changed at least a mind or two.
4
TeMPOraL 11 hours ago 2 replies      
I wonder what sound it would make. Can we model such a horn, assuming a diameter from, say, 1 cm down to some small value p, and simulate it?
5
davidrusu 6 hours ago 2 replies      
This reminds me of a Putnam problem from a few years back, was something along the lines of:

Construct a set of discs in R^2 s.t. no infinite straight line can be drawn without intersecting at least one disc, and the sum of the areas of all the discs is finite.

       cached 17 June 2015 10:02:03 GMT