hacker news with inline top comments    8 Feb 2015
Samsung Global Privacy Policy - SmartTV Supplement
points by tscherno  5 hours ago   52 comments top 13
1
imgabe 3 hours ago 6 replies      
> You may disable Voice Recognition data collection at any time by visiting the settings menu. However, this may prevent you from using all of the Voice Recognition features.

from here: https://www.samsung.com/uk/info/privacy-SmartTV.html

So, disable it. I don't understand everybody's fascination with voice recognition. I don't find it more convenient at all. I'd much rather just push a button. It's really not that complicated.

2
amluto 1 hour ago 2 replies      
It seems to me that, if you have one of these, you live in a two-party consent state (e.g. California), and you invite a guest who hasn't clicked the EULA over, then someone is committing felony wiretapping.

I would love to see a TV vendor prosecuted for this.

3
patcheudor 1 hour ago 0 replies      
I recently collected a bug bounty from Samsung on a crypto implementation flaw I found in some of their software. The fix is still being rolled out and given the impact I'm not going to disclose right now, rather I'll let Samsung handle that when the time is right. Anyway, the team at Samsung was responsive and they seemed like they genuinely cared about security. However, based on what I've seen in their products and those from their competitors the first thing I would do is pen-test the voice recognition feature, then turn it off no matter the outcome. The fact is, if it must communicate with a back-end server to work, then it becomes incredibly hard to lock the solution down. Even if the TV is properly validating the public cert of the server when doing the TLS handshake, there's got to be a mechanism on the TV for updating the trusted root store because at the end of the day, certs need to expire and thus must be updated. On a few non Samsung smart TV's I've looked at over the years, updating the trusted root store on the TV is as "easy" as man in the middling (MitM) the network the TV is on so that web traffic goes to a site I own which has a link to the my.cer root CA that I generated and am using in my TLS MitM solution. From there I just bring up the web browser on the TV, click on the my.cer link and go through the prompts to install the root CA. After that point all traffic from the TV can be decrypted on the wire.

Now it is fair to say that the attack I just described requires the ability to MitM the network and have physical access to the device, however, remember that these TV's use an IR remote & all an attacker needs is visual access to the TV. If it can be seen through a window it can be controlled through a window and these things typically don't require a password to modify the WiFi settings. Some smart TVs also have proxy settings which again, typically don't require a password to modify.

Given what I just covered, think hotel. From a risk perspective that's what I'd be most worried about. I wonder how many are installing smart TVs with voice recognition? For all other scenarios basically the situation in many cases on the ground is that you are secure because no one is targeting you. In the case of a hotel, someone could be targeting everyone. Such an attack could prove valuable, especially if done in executive suites near financial centers.
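(As a sketch of how little it takes to mint the kind of self-signed root CA the "my.cer" step above relies on, here is a minimal Python example using the pyca/cryptography library. The name and validity period are illustrative, and nothing here is specific to any TV or to Samsung; the hard part for an attacker is not generating this, it's getting the device to install it, which is exactly why unauthenticated WiFi/proxy settings matter.)

    # Minimal self-signed root CA, the kind of "my.cer" an interception proxy
    # would offer for installation. Requires the pyca/cryptography package.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Test Root CA")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)                  # self-signed: subject == issuer
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )
    # Anything that trusts this certificate will accept any leaf cert it signs.
    open("my.cer", "wb").write(cert.public_bytes(serialization.Encoding.PEM))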

5
frik 27 minutes ago 1 reply      
It's not only Samsung Smart-TV but all cloud-based speech recognition products, right?

(Nuance/Apple Siri, Microsoft Cortana, Google Now, IBM Watson Speech, Amazon Echo, LG-Smart TV, etc.)

From a consumer perspective you want an offline speech product like Nuance Dragon NaturallySpeaking: http://en.wikipedia.org/wiki/Dragon_NaturallySpeaking (it's the same technology that powers Nuance cloud-based products like Apple Siri, IBM Watson, etc.)

6
hughlomas 2 hours ago 1 reply      
I think Amazon's Echo device is doing this the proper way, which "uses on-device keyword spotting to detect the wake word. When Echo detects the wake word, it lights up and streams audio to the cloud". It seems like a technical or design failure on Samsung's part to not feature similar functionality.
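(A rough sketch of that on-device "keyword spotting first, cloud second" pattern; read_audio_frame, detect_wake_word and stream_to_cloud are hypothetical stand-ins for real microphone capture, a local keyword-spotting model and an upload path, not any actual Echo or Samsung API.)

    # Nothing leaves the device until the local detector fires.
    import collections

    def read_audio_frame():
        # hypothetical stand-in: would return ~20 ms of PCM from the microphone
        return b"\x00" * 640

    def detect_wake_word(frames):
        # hypothetical stand-in: a small on-device model scores the recent audio
        return False

    def stream_to_cloud(frames):
        # hypothetical stand-in: only reached after the wake word is heard locally
        print("streaming %d buffered frames" % len(frames))

    recent = collections.deque(maxlen=50)      # ~1 s rolling buffer, kept on-device
    for _ in range(10_000):                    # bounded loop for the sketch
        recent.append(read_audio_frame())
        if detect_wake_word(recent):
            stream_to_cloud(list(recent))
            recent.clear()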
7
jsilence 56 minutes ago 0 replies      
Given that voice recognition is possible offline on a RaspberryPi Version 1 [1] I'm wondering why they have to send the recorded audio to the cloud in the first place.

[1] https://jasperproject.github.io/

8
brianpetro_ 1 hour ago 0 replies      
This immediately brought to mind Orwell's telescreens.

http://en.wikipedia.org/wiki/Telescreen

9
_asummers 1 hour ago 3 replies      
As far as networking is concerned, what should I google for separating a device like this onto its own internal private network? I have devices that I want to whitelist traffic for while not affecting other devices in my home.
10
aw3c2 3 hours ago 1 reply      
If you submit things from aggregators, please try to find the actual source and submit that instead.

Submitted: https://netzpolitik.org/2015/samsung-warnt-bitte-achten-sie-... which links to http://martingiesler.tumblr.com/post/110325577280/samsung-wa... which links to http://mostlysignssomeportents.tumblr.com/post/110300533107/... which links to http://boingboing.net/2015/02/06/samsung-watch-what-you-say-... which links to http://www.reddit.com/r/technology/comments/2uuvdz/samsung_s... which references https://www.samsung.com/uk/info/privacy-SmartTV.html

On the other hand, the HN rules suggest doing things like this if you want to cherry pick a certain aspect of a page...

11
teapowered 2 hours ago 0 replies      
It's about targeted advertising - arguing with your spouse? Next ad break we show you adverts for lawyers.
12
Havoc 3 hours ago 0 replies      
Has been in the news before. Voice recognition is done on a server farm, meaning it needs to get sent there & possibly get intercepted.

Not ideal but doesn't strike me as a big risk

13
shmerl 41 minutes ago 0 replies      
A good lesson why one shouldn't use any systems with DRM.
Ecuador becomes the first country to roll out its own digital durrency
points by prostoalex  1 hour ago   5 comments top 4
1
unwind 1 minute ago 0 replies      
Epic typo in the submission ("durrency"), that is not on the original page. Somebody please fix.
2
nawitus 6 minutes ago 0 replies      
It seems to be a stretch to call this a "digital currency", since it is "directly tied" to the local currency. Besides, digital currencies are not really interesting, as one can make digital transactions of traditional, "non-digital" currencies.
3
nhaehnle 32 minutes ago 1 reply      
Quite frankly, this type of system is the future. Moreover, this type of system already almost exists in many places (consider, for example, how regular people can easily transmit money within SEPA, the Single European Payments Area).

Bitcoin, by contrast, is interesting technology but with weaknesses that make it unsuitable as something that regular people interface with every day (security implications, mostly). It is also economically problematic (widespread use of Bitcoins would mean regressing back to gold-standard times).

Ideally, though, Bitcoin can play a useful role by putting enough pressure on other payment systems to remove any remaining suckiness (mostly the fact that existing payment systems are very bad at international and cross-currency payments).

4
failed_ideas 6 minutes ago 0 replies      
I spent a month in Ecuador last year, and the frustration over the dollarization was palpable. But they got rid of the Sucre because the exchange rate fluctuated too much to be reliable, and I haven't seen anything mentioned on the strategy to stabilize this new digital currency, so I'm not sure how this would differ from the failed Sucre. Digital currencies have been interesting, but not stable enough to rely upon.
Cheapshot A map-based multiplayer shooter game for iPhone
points by borodich  35 minutes ago   3 comments top 3
1
linkeex 2 minutes ago 0 replies      
Gamers' arguments have always been that shooters are artificial environments with no personality involved. In Counterstrike you're fighting against stereotypical terrorists that are in no way related to your personal life.

Here, you "kill" real people with a face. Objective? Motivation?Kill him before he kills me?

I'm deeply concerned about this and find it disgusting.

2
kikki 15 minutes ago 0 replies      
I mean, I get it, but it just looks like it would get really boring fast. Also, this should be submitted as a 'Show HN'.

https://news.ycombinator.com/showhn.html

3
thom 11 minutes ago 0 replies      
Um, does one of the demo avatars have to be Ian Brady?
A Xenon flash will cause the Raspberry Pi 2 to freeze
points by voltagex_  17 hours ago   113 comments top 27
1
tdicola 15 hours ago 4 replies      
Oh neat, I just reproed it with a Pi 2 and a Canon Speedlight flash. I'll put my scope on the power lines and see what's happening when you flash the board. Sounds like from the thread one of the power ICs is photo sensitive.

edit: Wow yeah, here's a look at the 3.3V power line when you flash the board, it drops almost down to 0V and then wildly fluctuates for about 100 nanoseconds: http://imgur.com/hG86pRy

edit 2: Another interesting measurement, with the board _totally unplugged_ and flashing it you can see a big voltage spike on the 3.3V rail. Up to 6-7 volts or so for a few nanoseconds: http://imgur.com/td262QK

I guess not only can you learn about electronics but also Einstein's photoelectric effect with the Pi 2!

2
exDM69 5 hours ago 1 reply      
This reminds me of an old Finnish engineering legend from the early days of Nokia. The guys had just built an important prototype of some network equipment (early GSM base stations IIRC), which was going to be demonstrated for the press. All tests and previous demos had gone fine.

But as soon as the demo for the press started, the machine crashed. The management was upset. Later, the reason was found to be some old EPROM chips that are erased using UV light, and the photographers' cameras had strong flashes that went through the tapes covering the "window" on the chip. This caused the program memory to be corrupted when a photograph was taken.

3
teddyh 3 hours ago 0 replies      
From The Devouring Fungus, Karla Jennings, 1990, chapter 10, The Monster Turns and Falls to its Knees, p. 211:

Another legendary debacle triggered by light hit at a highly publicized affair thrown by IBM, ironic considering that IBM is the master of the seamless image. D. E. Rosenheim, who helped develop the IBM 701, the first mass-produced modern commercial computer, recalled the famous faux pas, which occurred when the company held a dedication ceremony for the 701's installation at its New York headquarters. Top-level executives, the engineering team, and a gang of reporters crowded the ceremony room.

"Things went pretty well at the dedication," said Rosenheim, "until the photographers started taking pictures of the hardware. As soon as the flash bulbs went off, the whole system came down. Following a few tense moments on the part of the engineering crew, we realized with some consternation that the light from the flash bulbs was erasing the information in the CRT memory. Suffice it to say that shortly thereafter the doors to the CRT storage frame were made opaque to the offending wavelengths."

Those who do not know their history are doomed to repeat it.

4
ChuckMcM 15 hours ago 0 replies      
That is fun, reminds me of the 'yelling at the drives slows them down' video.[1]

[1] https://www.youtube.com/watch?v=tDacjrSCeq4

5
Johnythree 13 hours ago 1 reply      
One of the standard tests to gain an EMC Compliance Certificate is a spark discharge test.

Any experienced engineer will have a Spark Generator (Car Ignition coil, spark gap and short Dipole) to test to see if his latest project misbehaves when confronted with Impulse Interference.

As an EMC Investigator I would always carry a spark generator to demonstrate to newby engineers why EMC Compliance is so important.

I've seen a spark from 50ft away crash or reset a microprocessor system. Just the static discharge from walking on carpet is often enough.

6
Animats 11 hours ago 1 reply      
There's nothing mysterious about this. Semiconductor gates are light-sensitive. There's usually carbon black in the plastic of plastic-packaged ICs to prevent interference from light. The opacity isn't perfect, though. For that you need ceramic or metal-encased ICs. Still, this is a rare enough problem that IC data sheets don't specify a maximum tolerated illumination level.

Try some laser pointers, especially towards the blue end of the spectrum where the photons have more energy. You may be able to trigger this effect by pointing at a specific IC.
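(A quick back-of-the-envelope on "more energy toward the blue end", assuming typical red and blue-violet laser-pointer wavelengths of 650 nm and 405 nm:)

    # Photon energy E = h*c/lambda for two common laser-pointer wavelengths.
    h = 6.626e-34    # Planck constant, J*s
    c = 2.998e8      # speed of light, m/s
    eV = 1.602e-19   # joules per electron-volt
    for name, wavelength in [("red, 650 nm", 650e-9), ("blue-violet, 405 nm", 405e-9)]:
        print("%s: %.2f eV per photon" % (name, h * c / wavelength / eV))
    # red: ~1.91 eV, blue-violet: ~3.06 eV, so the bluer pointer delivers roughly
    # 60% more energy per photon to an exposed die.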

7
dietrichepp 16 hours ago 1 reply      
Reminds me of old EPROMs. You can buy special "light sensitive" transistors, but they're really just ordinary transistors with a window in the case, since ordinary transistors are light-sensitive. You can even use an ordinary 1N4148 diode as a solar cell, it just doesn't generate much power.

The fix is simple: apparently, you just have to cover U16, which controls the power supply.

8
mholt 16 hours ago 1 reply      
Shortcut to a video of this phenomenon: http://youtu.be/wyptwlzRqaI?t=1m29s
9
swamp40 15 hours ago 6 replies      
A xenon tube is a spark gap.

If there's anything in this world noisier than a spark gap, I don't know what it is.

I think the first radio transmitters were spark gaps.

The energy flies thru the air, and is coupled onto the power line.

The power supply doesn't cope well with the oscillations, and hiccups.

I see the notes about U16 being photosensitive, but if it is a black epoxy like most IC's, I'm not buying that light gets into it.

It's possible that blue tack shields the EMP a bit.

10
wolfgke 3 hours ago 1 reply      
Question to HN: Does the Raspberry Pi 1 B or B+ also have this problem, or is the Xenon flash problem specific to the Raspberry Pi 2 B? Is somebody willing to do this experiment/has somebody done it?
11
agumonkey 4 hours ago 0 replies      
The new wave of single board computers really exposed me to the amount of failure that can happen at the electrical level. Growing up with large ATX boxes I'd never expect so many things to go wrong.

btw: anyone tried to light-freeze other devices (banana, orange, cubie, etc) ?

12
mikerr 3 hours ago 0 replies      
Here's a pic of the chip in question (so you can cover it up): https://pbs.twimg.com/media/B9Ut_QwIQAACrp_.jpg
13
sqren 12 hours ago 0 replies      
Video demonstrating the issue: https://www.youtube.com/watch?v=wyptwlzRqaI
14
tonteldoos 14 hours ago 0 replies      
A computer with actual strobe-induced epilepsy. Looks like the singularity is closer than we thought.
15
mikerr 4 hours ago 0 replies      
The problem IMO is that the chip in question doesn't have a plastic cover; look how shiny it is in this video: https://www.youtube.com/watch?v=c7p2OcQ7G58

A laser (no EMP!) shone on that chip will also crash the Pi.

16
thought_alarm 16 hours ago 3 replies      
Why would a switched-mode-power-supply chip be photosensitive?
17
yuhong 14 hours ago 0 replies      
As a side note, the power supply chip directly uses the 5V from USB, right? Wonder if it is tolerant of 3.3V as common when running from batteries.
18
pervycreeper 8 hours ago 2 replies      
Slightly OT: is it now possible to run a completely free OS on this new version? I've been getting contradictory info on this so far.
19
arnie001 12 hours ago 0 replies      
It's interesting this was not caught before..
20
nh 16 hours ago 1 reply      
Good find OP! I wonder how many electronic devices would have similar problems if we took out the covers?
21
bitwize 7 hours ago 1 reply      
Someone commented "Fire photon torpedoes..."

In light of Heartbleed and Shellshock, I propose calling this the Photon Torpedo vulnerability.

22
ozy23378 16 hours ago 0 replies      
That Pi isn't very photogenic.
23
Alupis 14 hours ago 3 replies      
I just ordered two of these, and they will arrive on Tuesday.

Makes me sad because I'm imagining a Raspberry Pi 2.1 release in the near future now...

24
nacs 16 hours ago 0 replies      
This was clearly designed to sell more opaque cases. /s

Or to look on the upside, the Pi now comes with a free photodetector.

25
thrownaway2424 16 hours ago 1 reply      
That comments thread contains a head-smacking quantity of ignorance. "Is it the light or the EM pulse?" What?
26
psgbg 14 hours ago 0 replies      
27
pmalynin 16 hours ago 2 replies      
Explanation: Cameras have capacitors that charge up in order for the flash to happen. They are usually quite powerful. Now during the discharge (aka flash) what you have is very high-energy electrons flowing across the wire creating a magnetic field; coupled with the electric field of the electrons you get a mild EMP.

And if it is light sensitivity, then it should be tested with a bright continuous light.
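(For scale, the energy in a typical hot-shoe flash capacitor, assuming illustrative values of roughly 1000 µF charged to 300 V; the numbers are ballpark, not measured from any particular flash.)

    # Energy stored in a photoflash capacitor, E = 0.5 * C * V^2.
    C = 1000e-6   # farads (assumed)
    V = 300.0     # volts (assumed)
    print("%.0f J dumped through the tube in well under a millisecond" % (0.5 * C * V**2))  # ~45 J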

ISRO to launch Google satellite
points by unmole  10 hours ago   27 comments top 11
1
_nedR 8 hours ago 4 replies      
This is a great example of how investing in space can have returns for India. You often find comments in such articles saying that India should be focusing on poverty alleviation, healthcare, and infrastructure instead of investing in a space program. The problem with this strategy is that India (and developing countries in general) will always be playing catch-up to other countries; and without finding new sources of wealth, India will be hard pressed to obtain the necessary resources to uplift itself from poverty. Another thing these commenters fail to point out is that most of the countries that are rich today got where they are not by funding massive welfare programs, but by expanding into new frontiers in search of wealth.

So the strategy today's developing countries should be following is to find new frontiers in science, technology, entrepreneurship to create wealth while in parallel trying to provide basic facilities to their people. Developing countries are in some ways like startups - perpetually strapped for cash and resources, struggling to stay afloat and facing tough odds. The key for them is not to try to compete in areas where others already dominate, but to disrupt them (by trying drastically different approaches) or to seek new fields. Microsoft didn't try to compete with IBM in mainframes, they went for the then-burgeoning PC market. Apple is the world's largest corporation not because it competes head-on with Microsoft in the PC market, but because it disrupted mobile. Similarly, space is a good avenue for India to compete in, where there are few incumbents and where India can exploit its natural advantages (such as its eye for cost-saving and huge, inexpensive talent pool).

Updated: Edited to remove lines that detract from the main point.

2
kartikkumar 7 hours ago 0 replies      
Antrix has done a great job of marketing secondary payload opportunities. Many university satellites have been launched by PSLV; they've become the de-facto small-satellite launch provider in a lot of respects. My alma mater has launched a couple of satellites successfully [1], with the first one, Delfi-C3, launched from Sriharikota in 2008 (still operational!).

Europe has been trying to push Vega [2] as the European offering in this market. It's exciting to see how the launcher space is developing, especially for small payloads. I know a few startups that are targeting this space because of studies, like those undertaken by SpaceWorks [3][4], that point at the expected explosion within the coming 5 years.

Given that I'm working on space debris risk mitigation at the moment, I'm looking at this from a somewhat different perspective. Most small-satellites to date have been launched to low enough orbits that they can meet the 25-year de-orbit guideline without too many issues. With the commercial market rapidly expanding though, there are a lot of applications that require higher orbits, and that's when space debris becomes a huge issue. Keeps me in a job!

All in all, great news for ISRO, and hopefully a sign of more international collaboration and commercial expansion in the years to come.

[1] http://www.delfispace.nl

[2] http://en.wikipedia.org/wiki/Vega_%28rocket%29

[3] http://www.sei.aero/eng/papers/uploads/archive/IAC-14.E6.1.3... (PDF)

[4] http://www.sei.aero/eng/papers/uploads/archive/SSC14-I-3_v1.... (PDF)

3
ardahal 4 hours ago 0 replies      
A couple of months ago, I met a brilliant scientist [1][2] who runs a space startup in India that specializes in brokering deals to launch non-Indian space payloads on ISRO's launch vehicles. Her company is called Earth2Orbit[3], although I am not sure whether this deal was brokered by them.

[1] http://travel.cnn.com/mumbai/susmita-mohanty-indias-own-moon...

[2] http://www.earth2orbit.com/people/people.html

[3] http://www.earth2orbit.com/index.html

4
paulsutter 9 hours ago 2 replies      
Skybox can sell live satellite video, I saw a demo last week that was pretty dramatic. They are currently very limited by having one satellite.

Suddenly I realize the importance of the Google investment in SpaceX to launch 700 internet service satellites. Surely those could include cameras. Will we get realtime Google Earth?

5
vardhanw 5 hours ago 0 replies      
There are some good developments giving an international exposure to Indian space research. Another commendable achievement is the Team Indus winning a $1 million prize money [1] as a part of Google's Lunar X-prize, for achieving significant milestones. They are mostly a team of fresh IITians mentored by a few senior guys [2] - entrepreneurs, enthusiasts and technology and industry veterans who started this. This was the only team from India participating in the X-prize, and is one of the 5 teams internationally to be selected for the first round of funding, from amongst other well funded entities. Incidentally my company Sasken [3] has provided the team with space in our Bangalore office for their operations and it is indeed exciting to see them succeed.

[1] http://yourstory.com/2015/01/team-indus-from-india-wins-goog...

[2] http://www.teamindus.in/about-us/

[3] http://www.sasken.com

6
sudhirj 9 hours ago 0 replies      
Now we're talkin. If ISRO pulls this off the space industry will be officially Bangalored.
7
jpalomaki 2 hours ago 0 replies      
Crowdfunded military intelligence in near future? People interested in what's happening on some specific place putting their money to pool and purchasing live video feed from the region?
8
binoyxj 5 hours ago 0 replies      
Big validation in the age of space madness. Keep them coming team ISRO.
9
panini_tech 8 hours ago 0 replies      
Time for outer space research to get big at ISRO Bangalore, India.
10
jcoffland 7 hours ago 0 replies      
Did someone say Skynet?
11
known 8 hours ago 2 replies      
Is it INSURED?
The Ken Thompson Hack
points by yla92  4 hours ago   16 comments top 6
1
brudgers 30 minutes ago 0 replies      
The reference point is [Trusting Trust]. It was his Turing Award lecture in 1982. Honestly, it and similar materials should be required reading somewhere in university CS and Software Engineering curricula.

[Trusting Trust]: http://cm.bell-labs.com/who/ken/trust.html

2
nischalsamji 3 hours ago 1 reply      
Don't think he said that he actually put a bug in the C compiler. He was just explaining how he could have done it. A very interesting read though.
3
lomnakkus 40 minutes ago 1 reply      
I'll repost something another HN poster posted at me in a previous conversation about this. Thompson's attack can be defeated:

   http://www.dwheeler.com/trusting-trust/

4
rectang 2 hours ago 0 replies      
A KTH virus doesn't have to be ubiquitous to do huge damage. All you need to do is land a malicious compiler on a machine used to produce widely distributed binaries.
5
bayesianhorse 3 hours ago 3 replies      
The idea about KTH binaries being virtually everywhere is fascinating and frightening. But I highly doubt that this is the case.

Sure, in theory, a perfect KTH scheme would be undetectable, since it suborns every means of detection. But in practice it often wouldn't. A KTH virus would have to anticipate all tools which may be written to detect it, and given the modern open and closed source software world the complexity would explode.

6
Agathos 2 hours ago 0 replies      
This inspires a major plot element in Ramez Naam's Nexus, a near future transhuman sci fi thriller. It's risky to run other people's code on your brain.

http://www.goodreads.com/book/show/13642710-nexus

Google Acquires Odysee
points by enigami  4 hours ago   11 comments top 5
1
inglor 29 minutes ago 0 replies      
It is very annoying when a startup gets bought like this and shuts down its service in a matter of weeks. If all my photos were there and they'd just shut down, I'd be very annoyed.

I get that they don't care about public opinion now - but it's still very frustrating when this sort of thing happens. It wouldn't kill them to stay open and close registration and then offer automatic migration to the service once it's integrated into Google - the user base gets preserved this way which is another win.

2
pgrote 2 hours ago 1 reply      
Photo sharing is awesome and all, but what about photo organization and backup? I'd like a service/program that takes my "gold" copy of my photos with metadata off my external drive and keeps my albums on Picasa/+ and Flickr in sync.

Is there such a service/program that does this?

3
amirmc 4 hours ago 0 replies      
I'm happy for the team but somewhat sad for the users. I wonder why they sold so quick? Google is the antithesis of using edge devices to their maximal advantage, and this was clearly not a product acquisition.
4
rtpg 4 hours ago 1 reply      
>Like many other apps, Odysee was built around a freemium model: free for the first year, and then $5/year thereafter. The founders had at one point estimated that they could keep the business sustainable if they reached 3 million users.

How hard would it be to get 3 million people to sign up for something at $5 a year? I imagine that at that point you might as well charge $5 a month or something, since the fixed cost of getting a person to pull out their credit card is so high.
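(Taking the article's figures at face value, a quick back-of-the-envelope on the gap between the two price points:)

    # Gross revenue at the quoted 3 million users.
    users = 3_000_000
    print("at $5/year:  $%s per year" % format(users * 5, ","))        # $15,000,000
    print("at $5/month: $%s per year" % format(users * 5 * 12, ","))   # $180,000,000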

5
pearjuice 3 hours ago 1 reply      
At this point it's just a matter of time before exposure.co gets acquired. I thought photo sharing and the like services were out of fashion but they are still being bought up in bunches. There's really a monopoly battle going on for the final answer to the question "Where will I store my photos, online?".
Bash process substitution
points by geoka9  35 minutes ago   discuss
The Story of Mel (1983)
points by thefreeman  4 hours ago   7 comments top 5
1
SixSigma 1 minute ago 0 replies      
Welcome back Mel

https://news.ycombinator.com/item?id=8922844

1 point by SixSigma 18 days ago | link | parent

currentoor > Why is HN so interested in linear algebra lately?

me> It happens to all topics.

One topic gets voted to front page, then people fall down the rabbit hole, posting any links they hit on their way down.

Once every 6 months or so Plan 9 gets a front page hit, probably from someone getting into Go-lang. Then we see all the related papers and websites flood in for a while - Russ Cox' site, cat-v, Rob Pike Interviews, Utah2000, The birth of UTF-8.

It's like the September that Never Ended.

The Story of Mel is on the same cycle.

http://www.catb.org/jargon/html/story-of-mel.html

2
MichaelCrawford 4 hours ago 0 replies      
I've done this, in the Intro to Computer Architecture class at UC Davis during Summer Session 1981, while I was still in high school.

What I was expecting was to learn how to build a computer out of transistors, you know, with a soldering iron, as I wasn't having much luck finding paying work when I was in high school.

What the course actually taught was how to write device drivers for the LSI-11 - a PDP-11 compatible minicomputer - in assembly code, hand-assembling it into octal, then entering with a keypad using ODT, the Octal Debugging Technique.

It was my only college course for which I received a C. :-(

3
Codhisattva 1 hour ago 1 reply      
I think every programmer should write machine code at least once. It's fascinating to understand how the CPU actually gets the job done.
4
LukeB_UK 1 hour ago 1 reply      
Relevant XKCD: http://xkcd.com/378/
5
thefreeman 2 hours ago 0 replies      

     Mel finally gave in and wrote the code,
     but he got the test backwards,
     and, when the sense switch was turned on,
     the program would cheat, winning every time.
     Mel was delighted with this,
     claiming his subconscious was uncontrollably ethical,
     and adamantly refused to fix it.
I can't decide if it's better if he intentionally did this, or it actually was an accident, but this was my favorite part by far.

Could we stop the anti-vaxxers if we said measles contains gluten?
points by plg  49 minutes ago   discuss
The Best Business Book I've Ever Read (2014)
points by denismars  9 hours ago   20 comments top 7
1
Codhisattva 5 minutes ago 0 replies      
This is an interesting follow up for sure http://www.gatesnotes.com/Development/Great-Books-on-Science...
2
brianbreslin 21 minutes ago 1 reply      
This book was re-released because of this recommendation. Gates I'm guessing coordinated with amazon to get it back for sale, as previously the rare copies were prohibitively expensive.
3
guy_c 6 hours ago 0 replies      
A nice observation from the Xerox, Xerox, Xerox, Xerox chapter:

"In any case, office reproduction began to grow very rapidly. (It may seem paradoxical that this growth coincided with the rise of the telephone, but perhaps it isnt. All the evidence suggests that communication between people by whatever means, far from simply accomplishing its purpose, invariably breeds the need for more.) "

4
alimoeeny 4 hours ago 7 replies      
I feel very uncomfortable that I cannot see what is so brilliant about this book that Bill Gates goes on and on about it. I tried to read it and was bored and stopped half way through it. Anybody can shed some light? Share what you've learned, please.
5
Mahn 3 hours ago 3 replies      
Kind of surprising that Bill Gates would go through the trouble of setting up an Amazon referral link for his article.
6
orenbarzilai 3 hours ago 1 reply      
How come everything is so dramatic these days? "The Best Business Book I've Ever Read". What happened to: "Recommended reading! You should add it to your reading list"
7
wodenokoto 5 hours ago 1 reply      
I remember reading this when it was posted back in July. I thought it was on HN I saw it, but searching for other posts linking to this article came up empty.

Anyone know if there is a prior discussion?

Measuring Feline Capacitance
points by mmastrac  12 hours ago   13 comments top 6
1
stolio 11 hours ago 1 reply      
Or, say a cat's roughly a sphere of radius 10cm, capacitance of an isolated sphere is C=4(pi)(epsilon_0)(radius) = radius/k_e = .1/(9x10^9) = 11 pF

Ballpark. YCCMV.
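(The same estimate in a few lines, using the vacuum permittivity directly rather than the rounded Coulomb constant:)

    # Capacitance of an isolated conducting sphere, C = 4*pi*epsilon_0*r,
    # with r = 0.10 m for the "spherical cat" assumed above.
    import math
    epsilon_0 = 8.854e-12   # F/m
    r = 0.10                # m
    print("%.1f pF" % (4 * math.pi * epsilon_0 * r * 1e12))   # ~11.1 pF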

2
ChuckMcM 9 hours ago 1 reply      
I am not a cat fan, or a squirrel fan. I do know from observing squirrels crossing power lines that their dielectric constant is significantly lower than that of air, but I've never figured out how to determine it strictly from observation without actually doing an empirical measurement.
3
pvaldes 2 hours ago 0 replies      
I'm feeling the desire to build a cat scratcher with a multimeter right now... maybe with some nice LEDs aligned to form the word meow!

muahahah... is alive!!!

4
amelius 4 hours ago 0 replies      
Next question: how much energy can it store, until the capacitance breaks down?
5
walshemj 5 hours ago 1 reply      
I imagine getting the kitty to hold still while you attach the clips from your ESR meter might be the tricky part
6
tonteldoos 5 hours ago 0 replies      
I imagine this experiment may not have a high level of repeatability.
Show HN: vim-hackernews
points by ryanss  18 hours ago   32 comments top 16
1
guillaume8375 10 minutes ago 0 replies      
Does anyone think it would be complicated to port it to Sublime Text? I'd like to, but I'm learning to program.
2
fabiofzero 14 hours ago 1 reply      
Well, it certainly looks better than the current Hacker News design.
3
evilduck 14 hours ago 2 replies      
The Emacs community embraced vim through evil-mode, the vim community is now implementing the rest of the OS.
4
sagarjauhari 13 hours ago 2 replies      
Pretty awesome! My default browser is Chrome and I have Vimium installed - and it's really good to be able to press 'O' on a HN link and continue on Chrome with the same navigation (j, k, ..)

Based on the way I read HN, some customization that I would definitely want to do are:

1. Headline navigation (mapped to 'j') - move cursor to the next headline instead of the next line

2. <Enter> / O opens the link in browser instead of the HN thread

3. Opened links get blurred

4. Quick page reload mapping and Auto reload

But this is purely based on my style of reading HN.

5
atmosx 17 hours ago 2 replies      
I didn't play with the HN API but I wonder if it's possible to post comments using the API. I haven't seen any program support user comments, it would be neat to be able to post comments using vim :-
6
roylez 4 hours ago 0 replies      
Reading comments in this thread reminds me how talented people are and how much superfluous energy there is in them.
7
ponytech 16 hours ago 0 replies      
It will be very useful at work for reading HN and pretending I am working :
8
ecthiender 6 hours ago 1 reply      
Super cool stuff!

I also agree with few others here. Adding comments support would be so awesome.

9
yzh 8 hours ago 0 replies      
This is gonna so reduce my productivity dude.
10
kansface 15 hours ago 0 replies      
No, Vim is more or less incapable of dealing with JS because it lacks support for asynchronous processes with two way data flow.
11
Killswitch 16 hours ago 1 reply      
I dig your vim theme, can we get some info on that?
12
alexbardas 17 hours ago 0 replies      
Great and very useful plugin. Good job!
13
aceperry 17 hours ago 0 replies      
LOL, so cool. I prefer to read HN in a browser though.
14
owly 14 hours ago 0 replies      
Fun!
15
tunnuz 16 hours ago 0 replies      
Amazing!
16
myrandomcomment 8 hours ago 0 replies      
Ugh! If you want stuff like this then switch to EMACS.
AudioQuest Diamond RJ/E Ethernet Cable: £6,899
points by jasoncartwright  5 hours ago   106 comments top 35
1
raldi 3 hours ago 1 reply      
There might not actually be anyone who buys these cables. The point might be to use their existence (or claimed existence) to make the $200 Ethernet cable seem like a good "middle" choice.
2
bsaul 5 hours ago 4 replies      
I once interviewed for a job at a software company developing virtual instrument plug-ins. At some point in the interview, the guy told me this great story:

"We made some test once, and we changed the skin of the software, to a new color. Every people said the software sounded better with that new skin. Yet we changed absolutely nothing except the color."

I'm still wondering if some graphical configuration (such as a bright color) wouldn't stimulate the brain more, making it more receptive in general, and to sound in particular, letting people "hear" better.

3
goodmachine 52 minutes ago 1 reply      
IANAL, so maybe someone who is can comment: surely this is flat-out illegal?

The vendor claims that

1. 'All audio cables are directional'

2. 'When insulation is unbiased, it slows down parts of the signal differently, a big problem for very time-sensitive multi-octave audio.'

That's two verifiably untrue or misleading statements with reference to a data cable, putting all subjective sound-quality fluff aside.

Puts me in mind of the claims of homeopaths, etc.

https://www.gov.uk/marketing-advertising-law/regulations-tha...

http://www.cap.org.uk/Advertising-Codes/Non-Broadcast/CodeIt...

4
fdb 5 hours ago 2 replies      
The "Wat Hifi?" Tumblr features more of these products: http://wathifi.tumblr.com/
5
Retr0spectrum 5 hours ago 3 replies      
"All audio cables are directional. The correct direction is determined by listening to every batch of metal conductors used in every AudioQuest audio cable. Arrows are clearly marked on the connectors to ensure superior sound quality. For best results have the arrow pointing in the direction of the flow of music. For example, NAS to Router, Router to Network Player."
6
surreal 5 hours ago 5 replies      
Without commenting on whether the product is worth the price (I don't know enough about HiFi quality):

There is a bell curve of purchasing mentality - from a minority who buy solely based on what's cheapest, through varying degrees of cost/benefit tradeoff, through to a minority who will tend toward whatever is most expensive. It often pays to offer something to that latter group.

7
ksec 3 hours ago 4 replies      
I have always wanted to ask this but never found the right place to do so.

Why, in the world of digital audio with ones and zeros, would any cable, be it silver or gold or whatever superconducting cores, make any difference to sound quality?

Yes, it would probably make a 0.0001% (wild guess only) speed difference due to better conductivity and less error correction. But if everything gets decoded at the chip level, then the cable should in theory make absolutely NO difference in sound quality whatsoever.

Please correct me if i am wrong.

8
buro9 5 hours ago 3 replies      
Great, I've been looking for an ethernet cable, I recently purchased a subwoofer cable for only £9,049 ( http://www.audiovisualonline.co.uk/product/8401/audioquest-w... ) and I was worried that I wasn't getting the best out of the sound as the cables earlier in the system weren't at this quality.

Edit: just found a £2,199 mp3 player: http://www.audiovisualonline.co.uk/product/8288/astell-amp-k...

9
tilt_error 4 hours ago 0 replies      
This is hilarious. You are sold the idea of a "pipe", through which something flows -- of course, having a "smoother" directional pipe would be better! But then again, we are talking about electrons -- alternating currents -- flowing one way _and_ the other for a short while. Having knowledge about how currents move in leads, typically on the surface of the lead, explains why quality cables consist of a multitude of small leads.

On these leads, electrons are flowing a little bit in one direction and then as the polarity of the source shifts, a little bit in the other. The whole idea of having a "direction" in a cable is just...

At one time we really had analog equipment "all the way down", so it kind of made sense to minimise distortion and loss in every individual link. Nowadays, we use digital systems where the music information is conveyed as "symbols". In this realm, a lot of the concepts from analog systems (or even the "pipe" system mentioned above) just makes no sense.

It is true that the digital information is still run over analog cables, in this case I assume we are talking about an Ethernet layer, but it has very little resemblance with the idea of a loudspeaker cable as information is packaged and translated in various ways before actually appearing on the cable.

Do you remember that funny marker pen that you could use to paint the edge of CDs to minimise effects of laser light running back and forth in the CD. That was a try at connecting a common understanding of the turntable with the new CD medium -- the idea of having a pickup that could be disturbed. I wonder onto what common understanding the idea of these cables is trying to connect; the water pipe idea is visual and easily taken but jumps a generation or two of reality.

I choose to see this as a taxation of... less gifted but wondrously more wealthy people. Actually a tax that kind of makes sense.

10
shin_lao 2 hours ago 0 replies      
Some HiFi setups are in the range of hundreds of thousands; it is easy for the reseller to slip in a couple of cables in the range of thousands. It's 1% of the final price.

Of course there will be absolutely no difference with a £5 Ethernet cable.

Most interesting is that expensive analog cables are just as useless. During blind tests listeners are unable to tell which cable is the "expensive" one.

11
mavhc 2 hours ago 0 replies      
A 100Gbps metal twisted pair ethernet cable? That explains the price; it came from the future where such standards may have been defined.

I asked them a question on their website to see if that's true.

12
huhtenberg 5 hours ago 1 reply      
Earlier on - the classics of A/B testing in "Monster cables vs Coat hangers"

http://forums.audioholics.com/forums/threads/speakers-when-i...

Scroll to the 2nd from the bottom paragraph starting with "We gathered up ...".

13
yk 5 hours ago 2 replies      
"Silver plated plugs" Can someone remind me of the electric properties of Silveroxide? ( Or do I need to hire someone who polishes the plugs before each track?)
14
ryanlol 5 hours ago 0 replies      
These cables are truly wonderful for deconstructing the Mozartian spacial qualities of the sound and to perpetuate the plateau-spreading fluidity of the music.
15
cbg0 5 hours ago 0 replies      
That's actually quite cheap compared to this: http://www.monoandstereo.com/2014/12/250000-eur-schnerzinger...
16
sanoli 5 hours ago 2 replies      
You know the price is wrong when there's a "Fiance Available" on the sale page of an ethernet cable :-)
17
Aardwolf 5 hours ago 0 replies      
It says the length is 12m. At this price, surely that must mean 12 miles? or 12 megaparsecs?
18
jkot 1 hour ago 0 replies      
There are shoes or bags sold for this price. This thing at least has some silver inside, so it's not a complete waste.
19
pan69 5 hours ago 2 replies      
Well, luckily it comes with a 5 year warranty.
20
monkeymagic 5 hours ago 0 replies      
One of my hobbies is arguing with reviewers about these sorts of things on Amazon. It's amazing the crap they come up with.
21
jakobegger 5 hours ago 1 reply      
To be fair, that price is for the 12 meter (40 ft) cable. The 75cm (2 ft) cable is much more affordable at just £600.

The best part about that cable is that someone took the time to determine which direction it sounds best, and the cable is marked to make it straightforward to attach it in the best direction!

23
kristaps 5 hours ago 0 replies      
I sometimes wonder if these outrageous "audiophile" products are just trolls and no real person has actually bought any of the crap.
24
geijoenr 5 hours ago 0 replies      
this can only be some money laundering scheme
25
silverwind 5 hours ago 0 replies      
100000% the price for 5% more conductivity compared to copper? Thanks, but I'll wait for graphene cables.
26
ai_ja_nai 5 hours ago 0 replies      
Audiophiles are truly a sustain for economy
27
iptel 5 hours ago 1 reply      
Daylight robbery
28
Matthias247 5 hours ago 0 replies      
Directional Ethernet - nice :-)
29
pron 5 hours ago 0 replies      
For best results have the arrow pointing in the direction of the flow of music.
30
q_no 5 hours ago 1 reply      
I bet not even NASA would deploy such expensive cables on a space shuttle.
31
viggity 2 hours ago 0 replies      
"financing available". wow. some people have no shame.
32
bencollier49 2 hours ago 0 replies      
Decimal place error?
33
alexchamberlain 4 hours ago 0 replies      
What a waste of silver!
34
fivedogit 5 hours ago 0 replies      
I wonder if this is an automated pricing battle, similar to the $23.7 million book about bugs on Amazon.

http://www.cnn.com/2011/TECH/web/04/25/amazon.price.algorith...

35
cpplinuxdude 5 hours ago 2 replies      
Why?
At some startups, Friday is so casual that it's not even a workday
points by petethomas  1 day ago   196 comments top 32
1
Jemaclus 22 hours ago 3 replies      
Back when I first got into the start-up scene, I used to work long hours because everyone else did. At some point, I realized that literally nothing has to be done RIGHT NOW OH MY GOD RIGHT NOW. Almost everything can wait until tomorrow morning. Sure, there are some high-priority bugs that are breaking the site that need to be fixed ASAP, but during normal operating procedures, once that clock hits 5pm, I should start wrapping up my work so that I can pick it up fresh in the morning.

I don't take my work home with me, I don't check my work email when I'm at home. It's just not worth the stress to me.

I love my job, I love my work, I feel like I'm contributing to making the world a better place -- it's just not 100% of who I am. I have a dog, a girlfriend, a handful of close friends, a few engaging hobbies, and a ton of books to read and miles to run. I'm more than my job, and once I can pay the bills, the rest of the money is just a nice to have -- but not nice enough to give up my health and sanity.

Then again, I'm extremely lucky to be in this situation, and a lot of people aren't. Some of my coworkers work long hours still, but they seem happy about it. As long as that's true,... well, whatever floats your boat, right?

2
falcolas 1 day ago 10 replies      
If you're getting your work done, on time, and to the quality specifications, who the hell cares how many hours in the week you work?

We're working on computers, doing work which does not benefit from typing for N hours straight; there is no meaningful correlation between quality/quantity and hours worked.

I wish more people realized this.

3
not_a_test_user 1 day ago 6 replies      
I can't believe how negative the article's comments are. Is everyone so addicted to work?

I would understand if I could work at top performance 10-12 hours a day, 5 days a week but that's just not possible for me. In the end driving developers to exhaustion is worse for everyone, with subpar code that'll probably require refactoring Monday morning.

4
jstoiko 1 day ago 4 replies      
I feel like some people have built this fantasy that working at startups is like vacations.

These people probably work their ass off during their 40, 60, or maybe 80 hrs on the job. So they don't understand when they hear that startups' work schedules are more relaxed, because they cannot relate to it. However, when they leave their desk, it's over, they're up to something else and they probably even force themselves not to think about work anymore.

Startups take a relaxed approach to work hours because the (right) person who works there lives and breathes the startup 24/7.

It's easy to say when you're a founder (disclaimer: I am one). But it is something I have witnessed in (good) startup employees as well. They think about it all the time.

@falcolas is right, who the hell cares how many hours in the week you spent executing your tasks? Shouldn't the time "thinking" about work be valued as much as "executing" the work? Don't we all "think" better outside of execution time?

5
morgante 14 hours ago 1 reply      
While I certainly commend them for being able to make this work (we need more innovation in management practices across the board), it does seem like there's a bit of a holier-than-thou trend in this comment thread.

As the founding engineer at my current startup, I have tremendous flexibility in setting my own hours but I willingly and intentionally work 60+ hours a week. Not because any manager pushes me to. Not because I even have to. Simply because I genuinely enjoy it.

Indeed, work is probably the most enjoyable thing in my life. On a given Friday, I'd rather be building products at work than watching a movie or engaging in some other leisure activity. Some of us don't have wives, children, or friends; we just want to spend our time executing.

Would Treehouse be accepting of that? If not, they're just choosing to enforce a different paradigm of work rather than giving their employees true freedom.

6
blahedo 22 hours ago 3 replies      
Two things in the article that I found interesting but were not highlighted:

> "But he soon found himself working that same intense pace until his wife asked him why he was working more and making less. She suggested taking Fridays off."

So the central concept of this workplace format, around which this entire article is based, was the idea/inspiration of Ryan Carson's wife, whose full name is not even mentioned. (Her first name is Gill, but is her last name Carson? Unclear from the article.) Not that it's a purely original idea---other companies have done four-day workweeks before---but it was obviously one that hadn't occurred to this particular founder. Three cheers for Gill possibly-Carson!

> "With Treehouse, Carson said he hopes to, again, buck conventional start-up culture, and not cash out by selling the company, the brass ring for most start-ups, but continue to run it as a sustainable business."

Let's hope that also starts a trend. I'm so heartily sick of companies building a great product and actively recruiting user bases to use and love that product, only to shutter it and throw all the users under the bus when the founders achieve their real goal, which is getting the attention of Google or Facebook or whoever and getting acquihired or otherwise bought out. I know that individual founders and other startup workers will often (indeed almost always) say that they really do care about their users, but as a collective structural pattern in the way that SV startup culture seems to work, it sure doesn't look that way from afar. So three cheers for (the currently-stated intentions of) Ryan Carson!

7
ripberge 23 hours ago 3 replies      
Treehouse is actually in a very luxurious position right now. They've raised a bunch of VC and this is a fairly new niche they operate in and more and more of society is recognizing how valuable these skills are. They can work minimal hours, see a lot of growth and everyone is happy.

Fast forward five years from now. There are going to be a ton of tough competitors in this space and eking out revenue growth month over month is going to be much harder. However, in five years they'll probably have the added pressure to start thinking about something called profitability.

There is going to be a day of reckoning here when the harsh realities of cut-throat competition set in. That just hasn't happened yet.

8
commondream 1 day ago 5 replies      
I'm Treehouse's CTO and cofounder. I'll try to answer anything I can.
9
woodchuck64 1 day ago 2 replies      
Fatigue is such a killer of creativity and innovation. When I'm tired I feel my brain deliberately shying away from anything but the familiar and rote. How many great ideas have been sacrificed to stay an extra hour at work instead of using that hour for rest and replenishment?
10
heynk 1 day ago 1 reply      
At my last job and now at my current job, I negotiated from full time work to less than full time work. Last time, I didn't work Fridays and now I work 20 hour weeks. In each case, I am absolutely more productive (per hour), to the point that I honestly don't know if I get any less work done. On top of that, I have much more creativity and energy. From this experience, I'm always on the side of pushing for less work hours per week as a standard.
11
xivzgrev 1 day ago 0 replies      
I've been waiting for an article like this. There really is an ethos of working yourself to death, and on the surface it can make sense. If you put in 80 hours per week and your competition puts in 60, you'll win because you'll learn more quickly than your competition. But I don't think that accounts for efficiency. If you work 80 hours per week, is every hour equally productive? And if so, are you working on the most valuable things? (Eg can you delegate, outsource, etc?). People like to think so but it's far from a universally held belief.

On the flip side, if you work 32 hrs per week, you're pretty much forced to be focused and productive. You'll still have the same goals, so how do you achieve them in half the time each week? You cut out things.

I just graduated from one of the many bootcamps, and about half of the students "worked" about 45 hrs per week, vs the other half who worked 60+ hrs. And there's been zero difference thus far on who has gotten jobs more quickly. Ok I'm done with my soapbox but I wish more people in the valley would consider the worldview espoused in this article.

Also with the Michael Arrington comment, I don't think most investors give two shits how long you work as long as you are delivering that up-and-to-the-right growth.
12
unimportant 1 day ago 0 replies      
Some startups are so casual, that work is not considered work and more of a paid hobby, with unpaid overtime being insisted upon...
13
vjeux 1 day ago 1 reply      
> These days, on Fridays, he gets his two young sons off to school and spends the day hanging out with his wife, Gill. It's like dating again. We go to coffee shops. We read books together. I really feel like I'm involved in my kids' lives and my wife's life,

This assumes that your wife is not working. I've tried taking some days off like this and, in the middle of the week everyone works, so you don't get to hang out much

14
Sir_Substance 4 hours ago 0 replies      
The stories vary in quality and details, but as far as we can tell the 40 hour week was implemented by Henry Ford in 1926, after careful record keeping and analysis revealed that the ratio of worker output per week to wages paid per week peaked at about 40 hours per week of work.

Now, that's fine for factory work, but as far as I know, relatively little effort has been put into testing that theory in knowledge jobs.

15
epberry 1 day ago 1 reply      
"...as a thunder lizard, the tech worlds name for the tiny handful of start-ups that actually become $1 billion businesses." I thought we were calling these unicorns? Maybe I'm behind the times terminology wise.
16
colmvp 1 day ago 0 replies      
> "As far as I'm concerned, working 32 hours a week is a part-time job," Arrington said in an interview. "I look for founders who are really passionate. Who want to work all the time. That shows they care about what they're doing, and they're going to be successful."

Efficiency is key, not some arbitrary limit of working hours.

Chances are yes, as a founder you aren't going to work just 32 hours a week. But it also depends on the state of the company.

And quite frankly, sometimes you can't solve problems by sitting at your computer or even talking to others in the office. Sometimes it involves taking a break and chilling out or exercising.

17
kvcrawford 19 hours ago 2 replies      
I, for one, immediately checked for Treehouse's open positions. In a world where retention and recruiting are huge challenges for tech, a strong work-life balance policy is very powerful.

Too bad they don't have a need for a front-end engineer right now. I would be all over that.

Keep up the good work, guys!

18
AndrewKemendo 1 day ago 1 reply      
My question is, what do you tell customers that demand responses Friday through Sunday? I mean if something breaks I am sure people come in/do remote work, so that's not the cases I am talking about. I assume this only works for companies that have non-critical or fully automated products where users don't have any person to person interaction built in anywhere.

I ask because I would love to implement something like this, but we get requests for service or user questions every day - and a three day turn around time on a user issue is terrible customer support - especially if they have other work riding on it. I realize treehouse is different in this respect.

It seems like the more employee focused you are the less responsive to customers you can be.

19
bdcravens 1 day ago 0 replies      
I really don't get much "work" done in the office; most of my work gets done at 2am or on the weekends. (We talk a lot and strategize, so technically that's work I suppose, but the actual coding usually happens elsewhere)
20
stillsut 20 hours ago 0 replies      
Pretty simple: if a developer can be 10x-100x as productive as the average developer, you don't really worry about only getting 80% of their required time.

So if this perk gets Treehouse talent that is +30% more productive, even if they lose -20% of productivity from Fridays off, they still win.

One caveat: so much of programming is loading things into your head that I think three days off every week would be difficult for anything sophisticated being developed.

21
rajacombinator 19 hours ago 1 reply      
I love reading about work style experiments like this and think they're great in some situations. But they make more sense for serial founders who have cashed out before or established cash cows like Google/Apple/Facebook. New founders who are all in on a business can't afford to work 4 days a week because the clock is ticking.
22
itbeho 1 day ago 2 replies      
Interesting how the companies discussed are outside of SV.
23
arturnt 22 hours ago 0 replies      
An average work day isn't filled with 100% development. You have breaks for lunch, coffee, people asking you questions, meetings, ping pong, etc. In a good workplace a chunk of your time is a social experience like any other. That means you spend about 2-3 hours a day socializing in total, and about 5 hours a day actually working. For startups, sometimes you have time-sensitive releases, so that number goes from 5 to 10, but it's still only about 50 hours of actual development per week even though it's 65 with all the other stuff included.
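(Spelled out with those numbers: roughly 10 hours of development a day over 5 days is about 50 hours, plus roughly 3 hours a day of social time for about 15 more, giving the 65-hour total.)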

Treehouse has managed to make a 4-day week work since everyone is working remotely, so that social aspect is not as prominent and consumes less time. For people who have kids, spending time with the kids becomes more important than the social experience at work, as it should. The 4-day work week all of a sudden makes sense, since they have bundled those 3 hours a day of work social time into one day of kid time.

24
cubano 1 day ago 3 replies      
Wasn't there a thread recently here that discussed how everyone was expected to work 60-hour weeks by their managers or face heaps of wrath?

So what is it...32 or 60?

The only answer can be "it shouldn't matter!", if you work in an industry where you can just as easily work from home as work from your desk.

I am speculating, but I would think that most of the IT developers at Treehouse work well over 40 hours a week.

25
free2rhyme214 20 hours ago 0 replies      
This is a nice way for Treehouse to differentiate itself for talent, but this is blown out of proportion like Tim Ferriss's 4-Hour Work Week.

Employee culture is important, but to be honest I care more about how well the founders are executing their original vision than about all the yoga classes, free food, Fridays off, beer pong, maid service, and other things companies are offering.

32 hours a week is nice for some but that doesn't always equate to marketplace monopolization.

Then again since Treehouse is competing with others this may not be their goal anyways.

26
fndrplayer13 1 day ago 0 replies      
It's good that places like this exist. My experience thus far has shown me that different developers might go through different phases of their careers in terms of how much they like to work. I think the article touches on this a bit, noting that most of these people are married and have families. I'm married, but I still totally feel the urge and drive to work on software all the time. And it's not that I love work, it's that I love writing software. I could see that drive tailing off with kids and those kinds of deep commitments, though.
27
spiritplumber 21 hours ago 0 replies      
We work five days a week, one of which is shared so we can talk. Which five days is up to the person.

Of course since I'm a cofounder I work pretty much 24/7 but such is life...

28
varunjuice 21 hours ago 0 replies      
This is just recognition of the fact that productivity is divorced from # of hours at the office, or # of hours spent "working".
29
sandworm 20 hours ago 0 replies      
In my work (legal) I often find myself overdressed and overstressed about decorum and timetables. But corporate decorum, working 9-5, M-F, has a place.

I remember one incident where a Thursday meeting at a startup was canceled because a department head wanted to turn an already long weekend into a 4-day holiday. I put my foot down. Fridays are not weekends. If they are, then Thursdays become Fridays and you'll start skipping them too. That meeting consisted of me in a suit, in an empty office, talking to two people via Skype. I call that a victory because the meeting at least happened. (The truth is that all the low-level employees on the first floor were there and working. They cannot afford to skip out on work.)

Casual is all well and good until it creates unpredictability and disorder. Contrary to popular myth, things actually get done in meetings. Not every decision can be made while scaling the in-office climbing wall. Some decisions require people sitting down at a table to hammer through a series of points.

Does that thing that happened last night on the server qualify as a breach? I don't care that tomorrow is a Friday. Neither will your backers, nor the FBI, when they haul you in to explain why you couldn't be bothered to take a decision until after your ski weekend.

30
JohnLen 15 hours ago 0 replies      
Productive work is what matters, not the working hours.
31
zaroth 1 day ago 3 replies      
I wonder if they pay part-time salaries to reflect the work hours. Certainly an interesting trade-off. If you have kids and a stay-at-home spouse I can certainly understand the appeal! Otherwise, perhaps not so much...
32
monsterix 1 day ago 4 replies      
Now this could be an early sign of a bubble in the making. Here's why:

1. The Bay believes that solo founders are a bad deal - mostly - because starting a company is a lot of work. And so it is - a lot of work!

2. Now here we have a handful of _startups_ that confess there isn't enough work to keep everyone in the nimble team on their toes for even forty hours a week! This contradicts 1.

Sure it means team happiness and all that. Fine.

3. For each _startup_ that has confessed the situation in 2, there should be at least 'X' times as many _startups_ that do not accept this reality. I don't know what that number 'X' would be, but let's take it to be 10.

Which means what - a bubble?

[Left open]

In Praise of Idleness by Bertrand Russell (1932) [pdf]
points by jacobsimon  21 hours ago   39 comments top 8
1
fenaer 16 hours ago 8 replies      
I have attempted to convey similar ideas, with regard to a universal wage, to friends and family. Each time I'm met with "What stops some people from not working?" and they refuse to move past that. They see people who work less, or don't work at all, as a detriment to society.

What sort of changes can be made to change people's viewpoint on hard work as a virtue?

2
chernevik 57 minutes ago 1 reply      
The moral basis of the work is its repudiation of various parasite classes: priests, warriors, rentiers, even Party apparatchiks (albeit only in a footnote).

And yet it never stops to wonder why these parasites keep recurring, or how they might use these very arguments to recur again, or what might be done about that.

3
myth_drannon 16 hours ago 0 replies      
I like "How to Be Idle"/ Tom Hodgkinson

"From the founding editor of The Idler, the celebrated magazine about the freedom and fine art of doing nothing, comes not simply a book, but an antidote to our work-obsessed culture. In How to Be Idle, Tom Hodgkinson presents his learned yet whimsical argument for a new universal standard of living: being happy doing nothing. He covers a whole spectrum of issues affecting the modern idlersleep, work, pleasure, relationshipswhile reflecting on the writing of such famous apologists for it as Oscar Wilde, Robert Louis Stevenson, and Nietzscheall of whom have admitted to doing their very best work in bed"

4
arvinjoar 6 hours ago 1 reply      
While I wholeheartedly agree with the sentiment, I think a lot of the economic thinking is a bit too simplistic, especially in our current age. For example, whether you invest in your government or not, your government will still be able to find funds for the war chest, by printing money if nothing else (or as it works nowadays, the central bank buying government securities). Another problem I can identify is that some hard work requires a lot of education; we need nurses, for example. What would happen if nurses only worked 20-hour weeks? Maybe there's a clever answer for this too, but I think we really need to think hard about this before we advocate anything politically. It makes a lot of sense to promote idleness as a virtue though, so go on and praise play (it is the hacker way, after all)!
5
jdmoreira 13 hours ago 0 replies      
If anyone is interested in reading more about 'Refusal of Work' - http://en.wikipedia.org/wiki/Refusal_of_work - I would recommend:

The Right To Be Lazy (1883) by Paul Lafargue

The Abolition of Work (1985) by Bob Black

6
nazgulnarsil 8 hours ago 1 reply      
BI is a massive political hurdle. A much smaller one is:

1. Dismantling the disincentives to hiring more people for fewer hours each

2. Dismantle the "40 hours is full time" as a legal fence that prevents people from wanting to drop under it (sharp benefit cut offs instead of gradual phase outs)

7
bernardlunn 15 hours ago 3 replies      
Bertrand Russell worked hard to write that. Work that we love to do falls into a different category.
8
rvern 14 hours ago 0 replies      
In Praise of Idleness is not in that book.
In search of the perfect JavaScript framework
points by jsargiox  22 hours ago   70 comments top 22
1
danabramov 19 hours ago 5 replies      
>We want to apply values to variables and get the DOM updated. The popular two-way data binding should not be a feature, but a must-have core functionality.

Strongly disagree. I find one-way bindings and one-way data flow much easier to reason about. A little less boilerplate code is not worth the mental overhead, cascading updates, and hunting down the source of wrong data, in my experience.

What is important is not updating the DOM from code, but instead describing it with a pure function. React, Cycle, Mithril, and Mercury do it, and it's time we get used to this. This is the real timesaver, not two-way bindings.
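
A minimal sketch of the contrast being drawn here (the names and markup are illustrative, not any particular framework's API):

    // Two-way binding: the view writes back into the model,
    // e.g. <input value="{{user.name}}"> keeps user.name and the input in sync,
    // so data can change from many directions and updates can cascade.

    // One-way data flow: the UI is a pure function of state, and every
    // change goes through one explicit update path.
    var state = { name: 'Ada' };

    function render(state) {
      // Describes the DOM instead of mutating it piecemeal.
      return '<input value="' + state.name + '">';
    }

    function update(newName) {
      state = { name: newName };                 // the single place where data changes
      document.body.innerHTML = render(state);   // a library like React would diff instead of replacing
    }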

`Object.observe` is the wrong way to approach this problem. If you own the data, why invent a complex approach to watch it, if you could update it in a centralized fashion in the first place? Here is a great presentation on that topic: http://markdalgleish.github.io/presentation-a-state-of-chang.... I strongly suggest you read it ("Space" to switch slides) if these ideas are still alien to you.

Even Angular is abandoning two-way bindings. http://victorsavkin.com/post/110170125256/change-detection-i...

I, for one, welcome our new immutable overlords.

2
ef4 14 hours ago 3 replies      
> "Abstraction is dangerous"

The fact that Javascript people keep saying this with a straight face is getting really absurd.

You do realize Javascript is also just an abstraction, right? And that the browsers that run it are also abstractions, and the operating systems, and the kernels, and even the hardware itself has multiple layers of abstraction?

"Abstraction is dangerous" is just fundamentally wrong. Abstraction is the only way we get anything done.

What you really mean to say is that bad abstractions are bad. But stated so clearly, it becomes obvious that it's a tautology. Well-designed abstractions that leak as little as possible are essential to everything we do.

This stuff matters, because instead of having stupid arguments over "how much" abstraction we want (which really boils down to 99 layers vs 100 layers) we should be debating exactly what abstractions we want.

3
carsongross 20 hours ago 9 replies      
My theory is that, for much of the web, the perfect javascript framework is no javascript framework.

Get rid of all the abstraction, local state, dependency injection, symbol management and so on. Take HTML/HTTP seriously and think about REST in terms of HTML rather than JSON.

That's intercooler.js:

http://intercoolerjs.org

Here's an image I tweeted trying to explain how to get there mentally:

https://pbs.twimg.com/media/B9QNU-ZCQAECP-K.png:large

Yes, it's a simple model. And no, it doesn't work for every app. But many apps would be infinitely simpler and more usable in a browser by using this approach, and almost all apps have some part of them that would be simpler to implement using it.

4
tel 1 hour ago 0 replies      
The whole "abstraction is dangerous" spiel is so wrong (imo) that I don't even know how to respond to anything that follows.

The primary complaint appears to be that abstraction eliminates your ability to operationally trace the meaning of a program. This is true, but sacrificing operational denotations only hurts if you replace it with nothing else - and abstractions of general purpose languages are almost always more interpretable than the operational denotation of the base language itself!

Of course, there are always places for poor abstractions. I am not talking about these. Abstractions which are intentionally opaque, have confusing action-at-a-distance, etc. - you're bringing down the name of abstraction in general. "Leaky" is insufficiently demeaning.

A good abstraction will have its own semantics. These can be equational, denotational, operational, what-have-you but, essentially, these semantics must be easier/simpler/more relevant than the semantics of the base language they're embedded in. Otherwise why abstract?

So what does React give you? It gives you, more or less, a value-based compositional semantics. Components have some "living" nature (an operational semantics w.r.t. state) but they're mostly defined by their static nature. Because you can build whole applications thinking only about the static, compositional nature of components, you can take massive advantage of this abstraction.

Ultimately, you do not want operational semantics for React. This is what gives us React Native, background rendering, and probably what will lead to sensible animations (in time). To define operational semantics, especially ones which have to look like or (worse) be identical to those of Javascript, would destroy almost all possibility of extension. At the cost of making things more complex and harder to reason about.

All so that you can just stick to "obvious" Javascript base operations.

5
drawkbox 19 hours ago 0 replies      
As long as your javascript framework is a micro framework and not a monolithic one, the abstraction does not make the project foggy.

Building the core and then using micro frameworks or components like React, jQuery, etc. leads to fewer walls, as swapping is easier as time progresses.

You don't want to be caught high and dry, stuck with years of monolithic cleanup when the fad dies, having at that point abstracted away everything you need to know.

Outside of javascript, .NET WebForms and Drupal are classic examples of too much abstraction in monolithic fashion (those poor bastards stuck there - dead man walking), Angular might be another. The whole time you spent building addendums and machinations to a framework, not building the core of what needs to be known.

If the framework changes everything you do and abstracts away core logic, or the system you are building does things without you being aware, it might be easy to get the first 90% done, but there are going to be problems and eventually wall after wall against you.

The only things that should be monolithic, and form the base, are programming languages and platforms. Everything else should be micro components or messaging.

6
beat 19 hours ago 0 replies      
It's a thought-provoking article, but I would also like to see something about React in it. It seems to me that React is a very pragmatic way to get around the global complexity-driven performance issues with DOM manipulation.

When we're coding, we're optimizing for a couple of different things, really. First is real-world performance (represented by slowpoke DOM manipulation). Second is programmer performance (represented by inappropriate abstractions). A lot of things we can do in Javascript to make programming less difficult and complex result in poor real-world performance, and vice versa.

But what do I know? I'm not by any stretch of the imagination a Javascript expert.

7
RehnoLindeque 6 hours ago 0 replies      
> We all like simple tools. Complexity kills. It makes our work difficult and gives us a much steeper learning curve. Programmers need to know how things work. Otherwise, they feel insecure. If we work with a complex system, then we have a big gap between "I am using it" and "I know how it works".

One answer to this problem of opaqueness in abstractions is having a well defined denotational semantics. This makes it clear that something can work in one way & only one way (without the need to dive into library internals). I feel that Elm is doing a pretty good job of tackling this for GUIs and signals.

8
mathgeek 1 hour ago 0 replies      
I was slightly disappointed that this didn't point to a 404.
9
iEchoic 7 hours ago 0 replies      
The Knockout example in this article is a bit strange - Knockout is not a framework (it is explicit about this) - but besides that, Knockout components actually do allow the "framework" to decide when things are instantiated.
10
hippich 17 hours ago 0 replies      
I yet to find "perfect" JS framework. I bet, it will never happen.

Nevertheless, I have a favor to ask any framework developer out there - please, make it disassemblable and usable piece by piece outside of the framework.

The OP was right - sometimes I find some aspect of a framework nice, but more often than not it is a monolithic part of the whole framework, which as a whole I dislike.

ps: the current combination that seems to fit my mental workflow is Backbone (models + collections) + Ractive.js (views) + Machina.js (for routing and defining "controllers"/states). Although I am looking to use something else besides Machina.js in the next project, as I want to have hierarchy now. And since it is all loosely coupled, I can replace parts.

11
djabatt 18 hours ago 0 replies      
Where does react.js fall in this discussion? Or does it? Just reading a lot about react.js this week.
12
wwweston 19 hours ago 1 reply      
> Abstraction is dangerous

True statement. Of course, it's more or less true, depending on how much the abstraction you're using leaks. Few (if any) abstractions completely encapsulate complexity, almost all will leak. But there's a range. Some abstractions elegantly cover a modular portion of your problem space and do it so well you only rarely have to think about what's going on under the hood (and will even produce effective clues as to what's going wrong when something does go wrong). Some abstractions awkwardly cover only part of a modular portion of your problem space, require a high intellectual down payment to even start to use, have gotcha cases that chew up performance or even break things, and require continual attention to what's going on just to keep development going.

Most are probably in between.

I think this is what JWZ is talking about in his famous "now you have two problems" assessment of regular expressions. I don't read him as saying "regular expressions suck," I read him as saying anything but tools from the high end of the abstraction quality spectrum means now you have two problems: (1) the problem you started with, and (2) the problem of keeping the model/details of how the tool works in your head. Regular expressions are arguably in the (maybe high) middle of the spectrum -- they may not cover your case well (ahem, markup) and they can send your program's performance to hell or even halt it if you don't know what you're doing.
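
A standard illustration of that performance point (not from the article; this is the textbook catastrophic-backtracking example):

    // Nested quantifiers make the backtracking engine try an exponential
    // number of ways to split the input before it can report a non-match.
    var evil = /^(a+)+$/;
    var input = new Array(30).join('a') + 'b';   // 29 a's followed by a character that forces failure
    console.log(evil.test(input));               // false, but only after a very long wait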

Now, they're also broadly useful enough in all kinds of development that the benefits go up with the costs and so they're probably worth investing in anyway, as part of a suite of other parsing tools/techniques. So I'm not bringing the topic up to bash them.

But to take us back to the topic, I might be bringing it up to question the ROI of popular JS frameworks, which, as far as I can tell, are generally not at the high end of the abstraction quality spectrum, don't have the broad usefulness of regular expressions to recommend them, and may not even survive longer than a handful of years.

13
jcoffland 7 hours ago 0 replies      
Vue.js meets most if not all of the criteria outlined in this article. I've been having great luck with Vue.js after a nightmare of fighting to write a big SPA in Angular.
14
closetnerd 17 hours ago 0 replies      
This article reminds me quite a bit of Vue.js. It's got an interface similar to Backbone, but with the addition of two-way data binding, while also allowing you to define web-components-style tags and attributes.
15
akrymski 17 hours ago 0 replies      
One super simple framework is https://github.com/techlayer/espresso.js/

It's a bastard child of React and Backbone.

16
itsbits 13 hours ago 0 replies      
I don't agree with the DOM event handling: setting event handlers on every node comes with a cost. I think you forgot to mention the performance issues with that approach, which is why nowadays almost all frameworks prefer event delegation.

I think the author, except in points 1 and 2, didn't bother to consider the performance aspects.
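
For anyone unfamiliar with the term, a rough sketch of event delegation with the plain DOM API (the id and tag names here are just examples):

    // One listener on the list instead of one per <li>.
    document.getElementById('list').addEventListener('click', function (e) {
      var node = e.target;
      // Walk up in case the click landed on a child of the <li>.
      while (node && node !== e.currentTarget && node.tagName !== 'LI') {
        node = node.parentNode;
      }
      if (node && node.tagName === 'LI') {
        console.log('clicked', node.textContent);
      }
    });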

17
mark_l_watson 17 hours ago 0 replies      
A difficult article to write - I would not have tried. There are so many good alternatives, and the choice depends on the application and available skill sets.

I spent time today working in ClojureScript, which wraps the Closure library. In the last month I have used Ember.js, Clojure with Hiccup, and Meteor.js. I really like all of these tools and frameworks. I used to use GWT a lot, and almost committed to Dart. So many good choices.

18
thekingshorses 17 hours ago 1 reply      
I like the direction the author is going. I have used a similar methodology designing my applications (for mobile): simple, micro libraries, one-way data binding.

http://hn.premii.com/

http://reddit.premii.com/

* I have a bunch of helper functions (UI and non-UI). Each function is defined in its own file and is independent (easy to unit test). A personal library like jQuery, but not a jQuery replacement.

* The app is route-based. One route to many controllers. Each controller is a page/screen on mobile.

* There is only one model (API) that interfaces with the 3rd-party library. The API layer talks to the 3rd-party library to get data, or gets data from the server directly, caches data, etc. It provides sync (cached data) and async (cached data or fresh from the server) interfaces to controllers.

* There is an app class, or what I call a page manager, responsible for managing pages: ordering, loading, unloading, etc. (kind of big and complex, 200+ lines of logic).

- Decides which page to animate in which direction on mobile (Loading new page or going back).

- Order of pages (Back button)

- Passes events to its controllers

- Decides which pages to keep in DOM, and which to remove.

--- If you go from homepage to comments to profile page, all pages are in DOM.

--- When you go back to comments page from profile page, profile page will be destroyed and controller will be notified. Same happens when you go from comments to home page.

--- If you go to same comments page again, it will be loaded as a new page.

* Controller:

- Each controller may have multiple CSS and templates

- Controller uses its template to render

- Uses the sync API to get data to render the page.

- If the sync API returns no data, renders an empty page with a loading indicator and makes an async API call.

- Controllers are idle when transitioning (animating) from one page to another on mobile. (Very important for smooth animation)

- Simple but fat controllers

- Controller handles events, UI logic

- Self cleaning so that browser can collect garbage when necessary

I package the app using node/gulp. Anything that is not page- or app-specific becomes part of the helper library. Each app has its own model (data layer) and controllers. I use micro templates, precompiled using node for faster performance.
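
A stripped-down sketch of the page-manager idea described above (the names are illustrative; this is not the poster's actual code):

    // Keeps an ordered stack of pages; going back destroys the top page
    // and notifies its controller, roughly as described above.
    function PageManager() {
      this.stack = [];                     // e.g. [home, comments, profile]
    }

    PageManager.prototype.push = function (controller) {
      this.stack.push(controller);
      controller.render();                 // controller renders from its template + sync API
    };

    PageManager.prototype.back = function () {
      var top = this.stack.pop();
      if (top) {
        top.destroy();                     // controller cleans up so its DOM can be garbage collected
      }
    };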

19
mperret 10 hours ago 0 replies      
I like the structure of google closure, thoughts?
20
acdlite 19 hours ago 0 replies      
Bad abstractions are dangerous. Good abstractions are empowering.

cough React Native cough

21
jbeja 16 hours ago 0 replies      
The perfect JS framework won't exist until 2079.
22
rhapsodyv 19 hours ago 0 replies      
I LOVE ExtJS!
Ground Collision Avoidance System Saves First F-16 in Syria
points by aerocapture  18 hours ago   50 comments top 12
1
Animats 11 hours ago 0 replies      
The USAF finally went for that? It was first demonstrated in 1998, and has been used by the Swedish air force for years. Their slogan is "You can't fly any lower". Here's the 1998 writeup, copied from Aviation Leak: http://www.f-16.net/f-16_versions_article8.html

The technical paper: http://www.icas.org/ICAS_ARCHIVE/ICAS1998/PAPERS/182.PDF

The actions this system takes are drastic. Roll rates to 180 degrees/sec to get to wings-level, then a 5G pull-up. The pilot's helmet may be banged against the canopy. It's so drastic because flying 150 feet off the ground in mountainous terrain is normal procedure for fighters. If the system has to act to avoid a collision, that action has to be very aggressive.

Here's what it looks like to a pilot:

https://www.youtube.com/watch?v=aPr2LWctwYQ

Here, the pilot puts the plane into an insane bank and, as he says, "goes to sleep" and releases the controls.

2
WalterBright 16 hours ago 2 replies      
Reminds me of a description of air combat where the pursued would dive, and then pull out at the last possible moment, hoping the pursuer would misjudge and pull out too late. One pulled out so low he raised dust on the ground.

My father liked to attack ack-ack positions by diving vertically on them, as the gun crew obviously was reluctant to fire straight up. Of course, you gotta keep a real close eye on your altitude and airspeed doing that.

3
themodelplumber 17 hours ago 3 replies      
This is really interesting. I found a PDF with some neat details and imagery describing the thinking that goes into Auto GCAS:

http://www.sfte2013.com/files/78993619.pdf

I don't know if it's just coincidental, but a lot of the examples seem to be concerned with keeping jets from flying into mountains, as opposed to flying into level ground, which was the first thing that came to mind when I saw "Ground Collision."

4
afterburner 17 hours ago 1 reply      
"According to the Air Force, 26% of aircraft losses and 75% of all F-16 fatalities are caused by such accidents."

Wow.

5
naz 3 hours ago 0 replies      
The Stuka had a slightly less sophisticated version of this in 1935:

http://en.wikipedia.org/wiki/Junkers_Ju_87#Diving_procedure

6
dankohn1 15 hours ago 4 replies      
Can you imagine the testing rigor you would need for this sort of system, knowing that if you incorrectly engaged or steered the aircraft, you would be responsible for killing the pilot?
7
derwildemomo 16 hours ago 5 replies      
That's great.

Maybe someone here can enlighten me as to why the systems, specifically TCAS and GPWS, in modern civilian planes are only ever used to issue warnings/recommendations, but never take control? It would have at least prevented the Überlingen mid-air collision and probably some other CFIT incidents in the past years.

I thought about this for a while and couldn't come up with a really good reason.

8
ricardonunez 13 hours ago 1 reply      
This is fascinating stuff. Does anybody know the whole cost of this system? I was only able to find a PDF, and it says that the seed money was $2.5 million. If anybody is interested, here's the link: http://www.nasa.gov/sites/default/files/337112main_Auto-GCAS...
9
kator 17 hours ago 1 reply      
I just finished the book "Skunk Works: A Personal Memoir of My Years at Lockheed"

http://www.amazon.com/Skunk-Works-Personal-Memoir-Lockheed-e...

It was a great book that almost feels like the pre-scrum manifesto applied to building aircraft.

10
ForHackernews 16 hours ago 1 reply      
"a 5g pull"

Yikes!

11
kingkawn 11 hours ago 2 replies      
Why are we bombing Syria?
12
throwaway8898 15 hours ago 3 replies      
hmm ... as a pilot and student of WW2 combat, I think this might be ok in peace-time, but definitely a problem in war time.

- to avoid radar, you have to fly low - tree-top high
- to save ammo, you have to shoot near ground-targets
- can the software be fooled in mountainous terrain?
- what about off-field landings?
- Japan's most effective bombers were kamikaze, and American pilots also considered ramming other aircraft after their ammo ran out
- if the software is wrong, does it roll inverted and pull down at 5G? how do you stop it?

Stress-Strain Plots as a Basis for Assessing System Resilience (2008) [pdf]
points by wallflower  6 hours ago   discuss
How I stumbled onto a security flaw in Box Sync for Mac
points by zdw  10 hours ago   8 comments top 5
1
dperfect 8 hours ago 1 reply      
> ... heads up to fellow Mac Admins and anyone else who uses or deploys Box Sync to ensure that the 4.0.6035 update is applied ASAP. There is no way of knowing who else has been aware of the exposed information before me and whether or not it may have been used to access Box customer data. This is especially important in environments that use a managed software update workflow which may be holding back automatic updates until specific action is taken by an admin.

Looks like that second-to-last sentence was inserted after an initial writing, since the last line refers again to the urgency to update. In my opinion, there's far too little discussion (just the one line) about the implications of what might have been exposed here. In other words, this sounds much more like evidence of a possible data breach than just a client security bug that is fixed with an updated version. Of course, that discussion/clarification should come directly from Box.

If information like S3 credentials was exposed, I assume Box's response was to immediately change all the relevant credentials (and be sure the new ones aren't exposed in later versions). If that's the case, then the client update itself probably isn't the critical thing to worry about at this point, right?

It's a bit like saying "oops, we accidentally pushed secrets to our public GitHub repo and didn't know about it until someone else pointed it out" and that person saying "quick, everyone pull down the latest revision that doesn't include the credentials."

2
Nexxxeh 1 hour ago 0 replies      
Well that doesn't inspire confidence. I'm not a Box user; is data usually encrypted with a user-supplied key as well before transmission to Box's systems? Is access to Box's "internal" S3 storage going to potentially yield unencrypted (or encrypted with a known key) user data?
3
kartikkumar 6 hours ago 0 replies      
I removed Box Sync because the client was so annoying. Every time I booted up without internet, it logged me out and required me to log back in. This security flaw only adds to my view that they have a poor product, at least for Mac.
4
dsacco 8 hours ago 0 replies      
The tl;dr of the vulnerability is this: the Box sync application had sensitive information (e.g. application secret keys) exposed.

Fairly widespread problem, which is almost inevitable given enough binary digging and reverse engineering work, unless you do real work to segregate the authentication process to a serverside PKI or something similar.

If the author comes across this, good work, nice writeup! However, if you're going to have a tl;dr section at all, you should put a brief description of the vulnerability in it. In this case, the vulnerability is simple enough that it can be briefly expressed in a tl;dr.

5
Someone 5 hours ago 1 reply      
"On February 6th I was notified that an updated version 4.0.6035 had been released which is supposed to resolve the issue."

It would have been useful to check whether that 'supposed' is true and if so, how they fixed this. Worst-case, they did the easy thing and obfuscated the strings.

Meet the Man Who Finds Your Stolen Passwords
points by sasvari  6 hours ago   discuss
Tail-call optimization added to 6to5 compiler
points by insertion  18 hours ago   21 comments top 4
1
jarcane 17 hours ago 1 reply      
I was literally complaining today about the fact that no-one seems to have implemented that part of the ES6 standard yet[1], and yet now here it is.

As someone chiefly interested in .js for its 'functional curious' side, the new features in ES6 have me really excited.
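
For anyone wondering what the feature buys you in practice, the usual illustration (assuming the compiler rewrites self-recursive tail calls into a loop, which is what proper tail calls are meant to guarantee):

    // Without tail-call optimization this overflows the stack for large n;
    // with it, the recursion runs in constant stack space.
    function sum(n, acc) {
      acc = acc || 0;
      if (n === 0) return acc;
      return sum(n - 1, acc + n);   // the recursive call is in tail position
    }

    sum(100000);  // fine with TCO, RangeError (stack overflow) without it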

2
sebastianmck 11 hours ago 0 replies      
Available as of 3.5.0. Happy to answer any questions if anyone has any!
3
jabbrass 11 hours ago 0 replies      
4
smrtinsert 16 hours ago 1 reply      
I wonder if this implementation idea is general enough that it could be added to ClojureScript.
Demoscene news and downloads
points by tonteldoos  13 hours ago   6 comments top 3
1
skrebbel 9 hours ago 1 reply      
Now, scene.org is hardly a decent news archive these days. It's basically just kept alive with no active development.

Scene.org's core function is to act as an archive. A very large share of all demoscene productions, ever, are hosted on scene.org and its mirrors. It's been fulfilling this service for many, many years now, and I don't think it's going to stop anytime soon. It's great for a community to be able to rely on such an excellent file host for such a long time.

For a more accessible and searchable database of demoscene productions, it's better to go to http://pouet.net. Don't be scared away by its, well, "impressive" look and feel; it really is the central hub of the demoscene, and the design is chiefly maintained for nostalgic reasons.

Another more detailed archive of roughly the same productions is http://demozoo.org.

2
skrzyp 13 hours ago 1 reply      
3
AceJohnny2 12 hours ago 0 replies      
Over a year since the last news item. What happened in 2014? Pouet.net is much more active.
Neil Armstrong's purse: First moonwalker had hidden bag of Apollo 11 artifacts
points by antr  19 hours ago   11 comments top 4
1
ChuckMcM 14 hours ago 1 reply      
"In September 2012, one month after Armstrong died, President Barack Obama signed into law a bill that confirmed the Mercury, Gemini and Apollo astronauts had legal title their mementos."

Which explains why he never told anyone about them: their ownership was unclear, and while deeply significant to Armstrong, the artifacts could easily have been confiscated by NASA and put in some dusty vault to rot away, like the suits they wore mostly did before being rescued.

2
zck 2 hours ago 0 replies      
It's interesting to compare the waist tether (http://www.collectspace.com/images/news-020615h-lg.jpg) from this article with locking carabiners used by rock climbers (http://www.rei.com/product/722360/black-diamond-rocklock-twi..., http://www.rei.com/product/722353/black-diamond-rocklock-scr...). I wonder what the design requirements were -- it seems surprising to me that the only locking part of the waist tether is a button. It seems like that button could get accidentally hit, causing the tether to unlock and become unsafe; that is, unless the button has a spring forcing it to the "lock" position.

Interestingly, there is a new carabiner on the market whose locking mechanism is more like a button (http://www.rei.com/product/840193/black-diamond-magnetron-ro...): the difference being that the mechanism must be activated from both sides (via pinching the purple parts in the image), and has magnets forcing the carabiner into the "locked" state when not being held unlocked.

Also interesting is that the waist tether is adjustable. That could be a point of failure -- imagine floating off the end of your tether. Although I can't tell whether the waist tether is designed to attach astronauts to the spaceship, or just tools to astronauts. Howstuffworks.com implies it attaches astronauts to the spaceship (http://science.howstuffworks.com/spacewalk4.htm), but brighthub.com implies it's for tools (http://www.brighthub.com/science/space/articles/126178.aspx).

3
visarga 10 hours ago 0 replies      
Interesting story. They shy away from actually saying he stole the objects. I wonder how nostalgic he was about the trip to the moon for the rest of his life...
4
jcoffland 6 hours ago 0 replies      
Could have been titled, "Neil Armstrong purse snatcher."
The Computer Science Handbook: First Draft [pdf]
points by StylifyYourBlog  23 hours ago   30 comments top 17
1
jimmahoney 10 hours ago 1 reply      
The title is much broader than what's listed in the table of contents, which is primarily what you'd find in one course (algorithms) from a CS undergrad course of study.

For an online text that covers similar stuff, see http://interactivepython.org/runestone/static/pythonds/index... .

The last "interview" chapter is about getting a job, not about CS itself.

A good starting spot for the topics in "computer science", at least at the undergrad level, is the ACM curriculum ( http://www.acm.org/education/CS2013-final-report.pdf ).

2
cechner 10 hours ago 0 replies      
This is algorithms, which, while useful, is not even the majority of a computer science education, as I understand it.

My CS degree involved image processing, graphics, operating systems, systems programming (low level programming), programming language theory, discrete math, linear algebra and statistics, just off the top of my head.

Interestingly, programming is actually not a big part of a degree (again, as I understand it). It takes many years to become a good programmer, and it would be a waste to dedicate an entire 4-year degree to just that.

3
xyclos 13 hours ago 1 reply      
> easy to read without any math or computer science background

> you are already familiar with Java or C++ syntax

not sure you will have too much success hitting your target demographic of "people who are ignorant of computer science, yet are experienced programmers"

4
suyash 10 hours ago 1 reply      
Pretty average content and not academically strong enough. It lacks the depth of proof and detailed technical explanations. I wouldn't call it a computer science book - it seems like a concise data structures and algorithms guide. There are tons of books if you're serious about algorithms or data structures alone, and that won't make you a computer scientist.

Computer Science is a big field that spans many areas of programming, theory and research.

5
jsalit 19 hours ago 0 replies      
Not sure if OP is author, but this seems like a decent start at a useful compilation. It's obviously highly focused on data structures and algorithms, so the title is a bit misleading.

Strange - not a single citation/reference?

6
oneeyedpigeon 7 hours ago 1 reply      
Looks good so far. I would have loved this as an intro. before starting my CS degree; it might've even been useful for my UK A-level course (age 17-18).

I think there's an error here:

string[1..3] = abc
string[1..1] =

7
ivan_ah 11 hours ago 0 replies      
This theory + a tutorial about testing best practices + 20 weekend projects would be a really good way to learn how to code.

Thinking about stacks, trees, and graphs can go a long way to build up learners' ability to simulate what the computer will do, e.g., getting the steps right for breadth first search in a graph is a rite of passage.

8
journeeman 12 hours ago 0 replies      
I like it. It's written by students so it's pretty easy to read. This seems like it could evolve into a students' version of 'Foundations of Computer Science' by Aho, Ullman - http://infolab.stanford.edu/~ullman/focs.html
10
acbart 12 hours ago 1 reply      
Research says that interactive feedback is more important than static. I recommend the OpenDSA project.

http://algoviz.org/OpenDSA/

I do appreciate accessible text though - worth looking into.

11
thirdtruck 19 hours ago 0 replies      
I could use a resource along these lines, especially with all the programming tests I'm facing during my job hunt. Thanks.
12
csstudnt 8 hours ago 0 replies      
I appreciate the effort, but I think Foundations of Computer Science covers the same material, only better and more rigorously: http://infolab.stanford.edu/~ullman/focs.html
13
CharlesMerriam2 18 hours ago 1 reply      
I'm sorry. If I am going to teach this, it must be a comic book.
14
GroSacASacs 17 hours ago 1 reply      
page 9: I think there is a mistake: "However, the human brain can contain much more memory than humans."
15
chrys 16 hours ago 0 replies      
Have you read "Head First Java" by Kathy Sierra? Data Structure & Algorithms book written in "Head First" format maybe pretty cool.

I wonder if you could re-format your book in that manner?

16
dcgoss 18 hours ago 0 replies      
Is it supposed to be missing practice problems at the end?
17
McUsr 17 hours ago 0 replies      
You should really consider something other than LaTeX as a publishing platform, or make it look better, when taking it from draft format to publishing format.

This doesn't communicate that algorithms are fun. An algorithms book should be like a magician's show, really, with fun problems to apply the algorithms to.

I also note that there aren't any links for back-references to topics, and that at least one topic is missing: heaps.

I am actually very fond of Robert M. Sedgewick's books (Second Edition), and Donald E. Knuth's monumental accomplishment. Those books are fun; most books concerning algorithms are not as fun as they should be.

I am picky, I guess; I want fun exercises, or presentations, but also accurate details and meticulous explanations.

Game Modification: 60 FPS Hacks in Dolphin
points by Pxl_Buzzard  18 hours ago   15 comments top 7
1
docx118 17 hours ago 1 reply      
Last night I set up Dolphin and it ran both Super Monkey Ball 2 and Mario Kart Double Dash in 1080p at 60fps. It was amazing, and super easy
2
miander 15 hours ago 0 replies      
Man I love these guys. They strive for accurate emulation first and foremost, but they still care about giving the emulator the power to enhance games in such amazing ways. They really have their priorities straight.
3
CountHackulus 17 hours ago 0 replies      
Wow, 60fps really does look fantastic in Super Mario Sunshine. Amazing that this kind of thing is possible now.
4
namuol 6 hours ago 0 replies      
For those interested in a more technical explanation for the Super Mario Sunshine 60FPS romhack, go here: http://jul.rustedlogic.net/thread.php?id=17475

The hack seems like it was possible because the engine was designed to run at arbitrary framerates, and the romhacker (ehw?) found framerate and vsync-related functions in the demo version of the game, which actually has debug symbols baked in!

Romhacking sounds really fun.

5
benguild 14 hours ago 1 reply      
Tutorial on setting up Dolphin on a rMBP that I wrote. Although 60fps is unlikely in most games. http://benguild.com/2013/07/18/how-to-play-nintendo-wii-game...
6
simlevesque 16 hours ago 0 replies      
I am really interested in reading more about the 60FPS hack for Gauntlet: Dark Legacy but I could not find anything online.
7
archagon 15 hours ago 0 replies      
Wow, this makes me want to play Sunshine again. I only played it after Galaxy and was put off by the framerate (though I still beat most of it).
Show HN: I made a command-line iMessage interface
points by camhenlin  22 hours ago   51 comments top 15
1
striking 21 hours ago 2 replies      
Please note that this isn't actually using the iMessage protocol, this just uses AppleScript to fire messages via Messages.app. However, it's nice to finally be able to iMessage over SSH. :)
2
jmduke 20 hours ago 2 replies      
## Why did you make this?

Why not?

Regardless of the actual repository (though don't get me wrong, this is still super cool and I like seeing AppleScript in action since I feel like it's criminally underused), this kind of thing in a Readme always brings a grin to my face. Tinkering for tinkering's sake is the best.

3
BillinghamJ 3 hours ago 0 replies      
Have you looked into using MessagesKit.framework? (In /System/Library/PrivateFrameworks)

If it's written in Obj-C, you can extract fully usable headers from it. If C++/C/etc., it will be difficult (but not impossible) to understand it.

After extracting the headers from that, you may well be able to use it instead of AppleScript.

4
justinmayer 20 hours ago 1 reply      
For those of you who, like me, might want to use this on older hardware, I got an error on launch on Mac OS X 10.8.5 Mountain Lion: https://github.com/CamHenlin/imessageclient/issues/2

Just thought I'd mention it here in case others are thinking of trying this out on Mountain Lion.

Edit: CamHenlin has already committed a fix for this problem. Nice work!

5
adieth 22 hours ago 3 replies      
If it could be integrated with [WeeChat] [1], it could be a way to provide iMessage support as a [web browser chat client] [2] and to [Android devices] [3].

[1]: https://weechat.org/
[2]: https://github.com/glowing-bear/glowing-bear
[3]: https://github.com/ubergeek42/weechat-android

6
lindbergh 20 hours ago 1 reply      
Really cool. Now I must figure out a way to add this as an Emacs mode. Good job!
7
asdf0 16 hours ago 1 reply      
I've played with something similar before, capturing incoming messages. As well, with Yosemite you can now write event handlers for Messages in JavaScript the same as you would in AppleScript. You can use the osacompile and osascript command-line tools to compile and run JavaScript/AppleScript; it can do the same things AppleScript can do. I find it easier to write. It doesn't have to be complicated to run for personal use over SSH.
8
akrymski 17 hours ago 2 replies      
Awesome! Is there a way to create a new conversation?
9
LargeCompanies 21 hours ago 2 replies      
Anyone know of a way or an app to search your iMessage threads?
10
blogle 22 hours ago 1 reply      
This is amazing, I have been wanting something like this for some time now. Hopefully the protocol can be demystified such that this is less clumsy.
11
vhost- 18 hours ago 0 replies      
macmini-server-in-a-closet = iMessage on linux over ssh.

I should label it "iMessage server".

12
alexose 21 hours ago 2 replies      
Cool project! Though, the OSX requirement makes its utility pretty limited.

Could someone host iMessage as a service without being sued into oblivion?

13
djabatt 18 hours ago 0 replies      
Yeah why not?
14
justinmk 20 hours ago 1 reply      
Anyone know of a CLI client for Microsoft lync?
15
rian 21 hours ago 0 replies      
Bitlbee!!
Raft Refloated: Do We Have Consensus? [pdf]
points by mrry  21 hours ago   4 comments top 3
1
kuujo 10 hours ago 0 replies      
The proposed optimizations are pretty interesting, but I think my favorite such optimization is one proposed by Ayende Rahien. As with the optimizations outlined in this paper, his is related to leader election.

One of the issues with leader election in Raft is the potential for split votes. Raft tries to guard against this by randomizing election timeouts in order to discourage two candidates from requesting votes at the same time. What Ayende suggests, though, is to add a pre-vote stage wherein followers poll other nodes to determine whether they can even win an election prior to actually transitioning to candidates and starting a new election. This ensures that only up-to-date followers ever become candidates and thus prevents nodes that will never win an election from ever transitioning to candidates in the first place.
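
A toy sketch of that pre-vote check (in real systems this is an extra RPC round, and implementations also consider whether a peer has heard from a leader recently; the names here are illustrative):

    // Before becoming a candidate, ask: would a majority grant me a vote,
    // given how up to date my log is? Only then start a real election.
    function shouldStartElection(self, peers) {
      var votes = 1;  // a node would vote for itself
      peers.forEach(function (peer) {
        var upToDate =
          self.lastLogTerm > peer.lastLogTerm ||
          (self.lastLogTerm === peer.lastLogTerm &&
           self.lastLogIndex >= peer.lastLogIndex);
        if (upToDate) {
          votes++;    // this peer would consider our log at least as current as its own
        }
      });
      return votes > (peers.length + 1) / 2;
    }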

2
endergen 14 minutes ago 1 reply      
Has anyone seen their JS visualizer or tool source?
3
SEJeff 12 hours ago 0 replies      
I wonder if the consul or etcd teams will consider adding this into their respective raft implementations (assuming it does actually improve things as the paper alludes to).
Moreutils
points by signa11  1 day ago   33 comments top 11
1
dredmorbius 23 hours ago 2 replies      
This is a great set, but a request to Joey:

There are two conflicting implementations of a parallel utility. And from what I can tell, the GNU parallel utility is much more useful than the one in moreutils. Which meant that when I was 1) doing processing which benefited greatly from parallelization and 2) found that the moreutils version wasn't doing what I wanted nor could I figure out how to make it do so (compounded by confusion over online searches providing GNU parallel syntax which didn't work), I had to remove the entire moreutils set to install GNU parallel under Debian.

The two versions aren't even a candidate for /etc/alternatives resolution as the commandline syntax and behavior differs.

Either a name change or refactoring to a different package for the 'parallel' utility would avoid much of this.

And I'd really like to see numutils packaged.

Also, for 'unsort': sort -R (i.e. --random-sort)

(using GNU coreutils 8.23)

(I'm not familiar with a seed-based randomized sorting utility though.)

2
anon4 1 hour ago 0 replies      
As I see no one has mentioned it, let me pipe in with one more text-processing tool that is invaluable in our modern world: jq (https://stedolan.github.io/jq/), the command-line JSON processor. It consumes JSON input, and its power is somewhere between sed and awk.
3
robmccoll 1 hour ago 0 replies      
Shameless plug but I think it might be useful to others: https://github.com/robmccoll/bt

bt (between)

counts the time between occurrences of the given string on stdin
stdin is consumed.
output will be the times in floating point seconds, one per line

4
falcolas 22 hours ago 0 replies      
I find it entertaining that bash (zsh, ksh, fish, etc) itself provides ways to do what many of these utilities do. The anonymous pipes, named pipes, and process substitution mechanisms can replace many of these tools.

For example:

    pee ->
    some_process | tee >(command_one) | tee >(command_two) [...]
    # This one might need a bit more magic with named pipes to consolidate the output without race conditions, since command_N will be executed in parallel. Or take a note from the chronic replacement below and use a temporary file to execute them serially.

    chronic ->
    TMPFILE=$(mktemp); some_process > $TMPFILE 2>&1 || cat $TMPFILE; rm $TMPFILE

    zrun ->
    command <(gunzip -c somefile)
Still, having a utility to abstract away the pipes makes sense.

5
pjungwir 21 hours ago 1 reply      
These are cool, and I use chronic all the time. But is there any more documentation beyond this page? I can't find any, and I'd love to read more about pee and see some examples. It seems there is more documentation for the rejected utilities than the accepted ones!
6
boon 23 hours ago 1 reply      
If you're good with vim (and particularly with vim macros), `vidir` is indispensable.
7
akkartik 22 hours ago 2 replies      
Wait, is sponge just another way to redirect to a file? What's the benefit of:

  $ echo hi |sponge y
over:

  $ echo hi > y?

8
davexunit 23 hours ago 1 reply      
One weird thing about moreutils is that the source releases are only available via the source package on debian.org.
9
stevekemp 20 hours ago 1 reply      
Relatedly you might enjoy this collection of sysadmin-tools:

https://github.com/skx/sysadmin-util

10
jcoffland 22 hours ago 1 reply      
I created just such a utility several years ago. It's called rlimit and is basically a command line interface to the standard getrlimit() and setrlimit() unix calls. You can find it here http://freecode.com/projects/rlimit. I'd be happy to move the source to GitHub.
11
hk__2 22 hours ago 0 replies      
If you're on OS X with Homebrew, you can install it with `brew install moreutils`.
       cached 8 February 2015 17:02:03 GMT