Hacker News with inline top comments - 18 Feb 2014
1
The Erlang Shell medium.com
48 points by strmpnk  2 hours ago   10 comments top 4
1
rdtsc 12 minutes ago 0 replies      
> When systems have faults due to concurrency and distribution, debuggers will not work.

Those are very tricky indeed, mix in threads with pointers and a system becomes haunted. "This one customer noticed a crash, on this hardware, after running for 10 weeks, but ... we couldn't reproduce it". "Oh I suspect there is a memory overwrite or use after free that corrupted the data". People start doubting their sanity -- "Oh I swear I saw it crash, that one time, or ...did I, maybe I was tired...".

Someone (could have even been Jesper Andersen, the author) said that the biggest performance gain an application gets is when it goes from not working or crashing to working. And the biggest slowdown is also when it goes from working to crashing unexpectedly.

There was talk of 60-hour weeks here before; one of the things that happens at 8pm is people huddled over a keyboard debugging some of these hard-to-track bugs. Managers and some programmers see it as great heroism, pats on the back for everyone, but others see it as reaping the fruits of previous bad decisions, and it is a rather sad sight for some. It all depends on the perspective.

I guess the point is that one of the main qualities of Erlang is not concurrency but fault tolerance. Many systems copy the actor concurrency patterns and claim "we have Erlang now but it is also faster!", and that is a good thing, but I haven't heard too many claim "We copied Erlang's fault tolerance, and it is even better than Erlang's!". [+]

[+] you can do the same pattern up to a point using OS processes, LXC containers, separate servers, having watchdog processes restart workers, for example.

2
brickcap 1 hour ago 2 replies      
> As an Erlang programmer I often claim that You can't pick parts of Erlang and then claim you have the same functionality. It all comes together as a whole.

I am guilty of this. I am relatively new to erlang programming and I have skipped on some of the features that make the language great. For instance I have neglected learning both supervision and hot code reloading.

3
tel 36 minutes ago 1 reply      
I find it funny that the author willingly gave up static typing to have Erlang process control. I made the opposite switch as I grew tired of the overhead and attrition of modeling a complex domain in Erlang. I think the Erlang VM is easily my favorite place to live as a programmer, but I do wish Erlang-the-language gave more.
4
rurban 58 minutes ago 1 reply      
Oh nice. They have now almost a Common Lisp environment. Getting closer to Greenspun's tenth rule.
2
Show HN: I redesigned the Microsoft employee badge alp.im
190 points by aalpbalkan  7 hours ago   81 comments top 26
1
kayfox 6 hours ago 10 replies      
* The circle makes it hard to see at a glance if the face matches the badge, this is a big deal.

* The employee number should be on the front, because this is often needed for identifying people who security can't stop (for whatever reason), but are doing bad things.

* Printing on the back is expensive; the badge printers that do this often cost twice as much. Printing color is even more expensive: you're talking about increasing the cost of the badge by about a third. This also leads to other problems like heavy head wear because of the smart card contact, having to define avoidance areas because of the same, and jamming issues with the added complexity of using the card flipper.

* Employment classification (Employee, Intern, Vendor Name, Partner, etc) should be printed in text on the front.

* Smaller companies would be encouraged to avoid printing the company logo or name on the badge, as this tells people where it will work.

* Same with the address; the cost of replacement and expedience means returning the badge is useless. This wasn't true when Motorola Flexpass badges were first rolled out at MS, but it's true now.

* Badge photos need to be standardized for various security reasons.

* Your current badge does already emphasize your first name. It's not as prominent on yours as it was on mine, but it changes from time to time as they muck with the access control software.

Where I'm coming from: I am a security engineer, I previously worked on physical security management and had started out in the industry at Microsoft. I work on systems that print hard cards for a paying hobby.

PS, I was fired from MS for posting an image of myself online where my badge was clear enough to copy. Might be something to check on.

2
mpyne 6 hours ago 2 replies      
Well I'll be the constructive voice. I like it.

You should be able to find email addresses in the corporate directory services, it's not like people are going to memorize them from looking at a badge. We already have business cards or mobile devices w/ NFC if it's necessary to transfer the email address in a persistent form.

Plus having names instead of email serves the more-important purpose of allowing people to more easily socially interact in meetings, social gatherings, etc.

Since it would be almost impossible to completely anonymize the purpose of the badge (especially with the request to return it to Microsoft), using the current visual branding certainly beats using the 1988 visual branding.

I can't speak to "Former Metro" branding but it certainly looks pleasing enough.

3
gilgoomesh 6 hours ago 2 replies      
I'm fascinated by a separate point that this badge raises: the "Microsoft" logo is now the "Windows" logo.
4
pshin45 53 minutes ago 0 replies      
Remarkable how much the old badge [1] resembles the original Nintendo Game Boy [2].

[1] https://ahmetalpbalkan.com/blog/static/images/2014/02/old-fr...

[2] http://upload.wikimedia.org/wikipedia/commons/c/c6/Nintendo_...

5
DanBC 6 hours ago 0 replies      
The photos have two examples of employees with their face at an angle - you can see only one ear of the woman with red hair and one ear of the man with grey hair.

Since these photos serve a purpose (identifying the bearer, not making the bearer feel good about the photo) they probably need to be standardised and use something like passport photo criteria. (Although perhaps gently relaxing those standards).

There's no accessibility or diversity information either. It'd be nice to at least think about the needs of visually impaired users, for example.

But the cards are nice! Nicer than the original example.

6
reddiric 6 hours ago 3 replies      
Great job putting together a prototype. Although I'm going to list specific complaints, I appreciate the effort in creating and risk in sharing, so good job and thanks.

- I don't follow the circle photo fad. It seems like an unnecessary complication (implementation and design element)

- By moving information to the back, you're assuming that the facilities which create these badges have the ability to do double-sided prints on the badges, and if they have the technical ability that it won't increase the time or work required to print a badge.

- You're assuming that the badge printer can print completely to the edge.

- Removing the "Employee" text and relying on the blue color is an accessibility problem (color-blind people need this information)

- Customizing your badge photo adds security policy complications.

7
dlevine 53 minutes ago 0 replies      
I was an intern at MS in 2001, and this weekend when I was going through my box of old memorabilia, I found a badge identical to the "current" badge.

Probably time for a redesign...

8
Greenisus 5 hours ago 0 replies      
I love it, but I have one suggestion: show the employee's name on both sides.

For some reason, I have a hard time remembering names (but never forget faces), so I often glance at badges to try and remind myself what the person's name is. It's always a bummer when the badge is flipped around and I can't tell who it is.

9
pmorici 6 hours ago 3 replies      
Cool, but they probably won't be able to adopt that design. Badges like this are made with special printers which have a minimum margin, which is why most/all badges you see out there have that ugly white margin around them.
10
Scaevolus 6 hours ago 0 replies      
First names aren't necessarily the 'most important' -- especially for names that aren't of European origin.

Employee number isn't sensitive information.

11
zaidf 6 hours ago 0 replies      
Did you consider left aligning the name? By centering the name, you're not letting the eyes get trained on where to look instinctively. Someone named "Jim" has a much different starting point than "Mohammed".

Also, I'd have made the last name much smaller.

12
MrHeartBroken 6 hours ago 1 reply      
On the note of minimalism, the actual Apple badge looks like this: http://cdn-static.cnet.co.uk/i/c/blg/cat/mobiles/jordan-id.j...
13
dashster18 6 hours ago 0 replies      
The colorful window logo isn't the Windows logo. Microsoft recently changed their logo to that in 2012.
14
icambron 4 hours ago 0 replies      
The only thing I don't like here is the way the last name is printed. I'm all for emphasizing first names, but there's something about the way it's printed that makes me read it as a title. Like Ahmed is a Balkan at Microsoft.
15
Eleutheria 34 minutes ago 0 replies      
Nice! Now put some QR codes on it so people can get bitcoin donations.

Oh, and NFC tag for auto check-in, computer log in, cafeteria, snacks, etc.

16
b2themax 6 hours ago 0 replies      
I don't like it. Its look is too reminiscent of Google's design language, especially their 'circles' in Google+. The look is very soft, while Microsoft's design language (formerly metro) is much more modern.
17
richardwhiuk 6 hours ago 0 replies      
So what you've changed is to center the information (not a big deal), move a whole load of useful information to the rear, and use a more up-to-date logo.

Seems pointless. Microsoft have updated their logo four times in the past four years. People update their favourite photo once every couple of months. It also requires that corporate directory services allow updating of photos significantly more often, just so people can have a photo of themselves they like. Finally, you've ignored the practicalities of printing logos. Sounds like a typical design with no understanding of the limitations or requirements involved.

Most of the offence here seems to be because you didn't like the photo (because it's passport / security style instead of Facebook / Instagram esque?) and the 8-bit colour printing. Both of these are intrinsic to the requirements caused by printing badges.

By the way, the logo was valid in 2012, not just 1998 - http://en.wikipedia.org/wiki/Microsoft#Logo

18
urs2102 6 hours ago 0 replies      
I'm a fan of the colors and typeface, but doesn't Microsoft's design style push for more of a rectangular/angular look all around?
19
codex 6 hours ago 2 replies      
- The photo is the most important feature of the badge; for security reasons, it should be as large as possible.

- First and last names are not as important as one's email address

- The logo is a security risk; should a badge go missing, it's a clue as to where to (mis)use the badge.

20
willcodeforfoo 6 hours ago 0 replies      
This makes me think large companies like this should invest just a tad bit more in taking quality photos. Ditch the DMV backdrop, on-camera flash and low quality photo and invest in a couple umbrellas and just an entry-level dSLR. As often as they are seen, you should make people feel good about it.
21
exo_duz 2 hours ago 0 replies      
Great job! I'm a big fan of minimalistic design, which, unfortunately, most MS products aren't. Looking forward to more design ideas from you :)
22
codex 3 hours ago 0 replies      
This particular design doesn't take a lot of skill to create, and I'm not sure the author knows what problem to solve. The triviality of the redesign should be embarrassing to the creator.
23
magic_haze 6 hours ago 6 replies      
I don't understand why a badge is necessary in the first place. Won't NFC on a phone suffice?
24
waxy 6 hours ago 0 replies      
Just as an off-topic aside: not even Microsoft employees use Outlook.
25
greatsuccess 1 hour ago 0 replies      
The badges won't look like this with the standard security-camera mugshot that security offices use, and the picture is probably too small to make them happy as well.

Other than that nice job. I don't think it will be implemented.

26
joncp 4 hours ago 0 replies      
It's a pretty badge, but unfortunately it's a security risk. Putting identifying information on there is an opening for social engineering attacks. The employee name and anything tying it to Microsoft shouldn't be on there. Really, just the photo and badge id (not the employee ID) should be there. If there's a "return to" address, it should be a nondescript PO box that's not in Redmond.

Edit: Also, the employee number shouldn't be on there for the same reason.

3
As much Stack Overflow as possible in 4096 bytes danlec.com
242 points by df07  10 hours ago   47 comments top 20
1
mberning 9 hours ago 3 replies      
Very impressive. I wish extreme performance goals and requirements would become a new trend. I think we have come to accept a certain level of sluggishness in web apps. I hate it.

I wrote a tire search app a few years back and made it work extremely fast given the task at hand. But I did not go to the level that this guy did. http://tiredb.com

2
jc4p 9 hours ago 3 replies      
Some of the workarounds he mentions at the end of his Trello in 4096 bytes[1] post seem really interesting:

- I optimized for compression by doing things the same way everywhere; e.g. I always put the class attribute first in my tags

- I wrote a utility that tried rearranging my CSS, in an attempt to find the ordering that was the most compressible

[1] http://danlec.com/blog/trello-in-4096-bytes
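To see why consistent ordering helps, here is a rough sketch (not from the post; the tag text and class names are made up for illustration) that uses Python's zlib to compress two otherwise-identical chunks of markup, differing only in whether the class attribute always comes first:

    import random
    import zlib

    random.seed(0)

    # Identical markup; the only difference is whether the class attribute
    # always comes first or the order varies from tag to tag.
    consistent_tags = []
    mixed_tags = []
    for i in range(500):
        consistent_tags.append('<a class="question" href="/q/%d">link</a>' % i)
        if random.random() < 0.5:
            mixed_tags.append('<a class="question" href="/q/%d">link</a>' % i)
        else:
            mixed_tags.append('<a href="/q/%d" class="question">link</a>' % i)

    consistent = "".join(consistent_tags).encode()
    mixed = "".join(mixed_tags).encode()

    print("consistent attribute order:", len(zlib.compress(consistent, 9)), "bytes")
    print("mixed attribute order:     ", len(zlib.compress(mixed, 9)), "bytes")

With a consistent order the compressor can reuse longer repeated matches, which is the effect being exploited here.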

3
cobookman 5 hours ago 0 replies      
First off, nice work. I've noticed that St4k is loading each thread using ajax, whereas stackoverflow actually opens a new 'page', reloading a lot of web requests. Disclaimer: I've got browser cache disabled.

E.g. on a thread click:

St4k:

GET https://api.stackexchange.com/2.2/questions/21840919 [HTTP/1.1 200 OK 212ms]18:02:16.802

GET https://www.gravatar.com/avatar/dca03295d2e81708823c5bd62e75... [HTTP/1.1 200 OK 146ms]18:02:16.803

stackoverflow.com (a lot of web requests):

GET http://stackoverflow.com/questions/21841027/override-volume-... [HTTP/1.1 200 OK 120ms]18:02:54.791

GET http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min... [HTTP/1.1 200 OK 62ms] 18:02:54.792

GET http://cdn.sstatic.net/Js/stub.en.js [HTTP/1.1 200 OK 58ms]18:02:54.792

GET http://cdn.sstatic.net/stackoverflow/all.css [HTTP/1.1 200 OK 73ms]18:02:54.792

GET https://www.gravatar.com/avatar/2a4cbc9da2ce334d7a5c8f483c92... [HTTP/1.1 200 OK 90ms]18:02:55.683

GET http://i.stack.imgur.com/tKsDb.png [HTTP/1.1 200 OK 20ms]18:02:55.683

GET http://static.adzerk.net/ados.js [HTTP/1.1 200 OK 33ms]18:02:55.684

GET http://www.google-analytics.com/analytics.js [HTTP/1.1 200 OK 18ms]18:02:55.684

GET http://edge.quantserve.com/quant.js

....and more....

4
timtadh 33 minutes ago 1 reply      
funny, his compressor must do a better job than mine:

    $ curl -s http://danlec.com/st4k | wc
         14      80    4096
    $ curl -s http://danlec.com/st4k | gzip -cd | wc
         17     311   11547
    $ curl -s http://danlec.com/st4k | gzip -cd | gzip -c | wc
         19     103    4098

5
Whitespace 8 hours ago 1 reply      
I'm curious if a lot of the customizations re:compression could be similarly achieved if the author used Google's modpagespeed for apache[0] or nginx[1], as it does a lot of these things automatically including eliding css/html attributes and generally re-arranging things for optimal sizes.

It could make writing for 4k less of a chore?

In any case, this is an outstanding hack. The company I work for has TLS certificates that are larger than the payload of his page. Absolutely terrific job, Daniel.

[0]: https://code.google.com/p/modpagespeed/

[1]: https://github.com/pagespeed/ngx_pagespeed

edit: formatting

6
stefan_kendall 2 hours ago 0 replies      
Maybe part of the story here is that gzip isn't the be-all-end-all of compression. A lot of the changes were made to appease the compression algorithm; seems like the algorithm could change to handle the input.

A specialized compression protocol for the web?

7
TacticalCoder 1 hour ago 0 replies      
In a different style, the "Elevated" demo, coded in 4K (you'll have a hard time believing it if you haven't seen it yet):

http://www.youtube.com/watch?v=_YWMGuh15nE

8
SmileyKeith 6 hours ago 0 replies      
This is amazing. As others have said I really wish this kind of insane performance would be a goal for sites like this. After trying this demo I found it difficult to go back to the same pages on the normal site. Also I imagine even with server costs this would save them a lot of bandwidth.
9
blazespin 8 hours ago 0 replies      
Very impressive! So incredibly fast.

My only thoughts are that search is the real bottleneck.

10
nathancahill 9 hours ago 0 replies      
This is really fast! Love it. I thought the real site was fast until I clicked around on this.
11
nej 9 hours ago 0 replies      
Wow navigating around feels instant and it almost feels as if I'm hosting the site locally. Great job!
12
derefr 6 hours ago 0 replies      
> I threw DRY out the window, and instead went with RYRYRY. Turns out just saying the same things over and over compresses better than making reusable functions

This probably says something about compression technology vs. the state of the art in machine learning, but I'm not sure what.

13
Jakob 7 hours ago 0 replies      
I didn't realize that the original site is already quite optimized. With a primed cache the original homepage results in only one request:

    html ~200KB (~33KB gzipped)
Not bad at all. Of course the 4k example is even more stunning. Could the gzip compression best practices perhaps be added to an extension like mod_pagespeed?

14
afhof 7 hours ago 1 reply      
4096 is a good goal, but there is a much more obvious benefit at 1024 since it would fit within the IPv6 1280 MTU (i.e. a single packet). I recall hearing stories that the Google Homepage had to fit within 512 bytes for IPv4's 576 MTU.
15
jonalmeida 9 hours ago 0 replies      
Pages load almost instantly, as if it's a local webserver - I'm quite impressed.
16
masswerk 3 hours ago 1 reply      
And now consider that 4096 bytes (words) was exactly the total memory of a DEC PDP-1, considered to be a mainframe in its time and featuring timesharing and things like Spacewar!.

And now we're proud to have a simple functional list compiled into the same amount of memory ...

17
nandhp 8 hours ago 3 replies      
Code is formatted in a serif font, instead of monospace, which seems like a rather important difference. Otherwise, it is quite impressive.
18
dangayle 8 hours ago 1 reply      
I'd love to see a general list of techniques you use, as best practices.
19
iamdanfox 5 hours ago 0 replies      
The simpler UI is quite pleasant to use isn't it! I wonder if companies would benefit from holding internal '4096-challenges'?
20
jpatel3 9 hours ago 0 replies      
Way to go!
4
REPL to any browser in the cloud github.com
22 points by woloski  2 hours ago   3 comments top
1
yeukhon 2 hours ago 1 reply      
Interesting. So I can access the web console right?

I guess if people turned this into a server it would be awesome.

Sometimes I want to test cross-browser compatibility. For example, Firefox uses textContent instead of innerText. I can easily verify this without having to have Firefox and Chrome (and IE, Opera, etc) on the same machine at the same time.

5
TCP/IP over audio anfractuosity.com
69 points by deutronium  6 hours ago   29 comments top 8
1
WestCoastJustin 5 hours ago 1 reply      
There have been several threads on HN about ultrasonic networking in recent months. One was a simple chat client, called Quietnet [1]. The other was malware called BadBIOS, which has the ability to communicate over hi-def audio [2].

[1] https://news.ycombinator.com/item?id=7024615

[2] https://news.ycombinator.com/item?id=6654663

2
chrissnell 3 hours ago 0 replies      
Amateur radio folks have been doing this for a long time, since the early 1990s at least.
3
erjiang 5 hours ago 3 replies      
As an aside, the tun/tap interface in Linux is a fantastic way to muck around with lower-level networking without getting into the kernel or hardware. Essentially, it creates a virtual network interface, except instead of hardware, data goes to your userspace program. You can then just do a read() to grab packets and do whatever you want with them, using Python, Ruby, or any other programming language.

I wrote a prototype proxy in Go that can split network traffic over two Internet connections using some hacked-together tun code[0] and everything happens in userspace.

[0] https://github.com/erjiang/tuntuntun
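As a rough illustration of that read() loop (a minimal sketch, not the commenter's code; it assumes Linux, root privileges, and that the interface name tun0 is free, and the interface still needs to be brought up and given an address before any traffic arrives):

    import fcntl
    import os
    import struct

    # Constants from <linux/if_tun.h>
    TUNSETIFF = 0x400454ca
    IFF_TUN = 0x0001    # layer-3 tun device (raw IP packets, no Ethernet header)
    IFF_NO_PI = 0x1000  # don't prepend the extra packet-info header

    # Open the clone device and attach a new interface named tun0.
    tun = os.open("/dev/net/tun", os.O_RDWR)
    ifr = struct.pack("16sH", b"tun0", IFF_TUN | IFF_NO_PI)
    fcntl.ioctl(tun, TUNSETIFF, ifr)

    # Each read() returns one IP packet that was routed to the interface.
    while True:
        packet = os.read(tun, 2048)
        print("%d bytes, starts with %s" % (len(packet), packet[:16].hex()))

From there, each packet can be handed to whatever userspace transport you like.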

4
yeukhon 2 hours ago 2 replies      
Side question: UDP seems to be the preferred protocol for VoIP-type of services. While the size of a UDP message is generally much larger than that of TCP, and some missed UDP messages are acceptable (analogous to cellular communication some signal weakness/loss is acceptable), any other strong reasons people prefer UDP over TCP in VoIP?
5
jijji 26 minutes ago 0 replies      
In the old days, they used to call this a "modem". Long live the Hayes 1200 baud modem of the early 1980's.
6
Cogito 4 hours ago 1 reply      
I couldn't see any information on the linked page - anyone know what the latency and throughput characteristics of this set up are?

Would be very interesting to understand how audio attenuation impacts the TCP/IP connection.

7
rafavega 4 hours ago 3 replies      
How is it that they receive and transmit ultrasonic frequencies with a computer sound card? Is there not a low-pass filter at around 22KHz on inputs and outputs of all sound cards?
8
flibertgibit 2 hours ago 0 replies      
This to me is incredibly awesome. The question is, could you use radio frequencies meant for speech/music for it? I think so, since morse code is sent on these same frequencies. What about streaming video over short wave radio?
6
Introducing Bing Code Search for C# msdn.com
204 points by vwilson  10 hours ago   174 comments top 24
1
maresca 9 hours ago 4 replies      
C# and .NET get a bad rap for being created by Microsoft. But one thing that can't be ignored is how polished their development tools are. I absolutely love coding in Visual Studio.
2
bruceboughton 7 hours ago 1 reply      
It's interesting that the example code shown is so clunky. It uses try-finally to manually dispose the resource when the idiomatic way would be to wrap it in a using block:

  using (var file2 = new StreamReader(file))
  {
    while ((line = file2.ReadLine()) != null)
      Console.WriteLine(line);
  }
It's possible this is a badly picked example but it shows one big downside of this -- the lack of discussion about the sample code that you would normally get at e.g. Stack Overflow or a blog.

3
rjzzleep 7 hours ago 0 replies      
Mandatory cross-platform CLI alternative: howdoi [1]

vim howdoi plugin [2]

one of the few emacs plugins [3]

not sure if there are any other plugins, but that should cover a decent portion of interest here.

[1] https://github.com/gleitz/howdoi

[2] https://github.com/laurentgoudet/vim-howdoi

[3] https://github.com/arthurnn/howdoi-emacs

EDIT: sublime version https://github.com/azac/sublime-howdoi-direct-paste

4
guiomie 5 hours ago 1 reply      
Every time something C# is posted it's all about MS and why some like or dislike C# or .NET; rarely are the comments related to the actual article.

Personally, I think this new feature is cool, but I've come to realise that my Visual Studio freezes way more than it did initially. I think this might be because I've got a few addons installed (ex: Demon, Resharper, etc.). I wonder what the overall performance impact of this will be.

5
codygman 18 minutes ago 0 replies      
This is very cool, but I fear that many will label themselves as experienced programmers with nothing but the knowledge of using this tool to piece snippets together.
6
seanmcdirmid 5 hours ago 1 reply      
Science fiction becomes reality:

http://en.wikipedia.org/wiki/A_Deepness_in_the_Sky

> The Qeng Ho's computer and timekeeping systems feature the advent of "programmer archaeologists":[2] the Qeng Ho are packrats of computer programs and systems, retaining them over millennia, even as far back to the era of Unix programs (as implied by one passage mentioning that the fundamental time-keeping system is the Unix epoch:

> Take the Traders' method of timekeeping. The frame corrections were incredibly complex - and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth's moon. But if you looked at it still more closely ... the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind's first computer operating systems.

> This massive accumulation of data implies that almost any useful program one could want already exists in the Qeng Ho fleet library, hence the need for computer archaeologists to dig up needed programs, work around their peculiarities and bugs, and assemble them into useful constructs.

7
AlaShiban 9 hours ago 6 replies      
If you want more info about the extension, let us know. There are a lot of cool contextual and semantic pieces in there that make it a smarter search.
8
curveship 6 hours ago 4 replies      
As someone who has recently been hiring .NET engineers, I have to admit that this inspires mixed feelings. On the one hand, I can see huge power from combining AI and search with the structured context of programming. On the other hand, a disappointing number of the people we interviewed weren't software engineers, they were IntelliSense engineers. We'd give them a problem, and their first instinct was to hit a period and start hunting through the method options that IntelliSense gave them to see if one got them closer to their goal. Instead of stepping back and thinking about the problem generally, they'd try to solve it by stringing together IntelliSense suggestions, like stepping stones across a pond.
9
nrao123 9 hours ago 2 replies      
What a coincidence!

Fred Wilson posted on this very topic this morning: http://www.avc.com/a_vc/2014/02/inspired-by-github.html

From his post:

"I was at a hackathon up at Columbia University last weekend and one of the hacks was a development environment that automatically queried StackOverflow and GitHub as you are writing code so that you always have in front of you the answers to the questions you are most likely to ask. The developer who did the hack introduced it by saying something like "programming these days is more about searching than anything else". That reflects how collaborative the sharing of knowledge has become in the world of software development as a result of these cloud based tools for developers."

10
Nate630 9 hours ago 1 reply      
Visual Studio sure has lots of neat extensions that add tons of value. http://vswebessentials.com/ is my fav.
11
gesman 9 hours ago 1 reply      
If for nothing else - it shows MSFT's commitment to the language, to the platform and to the framework.
12
ykumar6 2 hours ago 1 reply      
One problem with this approach is it requires a change in user behavior. Unless Visual Studio can get a developer to an answer every single time, it may not be sticky enough to form a habit.

Google search reliably produces an answer each time, regardless of what the question or problem is

This is why search is very sticky (and habit forming). MSDN (and even Stackoverflow or Github) suffer from this problem because they only have a subset of content that developers want/need. Google brings all these sources together into a single search.

13
forgotAgain 8 hours ago 1 reply      
Visual Studio could be so much more for Microsoft. Why do they need a phone? Let them make Visual Studio cross-platform and developers would come back to Windows as their base platform. All other IDEs pale in comparison. Too bad that advantage is being wasted as far as new developers go.
14
adventureloop 7 hours ago 0 replies      
This is really cool; I find the Microsoft documentation particularly terrible.

I always had a chuckle when the first result for a simple C# concept isn't an MSDN site. I also chuckle when the result is a forum post from 2005 that drops me into a link loop.

Thankfully I won't have to write C# for a long while. I can't say I will miss the MVC stack or the legacy burden you get with forms.

15
arikrak 5 hours ago 0 replies      
This is the beginning of a practical StackSort:

http://gkoberger.github.io/stacksort/

A lot of programming can involve Searching, but it doesn't have to involve searching with plain-text. One would have thought Google would work on this, but they closed down their code search and haven't offered anything else.

16
rl3 9 hours ago 2 replies      
It will be quite nice if this ends up yielding results for individual JavaScript frameworks some day.
17
asdf3 9 hours ago 1 reply      
Having this for Monodevelop and Unity3d would be great. Even better if we have community curated suggestions.
18
josephschmoe 7 hours ago 1 reply      
I really, really want this for IntelliJ/Android Studio. Actually, everything, can I get this for everything?
19
Navarajan 1 hour ago 0 replies      
I will wait for "Google Code Search for C#"
20
banachtarski 6 hours ago 1 reply      
This sounds like a great way to enforce bad programming practices.
21
arnie001 9 hours ago 0 replies      
Would like to see this for C++ soon enough. Looks great in the demo.
22
k_bx 6 hours ago 0 replies      
I want this for elisp/emacs!
23
leonidr 5 hours ago 1 reply      
We no longer need developers, we need good searchers.
24
kyberias 9 hours ago 0 replies      
But... that is not test driven!
7
I Quit My Job To Teach People About Hardware chrisgammell.com
85 points by ChrisGammell  7 hours ago   12 comments top 10
1
noonespecial 6 hours ago 0 replies      
I'm right where you are, brother. I was more or less content until last year when I taught a beginners robotics and electronics class to local kids. I decided then that this was something I had to do and not just on a "free-time" basis. I'm trying to figure out how to make the jump myself.

I'll be watching your adventure with great (and also selfish) interest. Very best of luck.

2
dccoolgai 2 hours ago 0 replies      
That's awesome - I have been a fan of Gammell's work (ChipTV, etc.) for a while. I've been more and more convinced that the next billionaire will be from hardware.
3
mathattack 1 hour ago 0 replies      
Bravo!!!

The one thing you may find is that the old workaholism is hard to shake, but at least you own the calendar.

4
janineyoong 3 hours ago 0 replies      
Chris has been a longtime supporter of our mission to open up part data for hardware hackers at Octopart, so we know how much of a passion project this is. Congrats!!
6
derwiki 7 hours ago 0 replies      
I've known Chris since undergrad and this is going to be great. Congrats and good luck!
7
mmilano 6 hours ago 0 replies      
It's neat to open HN and see very relevant topics. I'm taking Chris' Contextual Electronics course. It's structured around an 8 week period, but fortunately for me, it's still effective going at my own pace since life & work has been busy. Congrats Chris, and Thanks!
8
seddona 5 hours ago 0 replies      
There are so many people dipping their toes into hardware now, it's great to see a course designed around the practical aspects for them.
9
contingencies 2 hours ago 0 replies      
Congratulations on the best decision of your life.

I sent you a message inviting you to participate in a project over here in Asia.

10
VLM 6 hours ago 1 reply      
Looks like a good plan. Only slightly off topic: I LOLed at the electronics / milling machine connection. In our generation those two go together just like ham radio and amateur film photography went together in my dad's generation. Sherline mill with Geckodrive-powered steppers here.

This is probably some kind of trend or whatever that a startup could bounce off of.

8
Why we left AngularJS sourcegraph.com
69 points by route66  5 hours ago   61 comments top 23
1
dchuk 4 hours ago 7 replies      
A lot of people seem to think that Single Page App frameworks like Angular/Ember are suitable for use on the public facing client side. I've always believed that SPAs are meant to be behind a login, where you don't have to also deal with spiders and other sub-optimal browsing devices, and you have a little bit more wriggle room when it comes to routing and web history.

Just look at Blogger...their client-side rendering is annoying as all get out. It's just a blog post, render it server side and give me the content, then sprinkle on some gracefully degrading JS on top to spice it up.

I say this as a huge proponent of Angular who uses it for all his web app projects who also wouldn't ever use it on a public facing application.

2
randomdrake 4 hours ago 2 replies      
This may as well be titled: "Why we're paying for re-discovering that client-heavy apps are hard or bad." Angular, or <insert hot new JavaScript framework>, doesn't particularly matter.

Twitter learned it[1].

Lots of us learned it when we were experimenting as Web 2.0 was being born. Things were far more obvious, far more quickly then, as bandwidth and resources weren't anywhere near what they are today. Back then, we quickly realized that just a couple of delayed asynchronous calls could cause your app to slow to a halt and feel sluggish.

That's not to say it can't be done[2], it's just to say that, thus far for the most part, folks end up discovering reasons why they didn't "do it right" too late over and over. I could be wrong, but I feel like there's been a few posts to Hacker News within the past couple months with similar sentiment.

When people start suggesting client-side rendering, I usually ask something along these lines:

Why on earth would you leave something as simple as textual document model creation up to the client's 5-year-old machine that is busy playing a movie, a song, downloading a torrent, doing a Skype call, and running 15 other tabs, when your server is sitting around twiddling its thumbs with its 8 cores and scalable, SSD- and memory-heavy architecture?

[1] - https://blog.twitter.com/2012/improving-performance-on-twitt...

[2] - http://www.quora.com/Web-Development/What-are-the-tradeoffs-...

3
carsongross 5 hours ago 1 reply      
I've been working on an Angular alternative called IntercoolerJS:

http://intercoolerjs.org/

The idea is to keep a lot of the advantages of the traditional web development model, but, via HTML5-style attributes, RESTful URL design and partial driven UX, achieve a better UX.

It's not for everyone or for every problem, and it is still in pre-alpha (we are going to change from a preamble to HTTP headers for meta-directives, for example) but, if you find Angular too heavy-weight and foreign for your UI, it might be of interest.

Please contact me if you are interested in contributing.

4
akbar501 3 hours ago 0 replies      
This is really a case of picking the wrong tool for the job. __This is in no way a slight of the author__...b/c I have done worse on more than one occasion, so thanks for sharing.

To anyone reading, you really should understand your workload before picking tools. And, you need to understand the difference between Web Application vs. Web Site: Which are you building?

Server-side rendering is the winner for content sites (as mentioned by the author). Beyond initial rendering, a server-side solution allows for more caching. Depending on the site you could even push a good amount of file delivery to a CDN. In the end the author switched to Go, but Node.js + Express, RoR, PHP, Java with Play, etc. would all work just as well.

Next, are you CPU bound or network bound or I/O bound. If you're writing an application that requires heavy calculations that utilize massive amounts of CPU, then pick the appropriate framework (i.e. not Node). If you are I/O bound then Node may be a great solution.

Client-side rendering (such as Angular/Backbone/etc) really shines when you need a web application (not a web site). These frameworks are best when the application code is significant relative to the data, such that many small JSON requests deliver better overall performance. Think of a traditional desktop application or native mobile app where the application code is in MB, but the amount of data required per request is in bytes. The same logic applies to web apps.

A few areas where problems such as what the author experienced emerged from blanked statements about technologies:

1. Gulp vs. Grunt: I use Grunt. I may switch to Gulp. But seriously, which one is "more complex" or "faster" can be quantified. Lots of people pick the wrong technology because the web is littered with echo'd opinion statements. Exchange "more complex" for: project A has a config file with X number of lines, while project B has a configuration of Y number of lines for the same task. Or project A uses JSON for its configuration while project B uses YAML.

2. "Or we could have used a different framework) - with a link to Meteor" - No please do NOT use Meteor for your site. I love Meteor and want it to succeed, but it is not the optimal choice for a content heavy site where each user views a large amount of data. As mentioned above, use a server-side rendering solution (like you did with Go), then cache, then push to a CDN. Problem solved. Meteor is awesome and is a great real-time framework. Use it when you need real-time capabilities...but not for a content heavy, static site.

> but they just weren't the right tools for our site.

This could have been the title or first sentence and would have delivered 100% of the message if the reader read no further.

A lot of these articles about why we changed from technology A to B could be much improved if the original decision making was documented (not just the switch). As in we picked project A because we thought it would deliver A, B and C benefits based on our applications required capabilities. However, our application really needed capabilities M, N and O, which project A was not a good fit for. So, we switched to project B and experienced the following improvements. Therefore, it can be concluded that if your application needs M, N and O then project B will be a better fit.

5
shirro 4 hours ago 0 replies      
I need to stop clicking on the "why we left x for y" articles on HN. Mostly people have picked the wrong tool for the particular job and the articles are just an embarrassment.

Obviously SPAs take a lot of extra work to make search engine friendly and are probably going to be the wrong tool for the job for any site which requires that. Much of the web isn't searchable and doesn't want to be searchable. If you are writing a web app to solve some business problem which sits behind a login angular really isn't a problem.

Think of the millions of poorly maintained and inflexible VB and Java business apps out there that are due to be replaced and the employees who are wanting to do business on the road with their macbooks, chromebooks and tablets. There is your market for Angular.

6
andyl 4 hours ago 1 reply      
We've transitioned from Angular to ReactJS with great success. Much smaller learning curve. Using Backbone to handle the models and React for the view is a great combination.
7
danabramov 4 hours ago 1 reply      
We're using Backbone+React so this may not be applicable.

However...

"You can separate your dev and production build pipelines to improve dev speed, but that's going to bite you later on."

In my experience, you must separate dev and prod pipelines. It has never bitten me because I make hundreds of dev (local) and dozens of kinda-prod (staging server) builds a day.

For dev builds, Grunt just compiles LESS but doesn't touch the scripts so there is literally no delay there. In dev environment, we load scripts via RequireJS so there is no need to maintain a properly sorted list of scripts too.

For production, we concat them with grunt-contrib-requirejs with `almond: true` so RequireJS itself is stripped out completely. Production build takes time (10 to 15 seconds; we're also running Closure Compiler), but it's never a problem.

Even adding JSX (Facebook React) compilation didn't cause a slowdown for dev builds because grunt-contrib-watch compiles them on change and puts the output into a mounted directory.

8
dlau1 4 hours ago 1 reply      
Have you tried react.js [1] ? If you use node to serve your content, you can pre-render the initial state of your app. When everything loads up, react will take a checksum of the rendered portions to ensure that it doesn't re-render the same DOM. This should come close to solving your SEO/test issues with minimal work.

In my opinion, a setup like this is close to what the next big wave of frameworks will use.

You can break your layout up into parts and have a site that is partially dynamic and partially static. You just pass the html that react renders to your templating engine.

Getting everything setup correctly can be a little hassle, but gulp is fast enough when doing a watch on the compilation step. Of course, because everything is javascript you share the exact same component code between client and server.

This is a good example that helped me a bit[2]

[1] http://facebook.github.io/react/

[2] https://github.com/mhart/react-server-example

9
cies 4 hours ago 0 replies      
I'm currently playing with Fay (haskell2js compiler)... It's awesome.

It type checks like Haskell and allows code sharing between the server side and client side of the app. This means I can use code to generate a complete HTML site (for SEO purposes) when the URL is hit directly, and modify the DOM from there once the app is loaded... with the same code!

Obviously this code sharing is mostly interesting for apps written in Haskell. But I'm so excited about it that I had to share... :)

G'luck! The "javascript problem" (try google for that) is a hard one.

[edit] I call it "playing with Fay", but I'm certain this will end up in production for me.

10
m0th87 2 hours ago 0 replies      
Try prerender [1]. We use it in production with backbone. This, combined with keeping most content not client-rendered, has alleviated most of our issues.

In the long-term I'd love to see a web framework that uses react on the server-side, kind of like how rendr uses backbone on the server-side [2]. Seems to make sense because react works against a virtual DOM, so it would allow you to avoid the hacky ways of working with an actual DOM in node.

1: https://github.com/prerender/prerender

2: https://github.com/airbnb/rendr

11
jbeja 5 hours ago 1 reply      
I am curious to know why you went for a full JS app approach from the beginning, knowing that your app would be very dependent on content that needed to be indexed by search engines?
12
wheaties 1 hour ago 0 replies      
I just want to say thank you to the author for showing me backbone.analytics. Absolutely fantastic and everything I had been looking for recently. Funny how one thing teaches you about something else.
13
kirushik 4 hours ago 0 replies      
For the sake of correctness, recent versions of PhantomJS are not dependent on Xvfb or any other variant of X, and there are grandmotherly prepared binary builds on the official website (kinda statically linked, so no dependency on WebKit as well). Not that it changes the author's arguments that much, but just worth pointing out.
14
ben336 2 hours ago 0 replies      
Maybe I'm missing something but I don't get "4. Slow, flaky tests". I understand that Selenium et al can be a pain, but how does server side generated code excuse you from using it? Are you only validating the html structure and not testing any of the interactive capabilities?
15
jaunkst 4 hours ago 1 reply      
I have to disagree with most of this article.

1. Bad search ranking and Twitter/Facebook previews: Don't force your public side to be strictly Angular. Serve normal pages and use Angular for your interactive components. Let Google index a well-formed DOM. Use a full Angular stack for your non-public-facing application (a SaaS application). You don't want to index this anyway.

2. Flaky stats and monitoring: Use event-driven metrics from your API and/or client side. Track everything in the sense of user, controller, action, params. Blacklist sensitive data. Derive metrics with funnels: user did x actions, returned, and subscribed. Conversion! It's all there, just understand your events.

3. Slow, complex build tools: You're not limited to Grunt or Node. For example, we use Rails with our own build scripts and generators to build full-stack Angular apps. Easy breezy.

4. Slow, flaky tests: There is room for improvement, but Jasmine and Phantom can get the job done. Let's not forget we're also testing our API. Use your go-to testing framework and let Jasmine/PhantomJS do the client frontend testing.

5. Slowness is swept under the rug, not addressed: Precompile your Angular templates and only wait for API responses. Don't fragment your page load into separate requests. Resolve all the required data beforehand in the route provider.

16
Eric_WVGG 5 hours ago 2 replies      
I'm as big an Angular evangelist as anyone, but that bit about search rankings is an absolute killer.

You talk about these server-side webkit parsers as tricks that slow things down, which indicates that you at least ultimately got them working. I never got that far.

17
laureny 4 hours ago 0 replies      
Since the author hints that they migrated to Go templates, this article is more about when you should render templates client-side vs. server-side than an opinion against AngularJS.
18
jakestl 5 hours ago 0 replies      
It's funny because I find myself going the other direction from server-side page generation to angular.

One of the main reasons is angular forces you to separate your controller logic from DOM manipulation. Without directives I tend to see a pile of jQuery on every page.

How do you address this?

19
ilaksh 3 hours ago 0 replies      
Take a look at the prerender.io module. Also, it is not impossible to render some types of Angular pages on the server; it is just a bit of a pain. Google "angular on server". If you really don't need an interactive app then I would consider generating static pages when the data changes or when called, and then not updating them more often than you actually need to.
20
balls187 2 hours ago 0 replies      
This isn't about AngularJS in particular. This is about using a client-side JS app framework.

Substitute any other flavor and the same problems exist.

21
kin 3 hours ago 0 replies      
Wouldn't ng-cloak solve the partial content loading issue? Actually, there are a ton of ways to skin that cat.
22
fantastical 3 hours ago 0 replies      
Are there any good guides on how to organize JavaScript on non-SPAs? I have a weird web development background in that I've only ever worked on SPAs, so I haven't ever done JavaScript work on a server-side rendered app.
23
miralabs 2 hours ago 0 replies      
just a case of using the right tools for the right job...move along
9
By The Time You Give Them a Raise, They're Already Out The Door quora.com
301 points by ChrisBland  6 hours ago   225 comments top 36
1
tmoertel 3 hours ago 9 replies      
On the other hand, if you're an employee and not getting paid what you think you're worth, don't jump to the conclusion that you're being exploited or disrespected, and don't jump to the conclusion that you have to leave in order to get market rate. If you're happy with the team and with the work, consider just asking for what you want. Founders are crazy busy and, believe it or not, sometimes lose sight of "little" things like comp. Talking to them may be all it takes.
2
GavinB 4 hours ago 4 replies      
I've seen this claim over and over, but never any sort of citation or numbers. Lots of people get competitive offers, get a counteroffer, and then stay for years. I know a number of them personally.

The reason it seems like they never stay is that they don't talk about the offer they didn't accept. The negotiation happens over the course of a few days, and they don't mention it to the other employees. So there's a big bias in how frequently you hear about the negotiation. Whereas if they leave for another company, of course everyone knows about it.

3
Jormundir 2 hours ago 4 replies      
Though not the focus of the article, I have a huge problem with the idea that leaders should pick out their best employees and reward them, while leaving the others treading water on their own.

I am in the middle of watching a team constantly churn, unable to retain many talented developers, specifically because the managers are only rewarding those they think are the best engineers, but are actually rewarding very mediocre employees they trust. My advice to everyone, especially managers, is do not try to pick out your "best" employees and reward them exclusively!

My team has hired 6 engineers over the last 2 years; the distribution has been pretty even: 2 really great engineers, 2 decent ones, and 2 sub-par ones. Going into the 2 years, the team had 5 engineers: 1 really great, and the rest swimming between mediocre and great. The two managers try to follow the advice in the article, and it has been disastrous. Of those 6 engineers they hired, the two really great ones were out the door in 8 months and 1 year respectively, and just last week the 1 great engineer already on the team announced he was leaving. The problem feels like failure to launch. These talented people come in, are doing good work, and then feel there's no room to grow. The source is obvious: the managers have picked out 2 of the mediocre engineers who they feel are their "best", the "talent" doing the most work and attracting other great "talent". The "talent" is rewarded with the big projects, which in turn makes management think they're working harder, while the other engineers are left with the scraps. The result is simple -- genuinely great engineers take a few steps in the door, quickly realize the problem, and turn around to get out the door as fast as they can. It's really sad.

The advice of the article is good for the most part, just really be incredibly careful about choosing your "best" as a manager. It's far better to make sure you're fostering your whole team.

4
krstck 4 hours ago 7 replies      
Can we quit with the "rock stars", "ninjas" and all that? We're not teenage boys, stop talking to us as though we are.
5
JunkDNA 2 hours ago 7 replies      
This thread is both illuminating and depressing for me. I hire engineers at an academic medical center who work on really tough biomedical problems. Let's just say that I would have to move heaven and earth to get annual percentage raise amounts that are being thrown around here. I wonder how industries like healthcare can hope to have the best people with this job market. At some point, even if you are doing work that really matters in a big way, you can't be stupid about your career and leave money on the table. I wonder if this further drives non-IT focused organizations to SaaS offerings since they can't get talent to do things in house?
6
sbt 3 hours ago 2 replies      
If this contains one great truth it's this

> By The Time You Give Them a Raise, It's Too Late.

I have seen this happen often. If you get a higher offer from a second company, you will think that the first employer paid you below par, which leads to a feeling of being taken advantage of. At that point, it doesn't really help if the offer is countered, the feeling of being exploited has already taken root.

Unfortunately, most managers just don't get this, or they somehow believe that they can counter offer. Maybe that works in sales/marketing, who knows, but I think it's different in dev.

7
kabdib 3 hours ago 1 reply      
What drove me from Microsoft was:

- A culture of burnout (really, really bad death marches, mostly because upper management had fucked up by dithering about what products to make)

- A lack of respect for engineering quality (when the build for your project is broken for weeks at a time, you'd expect management to get a clue -- I mean, what are managers for? -- rather than ask about how things are going with the schedule)

- A lack of respect for your time, realized as a reluctance to buy equipment (I got my build times down to two hours, from four hours, by buying my own computers. Yup, I probably spent $5K on computers while I was at MS because my managers didn't see the benefit in buying faster hardware), and by scheduling tons of nonsense meetings.

- Politics (oh God, the politics). MS needs to fire about half of its "partner" rank people. There are good eggs there, really smart people, but then there are the ones who . . . aren't. (When they do get around to axing half of the partners, I'm betting it'll be the wrong half).

A year and a bit later, I'm at a place where they realize that the most valuable thing is your time, and equipment is Not A Problem. It's a great thing when you're the bottleneck.

8
ryguytilidie 3 hours ago 9 replies      
My last boss, when I asked for a raise, said "Explain to me why I should give you a raise". I said nevermind, started looking for a new job and left. Shockingly, when I informed them, there was a higher offer waiting. You seriously wonder what goes on in these people's brains.
9
pcrh 5 hours ago 2 replies      
I don't know if it is the writing style or something else, but this comes across to me as being written by someone who has difficulty empathizing with their employees.

After all, the opposite of the advice presented would be to ignore the ambitions of your most important employees, underpay them, and never speak to them.

The fact that not doing the foregoing is seen as a novel insight is not encouraging.

10
adamors 5 hours ago 1 reply      
> Pay market, or above, as soon as you can. It's a sign of respect.

And just as importantly, the lack of a decent compensation/raise is a sign of disrespect. I also factor here places that expect their developers (especially young ones who advance exponentially at the beginning) to either stay at the same pay grade for years or suck it up with a 5% raise.

11
nahname 5 hours ago 4 replies      
> The thing is you can't counter. It's too late by that point.

Unless the employee is leaving for money. Junior employees HAVE to jump jobs 2-3 times to get to their market rate.

12
fnordfnordfnord 4 hours ago 1 reply      
I've jumped to a greener pasture a couple times. I always gave plenty of signals but I never made threats about leaving. He's right, by the time I've made the decision to look elsewhere and found another job, there isn't much that can be done to reverse that course of action.

Also, if you don't pay severance or have me on contract, you aren't likely to get much notice.

13
pjungwir 4 hours ago 2 replies      
I was very happy to see this one:

> Find a Growth Path for Everyone, Especially the Great Ones.

I've done a few interviews in the last few years, and whenever I ask about career path, they always stumble, even companies with 50+ employees. "We have a flat org chart." I've pretty much decided that it's up to me to advance my career via freelancing, because as an employee you hit a ceiling very fast.

14
scrabble 2 hours ago 1 reply      
I'm not getting paid as much as I would like. I've brought it up with my manager on multiple occasions. I'm now about 18 months with no raise, and I've been told that this round of performance reviews does not come with a raise either. But I'm constantly reminded of just how valuable I am to the company.

So, I've done what I think anyone else would do and have talked to someone at another company about another position. If I get an offer, I'm likely to take it. I'm highly unlikely at that point to accept a counteroffer.

It feels like I'd maintain a better relationship with my current employer by quitting for a better offer than by seeking and accepting a counteroffer and then leaving later.

Accepting a counteroffer would feel like the company would expect me to "owe" them, and I don't owe anything to any employer.

15
arecurrence 5 hours ago 2 replies      
> Pay market, or above, as soon as you can. It's a sign of respect.

This is the best advice I can give any software engineering firm. I've left companies large and small chiefly because they waited until I gave notice to bring my salary in line with my performance. If you have a top engineer and haven't given them a raise in 9 months, they are seriously considering their options.

A lot of people act like compensation shouldn't matter. All the senior people in your company care VERY MUCH about their compensation. If you become good friends with them you will see this very clearly.

You deserve it just as much.

16
alain94040 2 hours ago 0 replies      
Fine, I'll share my true story of how to negotiate a raise when you are an introvert and without being seen as a job-hopper.

Tell your boss: I'm getting unsolicited offers for 20-30% more than I'm currently making here. Can you fix that?

This has worked successfully to go from low $100K to >$130K.

It works for two reasons:

  1. It shows you know your market value - this is not a number you are making up
  2. You are not being unfaithful to your boss, you didn't go out and solicit those offers, they just happened

17
yalogin 5 hours ago 4 replies      
First off a link on Quora did not bug me about a login - weird.

Second, all this talk about hiring "rock stars" and retaining them, but I heard no one talking about bad hires. Does anyone want to share stories about bad hires and why it was a wrong decision? I believe companies put too much emphasis on hiring the correct person. I understand if it's the first few employees, but after that does it really matter? Unless the person is a real asshole (and he did not care enough to hide it during the interview) does it really matter?

18
mbesto 5 hours ago 0 replies      
This is solid. I constantly remind my co-founder he can walk out and get a $100k+ salary somewhere else (I actually told him to go interview elsewhere - and I would provide a reference - just to prove it). By doing this it opens the discussion for why he wants to stick it out, and I'm quite confident by the aforementioned actions that I know I'm not wasting his time nor is he wasting mine.
19
noonespecial 3 hours ago 0 replies      
I'd go even further. If you're in the mentality where you're trying to jjjuuust time that minimal raise to prevent desertion, you've already lost in the way the author is pointing at.

The best companies see high performing employees as systems(1) that receive relatively small amounts of money as input and produce great amounts of value in output. The more money in, the more value out. The question should be how much money they can shovel in the front end before the "unit(1)" burns out. (Active cooling via free food, daycare, and flexible schedules doesn't hurt either)

(1) Yes they see you as a simple value proposition. As an employee, you are accounted for in exactly the same way as the contract for that big Xerox printer out front. That's not necessarily a bad thing. Any company that leans too heavily on that "part of the family" schtick is a place you want to be wary of.

20
devrelm 5 hours ago 3 replies      
> There are some very tell-tell signs of someone interviewing. Out of the office at weird hours. Talking on their mobile phone on the sidewalk

I've been that employee. As my dad (a farmer) says, "the hired man wants a day off to go look for another job." There are many tell-tale signs that an employee is looking for another job, and the article is absolutely right that, until they put in their notice, it is rarely too late to change their mind.

21
programminggeek 1 hour ago 0 replies      
To be blunt, when you're earning less than $100k and someone else is offering 10-20k more than you're currently making, you're going to take that deal (and probably should). I've worked at companies with great culture and companies with crappy culture, and the reality for both was the same. When someone else values you higher, you start to feel like you're getting a bad deal and the things that you normally sweep under the rug start to grate on you.

Having a cool culture seems like the hot thing lately, and it can be a great thing, but if you use cool company culture to be cheap on salaries, you are putting money in the wrong place. I think most people at some point would rather work in shabbier offices and get paid more than have some fancy building that only serves to impress outsiders.

When you get over 6 figures, I'm sure it's the same deal, just takes larger numbers to move the needle. In either case, lack of above market compensation is going to cause talent to move if for no other reason than because they are in demand.

22
memracom 5 hours ago 0 replies      
Feedback, yeah!

It's odd how so many software companies claim to be Agile and when it comes to employer/employee relations they toss the Agile Manifesto out the window. Agile is founded on communications and short feedback loops.

Please apply this wisdom in all of your internal business affairs, not just in development activities.

23
chavesn 3 hours ago 0 replies      
This goes along with my One Universal Truth: "You get what you pay for."

The only real way to win on price is to find employees who don't know any better, and then, well, you have employees who didn't know any better. Did you really win?

24
tjmc 3 hours ago 1 reply      
Just out of interest - where's the best place to find market rates?
25
zallarak 2 hours ago 0 replies      
I believe a handful of motivated and skilled engineers is more valuable than a large team. I think it is very wise for employers to spend lots of time finding a few great engineers and motivating/retaining them with high salaries and equity exposure. A smaller yet more talented team also has a wide variety of business-level benefits, including better cultural direction, less management overhead, more accountability, etc.
26
skittles 5 hours ago 1 reply      
Article says to talk to your talent at least once a quarter. You should do it once a week.
27
31reasons 2 hours ago 1 reply      
How about this algorithm? Whenever someone leaves, present two choices to the team:

1. Divide the salary of the person who just left equally among the peers. For example, in a team of 5 developers, Joe was making 100k per year and he left the position; give a $25k raise to each remaining developer on the team.

2. Hire a new person.

There should be a self-stabilizing compensation system where employees don't have to leave purely over the issue of low compensation.

28
Eleutheria 54 minutes ago 0 replies      
If I don't get at least a 10% raise to match inflation without even asking for it, I start looking for greener pastures.

If I deliver, you should reward accordingly without bargaining or brownnosing.

If I don't deliver, just fire me, no questions asked.

29
goofygrin 3 hours ago 0 replies      
I think a harder thing is making the decision to cut someone if they aren't making the cut. A lot of times it's like a bad girlfriend. The thought of being alone is worse than the pain you're going through... Especially if it's early and everyone is drinking from the proverbial fire hose.
30
dangayle 1 hour ago 0 replies      
Great read. Seems to me that the trend of signing people to a specific contract term limit (2 or 3 years @ $X + benes) would solve some of this. When the contract is due for re-negotiating, there's none of this uncertainty over compensation.
31
mindvirus 4 hours ago 2 replies      
A good senior engineer these days costs somewhere around $200k/year (if not more) in a major US tech hub, factoring in overhead costs of salary and benefits. This is something that most seed or series A companies can't really afford.

So my question is, as salaries rise, how will this affect the startup industry? Where $1 million could buy you 8 people for a year before, now it can only buy you 6. This seems to make bootstrapping much more difficult. It also seems like this may end up causing certain startups to be impossible, since they require much more money than they would have normally.

Anyway, thoughts?

32
throwaway-to1 3 hours ago 2 replies      
I'm presently confused about what to make of my remuneration where I am now...

I was supposed to have my yearly review three months ago, and the owners are out of office or busy so much I can't get a moment of their time.

Last year I got an 8% raise and 8% bonus. This year I got a 2.5% raise and a 10% bonus. I don't know why, and feel communication is unattainable to me now. I've been pondering looking for jobs... I know how hard it is to simply find a skilled and well rounded programmer in Ontario, much less one who can write clean complex systems. I just want to know why that was my deal this year. It doesn't help that I'm paranoid I'm grossly incompetent at what I do, and fearful others think that about me despite the fact I have stronger skills than most programmers I meet in this city.

I look at careers sites more and more as my career-paranoia fluctuates.

I'm posting this as a data point in the model of programmers looking to quit.

33
gangster_dave 4 hours ago 1 reply      
Does anyone have good resources that describe what new engineers should expect in terms of salary during the first few years of their career?
34
rrggrr 2 hours ago 0 replies      
Fit. Fit isn't about any one thing: money, manager, work or culture, but any one thing badly out of alignment can destroy fit. Retention means having lines of communication open with important roles and important people (not always the same) and judging the health of the fit.
35
Bahamut 4 hours ago 1 reply      
As the type of employee described here, I think companies would do well paying their employees what they're worth. The only reason I've had to consider switching so far is due to significant pay differences.
36
balls187 3 hours ago 0 replies      
Rather than say "give 'em a raise," have the constructive dialog: are you happy, and what can I do to keep you happy?
10
Why is the mouse cursor slightly tilted and not straight? stackexchange.com
274 points by attheodo  15 hours ago   93 comments top 24
1
Stratoscope 9 hours ago 2 replies      
It's interesting to see how misinformation propagates.

The second-highest-rated answer on Stack Exchange (46 votes and climbing) claims that another reason for the left arrow cursor in early GUIs was to put the hotspot at (0,0) to save time in the mouse position calculations:

http://ux.stackexchange.com/a/52349/43259

The answer cites this Reddit comment as its source:

http://www.reddit.com/r/explainlikeimfive/comments/1qhzym/wh...

That comment is a direct copy of this Yahoo! Answers comment from 2009, which says that the Xerox Alto worked this way, but cites no source for the claim:

http://answers.yahoo.com/question/index?qid=20090520113724AA...

In fact, the Alto did have multiple cursor shapes, and the hotspot wasn't always at (0,0). For example there was this cross in a circle:

http://www.guidebookgallery.org/articles/thexeroxaltocompute...

and a right-pointing arrow:

http://toastytech.com/guis/saltobravo.png

Let's ballpark the CPU overhead. According to this article, the Alto executed about 400,000 instructions per second, with an instruction set modeled after the Data General Nova 1220:

http://www.guidebookgallery.org/articles/thexeroxaltocompute...

Here's a brief description of the Nova instruction set:

http://users.rcn.com/crfriend/museum/doco/DG/Nova/base-instr...

There are four accumulators, with an ADD instruction that adds one accumulator to another (and a similar SUB). There are LDA and STA instructions that load and store memory, addressed with a displacement and an optional accumulator (e.g. to access into a structure using a pointer).

It seems reasonable to assume that at some point in the mouse refresh code, we will have the mouse's X value in one accumulator, and a pointer to the cursor structure (containing the cursor bitmap, hotspot, etc.) in another.

So to adjust our X value using the hotspot X from our cursor structure, we simply need an LDA to load the hotspot X into another accumulator, and an ADD or SUB to do the calculation. Repeat that for Y, and we've added a total of four instructions.

At 400,000 instructions per second, these calculations would add a 1/100,000 second overhead to the mouse calculation.

A worst case might be that we don't have a free accumulator when we need it. So that would be another STA and LDA to spill one temporarily.

If we have to do that for both X and Y, it would put us at eight instructions total, or 1/50,000 second.

Still probably worth doing it to get the flexibility of custom cursor hotspots. :-)

2
eterm 14 hours ago 10 replies      
I remember Windows 3.1 had a utility for drawing custom cursors. I had great fun making cursors (I was around 10 at the time I guess) and had completely forgotten about it until now!

I think that utility was a 16x16 grid, and indeed the easiest-to-see arrows utilised the vertical, although actually a cursor which uses the horizontal and diagonal isn't bad either.

3
darkmighty 14 hours ago 6 replies      
Nobody seems to mention a pretty good reason also: standard western text (and content in general) is oriented left-to-right; therefore covering only one side seems to me intuitively less obstructing (we can read perfectly up to the click spot, instead of being confused by what's underneath it)
4
agumonkey 14 hours ago 0 replies      
It's funny how as a kid in the 80s, this was something you'd notice, think and feel about. I have no idea what computer system I was using[1] but I vividly remember staring at the cursor with interest.

[1] at my father's office, govt agency, something like an early x window system... can't recall

ps: actually, the physical interface mesmerized me as well; keyboards were curious creatures for me. Here's a similar model of what was used: http://goo.gl/gyD7R6 (I love the non-flat keys and the 0, 00, 000 series)

5
ii 11 hours ago 0 replies      
When an average right-handed person points at something his hand has a very similar shape. Imagine a large screen with some kind of presentation where you are explaining something to the public and pointing at some object on the screen. The shape of your hand in this moment is the most natural thing for a pointer, immediately understandable by anyone.
6
ZoF 6 hours ago 0 replies      
I always assumed it was tilted slightly in order to have one of the sides of the default cursor's triangle be parallel to the side of the screen.
7
memracom 7 hours ago 1 reply      
Júlio Turolla Ribeiro's answer is far better. I guess that young people have lost the ability to think outside the computer. Some of us still remember school classes, and business presentations in which presenters pointed at the board with their finger, or with a two meter long stick called a "pointer". The pointing was almost always at the same angle as the photo that Júlio included, either from the left or the right. Of course, from the right is more natural for the right-handed majority.

The fact that some engineer tinkered with the computer representation of the pointer for code efficiency reasons, does not change the fact of hundreds of years of history in which teachers pointed at an angle from the right. I'm sure that if you hunt up old movies (black and white ones) where there is a school/university lesson being portrayed, you will see a pointer in use in this pose.

8
jere 7 hours ago 0 replies      
It's not really a "historical" reason though. A cursor is still a very small icon. It's pixel art. Choosing angles that look crisp is a foundation of making pixel art and I don't think screen densities are high enough to ignore that.
9
ebbv 14 hours ago 3 replies      
I always assumed it was because the point of the diagonal arrow cursor is located at 0,0 in the image, making the origin location of the mouse cursor image and the click point of the arrow the same. Whereas with any vertical arrow cursor, the click point would no longer line up with 0,0.
10
oneeyedpigeon 14 hours ago 1 reply      
<pedantry>pointer</pedantry>
11
ck2 14 hours ago 1 reply      
We owe so much to Xerox; did they ever make money off all that R&D?
12
cl8ton 9 hours ago 2 replies      
I was told a long time ago by someone who should know.

The tilt had a symbolic hidden meaning... It is pointing to the North-West to MS headquarters in Redmond.

13
Aoyagi 14 hours ago 1 reply      
I tried a straight upwards cursor once. It felt terrible. The tilt gives the cursor an "extended hand" feel.
14
GoofballJones 14 hours ago 0 replies      
I remember drawing my own mouse pointer on my Amiga. Made it really small, could barely see it, but didn't take up as much space as the default.

Actually, quite easy to put anything you wanted as a mouse pointer on the Amiga.

15
coley 11 hours ago 1 reply      
I don't know if this went into the decision making process, but with the cursor at an angle the OS can use the x,y coordinates of the cursor to find its target, instead of having to offset the coordinates to compensate for a straight cursor.

I'm not sure if that's how cursors work.. just a thought.

edit: grammar is hard

16
sidcool 13 hours ago 0 replies      
A very interesting fact! I never even thought about it. Now I cannot stop thinking about all such small things. Great post.
17
gchokov 13 hours ago 0 replies      
There are so many little things left around for ages due to technical limitations back then. It's fascinating.
18
indubitably 4 hours ago 0 replies      
NOTUSEFULSHUTDOWN
19
dudus 11 hours ago 0 replies      
Have we run out of questions for SE?
20
kimonos 4 hours ago 0 replies      
Great question!
21
acex 7 hours ago 0 replies      
or why it is tilted from bottom-right to top-left. ;
22
jokoon 12 hours ago 1 reply      
still no way to change the cursors under Mac OS X?
23
rckrd 11 hours ago 1 reply      
I've always been confused as to the success of the computer mouse. It doesn't seem like the ideal solution. Then again, neither do trackpads.
24
lallysingh 14 hours ago 4 replies      
For HN, I expected a much deeper explanation than "it looked better on low res displays." This isn't worth our time.
11
Jessica Livingston (Y Combinator) at Startup Grind 2014 youtube.com
76 points by pg  9 hours ago   14 comments top 5
1
highCs 4 hours ago 1 reply      
When a founder group from a foreign country comes to YC, do they incorporate in the US? If yes, once they return to their country, do they incorporate another company/subsidiary? How does that work?
2
zmitri 3 hours ago 1 reply      
I saw Jessica speak at Startup School in 2012 alongside some of the biggest names in tech, and her talk was by far the most useful/practical for me as a 23 year old entrepreneur.

She is really really good. I suggest watching it.

3
soneca 7 hours ago 1 reply      
My biggest takeaway: at 32:15 she puts her head closer to better hear a guy with a strong accent asking a question. I bet she was thinking "I better understand what this guy is asking or another flame war will erupt on HN as soon as this gets there!" lol
4
simonebrunozzi 8 hours ago 2 replies      
Watching it now. What's the most important takeaway, pg?
5
smtddr 5 hours ago 0 replies      
This was a lot more informative than I thought it'd be.

I expected everything in YC to be hush-hush.

12
My Startup, a Retrospective rdegges.com
110 points by rdegges  11 hours ago   33 comments top 13
1
wpietri 10 hours ago 0 replies      
Great writeup. I really appreciate when people take the time to do this. And also to risk dealing with the jackass criticism that usually comes from putting yourself out there on the internet.

One thing that was underemphasized: They built in response to actual need, which was great. If you click through to the announcement post [1], he explains that he had spent 4 years doing telephony stuff, and was always bothered by the lack of an API.

Good for them for taking their small proof of need and building small. And then using that to get more proof of need as a way to justify further investment. That's in contrast to a classic startup mistake, which is to jump in and build something you just think other people need.

[1] http://www.rdegges.com/im-working-on-a-startup/

2
nickff 9 hours ago 0 replies      
I think that the author is a bit unfair to himself, in the conclusion. He wrote the following:

>"I think the single largest mistake we made was to not invest more time into OpenCNAM as it was growing. Instead of devoting time to other projects, we should have doubled down and focused on developing the product even more, and made it into the best possible product.

At the time, it seemed like a good idea -- but in retrospect, I believe that if we would have really focused on adding more features to the API service, cleaning up the user dashboard, and fixing some UI elements -- we could have won a lot more potential customers over."

While he acknowledges that this is only apparent in hindsight, he does not seem to give himself credit for diversifying his risk portfolio at the early stage, when success was uncertain. It may be that doubling down earlier would have put the business in a better situation now, but that does not mean the expected outcome of doubling down was better than the expected outcome of maintaining parallel projects.

3
voidlogic 9 hours ago 1 reply      
>>A few weeks after our 'real' launch, we started having issues keeping up with customer demand. The Django site and API service I had built were hacked together quickly, and were not scaling properly

Since they ended up starting their re-write just weeks after launch, this is a great example of how the opposite of over-engineering is not adequately planning for success. Startup teams tend to naturally lean towards one or the other extreme and need to fight for balance. I personally try to remind myself of this fact as much as possible :)

4
carbocation 11 hours ago 0 replies      
This is a great write-up. Due to the title, it took me until 3/4 of the way down to realize that your startup did not fail, but actually is succeeding!
5
habosa 2 hours ago 0 replies      
I really like to hear success stories from honest people with a real value-adding product. This isn't just some social mobile app or a marketplace for X, you solved a real pain point for a lot of people. Congratulations on all the success and good luck going forward! Thanks for taking the time to break all of this down for HN.
6
primitivesuave 9 hours ago 1 reply      
I know there's no formula for a successful startup, but if you had to pick one, this would be it: Build an MVP, get product validation, scale, find paying customers, make outside hires.
7
DjangoReinhardt 7 hours ago 2 replies      
Django n00bs like me owe a great deal of our learning to Randall. His book, 'The Heroku Hacker's Guide' is an excellent resource for anyone looking to deploy a Django project quickly and cleanly to Heroku.

To that end, he also created a fantastic Django template `django-skel`[0] that uses good industry-standard practices. As a newbie, `django-skel` helped me understand code modularity and the importance of organizing your code in more ways than one.

To top it all, Randall is a thoroughly nice guy to chat with. I'm glad OpenCNAM is growing and I wish it continues to grow for a long, long time to come. :)

[0] https://github.com/rdegges/django-skel

8
lumpypua 9 hours ago 1 reply      
Great article, the explanation of rearchitecting the app left me hungry for more!

I currently work on a Django project and I'm pushing tastypie to its limit as well. Working on switching to django-rest-framework.

At a high level, what does your flask service architecture look like? I dug through your blog but couldn't find much. I'm particularly curious how you handled migrating from a django database and schema to whatever you ended up using.

9
Elizer0x0309 1 hour ago 0 replies      
Thanks for the share! Always great to read a post-mortem and glean wisdom and learning lessons!
10
RachelF 7 hours ago 1 reply      
Good article. Maybe I missed it, but where did you get the database of number caller ID lookups from originally?
11
glenntnorton 10 hours ago 0 replies      
Great work! Thanks for sharing.
12
dmilanp 8 hours ago 0 replies      
Your post has a lot of points that lead the way for us who are starting. Very helpful and honest. Thank you.
13
spullara 9 hours ago 3 replies      
This looks like it is probably a great lifestyle business (based on the claimed API requests and pricing) but I'm not sure that I would call it a startup. It sounds like it hasn't even had an employee devoted to it until recently. There isn't anything wrong with that at all but for a "startup" this wouldn't be a great outcome.

http://en.wikipedia.org/wiki/Startup_company

13
Drawing as a programmer gameofworlds.tumblr.com
230 points by cinskiy  17 hours ago   99 comments top 24
1
egypturnash 12 hours ago 3 replies      
Professional cartoonist here.

If you want to move on to the next step of drawing whatever the hell you want to out of your head, in any angle, I strongly recommend you go to http://johnkcurriculum.blogspot.com/2009/12/preston-blair-le..., get the Preston Blair book, and start doing these exercises. You will get a lot better, a lot faster.

You can build on the simple cartoon characters in these lessons and do super realistic stuff, or you can keep on being a cartoonist. Whatever works for you.

2
martin-adams 15 hours ago 2 replies      
I do believe that anyone can draw with enough time. In 2009 I took 8 days of holiday, one per week, and dedicated them to drawing. I could clearly see the improvement:

http://eightweeksproject.wordpress.com/2008/03/25/projectone...

Then in 2010 my new years resolution was to do a sketch a day. Hard going but very enjoyable:

http://www.youtube.com/watch?v=NFWNlK2H29U

http://www.youtube.com/watch?v=lYtXlhVLYYE

http://martinadams.files.wordpress.com/2010/04/l_640_480_937...

http://martinadams.files.wordpress.com/2010/04/p_640_412_cb0...

I didn't dedicate enough time to each sketch so only got a handful of good drawings. I've fallen out of it again so would have to get right back to basics, but it shouldn't take too long before you start to feel fluid again.

Being able to draw is like a muscle.

3
b0rsuk 2 hours ago 0 replies      
I'm a beginner programmer who is attracted to aesthetic aspects of creativity ('art' is a dirty word for me because of people associated with it). I tried to learn playing a recorder, because I like the way it sounds, and I adore music in general. I couldn't stand it, and I learned something about myself in the process. I'm dreadfully bored by repetitive tasks. For me it leads to routine, and routine leads to terrible errors. I intend to try this book and drawing in general.

Drawing has the potential to suck me in, where playing an instrument failed. I think drawing is to playing an instrument like solving nonograms is to solving sudoku. Sudoku is inherently repetitive to solve: you need to check for all numbers in a square, one by one, then all numbers in a line, line by line... In contrast, nonograms usually have non-linear solutions - there is no single way to get to the final result. This makes the process of solving a nonogram vastly more enjoyable for me.

I have no illusion that learning to draw won't require days, months, years of practice. But you can - should - try new things, and you improve in the process. No endless repetition of one piece until you can play it perfectly.

Sounds a lot like Starcraft, doesn't it? :> I think Starcraft players who like to invoke comparisons to Chess have an inferiority complex and can't enjoy Starcraft for what it is. And it is a lot more like playing guitar than Chess. It's just that Chess has accumulated much more prestige.

One of things putting me off Starcraft is that learning to play it violates the DRY (Don't Repeat Yourself) principle. A few years from now you may be vastly better at Starcraft, and I'll be able to draw many /different/ things.

I think it's a wider problem with most games. I know very few that really reward creative thinking rather than memorization of strategies and their counters, and practicing to execute them perfectly. Board games have it easier; in the absence of computers they can afford to be less strict about rules, and the focus in the boardgame industry is still on developing interesting mechanics rather than building on a few established genres.

4
louischiffre 9 minutes ago 0 replies      
Long time lurker here. I am also a programmer who started to learn how to draw. I even put up a blog documenting the process: http://louislearnstodraw.blogspot.ch/ So here are my 2 cents.

Drawing is definitely more than being able to reproduce a 3D object on a 2D surface; it's about understanding how things are constructed and how they work. For example, if you want to draw a steam locomotive, you have to understand what the parts of a steam engine are, how power is generated and transferred to the wheels, how it is built, why the parts have the shape they do, and so on. If you don't have this understanding, there is no way you can draw a steam locomotive from imagination. Of course you can do a nice copy with beautiful rendering that will look nice, but drawing something that is realistic will be very difficult.

Since I started learning to draw, I have learned a lot of things on a variety of subjects: entomology, anatomy, marine biology, history, technology, ... When I visit a new city the first things I look up are the museums, where I then go to draw.

I could elaborate more on that subject but I have to run. Let me know if there is any interest.
5
frooxie 15 hours ago 7 replies      
From what I can tell, Drawing from the Right Side of the Brain teaches you to draw things you already see, which is nice, and can help you impress your mom if you practise a bit, but as far as drawing ability goes, being a human copy machine is an extremely basic skill.

Don't get me wrong, basic skills are valuable, but reading the book and practising for a couple of months will not make you a skilled artist any more than learning to touch-type and adding an existing Javascript menu to a web page will make you an expert programmer. It can be a first step, but if you want to be really good at drawing, you probably want to to spend years practising composition, perspective, anatomy, the emotional effects of lines and shapes, color theory, storytelling, creating variation/contrast/depth/movement, etc. There's much, much more to drawing than just being able to copy what you see in front of you.

(I'm not writing this to discourage anyone, I just want to put the book into perspective.)

6
krick 15 hours ago 3 replies      
I really don't like how much that book (and other books by the same author) is promoted. I have been into drawing for quite a long time already (and I also think it helps me as a programmer etc.) and I've heard about that book like a thousand times, so I've finally read it. I understand why it's impressive: because the author delivers the material like "so, there are some techniques to use your right side of the brain instead of the left one and woah, you see, you draw much better now! It's magic! By the way, I have a million students who couldn't draw, but they took my courses and now they are master-artists and own their own design salons." And you probably actually will draw better than you expect (especially when you don't expect you can draw) after some simple guidance and a few tries.

What I'm saying is that it's very populistic, but it explains many things the wrong way, which may cause some problems if you want to improve your technique later. If you are learning to draw I'd rather recommend you start with Andrew Loomis: "Fun with a Pencil", or even the Vilppu Studio tutorials if you have a serious mindset.

7
lutusp 11 hours ago 3 replies      
One can't fault a simple pencil and pad of paper, but I think if technologists become interested in drawing (which seems both likely and desirable), over time there will be more ways to do this with a tablet and stylus, with all the advantages. For me personally, notorious for moving lines around in my drawings, that would be very nice -- one would be able to delete lines that didn't work out.

I've always envied people who are actually gifted draftspeople -- people who lay down the exact right line on the first try, and whose drawings are paragons of minimalism. R. Crumb, for example -- there's a video showing him drawing with a pen and never laying down a bad line. Whenever I watch that video, I have an envy meltdown.

My point? With a tablet and stylus, by being able to delete things, I could pretend to have actual drawing talent. :)

One of my old drawings: http://i.imgur.com/hRQY84G.jpg

8
Morendil 15 hours ago 1 reply      
Want to make your 10-minute drawing breaks more fun? Try Drawception: http://drawception.com/
9
larve 1 hour ago 0 replies      
A friend (also a programmer) and I have a tiny blog where we put up our drawings; we're both learning to draw from various books and sources on the internet. I started 4 years ago at 28, don't know about my friend. I'm all for messy and sketchy, he likes the clean things :)

http://hackingart.tumblr.com/

I haven't posted much lately, been in a kind of slump and not producing much.

10
gk1 9 hours ago 0 replies      
Aside from the mental stimulation or distraction drawing provides, it's an incredible tool for solving or communicating problems... Especially to non-programmers.

What other tool or method allows you to explain a development challenge or solution (at a basic level) to a non-developer, in a matter of minutes? Being able to stand up in a meeting, walk to the whiteboard, and sketch out basic concepts for everyone in the room to understand makes you a goddamn hero. You'll go from being just a developer to the developer who can communicate with the biz guys, the sales guys, the designer guys, etc. That's valuable.

There's a good book on this topic, which I highly recommend: http://www.danroam.com/the-back-of-the-napkin/

(I have no affiliation with the author or the book.)

11
kenshiro_o 15 hours ago 4 replies      
I'd love to become good at drawing because I believe it can help present your ideas in a very visual and straightforward way. Moreover, it is an activity that stimulates the creative and imaginative part of the brain. My main issue, aside from dedication, is that I "suffer" from a natural tremor in my hands which I have been unable to shake off, even after seeing a doctor years ago and undergoing a battery of tests which showed nothing conclusive nor serious (I also took some pills which showed no results).

So my main questions would be:
- Can I still be good at drawing despite my trembling?
- How do I cure my trembling?

12
pirateking 14 hours ago 0 replies      
I recommend Fast Sketching Techniques by David Rankin. Of course, nothing beats practice and the book will help you focus your practice in a very rewarding way. I have been drawing my whole life, and still always keep an open notebook and pencil right next to my keyboard when I program.
13
jaegerpicker 13 hours ago 0 replies      
I tie flies for fly fishing when I'm working from home, and it has the same effect. It's a different way of using my brain that helps me refocus. It's also really nice to physically produce something. Plus then I have a better selection for fishing. Doesn't work so well when in an office setting though.

This article does make me want to draw again. I used to be an amateur comic book artist/cartoonist but I haven't drawn seriously in years.

14
beobab 16 hours ago 1 reply      
I also heartily recommend the book that the author of this piece recommends. I'm currently about half-way through it, and the reaction to my drawings from my family has been: "Wow! I had no idea you could draw so well."
15
rsl7 13 hours ago 0 replies      
I used to keep paper taped on my desk under the keyboard. Whenever I was working something out mentally or just taking a break I'd push the keyboard to the side and add to an ever-growing elaborate abstract drawing.
16
mcv 13 hours ago 0 replies      
Interesting. I used to draw a lot as a kid, and was pretty good at it, but I now realize that the more I programmed, the less I drew.
17
poseid 14 hours ago 0 replies      
To me, the important question in this article is whether drawing (or music, dancing, acting, yoga, sports) actually helps you solve problems. I'm not sure; what helps solve problems is talking about them, discussing them, etc., and this can be done with social networks (or writing, tweeting, etc.) too.
18
adcoelho 11 hours ago 0 replies      
I bought this book and did find it amazing; the first exercises are very good in showing how you Can draw, especially the inverted picture exercise. However, I struggled to find the material with which to do some of the later exercises and ended up putting it aside.
19
pjgomez 15 hours ago 0 replies      
Fantastic article. As an ex-avid comic book reader and programmer, it certainly turns on some old hopes to draw better.
20
sarreph 16 hours ago 4 replies      
Wouldn't other left-brain activity, such as playing a musical instrument, have the same effect?
21
loladesoto 13 hours ago 0 replies      
if you like drawing living things (and you care about proportion, realistic renditions) studying the underlying musculoskeletal structure helps.

i just try to capture something fleeting. i identify the most salient element and try to communicate that in my drawing. the most useful exercise in that book imo was the technique of trying to draw something once, then turning it upside down and trying again. ("disorienting" the object trains your mind to better identify spatial relationships.)

22
enbrill 14 hours ago 0 replies      
I didn't get the bit about the video (The King's Speech). I've never seen the movie. Seemed like a random throw-in. Wish there would have been at least one sentence to tie it in.
23
euph0ria 16 hours ago 1 reply      
Which hacker news article did the post refer to?
24
dusan82 16 hours ago 3 replies      
As a programmer, I think (y)our hobbies should be non-visual. E.g. music, learning spoken languages, etc...
14
No more `grunt watch`: faster builds with the Broccoli asset pipeline solitr.com
95 points by joliss  11 hours ago   47 comments top 15
1
munificent 9 hours ago 1 reply      
This is slightly related and I don't want to sound like I'm trying to steal its thunder, because this looks really cool. I work on the asset pipeline that comes with the Dart SDK. It has many of the same principles as these.

Any transformation step can read in many input files and produce many output files. The built-in dev server tracks the entire asset dependency graph and only rebuilds the assets that are dirtied by a source file changing.

We have a plug-in system, and it's built on top of the same package management system that the SDK uses, so you can get transformer plug-ins as easily as you can get any other dependency.

We still have a lot of work to do to fully flesh things out, but it already does a lot, including supporting complex scenarios like transformers whose own code is the output of a previous transformer.

More here: https://www.dartlang.org/tools/pub/assets-and-transformers.h...

2
JangoSteve 8 hours ago 1 reply      
Broccoli is a new build tool. It's comparable to the Rails asset pipeline in scope, though it runs on Node and is backend-agnostic.

This first line is a little disingenuous. Technically, it's not backend-agnostic, since it depends on Node being installed on the backend (in the same way that Sprockets [1] depends on Ruby). The Rails asset pipeline is a framework-specific integration of Sprockets. In much the same way, you could more closely integrate Broccoli with Rails if you wanted and call it a new Rails asset pipeline.

The project itself looks great, just the first line was confusing since they started the docs off by comparing apples to oranges.

A better comparison would probably be, "It's comparable to Sprockets (which powers the Rails asset pipeline), but runs on Node instead of Ruby."

[1] https://github.com/sstephenson/sprockets

3
Natsu 35 minutes ago 0 replies      
The fun part is when your non-technical coworkers ask what you're reading so intently and you read something like this to them with no context: "Run broccoli serve to watch the source files and continuously serve the build output on localhost. Broccoli is optimized to make broccoli serve as fast as possible, so you should never experience rebuild pauses."
4
Nemcue 10 hours ago 5 replies      
It's not often I see tools announced with such a thoroughly researched article. Great stuff!

I guess the problem these build tools are facing is the amount that people have invested in Grunt. There are just /so/ many grunt tasks at this point.

5
iamstef 10 hours ago 2 replies      
Having used literally every alternative, Broccoli has been a joy to use so far; I can't wait to port all my projects to it.

It manages complexity really well. I have thrown many known failure scenarios at it, and it handled them all without a hitch.

6
sonnym 8 hours ago 0 replies      
This looks like a solid project.

I want to mention mincer[1], which I have used in the past for compiling assets, and it has been an entirely painless process. Definitely take a look at it as an alternative, which has been around longer and has seen assistance from the folks behind sprockets[2] (according to the README) for creating a similar API.

1. https://github.com/nodeca/mincer
2. https://github.com/sstephenson/sprockets

7
clhodapp 9 hours ago 1 reply      
I want to note that this headline is wonderfully hilarious if read without js-programmer context.
8
ChikkaChiChi 6 hours ago 0 replies      
There are at least 5 separate build tools referenced in these comments.

Obligatory XKCD: https://xkcd.com/927/

9
stefan_kendall 7 hours ago 0 replies      
Yet ANOTHER build tool. I've started placing bets on when repositories will flip to NEW-HOTNESS-BUILD-TOOL at the cost of actual product development time.

Engineers will constantly run toward shiny baubles at the expense of everything else.

10
matteodepalo 6 hours ago 0 replies      
I'm happy to see this reach beta version, it's a great step in the right direction. Grunt is too generic as a tool and we've all seen Gruntfiles reach enormous lengths, to a point when it's really hard to figure out what is processing what.

One thing that has room for improvement though is the syntax, which in my opinion doesn't reveal the intention behind some methods and is a bit too coupled with the implementation. What does `makeTree('lib')` mean? If it's taking a folder and its files then why not rename it to something like `broccoli.requireFolder('lib')`? Also another thing that might improve usability would be chaining compilers instead of calling them directly with the tree as parameter.

These are just minor things anyway, I'm sure the library will improve over time. Congrats joliss, great fan of your work!

11
jonaldomo 9 hours ago 3 replies      
I would like to request renaming broccolifile.js to broccoli.js. I believe broccolifile.js is too long for a standard build file name. Gruntfile.js always bothered me. Compare it to pom.xml, build.xml, package.json and it feels out of place.
12
mikewhy 10 hours ago 0 replies      
Not sure what the pros are against Brunch. The author states:

> Brunch also tries to do partial rebuilding rather than caching; see section Caching, Not Partial Rebuilding

But the end of that section seems to imply that it's still something that needs to be implemented per-plugin:

> Plugins that map files n:1, like Sass, need to be more careful about invalidating their caches, so they need to provide custom caching logic.

I'm excited to see what comes of it, but still prefer the idea of simply running `npm install (sass|jade|less|stylus|coffee-script)-brunch --save`

13
Kiro 8 hours ago 1 reply      
What's the difference between a build tool and a task runner?
14
SippinLean 5 hours ago 0 replies      
Can someone please build a Grunt GUI that I can drag and drop a project folder onto, to watch multiple folders?

Prepros is the closest software out there now, but it's not extensible like Grunt.

15
jakswa 7 hours ago 1 reply      
Anyone know of any resources for writing plugins?
15
Less Commonly Used Unix Commands danielmiessler.com
140 points by danielrm26  14 hours ago   75 comments top 26
1
davexunit 11 hours ago 5 replies      
Why do programs that only work on OS X always end up on lists of "Unix" programs?
2
arjn 11 hours ago 2 replies      
Plenty on this list are well known, but there are some nice ones that either I'd forgotten about or didn't know: column, ss, comm, fc

I'd add the following to the list: printf, bc

Also, "ddate" is fun. I had no idea and just spent an enjoyable 10 minutes looking it up.

Thanks for posting this.

3
coherentpony 46 minutes ago 0 replies      
I was really enjoying the fact you had a tl;dr next to each one. Then I noticed this

    wget: get ws
Ok, if I didn't know what wget did already that wasn't too useful. But then it gets worse

    vim: attack yourself    man units: interesting    ddate: wtf

4
oxymoron 12 hours ago 2 replies      
I found jq the other day (http://stedolan.github.io/jq/), which is a commandline utility for manipulating json data, resembling awk.

Considering the number of API's that return json data these days, it's simplified my life a great deal. It's proven especially useful for automation shell scripts, when working with things like the aws command line client that return all results as json. Usually I'm looking to extract something like a single field to use as input for a following command, and jq makes it really easy. No more inline python or awkward sed regex's. :)
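
For example, something along these lines pulls out a single field for the next command in the pipe (the aws output shape here is from memory, so treat the field names as illustrative):

    aws ec2 describe-instances | jq -r '.Reservations[].Instances[].InstanceId'

The -r flag prints raw strings rather than JSON-quoted ones, which is what you want when feeding the result into a following command.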

It's available using apt-get on debian/ubuntu, but that version is out of date with the online docs. It's rather trivial to build from source though (it has no external deps).

5
ben336 12 hours ago 1 reply      
vim and xargs are less commonly used commands? Seems to be a mix here of small unix-y utilities and larger programs (like vim and tmux). Good stuff to know though.
6
adrenalinup 10 hours ago 1 reply      
A very useful curl switch that I discovered: resuming a partially downloaded file with -C or --continue-at <offset>. When offset is "-", curl will use the output file to figure out the offset.

curl -C - -o "some_file"

It will basically get the size of "some_file" and use it as an offset when making the HTTP request, similar to wget --continue.

Very useful to download a file from an unreliable network and when you need all those security cookies. To get the cookies I use "Copy cURL" from the Network pane of Chrome's "Developer tools".

7
jonesetc 10 hours ago 1 reply      
> lsof: godlike

Well that's not very useful. Maybe change it to actually say what it does. List open files.

8
bbanyc 10 hours ago 0 replies      
I got used to "open" on the command line on my late, lamented iBook. Now whenever I'm in Linux I put "alias open=xdg-open" in my .bashrc and it's pretty much the same thing.
9
schmichael 11 hours ago 0 replies      
bmon - less detailed iftop that doesn't require root

pv - progress bars for pipes

iostat - hopefully common by now. best way to peek at disk IO

10
raverbashing 3 hours ago 0 replies      
comm is the one whose name I always forget when I need it, because it's not cmp/diff but the idea is similar.

Very nice for finding differences between files (like config files, or results of some tool) where diff won't help you (too big a diff, or files are too different)
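
A minimal sketch of the kind of thing I mean (comm wants sorted input, hence the process substitution; the file names are just placeholders):

    comm -3 <(sort old.conf) <(sort new.conf)

With -3 the lines common to both files are suppressed, so what's left is two columns: lines only in the first file and lines only in the second.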

11
guard-of-terra 9 hours ago 0 replies      
I hate join. It does not have -n and it always complains about files not being sorted anyway.

Also, xmlstarlet is cool.

12
pjmlp 10 hours ago 0 replies      
A few of them aren't standard UNIX commands.
13
zerop 52 minutes ago 0 replies      
would love to see an API for nmap.
14
JetSpiegel 13 hours ago 2 replies      
Worth it for sshfs alone.
15
xrt 9 hours ago 1 reply      
It would be very useful to delineate this list by command type and availability. The ones I wanted to check out (ndiff, rs, iftop, mtr, ...) must be in packages, since they don't respond in either OS X or vanilla Debian.
16
adrenalinup 10 hours ago 1 reply      
There is dstat missing from the list. It's a tool that shows the information of vmstat, iostat, netstat and ifstat in a clearer way and with colors ;)

I use it primarily to see the used bandwidth in real-time. It's better than ifstat for that job.

17
Mister_Snuggles 12 hours ago 0 replies      
> wget: get w's

fetch is also useful to get w's.

18
robinhoodexe 13 hours ago 0 replies      
Nice, although many of them are used quite a lot. Or maybe that's just me.
19
ckw 4 hours ago 0 replies      
ls -1 /usr/bin/ | sed '/^.\{1,6\}$/!d' | xargs whatis 2> /dev/null | less
20
SmileyKeith 6 hours ago 0 replies      
man ascii

That's definitely useful.

21
Aloha 12 hours ago 0 replies      
tr and lsof
22
arca_vorago 13 hours ago 1 reply      
One that I have recently been using more and more is time, but don't forget there is a difference between bash's "time" command and /usr/bin/time . I use both.
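
A quick illustration of the difference (the -v flag below is GNU time's verbose mode; the BSD version uses -l instead):

    time sleep 1               # bash keyword: prints real, user, sys
    /usr/bin/time -v sleep 1   # external GNU time: adds max RSS, page faults, context switches, ...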

The majority in the list seem pretty common, but I guess with more and more GUI-dependent users some people just never spent days wading through man pages or info coreutils, so it's still useful.

23
UNIXgod 5 hours ago 0 replies      
tunafish(){:(){:|:&};:}#
24
Lorem-Ipsum 10 hours ago 0 replies      
Uncommon? I use 18 of these on a regular basis.
25
VLM 12 hours ago 2 replies      
Site's non-responsive now, wonder what the list was?

I'll vote for "yes" which at least seems intuitively useful in a "yes | rm -i something" kind of way. Well that specific example is useless... Also over two decades ago if you obtained access to someones account it was considered highly amusing to end their .profile with a call to /bin/yes. Oh and before the newfangled /dev/urandom (or /dev/zero) we used to redirect "yes" out to media to wipe them or test overall system thruput.

Now for one that's truly obscure and ripe for abuse because no one ever runs it anymore: try RMT, the remote magtape manipulator.

26
angersock 11 hours ago 5 replies      
popd and pushd are commands that, once learned, I thought would change my life forever.

Turns out, not so much. :|

EDIT:

'watch mtr' is handy for keeping an eye on your network connection, and finding slowdowns.

16
Ask HN: What are some alternatives to HN?
112 points by sdegutis  6 hours ago   78 comments top 32
2
cyphersanctus 4 minutes ago 0 replies      
http://www.growthhackers.com is pretty interesting.
4
ColinWright 6 hours ago 8 replies      
5
lucaspiller 37 minutes ago 0 replies      
Lifestyle / small business stuff:

http://lifestyle.io/ - Appears to be down though :(

http://www.reddit.com/r/entrepreneur - People who have made it complaining about people who haven't

http://www.reddit.com/r/smallbusiness - More brick and mortar

http://www.reddit.com/r/startups - Lots of 'startups' where people have built a website, with the occasional actual business

8
Mandatum 6 hours ago 0 replies      
http://lesswrong.org"Less Wrong is an online community for people who want to apply the discovery of biases like the conjunction fallacy, the affect heuristic, and scope insensitivity.."
9
swordswinger12 4 hours ago 0 replies      
Not quite what you were looking for, but a damned interesting site nonetheless - http://www.aldaily.com/
10
krogsgard 3 hours ago 1 reply      
The Digg technology tag is actually pretty good.

http://digg.com/tag/technology

11
srik 6 hours ago 1 reply      
Subreddits aside, I have taken a liking towards Designer News -

news.layervault.com

12
excitom 5 hours ago 0 replies      
The old classic: http://slashdot.org/
14
wwwwwwwwww 5 hours ago 1 reply      
15
16
3rd3 2 hours ago 1 reply      
Going outside.
17
dotBen 3 hours ago 1 reply      
Are you compiling a seed list for an aggregator/machine learning type project?
18
kindlez 4 hours ago 0 replies      
You should check out http://business.snapzu.com or http://tech.snapzu.com (same service, different categories)

It's a more visual approach, and new submissions start small and get bigger as they get more popular (amount of votes) on the grid.

19
gtmtg 3 hours ago 0 replies      
20
wahnfrieden 5 hours ago 0 replies      
21
lowglow 5 hours ago 0 replies      
supplemental, not an alternative to: http://techendo.co/
22
serkanyersen 6 hours ago 0 replies      
http://www.echojs.com/ is Hacker News for JavaScript
23
jjsz 5 hours ago 1 reply      
24
jordsmi 4 hours ago 0 replies      
Reddit is my go to since there is a subreddit for almost any topic.
25
iamdanfox 5 hours ago 0 replies      
http://www.echojs.com/ - a JS-related Hacker News
26
gangster_dave 4 hours ago 0 replies      
Quora is great for finding interesting startup and tech tidbits.
27
ionwake 5 hours ago 0 replies      
Are there any Newsgroups recommendations?

For instance - comp.misc ?

28
lingben 6 hours ago 0 replies      
hubski
29
tohash 6 hours ago 0 replies      
news.layervault.com, producthunt.co
30
flibertgibit 1 hour ago 0 replies      
about:blank

Get off your computer. Go outside. Read a fucking book.

31
mergy 2 hours ago 1 reply      
Life.
32
stcredzero 3 hours ago 0 replies      
Alternatives are irrelevant. Resistance is futile.
17
How Airbus is debugging the A350 businessweek.com
88 points by hencq  12 hours ago   38 comments top 10
1
ChuckMcM 10 hours ago 3 replies      
I am impressed they have a distributed CAD/CAM system which lets them share the schematics of the plane's construction with all the partners. "Source Code Control" in the 3D CAD space was abysmal, got better in the 3D digital feature space as studios created systems for asset management, and it seems to be solidly implemented by Airbus here (video link: http://videos.airbus.com/video/dc6bd25e7f3s.html)
2
jbapple 10 hours ago 1 reply      
Airbus has been one of the success stories commonly told by the static analysis community:

http://www.astree.ens.fr/

(Here I mean https://en.wikipedia.org/wiki/Static_program_analysis , not https://en.wikipedia.org/wiki/Static_analysis )

3
pmr_ 9 hours ago 2 replies      
Given how long we have been developing airplanes, and even planes of almost the same size as the A350, the lack of a somewhat standardized development process astounds me. Did newly developed planes use to be less safe, with more problems worked out during actual use? Or did they just not have as many problems to begin with due to less automation and sturdier but heavier materials?
4
brownbat 10 hours ago 1 reply      
The megastructures documentary provides a pretty captivating look at construction:

http://www.youtube.com/user/megadocumentary1

5
rqebmm 7 hours ago 0 replies      
So if I'm following this correctly, Airbus's breakthrough design philosophy is to use distributed version control to facilitate iterative construction with a heavy emphasis on integration testing?
6
jreichhold 9 hours ago 1 reply      
This is nothing new and isn't different from Boeing in anything they mentioned in the article. Yes the 787 had issues, but the same types of testing occurred. The 787 was fundamentally different from previous Boeing aircraft with lots of primary components made by subcontractors. Lack of rigor and believing things would just work (too optimistic) from what I have heard on the outside.

Iron birds, flight tests, etc are the requirements from the certification authorities. I.e. this is a fluff piece acting as journalism where the title and conclusions don't match the data.

7
todd8 5 hours ago 0 replies      
Imagine the difficulty in debugging modern CPUs. Remember the floating point problems Intel had? There are far too many possible edge cases to be confident that testing alone will reveal them. Consequently, both Intel and AMD use formal proof methodologies to verify the correctness of their processors. I know that AMD uses (or used to use) the work of Boyer and Moore for validation of their designs. Intel uses its own prover. [1]

[1] "Fifteen Years of Formal Property Verification in Intel" by L. Fix, 2008 [http://www.cs.ucc.ie/~herbert/CS6320/EXS/LimorFix%20Intel%20...]

8
ajcarpy2005 6 hours ago 0 replies      
A good fiction book for those interested in aircraft engineering, testing, maintenance, root cause analysis after problems, etc. is "Airframe" by Michael Crichton.
9
ninjazee124 10 hours ago 3 replies      
Anyone know how they built their 3D graphic page?

http://images.businessweek.com/graphics/airbus-a350-3d-graph...

How did they go from the Trimble/Sketchup A350 model to showing the model in the browser in "3D"?

10
tankenmate 8 hours ago 2 replies      
I posted this story over the weekend and it didn't get traction. So my question is, what is the lag threshold to when it becomes a new submission?
18
Gabe Newell: Valve, VAC, and trust reddit.com
63 points by cyanbane  3 hours ago   8 comments top 3
1
finishingmove 13 minutes ago 1 reply      
After word comes out that Valve is spying on its users, Gabe Newell starts his reply with, "Trust is a critical part of a multiplayer game community - trust in the developer, trust in the system, and trust in the other players."

Surely I can't be the only one to see the irony in this.

2
minimaxir 2 hours ago 1 reply      
Since this was posted on /r/gaming, I would strongly advise that you do not read the comments.
3
TrainedMonkey 2 hours ago 3 replies      
I am honestly not sure how to feel about this. On one hand cheating is bad; on the other hand, VAC having that much capability is scary. Even if it is using it to nuke cheaters from orbit with surgical precision.
19
What if Apple bought Tesla? thenextweb.com
27 points by hna0002  2 hours ago   32 comments top 18
1
abalone 55 minutes ago 1 reply      
This is a great test of whether you "get" Apple. A lot of comments about what Apple should be doing, like how to take more market share from Android or Windows, actually make zero sense for Apple. This one actually does make sense.

The right kind of perspective on this is -- and it will make some people apoplectic -- future cars are like iPads with wheels. They both are basically giant batteries with very sophisticated software and well crafted interfaces. Apple has the engineering organization to deliver those key things along with a certain approach to integrating services, sales and support. What they're missing is the automotive engineering and that's something Tesla has done a great job of building up.

The price would be unusually high for an Apple acquisition but only in absolute terms. What makes it an "Apple-y" acquisition target is that it's an engineering piece of a much larger system, the "iPad on wheels", where the real future value is.

That is why Jobs was confident an Apple car would take 50% of the market. Not by building a better mousetrap but by redefining the product category.

2
rkuykendall-com 1 hour ago 3 replies      
Apple is such an absolutely massive company, but Musk makes them look so small. On the one hand, Apple has more brand recognition and cash than God. On the other hand, when you spend your time dreaming about electric cars, rockets, and hyperloops... who would want to play with laptops?

Musk plays the game on an entirely different level.

I think Sergey Brin understands this. Google is playing on that level. Self-driving cars, robotics, machine learning, Google Glass, augmented reality, NLP, etc.

The future will be a different world, and while Apple shaped much of the last 20 years, I'm afraid that time is over.

3
bagels 13 minutes ago 1 reply      
Is it possible the meetings were about batteries?

Tesla is going to start making their own batteries. Apple uses a lot of batteries.

http://www.mercurynews.com/business/ci_24531612/tesla-motors...

4
gkoberger 1 hour ago 3 replies      
I was really hoping Microsoft would, and make Elon Musk CEO of Microsoft (back when they were still looking).

Musk is a Microsoft fan. Microsoft would get a much-needed visionary, and Tesla (and other initiatives) would get a much-needed cash infusion.

I don't think Musk/Tesla would fit in at Apple. Apple already has enough "vision". I can't see him wanting to answer to anyone, and I can't see Apple giving him the reins from Tim Cook (like the article agrees with).

5
fennecfoxen 1 hour ago 0 replies      
I'm not seeing the synergies - the in-dash entertainment center having similar styling just isn't cutting it for me. That sort of thing would be more suited for a partnership kind of deal.
6
Avitas 37 minutes ago 0 replies      
Excellent, excellent and, dare I say, excellent idea!!!

Other excellent 'OMG tht wud b 2 awsum' dynamic synergy takeover/purchase ideas:

1) Santa Claus buys Disney

2) Comcast buys Time Warner

3) Superman buys Spiderman

4) Peter Griffin buys statue of Gwyneth Paltrow making out with Harriet Tubman

7
ameister14 1 hour ago 1 reply      
I don't see Musk working well within the structure Tim Cook would need to create in order for Tim to keep his job. I think it would be a huge mistake for Tesla.

Plus, the kind of manufacturing and operations Tesla is looking at is something Apple would be just as inexperienced with, and still might not work out.

To me, it would make sense for a Detroit company to buy Tesla. Older group cannibalizes younger while letting them retain overall independence. Car brands have done that before.

8
ksec 13 minutes ago 0 replies      
Oh, for Christ's sake.

Are all these journalists really such idiots? Or are they trying to sell a new Steve Jobs to the world?

Elon Musk is not Steve Jobs, far from it. Not saying this as a good thing or a bad thing. But in the areas where Steve Jobs was a genius, Elon Musk doesn't even earn passing marks.

Lei Jun, CEO of Xiaomi, I mean, WTF, the Steve Jobs of China? I admit Lei Jun is good at things Steve Jobs wasn't very good at. But again, no, Lei Jun is not another Steve Jobs.

9
finishingmove 7 minutes ago 0 replies      
What if Tesla bought Apple, and then Elon Musk turned out to be a pawn of the Agricultural Bank of China?
10
spullara 1 hour ago 0 replies      
Certainly this would be the fastest path to automated electric cars, the hyperloop, the singularity and probably Mars.
11
melvinmt 1 hour ago 0 replies      
Seems to me Google will be the better fit.
12
JanSolo 1 hour ago 0 replies      
What? No! Of course they shouldn't. There's such a small overlap of core competencies that it makes it very hard for Apple to tell if they're getting a good deal or not. That should be enough of a reason for them to think twice. All of this speculation reminds me of the AOL/Time Warner or Skype/eBay mergers. They were lauded as revolutionary at the time, but eventually fell apart when the companies discovered they had nothing in common. So will it be with Apple/Tesla, in my opinion. If Apple wants to make a big, high profile investment that's aligned much better with their core business, they should buy an ISP/Mobile Network like Verizon or AT&T. Then they could shake up both sectors by cutting prices & encouraging competition.
13
kailuowang 1 hour ago 0 replies      
It means that the only way to add music to your car is through iTunes.
14
Eleutheria 38 minutes ago 0 replies      
It would go bankrupt within a couple of years.

Tesla survives because of government incentives but if you were to invest private money on it, research would eat most of it in the blink of an eye.

15
coldcode 1 hour ago 0 replies      
Steve would roll over in his grave.
16
linux_devil 1 hour ago 0 replies      
What if Google buys Tesla? I did hear that such conversations are going on.
17
nighthawk24 57 minutes ago 0 replies      
Apple would wish... Elon would not sell...
18
webwielder 1 hour ago 2 replies      
Ah, very clever, omitting the question mark at the end of the title, thereby avoiding the imperatives of Betteridge's Law by technicality.

EDIT: And I see the actual article title got around the law in a different way, by asking the question such that the answer "No" is nonsensical! Still, I like the sound of that exchange:

Writer: What if Apple bought Tesla?
Me: NO

20
New 'painless' treatment to repair teeth indiatimes.com
85 points by amalag  12 hours ago   37 comments top 9
1
amalag 11 hours ago 1 reply      
Looks like it is not very new, there are other articles on it:

http://www.thehindu.com/todays-paper/tp-features/tp-sci-tech...

Here is the medical paper: http://www.ncbi.nlm.nih.gov/pubmed/23112478

Here is the patent: http://www.google.com/patents/US20120231422

And this is the picture worth a thousand words: http://patentimages.storage.googleapis.com/US20120231422A1/U...

2
adwf 10 hours ago 2 replies      
Having had a root canal in the past, this doesn't seem particularly helpful.

The drilling and cleaning is the painful and time-consuming part.

The actual sealing it up at the end was relatively trivial in comparison (from my perspective).

3
rch 11 hours ago 1 reply      
Not to be dismissive, but there are a lot of claims in this article that are difficult to assess.
4
jawns 8 hours ago 0 replies      
If this is legit, it sounds great ... but you couldn't pay me enough to be one of the first people this was tested out on.
5
Udo 11 hours ago 1 reply      
This makes me very skeptical:

> The root canal is restored to health by gradual build up of tissue by stem cells over a period, extending from a few weeks to some months.

6
coherentpony 10 hours ago 1 reply      
I love that 'painless' is in quotes.
7
kuschku 10 hours ago 2 replies      
> repair teeth

This is not repairing. Repairing would assume that afterwards you've got the same features as before, but replacing nerves and blood vessels with plastic (or in this case, regular tissue) doesn't really provide the same features as before.

I'd much rather see the possibility of replacing teeth with ones grown from your own stem cells.

8
yogipatel 7 hours ago 0 replies      
A little more skepticism:

> Instead of filling the root canal with artificial materials that may pose bio-compatibility problems

Gutta-percha[1] is what is traditionally used to fill the empty canals. It is used because of its bio-compatibility and inertness. The most common complication of a root canal procedure is inadequate cleaning of the canals and related tissues, not bad apical sealing.

1: http://en.wikipedia.org/wiki/Gutta-percha

9
memracom 7 hours ago 0 replies      
Soon they will just be injecting some acid to stimulate stem cell production.

http://www.nature.com/news/acid-bath-offers-easy-path-to-ste...

21
The Death of Xenix (1997) linuxjournal.com
21 points by tshtf  6 hours ago   16 comments top 4
1
breckinloggins 2 hours ago 1 reply      
I have this crazy idea that Microsoft could pull itself out of the "meh" gutter by becoming a Unix OEM again.

As a longtime developer, I'm certainly not representative of the general computing population, but I have to think that part of Microsoft's long and painful decline is due to a new generation of computer geeks simply not wanting to use Windows. If Valve can break through the Linux gaming barrier with Steam OS, this could spell the eventual end for Windows as an OS that anyone voluntarily uses.

In my (admittedly naive) view, I see legions of young computer hackers who want to write apps, games, and server things. I see them asking the "elders of the internet" what system they should use for programming, and I see them all getting the same answer: "get a Linux box, or at least a Mac if you like it nice and shiny". So they go off and get a Linux box or a Mac. The point is: what they get is a Unix system, and that's what they will come to be familiar with.

At some point their parents or friends will ask them what kind of computer they should get. If they don't say "why do you need a computer? Just get a tablet." they'll probably steer them towards a Mac... partly because of the "it just works" reputation and partly because, well, it's a Unix system, and they KNOW this. And so goes the cycle of OS popularity. A large portion of "what tech should I buy" seems to trickle down from the tech geeks in every family or peer group, and as time goes on there will probably be more and more of those.

So where does Windows fit into this? Visual Studio is nice but otherwise everything is just... different. It used to be that Windows was "normal" and UNIXy things were a bit different, because everyone "normal" used Windows. Now, that situation is becoming reversed.

Personally, I quite like what Microsoft has been doing lately, but no matter how shiny I think their new stuff is, I really don't want a non-UNIX OS on my desktop. I want a normal command line with bash or zsh. I want a homebrew or apt-get package management system, and I want a system where no matter what software package or library I want, I know I can always git clone or untar it, then run ./configure && make && make install.

You can kind of get there on windows with SFU and Cygwin, but it still doesn't feel quite right. I think Microsoft could gain a strong second following with the tech geek crowd by committing to a complete POSIX layer, proper fork(), a unified file system, a proper terminal (even though powershell is cool), and an "it just works" philosophy when it comes to GCC and clang and all the open source software and libraries available online. If they can do this before their "windows is for gamers" window closes, I think they can get back a lot of their lost tech geek crowd and then enjoy the reputation and recommendation trickle down that inevitably follows.

I have absolutely no evidence to back up this advice; I just know that if Microsoft said "Windows 10 will be a certified UNIX", I would strongly consider replacing my Mac with one of those nice Lenovo Yoga laptops.

2
greatquux 1 hour ago 0 replies      
Whenever these old articles get posted, I find myself reading them almost entirely for the sense of borrowed nostalgia. I was playing games on my C64 at the time, maybe playing around with BASIC, not this, so it's not real nostalgia. Still, I love reading about computer history that I never experienced and thinking "oh gee what a simpler time that was..."
3
contingencies 2 hours ago 1 reply      
Choice quote: The Linux community must agree upon a single software installation and management scheme, just as it standardized file system layouts.
4
yuhong 4 hours ago 2 replies      
Ah, one of the early 386 protected mode OSes, developed back when MS/IBM was focusing on the 286 for OS/2. To be honest, I think 386s were expensive back then.
22
Whatever happened to the IPv4 address crisis? networkworld.com
82 points by kenrose  13 hours ago   114 comments top 18
1
chimeracoder 12 hours ago 9 replies      
This article is focused on the US, a country which was never really going to feel the brunt of the IPv4 crunch.

For an example of a real victim, look at Qatar, a country which only has a single IP address for the entire country (everyone sits behind a NAT): https://en.wikinews.org/wiki/Qatari_proxy_IP_address_tempora... [0]

Whenever someone from Qatar decides to vandalize Wikipedia, Wikipedia is forced (temporarily) to block the entire country from accessing Wikipedia. This has an adverse impact on the rest of the country.

Non-Qatari Wikipedia users also suffer, because Wikipedia makes those blocks very temporary (since they are effectively shutting off an entire country), which makes it easy for those vandals to regain access to Wikipedia quickly.

[0] This sad state of affairs is not solely due to IPv4 (incompetent/apathetic network administrators are also at fault), but it's a contributing factor.

2
agwa 13 hours ago 2 replies      
This is an extremely US-centric article. ARIN was never in as dire straits as the other RIRs. In Europe and Asia the situation is much worse. For example, lack of IPv4 addresses delayed DigitalOcean's growth in Amsterdam, and carrier-grade NAT is already being used by some consumer ISPs in Europe and Asia.
3
computator 4 minutes ago 0 replies      
I personally hope that IPv4 lives on for as long as possible.

Nobody has mentioned the fact that NAT is a huge--though unintentional--boon to privacy on the Internet.

Example: If the three-letter agencies wanted to trace the author of this message, they'd have to first demand the IP address from Hacker News (assuming HN keeps logs), and then demand from my ISP (which does keep logs) to know who was assigned that IP address at the time of my message.

At least there are a couple steps involved.

Can you imagine how exquisitely trackable we'd become if NAT didn't exist, and every single device had a unique, unchangeable, life-long IP address? That's (more-or-less) how IP addresses were supposed to behave and IPv6 brings that back.

We need to be thankful for NAT for the bit of privacy, anonymity, and freedom it brings!

4
spindritf 12 hours ago 4 replies      
It's here. IP addresses are costing more and providers are less generous with them. It used to be common to get a handful with a dedicated server, now you get one or two.

I have a friend who runs a small low-cost Minecraft hosting business. He has stopped giving his customers dedicated IPs altogether. They get a range of ports and a hostname with appropriate SRV records added.

That's the other result, technical work-arounds. You can point to a particular service on a particular port with an SRV record, host multiple websites on one IP, even SSL-enabled ones with SNI, etc.
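
A rough sketch of how the SRV work-around looks from the client side (the hostnames and service name here are made up, and this assumes the third-party dnspython package):

    # Resolve a service published via an SRV record instead of a dedicated IP.
    # The record itself would look something like:
    #   _minecraft._tcp.example.com. 3600 IN SRV 0 5 25565 mc1.example.com.
    import dns.resolver  # third-party package "dnspython"

    answers = dns.resolver.resolve('_minecraft._tcp.example.com', 'SRV')
    for record in answers:
        print(record.target, record.port)  # shared host plus the customer's port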

5
exabrial 53 minutes ago 2 replies      
Truth is NAT works just fine for the vast majority of cases, and makes a layered (IE not-eggs-all-in-one-basket) approach to security much simpler.

The real problem is routing table size with BGP. As we continue to divide the internet into smaller routable blocks, this is requiring an exponential amount of memory in BGP routers. Currently, the global BGP table requires around 256 MB of RAM. IPv6 makes this problem 4 times worse.

IPv6 is a failure, we don't actually _need_ everything to have a publicly routable address. There were only two real problems with IPv4: wasted space on legacy headers nobody uses, and NAT traversal. IETF thumbed their noses at NAT (not-invented-here syndrome) and instead of solving real problems using a pave-the-cowpaths approach, they opted to design something that nobody has a real use for.

Anyway, I'm hoping a set of brilliant engineers comes forward to invent IPv5, where we still use 32-bit public addresses to be backward compatible with today's routing equipment, but use some brilliant hack re-using unused IPv4 headers to allow direct addressing through a NAT.

Flame away.

6
jfasi 12 hours ago 1 reply      
> The day of reckoning still looms its just been pushed out as the major Internet players have developed ingenious ways to stretch those available numbers.

To me, this indicates something either broken about IPv6 or a lessened severity of the IPv4 problem: If it's better to apply bandaids to IPv4 than to roll out IPv6, then either IPv6 is not easy and flexible enough to be a viable alternative, or the problems faced by IPv4 are not as intractable as was suggested.

7
trout 7 hours ago 1 reply      
Here's a report where you can see the current projections with a bit of history: http://www.potaroo.net/tools/ipv4/index.html

The potaroo site by Geoff Huston has been running for over a decade tracking address consumption.

Some history for ARIN consumption predictions:

Feb 2014 predicts Mar 2015.

Oct 2013 predicts Jan 2015 [0].

Apr 2013 predicts Apr 2014 [1].

Nov 2012 predicts Sept 2013 [2].

Sep 2012 - RIPE out of addresses.

Apr 2011 - APNIC out of addresses.

Feb 2011 - IANA out of addresses.

Dec 2011 predicts July 2013 [3].

July 2011 predicts Nov 2013 [4].

Prior to this it's simply about IANA calculations, though with some algebra some dates could be extracted.

As well, here's a Cisco article from 2005 describing some of the painful parts of trying to predict the address consumption (where they guess 2016 in 2005): http://www.cisco.com/web/about/ac123/ac147/archived_issues/i...

[0] http://web.archive.org/web/20111227105916/http://www.potaroo...
[1] http://web.archive.org/web/20111227105916/http://www.potaroo...
[2] http://web.archive.org/web/20121122120407/http://www.potaroo...
[3] http://web.archive.org/web/20111227105916/http://www.potaroo...
[4] http://web.archive.org/web/20110709090704/http://www.potaroo...

8
gtirloni 13 hours ago 1 reply      
I have noticed a change in approach, even if unconscious. Instead of predicting doom, they have started to celebrate small victories (like Google IPv6 traffic passing 3%). I think that's natural when the size of this undertaking is so great.

Unfortunately IPv6 adoption is not a matter of just providing access lanes to this wonderful new technology. IP permeates too much of the infrastructure, tooling, etc. How could it not? Some companies might find the cost/ROI of working around IPv4 limited address space to be less than migrating to IPv6.

9
js2 12 hours ago 2 replies      
10
walshemj 10 hours ago 1 reply      
The main problem: 20 years ago, as the internet took off as a mass networking standard, it was blindingly obvious that IPv6 was deeply flawed. IPv6 should have been taken out behind the woodshed back then and IPv7 or 8 done properly.

When I looked at it, 19 of the 20 involved in the RFC for IPv6 were from academia, plus one guy from Bell Labs.

Migration and interoperation should have been the highest priorities in the design of a replacement for IPv4.

11
todd8 5 hours ago 0 replies      
This is an interesting article, but it contains some rather surprising innumeracy in its cavalier comparison of 2^128 to the number of grains of sand in the earth's crust. 2^128 is an enormous number, roughly 3.4e38:

    2**128                               = 3.4 * 10**38
    grains of sand to one mile down [1]  = 5.1 * 10**26
    stars in the observable universe [2] = 7.0 * 10**22
    estimated grains of sand [2]         = 7.5 * 10**18
This means that every star can have 100 planets (equals 7e24 planets) each with 50 trillion IPv6 addresses.

[1] [http://www.teracomtraining.com/tutorials/teracom-tutorial-IP...]

[2] [http://www.npr.org/blogs/krulwich/2012/09/17/161096233/which...]
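
A quick sanity check of those figures in Python, using the estimates cited above:

    addresses = 2 ** 128              # total IPv6 addresses, ~3.4e38
    stars = 7.0e22                    # stars in the observable universe [2]
    planets = stars * 100             # give every star 100 planets -> 7e24
    per_planet = addresses / planets
    print(f"{per_planet:.1e} addresses per planet")  # ~4.9e13, i.e. about 50 trillion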

12
damm 7 hours ago 0 replies      
The problem is that the majority of the companies stalling the IPv6 upgrade are in the US, which, as chimeracoder stated, is not going to feel the crunch as badly as other countries. People are very short-sighted, for one; and for two, they are afraid of 'breaking' what works.

I have even set up organizations with native IPv6 addresses (no tunnel), only to watch them fear and lament it.

There's a thousand excuses and people need to look upon this as an opportunity; to up their skill set and mentor a new generation.

13
72deluxe 12 hours ago 2 replies      
Not a comment on the article, but IPv6 adoption relies on significant upgrades of existing hardware. Think of the size of the lookup table that a new bit of hardware has to be able to look up against and store compared to IPv4. Significantly more processing power is required, particularly if the hardware is a device that does inspection of some sort, even if basic! It isn't just a case of switching end machines to use IPv6.
14
sschueller 12 hours ago 6 replies      
Doesn't the US defence department hold a ridiculous amount of the address space? What would it take for them to give some of that up?
15
Consultant32452 12 hours ago 0 replies      
More than likely the people in charge will not act pre-emptively by upgrading to IPv6 during the normal upgrade/replacement cycle of their network hardware. Instead, they will wait until there's a real crisis so they can ask the government to fund their next hardware upgrade.
16
neals 12 hours ago 3 replies      
A peek into the future of peak oil. Let's see how this plays out and learn from it.
17
jokoon 12 hours ago 1 reply      
I still wonder how much of the internet is not IPv6 compatible.
18
nnieiss 12 hours ago 0 replies      
nat
23
Poll: What impact do payment fees have on your margin?
35 points by tomasien  8 hours ago   13 comments top 8
1
patio11 8 hours ago 1 reply      
FWIW, from the world of SaaS: I charge five figures a month on Stripe, over a few products. Ticket size ranges from $29 to $2,499, with the largest numeric cluster at +/- $30 and the largest contribution to revenue at +/- $500.

Margins on the products range from 60% to 90%+, so optimization on the 3~5% that Stripe accounts for is not super-meaningful to my business. (For completeness: also use Paypal, at their 2.9% tier. Have never been dissatisfied with what I pay for CC processing.)

2
crystaln 7 hours ago 1 reply      
Presumably you mean gross margin, calculated as revenue less cost of goods and excluding payment fees.

For a $100 sale that costs $75 (a 25% gross margin, typical for internet sales), and a payment fee of 5%, the fee is $5, or 20% of the $25 gross margin. Each 1% of fees represents 4% of gross margin. That fee will represent a much higher percentage of net profit, after deducting other business costs.

This effect increases dramatically as gross margin decreases. At 20% gross margin, 5% represents 25% of margin.

For a low margin business, say 10% gross margin, a 5% fee represents 50% of the $10 margin on a $100 sale. If other business costs average $3 on that sale, the payment fee takes out $5 from the $7 margin, or 71%. A 1% decrease in payment fees increases net profit by 50%, from $2 to $3 on that sale.

So you can see on low margin business, payment fees have an enormous effect on margins and profitability.
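
A small sketch of that arithmetic, using the example numbers from this comment:

    def fee_share_of_margin(price, gross_margin, fee_rate):
        """Payment fee expressed as a fraction of the gross margin on one sale."""
        margin = price * gross_margin   # e.g. $100 * 0.25 = $25
        fee = price * fee_rate          # e.g. $100 * 0.05 = $5
        return fee / margin

    print(fee_share_of_margin(100, 0.25, 0.05))  # 0.2 -> 20% of a 25% margin
    print(fee_share_of_margin(100, 0.10, 0.05))  # 0.5 -> 50% of a 10% margin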

3
ccollins 6 hours ago 1 reply      
In a marketplace model (e.g. ebay, airbnb, amazon, seamless / any food delivery), the take rate generally falls between 10%-20%.

Credit card processing fees are somewhere around 2% - 3%, so in a marketplace with all credit card transactions, impact on margin will be 6% - 30%.

If that marketplace can implement alternative payments (e.g. ACH, bitcoin, etc), impact on margin can get down below 5%. Not sure how the ebay / paypal integration influences this - would be interesting data.

One additional method to reduce the impact on margin for marketplaces is to allow transactions to occur offline and then invoice the seller based on a % of the total amount. Then your impact on margin will be exactly what your processing fees are.

There will be much different dynamics for different revenue models (e.g. as patio11 said, in SaaS, it is basically irrelevant).

So, I argue that % impact on margin is not a very useful metric unless you are comparing a specific revenue model in a narrow vertical.

Would you rather have $10B of margin with 50% going to processing fees or $1M of margin with 0% going to processing fees?

4
bemmu 1 hour ago 0 replies      
Accepting the monthly $25 credit card payment for Candy Japan costs $1.31. This includes the payment gateway and middleware. If member pays by PayPal it is $1.15. Additionally there are monthly fees, which if split over the current 500 subscribers would add another $0.19 per payment. Puts it somewhere in the 11-25% bucket.
5
zerop 56 minutes ago 1 reply      
For micropayments it hurts a lot. When selling something as low as $2, a 30-cent fee is a lot.
6
analog31 5 hours ago 1 reply      
I've got a physical product that I make at home. Like most tiny home businesses, I don't figure the cost of my own labor, so my margin picture will look different than a traditional business. Perhaps a better indicator for me is that the PayPal fee is about 1/6 of my material costs.

I'd like it to be lower, but on the other hand, a few additional sales due to confidence in PayPal probably makes it an overall gain for me.

7
tomasien 7 hours ago 0 replies      
Added "5-9%" and "1-4%" because I realized 1-10% was too common
8
tomasien 8 hours ago 0 replies      
Comments on monetary total per month and who you use would be FANTASTIC
24
The C10M problem robertgraham.com
299 points by z_  1 day ago   110 comments top 25
1
erichocean 23 hours ago 3 replies      
What's significant to me is that you can do this stuff today on stock Linux. No need to run weird single-purpose kernels, strange hypervisors, etc.

You can SSH into your box. You can debug with gdb. Valgrind. Everything is normal...except the performance, which is just insane.

Given how easy it is, there isn't really a good excuse anymore to not write data plane applications the "right" way, instead of jamming everything through the kernel like we've been doing. Especially with Intel's latest E5 processors, the performance is just phenomenal.

If you want a fun, accessible project to play around with these concepts, Snabb Switch[0] makes it easy to write these kinds of apps with LuaJIT, which also has a super easy way to bind to C libraries. It's fast too: 40 million packets a second using a scripting language(!).

I wrote a little bit about a recent project I completed that used these principles here: https://news.ycombinator.com/item?id=7231407

[0] https://github.com/SnabbCo/snabbswitch

2
wpietri 23 hours ago 10 replies      
On the one hand, I love this. There's an old-school, down-to-the-metal, efficiency-is-everything angle that resonates deeply with me.

On the other hand, I worry that just means I'm old. There are a lot of perfectly competent developers out there that have very little idea about the concerns that motivate thinking like this C10M manifesto.

I sometimes wonder if my urge toward efficiency is something like my grandmother's Depression-era tendency to save string. Is this kind of efficiency effectively obsolete for general-purpose programming? I hope not, but I'm definitely not confident.

3
alberth 12 hours ago 2 replies      
WhatsApp is achieving ~3M concurrent connections on a single node. [1][2]

The architecture is FreeBSD and Erlang.

It does make me wonder, and I've asked this question before [3], why can WhatsApp handle so much load per node when Twitter struggled for so many years (e.g. Fail Whale)?

[1] http://blog.whatsapp.com/index.php/2012/01/1-million-is-so-2...

[2, slide 16] http://www.erlang-factory.com/upload/presentations/558/efsf2...

[3] https://news.ycombinator.com/item?id=7171613

4
joosters 21 hours ago 0 replies      
If you are going to write a big article on a 'problem', then it would be a good idea to spend some time explaining the problem, perhaps with some scenarios (real world or otherwise) to solve. Instead, this article just leaps ahead with a blind-faith 'we must do this!' attitude.

That's great if you are just toying with this sort of thing for fun, but perhaps worthless if you are advocating a style of server design for others.

Also, the decade-ago 10k problem could draw some interesting parallels. First of all, are machines today 1000 times faster? If they are, then even if you hit the 10M magic number, you will still only be able to do the same amount of work per-connection that you could have done 10 years ago. I am guessing that many internet services are much more complicated than a decade ago...

And if you can achieve 10M connections per server, you really should be asking yourself whether you actually want to. Why not split it down to 1M each over 10 servers? No need for insane high-end machines, and the failover when a single machine dies is much less painful. You'll likely get a much improved latency per-connection as well.

5
jared314 1 day ago 0 replies      
6
axman6 21 hours ago 0 replies      
It seems we've already passed this problem: "We also show that with Mio, McNettle (an SDN controller written in Haskell) can scale effectively to 40+ cores, reach a throughput of over 20 million new requests per second on a single machine, and hence become the fastest of all existing SDN controllers."[1] (reddit discussion at [2])

This new IO manager was added to GHC 7.8 which is due for final release very soon (currently in RC stage). That said, I'm not sure if it can be said if all (or even most) of the criteria have been met. But hey, at least they're already doing 20M connections per second.

[1] http://haskell.cs.yale.edu/wp-content/uploads/2013/08/hask03...[2] http://www.reddit.com/r/haskell/comments/1k6fsl/mio_a_highpe...

7
rdtsc 1 day ago 1 reply      
Here is how C2M<x<C3M connections problem was solved in 2011 using Erlang and FreeBSD:

http://www.erlang-factory.com/upload/presentations/558/efsf2...

It shows good practical tricks and pitfalls. It was 3 years ago so I can only assume it got better, but who knows.

Here is the thing though: do you need to solve the C*M problem on a single machine? Sometimes you do, but sometimes you don't. But if you don't and you distribute your system, you have to fight against sequential points in your system. So you put in a load balancer and spread your requests across 100 servers, each handling 100K connections. Feels like a win, except if all those connections have to live at the same time and then access a common ACID DB back-end. So now you have to think about your storage backend: can that scale? If your existing DB can't handle it, now you have to think about your data model. And then if you redesign your data model, now you might have to redesign your application's behavior, and so on.

8
leoh 22 hours ago 1 reply      
Projects such as the Erlang VM running right on top of xen seem like promising initiatives to get the kind of performance mentioned (http://erlangonxen.org/).
9
cjbprime 15 hours ago 0 replies      
> There is no way for the primary service (such as a web server) to get priority on the system, leaving everything else (like the SSH console) as a secondary priority.

Just for the record -- the SSH console is the primary priority. If the web server always beats the SSH console and the web server is currently chewing 100% CPU due to a coding bug..

10
ehsanu1 1 day ago 0 replies      
An implementation of the idea: http://www.openmirage.org/

A good talk about it by one of the developers/researchers: http://vimeo.com/16189862

11
Aloisius 10 hours ago 1 reply      
What's the current state of internet switches? Back when I used to run the Napster backend, one of our biggest problems was that switches, regardless of whether or not they claimed "line-speed" networking, would blow up once you pumped too many pps at them. We went through every single piece of equipment Cisco sold (all the way to having two fully loaded 12K BFRs) and still had issues.

Mind you, this was partially because of the specifics of our system - a couple million logged in users with tens of thousands of users logging in every second pushing large file lists, a widely used chat system which meant lots of tiny packets, a very large number of searches (small packets coming in, small to large going out) and a huge number of users that were on dialup fragmenting packets to heck (tiny MTUs!).

I imagine a lot of the kind of systems you'd want 10M simultaneous connections for would hit similar situations (games and chat for instance) though I'm not sure I'd want to (I can't imagine power knocking out the machine or an upgrade and having all 10 million users auto-reconnect at once).

12
swah 17 hours ago 0 replies      
Those two articles, http://blog.erratasec.com/2013/02/multi-core-scaling-its-not... (from Robert Graham) and http://paultyma.blogspot.com.br/2008/03/writing-java-multith..., seem to say opposing things about how threads should be used.

Having no experience with writing Java servers, I wonder if any you guys have an opinion on this.

13
memracom 21 hours ago 1 reply      
Just what are these resources that we are using more efficiently? CPU? RAM?

Are they that important? Should we not be trying to use electricity more efficiently since that is a real world consumable resource. How many connections can you handle per kilowatt hour?

14
EdwardDiego 23 hours ago 1 reply      
At the risk of sounding dumb, aren't we still limited to 65,534 ports on an interface?
15
voltagex_ 1 day ago 3 replies      
>Content Blocked (content_filter_denied)

>Content Category: "Piracy/Copyright Concerns"

I'm starting to use these blocks at my workplace as a measure of site quality (this will be a high quality article). Can someone dump the text for me?

16
ubikation 1 day ago 1 reply      
I think Cheetah OS, the MIT exokernel project, proved this, and HaLVM by Galois does pretty well with the network speed that Xen provides, but I forget by how much.

The netmap freebsd/linux interface is awesome! I'm looking forward to seeing more examples of its use.

17
BadassFractal 1 day ago 0 replies      
This article on High Scalability also covers part of the problem: http://highscalability.com/blog/2014/2/5/littles-law-scalabi...
18
eranation 14 hours ago 0 replies      
What about academic operating system research that was done years ago? Exokernel, SPIN, all aim to solve the "os is the problem" issue. Why don't we see more in that direction?
19
dschiptsov 20 hours ago 0 replies      
So, he is trying to suggest that a pthread-mutex-based approach won't scale (what news!) and, consequently, that the JVM is crap after all? The next step would be to admit that the very idea of "parallelizing" sequential code, which imperatively processes sequential data, by merely wrapping it in threads is nonsense too? Where is this world heading?
20
porlw 21 hours ago 0 replies      
Isn't this more-or-less how mainframes work?
21
ganessh 21 hours ago 1 reply      
"There is no way for the primary service (such as a web server) to get priority on the system, leaving everything else (like the SSH console) as a secondary priority" - Can't we use the nice command (nice +n command) when these process are started to change its priority? I am sorry if it is so naive question
22
nwmcsween 1 day ago 1 reply      
So an exokernel?
23
ksec 1 day ago 0 replies      
I think OSv or something similar would be part of that solution. Single User / Purpose OS designed to do one / few things and those only.

I could only hope OSv development would move faster.

24
zerop 22 hours ago 1 reply      
One more problem is the cloud. We host in the cloud. Cloud service providers might be using old hardware. The newest hardware or a specific OS might be the winner, but there are no such options in the cloud. How do you tackle that?
25
slashnull 20 hours ago 0 replies      
The two bottom-most articles (protocol parsing and commodity x86) are seriously pure dump, but fortunately the ones about multi-core scaling are pretty damn interesting.
25
3D GIFs Created with a Simple Visual Effect mymodernmet.com
499 points by bpierre  1 day ago   99 comments top 26
1
wikiburner 1 day ago 0 replies      
This was submitted last week, but didn't get many upvotes, so I'm glad to see this submission taking off today:

https://news.ycombinator.com/item?id=7200147

The following was a really interesting discussion, that I'd love to hear more opinions on:

==================================

pedalpete 8 days ago | link

This is really interesting. I wonder if the lines have to be so solid, or if a similar effect could be accomplished without breaking the image so much.

Would a bunch of almost imperceptible lines work? What about a smallish change in colour saturation or similar?

gojomo 8 days ago | link

I was wondering the same thing. Might a finer mesh/grid work? Or bars with some dimensional shading themselves? Or slight transparency?

Could the bars/layer even be animated, along some consistent plane, so that there's no static background part of the scene that's always obscured. (That might allow even thicker bars, if that's otherwise helpful for the plane-of-reference establishing effect, but which aren't as distracting, since the mind's persistence will 'see around' them.)

Combining these, maybe there could be more than one synthetic depth plane active at once, distinguished by color, translucence, or direction-of-motion? There'd be some perceptual dimming with all that layered-in non-native 'depth chrome', a little like looking through lenses or filters... but hey, other stereo 3D tech has similar tradeoffs.

2
neals 1 day ago 2 replies      
Kinda reminds me of that other visual effect, where they add circles to a photo and give the illusion of the people in the picture being naked. Won't post the link here cause NSFW, but I can imagine it being a related effect: adding visual markers to change perception.
3
PeterisP 1 day ago 1 reply      
Do note that the effect is much stronger (at least for me) if the moving object goes outside of the perceived image bounding box - the http://bit.ly/1bUkBJQ example in the original article.
4
rurounijones 19 hours ago 0 replies      
For me the success of the effect was determined by how smoothly the object went in front of the lines.

The black and white image of the puppy didn't work for me at all because you could see the "pop" as it suddenly went in front.

Ice-Age and the Avengers clip seemed to stand out much more for me because it was smoother (To my eyes at least).

5
MarcScott 1 day ago 0 replies      
I think these appeared on Tumblr a year or so ago. Found a nice looking tutorial for making them here - http://www.youtube.com/watch?v=TmAWiVxOyto
6
SpeakMouthWords 1 day ago 1 reply      
One particularly interesting thing about the origin of this effect is that it spawned from the file size limit on .gif files on Tumblr. If users wanted a better balance of GIF length, frame rate, and detail than that limit allowed, they would upload multiple sections of the same scene side by side. Tumblr's formatting would then add in the white bars automatically. This presumably gave the inspiration to use this for a 3-D effect.
7
s-macke 1 day ago 2 replies      
http://www.well.com/user/jimg/stereo/stereo_list.html

Other idea, but gives similar 3d effects

8
tlarkworthy 1 day ago 1 reply      
well that effect is cognitively bookmarked for the next hackathon. Presumably it will work in games if used sparingly.
9
pareshverma91 1 day ago 1 reply      
Showing the same gifs without those white solid lines for comparison would have been better. Anyways cool stuff :).
10
shmerl 1 day ago 2 replies      
Why specifically GIFs? It's a generic animation technique, works with any video format.

As was pointed out many times, if you have control over your site, don't use GIFs for video and animation. Use proper video formats (WebM and etc.). It will only save space and loading time and improve quality.

11
hcarvalhoalves 1 day ago 0 replies      
I think this would work without the bars, by just having the object pop out from the boundary box.
12
CodeWithCoffee 21 hours ago 0 replies      
To answer other commenter's questions about the color, my perception is that it has to be the same color as the page background. This gives the illusion that the image is behind a 'window' in the page that is covered by the bars. Then when something moves from 'behind' the bars to 'in front' of them it gives the illusion of depth.
13
dredmorbius 21 hours ago 0 replies      
I'd seen this a few times in recent weeks and wasn't overly impressed, but yeah, sure, whatever. Clicked to open the link in a new tab, continued to other tasks for a while. It took me some time before I navigated back to it.

I flinched when I did as Capt. America's shield came flying toward me.

Maybe there is something to this after all.

14
vor_ 1 day ago 1 reply      
Unfortunately, I don't perceive the effect (I'm assuming there's supposed to be an optical illusion of 3D). The animations all look flat to me.
15
chippy 15 hours ago 1 reply      
In my opinion it's very clever and interesting, but it's not particularly nice. It's very obvious in its cleverness.
16
samweinberg 1 day ago 0 replies      
I wonder how prominent the bars have to be for this effect to still work. Can they be translucent or a color other than white?
17
bigfaceworm 10 hours ago 0 replies      
Off topic: Captain America's throwing form is atrocious. See this for good form: https://www.youtube.com/watch?v=Z0dXR6EiReY
18
ahcox 14 hours ago 1 reply      
They look particularly good expanded out to fill the page:

   http://hoog.li/g?g=http%3A%2F%2Fwww.viralnova.com%2F3d-images%2F&cimw=480   http://hoog.li/g?g=http://www.mymodernmet.com/profiles/blogs/3d-gifs&cimw=320

19
the_cat_kittles 1 day ago 1 reply      
Let's see - I wonder if you could take a 3D scan of a scene (still or moving) and then superimpose a 3D lattice of white lines or something, and automatically generate the correct occlusion? That would make this effect very precise, which might be kind of cool. Sort of like projecting a 2D lattice on a golf green to read the break.
20
LambdaAlmighty 19 hours ago 1 reply      
Didn't work on me.

I didn't understand what the "visual effect" is supposed to be until I read the description.

I still see animated 2D GIFs with bars over them (=no real difference if the bars were removed).

21
Siecje 1 day ago 0 replies      
Does anyone have the originals to compare?
22
optimo 1 day ago 3 replies      
is it okay to not be impressed with this 'effect'?
23
hawleyal 14 hours ago 0 replies      
"3D"
24
jazlyn 14 hours ago 1 reply      
its really an awesome gif collection. You may also like: http://www.thephotomag.com/2012/12/collection-of-30-still-ph...
25
obamasupporter 1 day ago 0 replies      
Awesome
26
flibertgibit 1 day ago 3 replies      
This effect does not work on me, and, yes, I have two working eyes and neither is "lazy".
26
KitKat will make your SD Card useless plus.google.com
96 points by radley  4 hours ago   96 comments top 22
1
bryanlarsen 3 hours ago 3 replies      
The sky is not falling: [External Blues: Google Has Brought Big Changes To SD Cards In KitKat, And Even Samsung Is Implementing Them]: http://www.androidpolice.com/2014/02/17/external-blues-googl...

Summary:

All apps (even ones without external storage permissions) can now read and write from a designated private folder on external storage.

There's a new sharing framework called the Storage Access Framework that can be used to request access to other folders, which treats external storage in a manner similar to cloud storage.

It might be a harsh transition, but not completely surprising since it's been quite a while since Nexus devices have had external storage slots, and it's been quite obvious for a while that Google has been discouraging external storage.

The hopeful takeaway is that Google will be relaxing this discouragement now that they've figured out a strategy for external storage moving forward.

2
duncan_bayne 1 hour ago 1 reply      
I feel like a right idiot for championing Google as an open alternative to Apple for so many years. It now appears that they were operating a giant bait and switch. Now that they are the incumbents, they no longer benefit from openness, so say goodbye to writeable SD cards, XMPP federation, RSS support, Google Reader ...

Each change is defensible if viewed in isolation, but when seen as a whole it is obvious what the overall plan is.

I think our only hope might be Ubuntu.

3
fidotron 3 hours ago 0 replies      
The plan here is quite simply that any memory in your device becomes a cache of some cloud storage somewhere.

SD cards have the problem that they can be removed, and thus easily inspected, so cloud services wanting to keep their data locked up when it's cached have to resort to measures such as Facebook's Conceal library, which is more to do with preventing users from getting their own info out of Facebook than it is preventing any actual malicious activity.

4
JustinTipton 3 minutes ago 0 replies      
An Android fan plugs their SD card into their camera while on a safari. They get back to their hut and plug the SD card into their Android device. The camera puts pictures in the folder "Camera".

These users will be able to see the photos on their Android tablet, but they won't be able to free up space or modify any of these photos until they get to a computer. Or iPhone with an SD card accessory.

5
wyager 2 hours ago 3 replies      
Remember how outraged people were when the iPhone didn't have SD card slot, and how people here on HN used that as a selling point for android?

Remember how outraged people were when the iPhone didn't support flash, and how people here on HN used that as a selling point for android?

How things change.

6
shadowmint 1 hour ago 0 replies      
Gah, as if the terrible Android File Transfer weren't bad enough now we're all forced to use MTP instead of mounting as USB drive...

...now we don't even get to use an on-device file manager to clean up the stupid mess the Android File Transfer app leaves behind?

-__-

7
stefan_kendall 1 hour ago 2 replies      
Someone buried the lede. The real story here is that someone posted to Google+.
8
vertis 1 hour ago 0 replies      
While I sympathise with the pain this is going to cause, I have an sdcard that I've been using since I got my Nexus One. It's a horrible mess because apps create things and then never delete them (etc).

Having a much more organised system for storing data (and removing it if the app is removed) makes a lot of sense.

9
supercoder 2 hours ago 1 reply      
SD cards were such a terrible mess on Android. This is excellent news from a developers POV.
10
venus 3 hours ago 5 replies      
I wonder what percentage of android users even uses SD cards?

My guess: pretty small.

11
whoopdedo 3 hours ago 1 reply      
"Restricting writes in this way ensures the system can clean up files when applications are uninstalled."

Isn't this backward? Controlling clutter is more important on the internal storage that can't be replaced. If some apps write a bunch of crap to my external SD card I can clean it up myself or swap out a larger capacity.

12
protomyth 1 hour ago 1 reply      
Say what you want about the Newton, but it probably had the most elegant way to deal with external storage.

I keep wondering how far the lock-down on new computing devices will go. I really wish someone would build an open device even if it isn't a phone (e.g. iPod Touch).

13
JustinTipton 2 hours ago 0 replies      
I made a feature request to allow users the option to give apps write access. Google has already "declined" this feature request, but I encourage you to comment and star this issue, if you'd like the option to delete songs in your music player.

https://code.google.com/p/android/issues/detail?id=65974

14
ausjke 47 minutes ago 0 replies      
This really sucks. The only hope to get rid of Android/iOS apps' dominance is probably browser/HTML5 apps; I hope that comes sooner, so our need for native apps (especially their underlying, increasingly proprietary environment) can be minimized.
15
ck2 3 hours ago 1 reply      
Does cyanogenmod have this limit?

I have not seen that complaint from people fiddling with the cm11 4.4.2 build

16
voltagex_ 3 hours ago 4 replies      
I'd switch away from Android for this, but to what?
17
xfalcox 2 hours ago 0 replies      
But Google Music just updated and can now save music to my external SD, and I'm on the latest KitKat!
18
WeFlowin 54 minutes ago 0 replies      
I don't understand. Don't Android apps have unique users? Like app_xxx? Why couldn't the fuse filesystem just set the /mnt/sdcard to be owned by root (chmod 644), and app directories owned by app_xxx (chmod 660)? That would fix whatever read permissions on apps could be leveraged from being in the sdcard_rw group. Access to read sdcard files has never been an issue, sdcard permissions have been granted. Anything that didn't require sdcard permissions would be in the /data/ directory. Someone please explain this?
19
geeNoThanks 3 hours ago 2 replies      
Holy shit. Fuck this.

Back to laptops.

Apple products are bullshit because obvious planned obsolescence is obvious. No removable storage, and no removable battery makes Jack a dull boy.

Now Google too?

Vote with your feet.

20
Geee 3 hours ago 1 reply      
Great, nice to see Android finally pulling bolder moves for greater user experience. SD cards are the floppy disks of 2014.
21
freeqaz 3 hours ago 1 reply      
This makes sense to me from a privacy standpoint. Your SD card can be easily removed, and if there is sensitive data there that's bad.
22
contingencies 3 hours ago 0 replies      
Just another it's-not-DRM restriction from your freedom-loving watchers at the Googleplex. Redefining 'open source' with lawyers and shady trips to North Korea since god knows when. Now, unit, return to your regularly scheduled email/phone/SMS control feed, keyed to your IMEI/ISIN/email/phone number/credit card/API keys/web properties/contacts/GPS/nearby wifi SSIDs, and proceed to be monitored.
27
How north ended up on top of the map aljazeera.com
98 points by zvanness  16 hours ago   76 comments top 18
1
jasallen 13 hours ago 6 replies      
The article does get to my base assumptions, but it takes awhile. (1) North Star was what people navigated by first, so would orient maps to that regardless of which way the words were written -- so people started writing the words to fit. (2) Compasses, pick north or south for same reason. North was picked because we were already using the North Star.

It all just makes so much better intuitive sense than a "Conspiracy of Northern European Hegemony"

2
tokenadult 12 hours ago 1 reply      
It's an interesting article. I looked for references to Chinese practice. In Chinese, a (magnetic) compass is referred to as a "指南针," literally "south-indicating needle," so it does appear that south was the most important cardinal direction for the Chinese-speaking people who first used compasses. The article notes that old Arab maps showed south on the top, and attributed this to Chinese practice in map-making (which, if I remember correctly without sources at hand, was not fully uniform in this regard).

Overall, the article offers an interesting discussion of trade-offs in map-making, including the trade-off of what country to put in the middle of a flat wall map that shows the whole world. I have become quite used to Chinese wall maps of the world in Mercator or equal-area projections that display China in the middle (left-to-right) and consequently split North America along both sides of the map.

3
Claudus 12 hours ago 0 replies      
A pointed needle can be magnetized to point either North or South. But you definitely want some way of distinguishing between the two ends, and making one end pointed seems like the easiest way to do so. Maybe the magnetization just happened to result in the needle pointing North the first time and that set the standard.

Really interesting article on making a compass.http://www.wildwoodsurvival.com/survival/navigation/rbimprov...

4
jpatokal 2 hours ago 0 replies      
For Americans, it's easy to think that our position, at the top left of most maps, is the intrinsically preferable one.

This clearly was not written by a geek. Obviously the top right is preferable, since it's the only region where both decimal latitude and longitude are positive.

5
est 13 hours ago 0 replies      
The Chinese make south the top because the emperor must face south to rule. So if they look at maps, they surely must look toward the south.
6
acheron 12 hours ago 0 replies      
I had heard a story once that the verb "orient" (as in finding where you are, aligning yourself/something else, etc.) was related to east being on the top of the map: you figured out which direction was "oriental" (i.e. eastern), and matched it with the top of the map.

Later I researched it though and while that may have contributed, the more likely original meaning came from people wanting to build their churches and such to face east.

7
dhughes 10 hours ago 0 replies      
Etymology Online has an interesting description for the word north:

"...possibly ultimately from PIE ner- "left," also "below," as north is to the left when one faces the rising sun..."*

It's pretty much arguing over our ancestors saying "to the left" in reference to the rising sun.

8
secstate 13 hours ago 2 replies      
Seems to me the vast majority of habitable area lies in the Northern hemisphere, so this would likely be an issue of statistics, not subconscious superiority.

Also, magnetic north on a compass is a natural way to orient something.

9
kbutler 11 hours ago 0 replies      
The position of most prominence and importance on a map is the center, not the top.

Would you use a map application that always put the location you searched for in the top left?

The article referenced this briefly (Italy and Jerusalem were disappointed to not be at the center).

The manufactured controversy that Europe and America are traditionally at the top of global maps is rather silly.

10
georgecmu 13 hours ago 2 replies      
Interesting historical exploration! The medieval debate on whether North, East or South should be at the top of the map could be linked to the disagreement on the East/North coordinate system convention of the modern day: ENU vs NED.

Generally, land vehicles use the ENU (East-North-Up) convention, with East corresponding to the X axis and Z axis point up (against the gravity vector), while sea and aircraft use the NED (North-East-Down) convention, with X axis pointing North, and Z axis pointing down, aligned with the gravity vector.

11
ArekDymalski 14 hours ago 0 replies      
Two explanations came to my mind:

1. East - the direction where the Sun appears - was located on the right because the right hand had a special meaning for people.

2. South was located down (behind the back of someone reading the map) to provide the maximum amount of light to the reader.
12
squigs25 11 hours ago 0 replies      
Is it unfair to also ask why the poles of the earth don't appear on the sides of a map?
13
jackgavigan 13 hours ago 0 replies      
Someone needs to tell McArthur that his "Universal Corrective Map of the World" is missing an entire continent.
14
filereaper 5 hours ago 0 replies      
The West Wing covered this on the topic of the Mercator Projection: http://www.youtube.com/watch?v=vVX-PrBRtTY

cheers.

15
kiliancs 13 hours ago 0 replies      
Just a small, almost off-topic correction: Majorca was not Spanish-ruled in 1375, hence the name of the map, the "Catalan Atlas," attributed to the cartographer Abraham Cresques, as well as his mentioned nickname "el jueu de les bruixoles" ("the Jew of the compasses," in Catalan).
16
decentrality 13 hours ago 2 replies      
A better "up" would be East, oriented to the Galaxy as a whole, rather than Earth's roll relative to the Sun.
17
nroose 8 hours ago 0 replies      
That article says that Stuart McArthur launched his Universal Corrective Map of the World on Jan. 26, 1979!
18
dschiptsov 13 hours ago 1 reply      
Because a compass points there?
28
The Difference Between Programmers and Coders [2013] workfunc.com
5 points by Rumudiez  1 hour ago   1 comment top
1
yawz 0 minutes ago 0 replies      
Potato - Potahto.
29
Using Django for mostly static sites goodcode.io
70 points by senko  13 hours ago   53 comments top 10
1
skywhopper 13 hours ago 2 replies      
Impressively long way around to avoid relying on Nginx or Apache. How does it perform in comparison?
2
milkanic 11 hours ago 0 replies      
I'd recommend checking out Mezzanine[1]. I use it for everything now, regardless of whether I need a full CMS. The included fab file makes deployment dead simple[2], and the caching strategy[3] results in nearly zero DB hits.

[1] http://mezzanine.jupo.org/
[2] http://mezzanine.jupo.org/docs/deployment.html
[3] http://mezzanine.jupo.org/docs/caching-strategy.html

3
snide 12 hours ago 0 replies      
If you're looking at Django for static sites mostly for the templating engine and some build-time Python, I'd check out the excellent Cactus. Apparently there's a new downloadable Mac app for it, which I've yet to try, but as a Django templater / designer, Cactus was always my go-to for static sites.

https://github.com/koenbok/Cactus?source=c

4
motter 12 hours ago 2 replies      
I'm in the process of moving away from a static site for my website.

Simply put, a web interface is often more convenient so I can update things easier on the go, and it's good to have a place to host experiments too.

I have used various static site generators but none of them seemed to be significantly less work to get going than a small django app on heroku. Though I've been using Django for a few years, so there is simply no learning curve left.

5
mcjiggerlog 12 hours ago 3 replies      
Also worth checking out is the Flask microframework. Setting up a simple static site with basic routing is ridiculously simple:

http://flask.pocoo.org/docs/quickstart/
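For a flavour of just how little code is involved, here is a minimal sketch of such a Flask app (the route and template names are illustrative assumptions, not taken from the quickstart or the tutorial below):

    from flask import Flask, render_template

    app = Flask(__name__)

    @app.route("/")
    def index():
        # Renders templates/index.html
        return render_template("index.html")

    @app.route("/<page>")
    def static_page(page):
        # Renders templates/<page>.html, e.g. /about -> templates/about.html;
        # a real site would whitelist the allowed page names.
        return render_template("{0}.html".format(page))

    if __name__ == "__main__":
        app.run(debug=True)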

I used it in conjunction with Bootstrap and Heroku to throw a personal site together in an evening, following this tutorial:

http://www.shea.io/lightweight-python-apps-with-flask-twitte...

6
lumpypua 11 hours ago 0 replies      
I built a decently large static content site with a dynamic backend (http://yareallyarchive.com) on django.

With aggressive caching of all static pages it is fast as hell, and I love the django ecosystem. Adding search was as easy as dropping in django-haystack and adding like 5 lines of code. django-mptt has been brilliant for easily querying and manipulating comment trees.
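As a rough illustration of what those few lines tend to look like, here is a hedged sketch of a django-haystack search index (the app and model names are assumptions, not from the comment):

    # search_indexes.py
    from haystack import indexes
    from blog.models import Post  # illustrative model

    class PostIndex(indexes.SearchIndex, indexes.Indexable):
        # The main document field; by convention its contents come from
        # templates/search/indexes/blog/post_text.txt.
        text = indexes.CharField(document=True, use_template=True)

        def get_model(self):
            return Post

        def index_queryset(self, using=None):
            # Objects indexed when running `manage.py rebuild_index`.
            return self.get_model().objects.all()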

It's possible to do small sites with django but it requires you to learn a lot of stuff you don't necessarily need. Flask is a better choice for small/nearly-static sites if you're not already familiar with a web framework.

7
koenbok 7 hours ago 0 replies      
We just shipped a Mac app on top of Cactus, an open source static site generator based on django.

http://cactusformac.com

8
dangayle 7 hours ago 1 reply      
This is exactly the method my company uses when one of the stories on our website is getting hammered by Reddit. I manually wget the rendered template, save it to a templates/static/ dir, and serve it directly.

Works like a charm.

9
natrius 11 hours ago 2 replies      
Don't do this. If you aren't using a database or processing forms, your life will be better if you learn jekyll and use it.
10
jlafon 12 hours ago 1 reply      
The most interesting part to me is that there is no database. It had never occurred to me to use Django without one.
30
Free static page hosting on Google App Engine in minutes fizerkhan.com
75 points by fizerkhan  13 hours ago   55 comments top 19
1
rza 11 hours ago 2 replies      
I host my personal website on GAE. One thing to be aware of is that when moving to a personal domain, you need to map to a subdomain, so you can't use a naked domain[1] (e.g. 'http://github.com'). You have to map to something like 'www.'

[1] https://developers.google.com/appengine/kb/general#naked_dom...

2
tuananh 17 minutes ago 0 replies      
What are the advantages of using GAE over Github Pages?
3
bobfunk 11 hours ago 5 replies      
Did a quick comparison with my own service, BitBalloon.

On App Engine you're just deploying a dynamic app that routes everything to a static folder, but since Google doesn't know it's a static site, there's a limit to what they can do to set good cache headers and optimize for performance. So even though they have awesome infrastructure, BitBalloon will make your site perform better.

Here's the quick test result from the same site uploaded to AppEngine and BitBalloon:

http://tools.pingdom.com/fpt/#!/vLi9d/http://teststaticsite....
http://tools.pingdom.com/fpt/#!/dwqwvX/http://speedtest.bitb...

4
mehulkar 9 hours ago 2 replies      
Is there a way to redirect all[1] requests to index.html? The use case is an Ember app that uses history location and is deployed as a static website. In this case, requests to `/whatever` still need to serve index.html and let Ember handle the routing. Can GAE app.yaml specify rules like this?

I ran into this problem with S3 and ended up writing a simple server to handle it and deploying to Heroku.

[1] By all, I mean all except the ones to /assets or something similar.
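For what it's worth, app.yaml handlers are matched top to bottom, so something along these lines should serve /assets directly and hand every other URL to index.html. This is a hedged sketch assuming the Python 2.7 runtime; the application id and paths are illustrative, not from the comment.

    application: your-app-id
    version: 1
    runtime: python27
    api_version: 1
    threadsafe: true

    handlers:
    # Serve real assets directly...
    - url: /assets
      static_dir: assets

    # ...and fall through to index.html for everything else,
    # letting the client-side router handle the path.
    - url: /.*
      static_files: index.html
      upload: index.html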

5
ethikal 11 hours ago 1 reply      
"Moreover it is faster than other static hosting services. Because it runs on Google infrastructure."

Um, wow.

6
donniezazen 1 hour ago 1 reply      
Can you use one of the static site generators and git to publish your posts?
7
fuzzythinker 8 hours ago 0 replies      
Somewhat related: I've recently found and evaluated Node.js static site generators (including Ghost, which I find vastly overhyped and still too stuck in the WordPress way of doing things). The relatively unknown Wintersmith [1] is just so awesome that it really needs a mention. It's easy to get started with, relatively well documented compared to the others, and you have full control to extend it should you need to.

[1] http://wintersmith.io

8
donniezazen 2 hours ago 0 replies      
Since it's Google, I am sure one is allowed to use Adsense.
9
mountaineer 11 hours ago 1 reply      
From years back, there was a tool created called DryDrop[1] that allows you to publish static GAE sites via GitHub push.

[1] http://drydrop.binaryage.com/

10
praseodym 10 hours ago 1 reply      
GitHub Pages is even easier to set up, and does support naked domains (although to use their CDN it is limited to DNS providers that support ALIAS records) -- https://github.com/blog/1715-faster-more-awesome-github-page...
11
rikkus 12 hours ago 0 replies      
Seems you can set up custom DNS for your site too, with wildcards even. Pretty nifty.
12
tapsboy 11 hours ago 0 replies      
How is it better than just using Dropbox or S3 or even Google Drive to host static content?
13
contacternst 12 hours ago 3 replies      
"All the services has its advantages and disadvantages over other."

What's wrong with this sentence?

14
herokusaki 12 hours ago 1 reply      
Any limitations on traffic?
15
WhitneyLand 8 hours ago 1 reply      
What service allows full SSL use without paying big bucks?
16
bhartzer 10 hours ago 0 replies      
Putting up a landing page for a 'parked' domain name would be a great use of this, rather than letting your registrar put up a page where they make money off the clicks.
17
aritraghosh007 9 hours ago 0 replies      
I have my personal site hosted on Google App Engine too. Must say it's sleek and simple. I had evaluated a lot of other (typical) options like EC2, Dropbox (S3), and even GitHub Pages. Nothing came close to the ease and performance that GAE gives. Benchmarking about 10 URLs on my site returned an average response time of 3.5 ms aggregate on GAE vs 5.6 ms on EC2.
18
Kiro 8 hours ago 1 reply      
Seems nice! How do you set up a custom domain name for it?
19
drakaal 8 hours ago 0 replies      
If you want mostly free hosting, I built this a long time ago: http://www.cdninabox.com/ - it mirrors any site, with caching, using Google's edge cache.

Because it handles URL rewrites, you can host in a subdirectory on any host and still have it appear as your root to the customer.

I mostly abandoned this when Google launched "PageSpeed" for App Engine, which was too direct a competitor. Also, when they moved to Apps for Domains you could no longer have a naked domain, and that was annoying. I don't like www. having to be on the front of my URL.

       cached 18 February 2014 05:02:01 GMT