Hacker News with inline top comments - 14 Jul 2016
1
The Fight for the Right to Repair smithsonianmag.com
373 points by sinak  6 hours ago   160 comments top 16
1
RcouF1uZ4gsC 5 hours ago 10 replies      
I think this whole open source and right to repair/modify will be very interesting with self-driving cars, because of their interaction with the commons. Here are some issues.

There have been some discussions about the ethics of self-driving cars: should a car sacrifice the lives of the people inside it to save more lives? In a right-to-repair/modify world, wouldn't a lot of people pay to have the algorithm changed so their car always favors the people in the car, no matter what?

If the self-driving software is completely open source, you can exploit the collision avoidance algorithms to favor aggressive driving with your car. For example, if the software tries to keep a 15-foot buffer between cars, you can "tune" your software to use a shorter buffer and cut in front of traffic more easily.

Law enforcement will campaign for a remote "pull over" command to prevent people from fleeing police.

The answers to these types of questions will be very important for open source going forward.

2
CaptSpify 4 hours ago 4 replies      
I used to do after-market repair of medical equipment (MRIs, CTs, X-rays, etc.). One of the biggest frustrations was: $manufacturer wants $5000 to repair a CT. We'd charge $1000.

We'd repair the CT, and it would pass all the built-in diagnostic tests. But then when the customer went to make a scan, a pop-up would appear saying "Unauthorized Repair! Call $manufacturer to fix!".

$manufacturer's repair tech would come in, plug in a USB key, type a code, and charge $1000. They didn't run any of the diagnostics, and were basically paid to keep the USB key available.

I believe in the right to repair, because preventing it just causes artificial monopolies and price-gouging.

(note: I don't remember actual prices. Numbers were just made up)

3
acd 5 hours ago 1 reply      
I think electronics should be designed with repair and easy recycling as main design goals.

For example, a cell phone: you could build it with an aluminum back with small Phillips screws instead of a glued back. Open the phone up with a standard screwdriver and you will be able to replace the battery, main board, and screen yourself.

What kind of environment do you want to leave for future generations? A big pile of electronic garbage, or a world as clean as it could be? What about the carbon footprint of upgrading phones every two years?

If we can reuse electronic components, the garbage footprint should be smaller.

Why do we have to throw away a working screen and battery in a cell phone if we just want to upgrade the CPU speed or camera?

How about laws that require that consumers be able to repair their things over an average product lifetime?

Good projects on the right path so far: Google's Project Ara, and the Fairphone 2.

4
mdip 1 hour ago 0 replies      
This was a hugely frustrating thing for me recently.

I picked up a used Pioneer AVR on eBay a few years ago and had no issues with it until I decided to plug my wife's older plasma television in because my main TV decided to take up an unusual smoking habit. It refused to connect to the new TV, indicating an HDCP error, despite working when plugged directly into any of the attached devices. A quick Google search yielded the likely culprit: my firmware was out of date. This model was one version behind but, unfortunately, also one model-year behind the ones that allowed for online updates. So I called Pioneer and was told they do not provide firmware updates directly to consumers and I could only get a firmware update from an authorized repair center. Besides the added frustration of having to disconnect the 30 or so cables, cart this thing across town, then come back at a later date to pick it up and plug it all back in, the fee was going to be half of what I had paid for the thing in the first place. So DRM caused a product I'm using legally to fail, and the simple software fix was not allowed to be applied by me.

After following several dead links, I managed to find a forum where someone posted a Dropbox link to the firmware, making me a firmware pirate (aaarr!). This incredibly technical process that can only be performed at an authorized service center? Extract a file to a USB key, insert said key, turn on device while holding down two buttons and wait until the screen says it's done.

Pioneer's approach here succeeded in making me a "former customer" at some point in the future since the firmware update fixed my problem.

5
The_Hoff 5 hours ago 4 replies      
For me, Apple devices are always what come to mind when thinking about ability to repair and modify. I used to not mind when I couldn't open my iPod Classic to replace the battery, because failures like this rarely occurred and when they did Apple repaired them easily. Today however, when something as trivial as a computer memory upgrade is restricted, and the Genius Bar is overloaded with iPhone screen repairs, I find myself wanting some sort of standards with regards to repair rights.
6
mmanfrin 5 hours ago 2 replies      
John Deere is attempting to make sales of their tractors 'leases' to enforce their ban on self repair:

http://www.wired.com/2015/04/dmca-ownership-john-deere/

7
jswny 5 hours ago 1 reply      
I think that measures like planned obsolescence and preventing consumers from repairing the devices that they own should not be legal. They should be treated similarly to anti-competitive practices.
8
robert1976 4 hours ago 1 reply      
If substantial numbers of consumers are demanding fixable/open electronics, a manufacturer could comply and fill a great need! My guess is that market is very small. Regulation is unnecessary here.
9
SilasX 5 hours ago 1 reply      
Right to repair = ban on any waiver of repair rights in any commercial relationship.
10
azraomega 4 hours ago 1 reply      
While I hate companies suing people for repairing their own property, "right" has been overused in so many places. What a sensational word! I have the "right" to not be sued! What?

I would advocate that companies certify other entities as repair authorities, or just do a good job themselves of providing the support. Of course it would reduce their product sales, and that would be a huge deal to us consumers as well, because we want them to do well and provide better products next round.

However, it is dangerous that we use sales as THE primary measure. I believe companies should take pride in the quality of their products and services. They should inspire customer loyalty, not unethical growth to please Wall Street.

11
paulsutter 5 hours ago 3 replies      
This is carburetor nostalgia, from a time when everything was human scale and adjustable. Look at how chips are stacked and interconnected within packages in modern designs[1]; that trend will only intensify. Software is also getting more complex: with deep learning, the system develops its own if/thens.

[1] http://img.koreatimes.co.kr/upload/news/070905_p10_hynix.jpg

12
davidf18 3 hours ago 0 replies      
iFixit (https://www.ifixit.com) is pretty great for Mac products.
13
lifeisstillgood 4 hours ago 0 replies      
We shall, as a society, realise that software is some cross between literacy and legislation and that the right to read (and write back) software that affects us is fundamental to the good operation of society, and we will advance together.

Or we won't and we will end up like North Korea, pretending reality is somewhere it's not.

Nature is not going to care if we don't board the clue train.

14
partycoder 4 hours ago 0 replies      
Well, this is common sense. Remember the "Ecce Homo" painting? There are some things that will likely go wrong if you do them by yourself, sometimes with consequences that are not desirable for "the common good".

This includes repairing devices that are required to be reliable for public safety.

15
Aelinsaar 6 hours ago 2 replies      
A lot of issues surrounding the move to make anything and everything a "Service" are starting to become very clear. The issues with IoT devices being bricked, Tesla 'ownership', and so on really do need scrutiny from a functional body with a hint of the public's interest at heart.
16
geggam 5 hours ago 0 replies      
.... "but it's for our safety"
2
Hyperuniformity Found In Birds, Math And Physics quantamagazine.org
71 points by clumsysmurf  3 hours ago   18 comments top 9
1
gregschlom 1 hour ago 0 replies      
A beautiful article from Mike Bostock that illustrates this principle (and others): https://bost.ocks.org/mike/algorithms/

And my attempt at using Poisson-disc sampling to generate stippling patterns in real time in a GPU shader: http://gregschlom.com/devlog/2015/01/31/stippling-effect-scr...

I wish my blog post had more details on the technique, but basically I am pre-computing Poisson-disc distributions at several (255) density levels in such a way that the samples are adaptive (i.e., all samples from levels 1 to n are also samples at level n+1).

I'm then storing that information in a texture, and reading from it in a shader to know whether or not to draw a stipple.

The tricky part is to figure out how to do that on arbitrary 3D surfaces.
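
(For the curious, a minimal Go sketch of the nesting idea described above, using plain dart throwing with a shrinking radius; the author's actual precomputation pipeline is not shown, and all names and constants here are illustrative.)

  package main

  import (
      "fmt"
      "math"
      "math/rand"
  )

  type pt struct{ x, y float64 }

  // tooClose reports whether p lies within distance r of any accepted sample.
  func tooClose(p pt, samples []pt, r float64) bool {
      for _, q := range samples {
          if math.Hypot(p.x-q.x, p.y-q.y) < r {
              return true
          }
      }
      return false
  }

  func main() {
      var samples []pt
      r := 0.2 // minimum spacing for the sparsest level
      for level := 1; level <= 8; level++ {
          // Dart throwing: accept candidates that respect the current radius.
          // Accepted samples are never removed, so levels 1..n nest inside level n+1.
          for tries := 0; tries < 20000; tries++ {
              p := pt{rand.Float64(), rand.Float64()}
              if !tooClose(p, samples, r) {
                  samples = append(samples, p)
              }
          }
          fmt.Printf("level %d: %d samples\n", level, len(samples))
          r *= 0.7 // shrink the disc radius so the next level packs more densely
      }
  }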

2
twic 1 hour ago 0 replies      
A while ago (file timestamps say 2005), a colleague of mine was studying CCR5, a cell surface receptor which HIV exploits to enter cells. She had taken electron microscope pictures of the distribution of CCR5 over the surface of a cell. By eye, it was clear that it was distributed evenly, but it was suspiciously even - not random, but well spread out, exactly as in these hyperuniform cases.

I wrote some Python scripts to calculate Ripley's K-function, which looked like a good way of quantifying this:

http://www.thorsten-wiegand.de/towi_methods.html

But she didn't think the distribution was as interesting as I did, and never used them!
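
(For reference, a naive version of the K estimate is only a few lines; this Go sketch assumes a unit-square window and skips edge correction, and is not the commenter's actual Python script. Under complete spatial randomness K(t) is roughly pi*t^2; values below that at small t indicate the kind of "suspiciously even" spreading described here.)

  package main

  import (
      "fmt"
      "math"
  )

  type point struct{ x, y float64 }

  // ripleyK returns the naive Ripley's K estimate at distance t for points
  // in a unit-square window (area 1), with no edge correction.
  func ripleyK(pts []point, t float64) float64 {
      n := float64(len(pts))
      pairs := 0
      for i := range pts {
          for j := range pts {
              if i != j && math.Hypot(pts[i].x-pts[j].x, pts[i].y-pts[j].y) <= t {
                  pairs++
              }
          }
      }
      return float64(pairs) / (n * n) // multiply by window area if it isn't 1
  }

  func main() {
      // A regular 10x10 grid is "too even": no pairs closer than 0.1,
      // so K(0.05) is 0, well below the Poisson expectation pi*0.05^2.
      var grid []point
      for i := 0; i < 10; i++ {
          for j := 0; j < 10; j++ {
              grid = append(grid, point{float64(i) / 10, float64(j) / 10})
          }
      }
      fmt.Println(ripleyK(grid, 0.05))
  }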

3
Phemist 2 hours ago 0 replies      
Amazing! This seems very similar to something I was taught about during my post-grad (and regret not pursuing, if only for this cool effect - https://en.m.wikipedia.org/wiki/Pinocchio_illusion). These were cortical maps or cortical receptive fields in the lower-level sensory cortex of animals, for example the barrel cortex of rats that processes whisker input. Areas that were sensitive to different deflection angles seemed to self-organize like this, with seemingly random organization at high-frequency features, but becoming more ordered at lower-frequency features. Specifically, "pinwheels" seemed to form: areas where all different deflection angles were represented in the map, and all these areas converged at a single vanishing point.

Like in the article the paper referenced below openly speculates about the use of organizational structures like this.

Edit: Sorry, this was written from my phone.

Relevant paper: http://onlinelibrary.wiley.com/doi/10.1002/dneu.22281/abstra... (use scihub)

4
lcrs 2 hours ago 1 reply      
Strikingly similar to the low-discrepancy sequences used as sampling patterns in modern ray tracers - and quite a similar role, really:

https://en.wikipedia.org/wiki/Halton_sequence

https://books.google.co.uk/books?id=DirOQ_PELlgC&lpg=PA999&o...
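
(The construction is short enough to show; here is a Go sketch of the standard radical-inverse definition from the Wikipedia page above.)

  package main

  import "fmt"

  // halton returns element i (1-based) of the Halton sequence in the given
  // base, by mirroring the base-b digits of i across the radix point.
  func halton(i, base int) float64 {
      f, r := 1.0, 0.0
      for ; i > 0; i /= base {
          f /= float64(base)
          r += f * float64(i%base)
      }
      return r
  }

  func main() {
      // Pairing coprime bases (2 and 3) gives well-spread 2D sample points.
      for i := 1; i <= 5; i++ {
          fmt.Println(halton(i, 2), halton(i, 3))
      }
  }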

5
Vexs 3 hours ago 3 replies      
You can see a pretty concrete example of this in forests: the trees are all densely packed, and random, and thus fall under hyperuniformity (as far as I can tell, anyway; haven't exactly tested it, but it sure looks hyperuniform). I've noticed it in a handful of other things, but I never knew there was an actual thing behind it.

Absolutely fascinating, and its applications in materials are amazing. Whatever-direction bandpass filters? Crazy cool.

6
pfd1986 2 hours ago 0 replies      
Also similar to what is done in stippling art: http://blog.wolfram.com/2016/05/06/computational-stippling-c...
7
jpfed 2 hours ago 1 reply      
It looks like Poisson disk noise.
8
aznpwnzor 2 hours ago 0 replies      
Makes a lot of sense that uniformity with constraints is exactly what nature likes. I wonder if hyperuniformity is exploited by nature or, more likely, can be exploited by human-designed systems.
9
jsnsjsfnts 2 hours ago 2 replies      
This is just an occurrence of the principle of least action. Systems exhibiting "hyperuniformity" just arrange themselves in order to minimize their potential. This is pretty standard stuff in engineering, and is the basis for finite element simulations of structural mechanics and fluid dynamics problems.
3
Unity raises $181M round at a reported $1.5B valuation techcrunch.com
249 points by doppp  8 hours ago   170 comments top 21
1
aphextron 1 minute ago 0 replies      
How are they only valued at $1.5B? Unity is the most advanced game engine in existence. Leaps and bounds above the competition.
2
jakozaur 8 hours ago 10 replies      
Unicorn from Europe (Copenhagen, Denmark). Oh wait, they moved to San Francisco.

It looks like there is a lot of talent and founders around the world, but still there is not enough VC and ecosystem to support them outside of the Bay Area.

3
pdeva1 7 minutes ago 0 replies      
Unity makes no money off the games themselves. It's equivalent to a dev-tools company. Does $125/dev/month really make that much money?
4
netcan 8 hours ago 5 replies      
Curious side question.

Does anyone know what (if anything) generally happens to employee stock options in the event of these large funding rounds? Most options schemes are effectively designed around the idea of a "liquidity event", which used to mean an IPO in successful cases.

These days, when we see large investment rounds and acquisitions replacing a lot of what IPOs used to do, where does that leave options holders? I suppose this question applies to early investors and founders too.

5
gohrt 2 minutes ago 0 replies      
Is Unity Web Player ever going to work in Chrome again?
6
gourneau 7 hours ago 6 replies      
Just wanted to chime in and say Pokemon Go is another Unity game. Unity games seem to be doing extremely well on mobile.
7
mratzloff 36 minutes ago 1 reply      
Congrats to Unity. Maybe they can use some of that cash to fix the UI. At modern resolutions it's unusable.
8
michaelvoz 8 hours ago 10 replies      
I wonder, what are the current opinions about the end game for Unity? What is the goal of seeking this new funding? My understanding was they are profitable.
9
bitsweet 8 hours ago 0 replies      
For VCs that want to invest in VR this is the logical place; it is too early to invest in apps and too late to get into the hardware.
10
dcw303 1 hour ago 0 replies      
Great, now please use some of that cash to hire more developers. We've been waiting for a modern .NET runtime update for years!
11
rcheu 7 hours ago 1 reply      
Seems a bit low for a company that seems to be doing so well. Does anyone have insight into why the valuation isn't higher?

For those unfamiliar with it, Unity is the go-to game engine for most smaller companies, and many cross platform (mobile/desktop) games. It also has some of the best support for VR.

12
shmerl 1 hour ago 0 replies      
I hope they'll catch up on Vulkan support soon.
13
AWildDHHAppears 7 hours ago 1 reply      
It's a big boost for .NET in general, too.
14
cwkoss 3 hours ago 0 replies      
In-browser Unity has horrible performance.
15
questionr 8 hours ago 0 replies      
FYI, their "crunchbase" profile to the right shows the wrong company (it's not http://www.unity.hr, but instead https://unity3d.com/)
16
synaesthesisx 4 hours ago 0 replies      
After trying HoloLens (all of the demos I tried were built on Unity), I could see it becoming HUGE for AR/VR.
17
tostitos1979 8 hours ago 1 reply      
Man .. wish they were a publicly traded company :(
18
sjg007 7 hours ago 0 replies      
Cue Microsoft acquisition.
19
xigency 7 hours ago 7 replies      
This seems like an incredible overvaluation for a company in video game technology.
20
ozy23378 7 hours ago 1 reply      
Anyone else think this post was about Ubuntu's Unity and feel relieved?
4
The Tor Project Elects New Board of Directors torproject.org
131 points by tshtf  6 hours ago   15 comments top 5
1
weinzierl 6 hours ago 1 reply      
The interesting part is:

Roger, Nick Mathewson, Meredith Hoban Dunn, Ian Goldberg, Julius Mittenzwei, Rabbi Rob Thomas, and Wendy Seltzer are off the board. Roger and Nick will stay on as Tor's research leads.

The new board consists of Cindy Cohn (EFF), Bruce Schneier, Matt Blaze, Gabriella Coleman, and Linus Nordberg. Two seats have yet to be filled.

EDIT

Just to avoid confusion: This comment was written before the submission link was changed from a NY Times article that buried the information about the changes in the last paragraph to the Tor Project blog post.

2
danohu 5 hours ago 2 replies      
It's very unusual to have such a complete change of board membership, isn't it? I'm all for new blood, but I'd worry at the near-total loss of continuity.
3
justcommenting 3 hours ago 1 reply      
Since Roger and Nick knew about the allegations and (in the opinion of some) turned a blind eye to them, Shari may have wanted to articulate some sort of sea change.

But they probably also knew (from a look at their commit logs, it's pretty obvious) that the technical work could not continue without them. So it strikes me as a pragmatic compromise.

4
tptacek 5 hours ago 1 reply      
Matt Blaze is a credible and reassuring choice.
5
jlgaddis 6 hours ago 1 reply      
5
WebAssembly Specification Prototype github.com
120 points by sunfish  6 hours ago   68 comments top 4
1
minionslave 5 hours ago 4 replies      
I have a question about WebAssembly.

1- With WASM, would developing for the web become similar to how desktop apps are written? For example, could I use C#, design a UI in XAML, and compile to WASM for the web?

2- No more JavaScript?

2
dccoolgai 3 hours ago 8 replies      
I can't wait until the first time I have to maintain something someone compiled into this monstrosity. The minimal performance gains will be so worth destroying the debuggable web.

Sorry for the snark, but watching this slow-motion trainwreck plow into the Open Web that I love is excruciating.

Pre-edit to respond to the inevitable "But you already can't read and debug minified JS": yeah, you actually can if you're good enough. I do it all the time... my complaint is that the bar for "good enough" is being moved from "can read and debug scripting language" to "can read and debug Assembly"... which I know you can do, but geez. I guess what I'm saying is the Open Web should only run on (reasonably) readable languages. Hate JS? Stand in line, but if you want to replace it on the web, replace it with something a normal person could interpret without a compiler.

3
legulere 6 hours ago 3 replies      
⌘F "todo": 29 finds

The binary and text representation sections still contain almost nothing. This really seems like an early prototype.

4
icefox 5 hours ago 1 reply      
It looks like Wasm.instantiateModule(b, m) currently takes the binary wasm and memory to create a module. When would you pass in a linear memory object? Does converting a wast always produce two files?
6
A Future for R: A Comprehensive Overview r-project.org
58 points by michaelsbradley  4 hours ago   19 comments top 6
1
jonchang 3 hours ago 3 replies      
So I've actually tried to use the futures package. While it's very clean for certain types of tasks, there are a few problems that I think are inherent to the way R deals with its parallel packages (which the futures package is built on top of).

Futures is great for tasks where you have some kind of task workflow like:

  # slow task 1 --------,
  #                      ----> task 3
  # slow task 2 --------'

  do_stuff <- function(input1, input2) {
    result1 <- slow_task1(input1)
    result2 <- slow_task2(input2)
    task3(result1, result2)
  }
Because then you can just do something like:

  library(future)  # note: the package on CRAN is named "future"
  plan(multicore)

  do_stuff <- function(input1, input2) {
    result1 %<-% slow_task1(input1)
    result2 %<-% slow_task2(input2)
    task3(result1, result2)
  }
And boom, you can have two tasks running in parallel and everything "just works." It's extremely nice to use thanks to R's promises capability.

Where it falls down is when you try to load up a bunch of futures at once... I'm not clear on the implementation details, but from what I can tell every parallel task is assigned a "port" on your system, but if there is a port conflict (or the OS doesn't "release" (?) the port quickly enough) tasks simply die with an inscrutable error.

I've found that it's necessary to 1. ensure that only one "set" of parallel tasks is running at one time, and 2. create a central "port registry" and manually assign ports randomly within nonoverlapping ranges for parallel tasks. It's straightforward but frustrating to do.

Finally (and I don't know if the futures package has updated since I tried it out last year) it doesn't work on Windows, which is a problem for many R users.

2
haddr 2 hours ago 0 replies      
This is definitely a great feature!

Now, with this in hand, can we have, for instance, a multithreaded (or otherwise parallel) web server or even a REST API for R?

Talking from the practical perspective, the biggest problem with wide adoption of R is the problem of integration. Sometimes you just want to have a single module in R, and the rest of the system in some other technology. I know there are ways to do it, but not without quite high technical debt. On the other hand having native microservice-like integration could probably help.

I know we are close, but not sure how close.

3
apathy 2 hours ago 1 reply      
See also https://mitpress.mit.edu/sicp/full-text/sicp/book/node70.htm... for fun background reading. Or if you have an actual CS degree, review from freshman year.
4
michaelsbradley 3 hours ago 0 replies      
5
nosound_warmup 3 hours ago 2 replies      
This feels a little like using a hammer for cutting down a tree. You could do it, but there really are better tools for that particular job.
6
johnmyleswhite 3 hours ago 1 reply      
I'd like to see the title changed to something like "Futures for R" since it's so tongue-in-cheek that it's effectively clickbait.
7
Asciinema 1.3 Switches from Go Back to Python asciinema.org
175 points by kodablah  2 hours ago   99 comments top 23
1
quacker 1 hour ago 1 reply      
Python is high level language while Go is low, system level language (I think it's fair to say it's C 2.0). 95% of asciinema codebase is high level code...

I get this sentiment. There's an attitude among golang developers that things which are "trivial" to implement don't belong in the go standard library (even if they would see high usage). That, and Go's animosity toward generic code and syntax sugar make me feel like I'm fiddling with nuts and bolts sometimes.

For example, Go stubbornly excludes a math.Round() function from its math package because "the bar for being useful needs to be pretty high" (this comment was followed by several buggy implementations of the method[1]). Go excludes integer math functions, and the lack of generics means I have to cast to float64 or write my own implementation everywhere.
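
(For reference, the usual workaround looked something like the Go sketch below; math.Round did eventually land in the standard library, in Go 1.10. The Copysign nudge is what the naive math.Floor(x + 0.5) one-liners get wrong for negative inputs.)

  package main

  import (
      "fmt"
      "math"
  )

  // round rounds half away from zero. Copysign points the 0.5 nudge in the
  // right direction for negative x, unlike the naive math.Floor(x + 0.5).
  func round(x float64) float64 {
      return math.Trunc(x + math.Copysign(0.5, x))
  }

  func main() {
      fmt.Println(round(2.5), round(-2.5), round(-2.4)) // 3 -3 -2
  }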

Go's lack of versioned packages and central repository makes packaging cumbersome.

One of the big headscratchers of the golang ecosystem. It's resulted in a million different package managers and ridiculous tools that rewrite all of your imports for you.

My favorite method of managing requirements now is glide[2] which pins all of your dependencies (to a git commit or tag) in a glide.lock file. Glide fetches dependencies into the vendor directory, but you never need to commit your vendored dependencies. Instead, commit the glide.lock file and glide can fetch everything for you.

----

1. https://github.com/golang/go/issues/4594#issuecomment-660733...

2. https://github.com/Masterminds/glide

2
curun1r 5 minutes ago 0 replies      
As a user, I'm afraid this means I'm unlikely to use Asciinema anymore. They've left me with no acceptable installation option. Go get was clean and installed a nice, statically-linked binary. The python ecosystem is messy. Not even considering the disaster that is version 2 vs 3, an installed python program is just harder to manage as files are strewn across many directories that are difficult to understand without understanding python development. This may be fine if you're writing an application and you spend enough time with it to understand the structure and purpose of the installed files, but as a casual user, it sucks. Brew isn't much better as I've continually run into problems upgrading packages. I love the idea of nix, but gave up on it after my first few installed programs all didn't work for anything involving SSL.

I know it wasn't intentional, but this change feels like a huge FU to users. If I'm forced to use Asciinema again, I'm going to have to resort to using it from within a Docker container where the mess is, at least, sandboxed, but that option has its own drawbacks.

3
antirez 2 hours ago 1 reply      
Not related to the language switch, but I just want to say that I love Asciinema. Thanks to it I make small screencasts where normally I would be totally discouraged by the amount of work needed to really record a portion of the screen and upload a video recording. For people that mostly want to show terminal-based things (i.e. programmers), it is a zero-friction way to show an interactive session. Thanks for building it.
4
hellcow 2 hours ago 9 replies      
>if err != nil { gets old even faster.

I don't understand this complaint about Go. Either a function can result in an error or it can't. If it can result in an error and you don't handle that error, your program's behavior is undefined. Undefined behavior is a bad thing.

Go forces you to handle the error (or explicitly ignore it). That design choice results in remarkably stable programs.

5
JamesMcMinn 2 hours ago 2 replies      
I'm left wondering why the switch was made in the first place. I spend about half my time coding in Go and love the language, but if I wasn't looking for easier concurrency or a speed boost, I wouldn't re-write an existing code base in Go. It's a lot of work to do just to revert a short time later.

The majority of comments are fair, although I'd disagree with it being C2.0 and err != nil getting old - I much prefer it to exceptions.

6
vvanders 2 hours ago 3 replies      
> Python is high level language while Go is low, system level language (I think it's fair to say it's C 2.0).

I thought we killed this notion already but it still seems to be lurking around. It doesn't support volatile, doesn't let you specify what goes on the stack vs heap (or pin anything, for that matter).

Great for web services? Sure. Low level C replacement? Nope, for my money that's Rust.

7
fizzbatter 2 hours ago 1 reply      
It's funny: when using C++ or Rust, Go feels like my version of Python/JavaScript/etc. There's something very appealing about it to me.. it's the NodeJS of the typed world to me, and I love it for that. Granted, I'm trying to switch my codebases to Rust, but still - I can't imagine going back to Node/Python/etc. But to each their own; I'm not foolish enough to say I'm "right".

With that said though, i have a hard time understanding complaints like:

> if err != nil { gets old even faster.

I may be biased, because my time in frontend JS land, but i love checking errors every time. It's a language feature to me.

Ignoring errors and expecting something else to care and catch/handle them is just.. worrying to me. Likewise,

  try
    .. stuff ..
  catch
    .. stuff ..
Gets far older to me than `if err != nil {`. But that's just me, I suppose.

8
percept 2 hours ago 1 reply      
This is a perfectly reasonable analysis of why a particular language choice didn't fit a particular project, without a lot of hyperbole or boosterism.
9
zalmoxes 2 hours ago 4 replies      
> Batteries included: argparse, pty, locale, configparser, json, uuid, http. All of these excellent modules are used by asciinema and are part of Python's standard library. Python stdlib's quality and stability guarantees are order of magnitude higher than of unversioned Go libs from Github (I believe discrete releases ensure higher quality and more stability).

Go also has most of the listed libraries (like http, arg parsing, json) included in the stdlib. I'd argue that the http library in Go is one of the best out there :)

Can't disagree about dependency management, hopefully it gets addressed sooner rather than later. There was a good discussion with the Go team on the topic at Gophercon today.

10
weberc2 1 hour ago 1 reply      
> Batteries included: argparse, pty, locale, configparser, json, uuid, http. All of these excellent modules are used by asciinema and are part of Python's standard library. Python stdlib's quality and stability guarantees are order of magnitude higher than of unversioned Go libs from Github (I believe discrete releases ensure higher quality and more stability).

Worth noting that Go has `flag`, `json`, and `http` in its standard library, and they're all much easier to use than the Python equivalents. In Python, you unmarshal your JSON into a dict or list and then write a function to convert it into the right object while making sure the structure is correct. With Go, the library does the right thing out of the box.
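
(A small sketch of what that looks like in practice; the struct and input here are made up.)

  package main

  import (
      "encoding/json"
      "fmt"
  )

  type Config struct {
      Name    string `json:"name"`
      Workers int    `json:"workers"`
  }

  func main() {
      raw := []byte(`{"name": "demo", "workers": 4}`)

      var c Config
      // Unmarshal fills the typed struct directly and reports type
      // mismatches; no hand-written dict-to-object conversion step.
      if err := json.Unmarshal(raw, &c); err != nil {
          fmt.Println("bad input:", err)
          return
      }
      fmt.Printf("%+v\n", c)
  }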

11
ryanlm 2 hours ago 1 reply      
Python is the best high level language out there. I wouldn't call golang C 2.0 because it has a GC.
12
ontouchstart 24 minutes ago 0 replies      
It took me a while to figure out why my Docker build worked yesterday but failed today.

https://github.com/ontouchstart/grs/commit/39d04916ffd678d50...

It seems that

go get github.com/asciinema/asciinema

only downloads and installs the code from the default GitHub branch, which is now the Python codebase. Fortunately they keep the Go code in a golang branch.

Although my repo is just a sandbox to study how to explore and learn bleeding-edge technologies, unfortunately it also shows how fragile our GitHub-centered software ecosystem is becoming.

13
knodi 2 hours ago 0 replies      
"if err != nil { gets old even faster."This never gets old builds rock solid stable services.

Completely understand their reasons. Nothing wrong with Python for what they're doing.

14
fitzwatermellow 1 hour ago 0 replies      
Often I encounter similar instances where something fundamental is missing from golang's standard packages and no third-party library exists.

Authors cite superior support for tty on variant archs. But by using golang as the central pipeline manager, calls to jsdom or PIL or ffmpeg simply become another stage in the pipeline. Any number of Python microservices can be composed, while retaining golang as the glue providing timeouts, sync, etc.

Still, whatever works for Asciinema is OK in my book. Great service, and I will be recommending it to all!

15
mratzloff 39 minutes ago 0 replies      
For reference, here is the code base:

https://github.com/asciinema/asciinema

There's very little code there; porting from scratch would take a few evenings. I wouldn't extrapolate too many conclusions from this.

16
ben_jones 1 hour ago 1 reply      
IMO Golang is a C-like language touted as a Python-like language. I recently migrated a hand-rolled task-queue processing service from Go to Python + Celery, and it's yielded large gains so far.

Granted, I should've done it with Celery to begin with, but I think that's part of Go's problem: it lures you into using it when you probably shouldn't.

17
mac01021 2 hours ago 1 reply      
> they don't like vendored (bundled) dependencies (also understandable) (Gentoo example)

I don't understand the rationale here. Why would a linux distro reject a package that incorporates all of its dependencies, thereby becoming dependency-free?

18
cocotino 2 hours ago 2 replies      
>if err != nil { gets old even faster.

EAFP gets older even fasterer.

19
TheBlight 1 hour ago 0 replies      
I've been coding primarily in Go for the last year or so. Previously some Ruby and lots of C/C++. I actually don't like coding in it very much. It's far too much typing and boilerplate stuff. I'd love to just be able to handle exceptions and move on with my life instead of checking for errors on every other line.
20
danso 2 hours ago 0 replies      
Exciting to read this, not just because of how interesting and rare it is to hear about a project switching languages (and from Go to Python, no less), but also to learn that the author is invested enough in asciinema to do such an overhaul. It serves a niche purpose, but it serves it really well, especially for teaching code. The recent development of a self-hosted option for asciinema playbacks made it especially useful to me.
21
sergiotapia 2 hours ago 0 replies      
Preach it! I had a similar experience with Go. I just wasn't happy writing it.

"if err != nil { gets old even faster."

22
shadowmint 1 hour ago 1 reply      
I'd love to know why they decided to move in the first place?

To leave Python means there were some kind of problems with it?

How are those problems being addressed now?

...or was this always a political/personal preference thing?

23
zet2 1 hour ago 0 replies      
Pray for your Python journey!
8
An Almost Acquisition Story astronomer.io
22 points by rywalker  2 hours ago   3 comments top 3
1
maxsavin 31 minutes ago 0 replies      
Nice read. This part cracked me up: 'Interestingly, our CTO, Greg, had a different take: There's no way this happens.'
2
serg_chernata 52 minutes ago 0 replies      
Great article and a great reminder to keep your head straight.
3
julianlaneve 45 minutes ago 0 replies      
really cool article!
10
What's wrong with deep learning? (2015) [pdf] pamitc.org
29 points by thedoctor  3 hours ago   1 comment top
12
Show HN: Polybit - Build, Deploy, Host Node.js APIs polybit.com
135 points by keithwhor  9 hours ago   47 comments top 19
1
joshdickson 5 hours ago 1 reply      
Node JS developer here.

The idea of the project is great. It can be difficult to get folks up to speed on various Node JS topics, and one of the big ones (like with any code deployed on a web framework, really) is getting people up to speed on deployment. Anything that is working on making that easier, especially for new/young developers, is great.

That being said, I really do not like the pricing strategy because I think it completely incentivizes bad behavior and optimizing your API/apps for weird things at anything past micro-project scale. Things like when database queries happen are also not entirely transparent to developers, especially new developers, which is going to lead to them not understanding charges (for instance, would a new developer understand that on a fairly basic installation of Passport and Express, every page request, logged in or not, would result in DB activity as Passport tries to figure out if the session is logged in or not? That's likely to far overshadow the single API credit needed for the API hit.) I think that Heroku's free tier would be much more attractive for small projects, and for anyone more advanced than that, a $5 Digital Ocean droplet is simple enough to learn very basic server administration and quickly deploy your application. There are a number of other Node platforms (Modulus) that will give you good service for < $15/month.

I think it would make a lot more sense to structure this more like EC2's burstable instances, which accrue CPU credits for times when they do not run at their allotted CPU, and then can expend those credits to burst above that CPU for smaller amounts of time. That achieves a lot of the good things you're working on, but doesn't push people toward minimizing trips to and from the DB or instance.
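
(A token-bucket sketch of the burstable-credit accounting being suggested; the type, numbers, and field names are illustrative, not EC2's actual bookkeeping.)

  package main

  import (
      "fmt"
      "math"
  )

  type creditMeter struct {
      credits float64 // banked CPU credits
      accrual float64 // credits earned per second while at or under baseline
      maxBank float64 // cap on banked credits
  }

  // tick advances the meter by dt seconds at the given CPU share and reports
  // whether the workload may keep running at that share.
  func (m *creditMeter) tick(cpuShare, baseline, dt float64) bool {
      if cpuShare <= baseline {
          m.credits = math.Min(m.maxBank, m.credits+m.accrual*dt)
          return true
      }
      cost := (cpuShare - baseline) * dt // bursting burns the excess
      if m.credits >= cost {
          m.credits -= cost
          return true
      }
      return false // out of credits: throttle back to baseline
  }

  func main() {
      m := creditMeter{accrual: 0.1, maxBank: 100}
      fmt.Println(m.tick(0.1, 0.2, 60)) // idle for a minute: banks 6 credits
      fmt.Println(m.tick(0.8, 0.2, 5))  // 5s burst: spends 3 credits, allowed
  }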

2
mmanfrin 5 hours ago 0 replies      
Looks very similar to Zeit:

https://zeit.co/

3
ThatMightBePaul 8 hours ago 1 reply      
Congrats Keith :D

I met Keith a few weeks ago at the NodeJS NYC meetup. Great dude who genuinely wants to make development better. Polybit seems particularly cool for front-end devs / designers, mobile devs, and anyone else who'd rather build an app than fret over high availability, scalability, etc.

What I'm saying is: cool idea + Keith's very approachable if ya wanna pick his brain about the design :)

Hope this goes well for ya dude!

4
mxuribe 8 hours ago 2 replies      
The premise of this is a great idea. The sign up is pretty neat - def. suited to expected audience/users. However, no pricing info? I'll come back once pricing is displayed.
5
joshstrange 7 hours ago 1 reply      
This looks really interesting, but I personally really dislike the pricing. Maybe others will love it, but to me it's nearly impossible to guess how much this will cost me. When I start a new project I have no clue how many requests or queries will be made. I get the advantage of not worrying about infrastructure, but I'd rather pay a flat fee and then be able to scale up to handle more requests.
6
webXL 8 hours ago 1 reply      
Pretty cool stuff. I got stuck a bit in the registration though. I always use a password manager and paste in my passwords. It's telling me it doesn't like my 16 character password because it needs to be 5 or more, and when I paste it, it's in cleartext.
7
jamesjyu 8 hours ago 1 reply      
Keith is awesome. I've met him a few times to chat about Polybit and he is seriously dedicated to making a great developer product and solving the pain of standing up a backend API.
8
asimuvPR 7 hours ago 1 reply      
Bug report:

REPL on https://polybit.com/ fails to display fonts. See [1] for a screenshot.

Firefox 47.0 on OSX.

[1]http://i.imgur.com/ty8PZT9.png

The console logs the keydown events per pconsole.js line number 308. I see the keys I pressed but nothing shows up.

keydown { target: <textarea>, key: "h", charCode: 0, keyCode: 72 } pconsole.js:308:5

keydown { target: <textarea>, key: "e", charCode: 0, keyCode: 69 } pconsole.js:308:5

keydown { target: <textarea>, key: "l", charCode: 0, keyCode: 76 } pconsole.js:308:5

keydown { target: <textarea>, key: "l", charCode: 0, keyCode: 76 } pconsole.js:308:5

keydown { target: <textarea>, key: "o", charCode: 0, keyCode: 79 } pconsole.js:308:5

keydown { target: <textarea>, key: " ", charCode: 0, keyCode: 32 } pconsole.js:308:5

keydown { target: <textarea>, key: "w", charCode: 0, keyCode: 87 } pconsole.js:308:5

keydown { target: <textarea>, key: "o", charCode: 0, keyCode: 79 } pconsole.js:308:5

keydown { target: <textarea>, key: "r", charCode: 0, keyCode: 82 } pconsole.js:308:5

keydown { target: <textarea>, key: "l", charCode: 0, keyCode: 76 } pconsole.js:308:5

keydown { target: <textarea>, key: "d", charCode: 0, keyCode: 68 } pconsole.js:308:5

:)

9
unchaotic 7 hours ago 0 replies      
Intrigued by the whois information :)

Creation Date: 20-dec-1999

Expiration Date: 20-dec-2025

10
Roodgorf 6 hours ago 1 reply      
I'm confused about my credit balance after signing up. The pricing printout shows 1,000 (x2) for the registration tier, and below the table it seems to indicate that these (x2) markers represent the double-credit offer for beta users.

However, the account I just registered shows a balance of only 1,000 credits. Is the credit bonus only applied to credits which were paid for? If so the (x2) in the first row is a bit misleading.

That being said, I'm excited to check out this service and don't mean to complain about getting something for free.

11
dazhbog 3 hours ago 0 replies      
I am surprised Nodal still uses 90% of Nordic Semiconductor's logo :/

Love the website design!

12
franciscop 8 hours ago 1 reply      
Totally broken for me (after writing "Register"): http://imgur.com/a/cYtZz

Browser: Firefox 47 (default browser)

Extensions: AdBlockPlus, LastPass

OS: Ubuntu 16.04 LTS

13
mgrennan 7 hours ago 1 reply      
Very COOL! As a DBA, as far as I'm concerned this is how databases should be used. NO application should contain SQL. SQL should ONLY be in an API.
14
johns 6 hours ago 1 reply      
Love the idea. I think you really need an about page though. People need to know why they can trust you.
15
udkl 7 hours ago 0 replies      
I'm reminded in some sense of RoR/CakePHP with scaffolding.

What are some other tools that provide similar functionality ?

16
CLei 8 hours ago 0 replies      
Awesome website design; the fact that I register using the command line is extremely cool. Nice touch.
17
throwawayReply 8 hours ago 1 reply      
I'm on Windows/Firefox; the console is missing characters when echoing back.
18
stockkid 5 hours ago 0 replies      
Feedback: the console on the home page doesn't seem to do anything. (Chrome, Android)
19
kimmshibal 6 hours ago 1 reply      
Do you have a bug bounty? :)
16
How America Could Go Dark wsj.com
27 points by msisk6  3 hours ago   28 comments top 7
1
DrScump 2 hours ago 4 replies      
Government indifference (at all levels) to this risk is scary.

"The Metcalf substation (San Jose, CA, USA), while undergoing security upgrades, was hit again in August 2014. Intruders cut through fences and burglarized equipment containers, triggering at least 14 alarms over four hours. Utility employees didnt call police or alert guards, who were stationed at the site, according to a state inquiry."

The problem is compounded by a large (and growing) number of electrical outages caused by copper thieves ripping out wiring; it becomes more difficult to distinguish mere theft from terrorism.

2
musesum 2 hours ago 1 reply      
This is a good companion to Gibney's Zero Day Docu: http://www.recode.net/2016/7/7/12045334/alex-gibney-zero-day...

Went to a SmartGrid conference in 2009. What I learned:

. $1T infrastructure is scheduled to be replaced,

. Power Co's operate on a 30 year amortization cycle,

. Utility regulations can not only change from state-to-state, but from county-to-county,

. The grid is hackable (known in 2009)

[Edit] spelling

3
teh_klev 1 hour ago 1 reply      
Article without the paywall etc:

http://archive.is/HCwUJ

4
bllguo 1 hour ago 1 reply      
It's scary to think about the damage an actually competent force can inflict in this day and age.
5
taf2 1 hour ago 1 reply      
So an obvious, if long-term, solution here is to eliminate the need for a grid. The transition to battery-backed, solar-powered homes would reduce the need for upstream power supply and large transmission lines...
6
stretchwithme 1 hour ago 2 replies      
I think the real problem is that we've set up only one grid. If we had competing distribution providers, the failure of one network wouldn't matter so much.
7
cced 1 hour ago 1 reply      
Can we stop posting paywalled articles?
17
Startup Technical Diligence Is a Waste of Time codingvc.com
247 points by lpolovets  13 hours ago   143 comments top 41
1
kafkaesq 7 hours ago 2 replies      
A quick skim of CB Insights' collection of 150+ startup post-mortems reveals that only ~5% of post-mortems referenced a lack of technical ability/execution. Most startup failures were caused by building the wrong product, or lacking sales skills, or not having a viable business model. The strong presence or absence of amazing engineers was rarely a factor.

"Our biggest obstacle to growth, right now? We still can't seem to attract candidates with strong CS fundamentals. Maybe we should crank up HackerRank hazing process from 3 hour-long sessions, up to 5. Because I mean, just look around -- aren't we amazing? At this critical juncture for our venture, surely we can't afford to dilute our crack team with mediocre talent."

2
mbesto 10 hours ago 2 replies      
I do technical diligence for a living so I feel like I should chime in.

1. Many of my clients are unsophisticated when it comes to technology, regardless of whether they buy a "tech company" or not. Yes, this includes both PE and VC clients. When a PE firm makes an investment in a tech company (and takes 51+% of the company) they need to be confident the technology will hold up. About 90% of my work is PE, with the other 10% in VC (mostly Series A+). Believe it or not, many commercially successful companies do not always use "today's world of SaaS tools, APIs, and cloud infrastructure". And even if they do, just because you use AWS doesn't mean you've set up a VPC correctly, have a multi-AZ setup, or encrypt sensitive data at rest. These can have potentially costly (i.e. lose customers / brand recognition) consequences. (case in point -> https://news.ycombinator.com/item?id=11999712 )

2. If you're a VC, and especially an early stage one, then no, due diligence isn't necessary. At all, in fact (including finance, etc). Your thesis isn't aligned with careful analysis; it's spray and pray, so diligence isn't a worthwhile endeavor. I wonder if something prompted the author to write this or if it's just a random rambling. I thought it was pretty well understood that doing diligence at the angel/seed stage was almost unanimously worthless.

3. Tech due diligence at the later stage of VC, if done right, has more to do with scalability of operations and processes than with the specific tech. When you put $50M into a company, spending $50k is a pretty good safeguard.

4. Tech diligence, by the right partner, could have certainly identified risks at Theranos or uBeam (assuming there are some there), but that's precisely what it is - risk. I'm pretty confident the investors in these companies understand the risk; they simply don't care, because the potential outcome will return their whole fund and some. Do this 10 times and one bet is bound to pay off.

5. I always try to remind entrepreneurs: VCs are finance professionals, not tech professionals. While some may have made their way into VC because of their previous technical ability, there are also many who did not. And just as you get "rusty" when you stop coding for 6 months, so do previously technically able VCs as they get older and out of touch.

3
gedrap 7 hours ago 3 replies      
> For >95% of startups, however, technical diligence is a waste of time for a more fundamental reason: in today's world of SaaS tools, APIs, and cloud infrastructure, most startup ideas don't have significant technical risk. That means technical resources are rarely the cause of success or the reason for failure.

While it's a tough pill to swallow for many engineers, there's a lot of truth in it.

The company I work for (Bored Panda) is a pretty good example of it. I was the first full time engineer, and the company already had 20+ millions of monthly visitors and ~2m facebook followers. Surely, there was tons of technical debt to clean up and uptime was... really poor :)

4
lpolovets 7 hours ago 2 replies      
(I'm the post's author.)

Just to clarify a little bit, since I didn't make this explicitly clear in the article: there are certainly questions about the technical side that are useful.

One example from the blog post is trying to understand if the tech team's experience lines up with what they're building. If the founders are 2 infrastructure engineers who are trying to build a social app, or if they're mobile devs who are trying to build a streaming database, then that's a yellow or red flag.

Another example: I've gotten a lot of value out of drilling in a tiny bit when people make bold technology claims. If someone says they've developed cutting edge ML, they rarely expect an investor to ask about it. When I ask them what "cutting edge" means, if I hear words like linear regression or decision trees, then my bullshit detector goes off -- but not for technical reasons. It's not bad at all to use simple ML models, what is bad is misrepresenting your tech, because then I start wondering what else you're misrepresenting.

But for most (seed) VCs that I've met, technical diligence means asking an engineer they're friends with to talk to a CTO they're thinking of investing in. The engineer asks about system design, infrastructure decisions, deployment environment, coding practices, etc. and then tells the VC if they think the tech stack is robust. (VC: "Ah, they're using AWS and Redshift? That sounds modern. Thumbs up!") For many, many startups (SaaS, mobile apps, ecommerce, etc.) I think this type of diligence is basically useless for almost all seed stage companies.

5
payne92 11 hours ago 2 replies      
While intensive technical due diligence at the seed stage is rare (for the good reasons cited in the note), I wouldn't make a unilateral assertion that it's a waste of time.

Some startups ARE technology driven (I bet the Theranos investors wished they did more tech diligence).

I find a light / informal discussion about technology to be really helpful. What's been built gives a sense of the real (vs stated) tech skills on the founding team (we all tend to reach for what's most familiar). For example, if the first version is built on the Microsoft stack, you might want to know that.

Finally, code meant to be thrown away always hangs around longer than expected.

6
franciscop 8 hours ago 0 replies      
> "The ability to scale with success is important, but designing products for high scalability from Day 1 is usually a mistake."

In my opinion, and having helped dozens of startups (friends or work), this is the biggest mistake I've seen. Engineers love to make things that scale, while business sense says to leave that for when it is actually needed. I'd say it's the #1 waste of time in a startup (in Spain). Also, it's mostly done without measuring anything, just "for the sake of it".

7
DonHopkins 9 hours ago 0 replies      
I understand the reluctance to evaluate a startup based on a prototype, but there are prototypes that are designed from the start to be incrementally refactored and replaced, and there are prototypes that are monolithic dead ends in hell. There are people who use technical debt to get to the next level so they can repay the debt, and there are people who use technical debt like Donald Trump uses bankruptcy court to screw people who trusted him.
8
erdevs 3 hours ago 0 replies      
The headline for this post is far too broad. More accurate would be to say that "very early stage startup technical diligence is usually a waste of time". That is often the case, for many of the reasons the author outlined.

Later on in a company's life, however, technical diligence becomes very important. If anything, VCs should do a better job of it. Countless companies have struggled, stalled or failed because they didn't become aware of or address technical issues early enough and scaled the business side too far out in front of what could be managed technically. This gets your company upside down fast, and in very difficult-to-unwind ways. Customers become dissatisfied with instability and the lack of rapid improvements. Internally, strife between revenue owners and technical owners can easily increase as tech tries to balance delivering new functionality and addressing old technical debt.

After you find product-market fit and start really scaling-- ie after the early stage-- technical diligence should be a much higher priority than it currently is, for both VCs/investors and for company operators within their own orgs. Better technical assessments of issues and opportunities would serve most startups very well.

9
rywalker 54 minutes ago 0 replies      
Great post Leo. Most startup "diligence" is a waste of time, usually just investors checking the box after they've already made a decision.
10
xenadu02 6 hours ago 0 replies      
Technical debt can absolutely kill a company if you have competitors who are fast-following while already knowing where the map will lead them. They have a chance to avoid some of the problems.

I would suggest this isn't addressed in post-mortems because most of the time customers won't or can't tell you why they chose a competitor, sales doesn't know or doesn't communicate that to engineering, or engineering is too dysfunctional to act on the information and has a vested interest in justifying their own decisions.

Sometimes it isn't even tech debt per-se, just bad decisions that make implementation of critical features much more expensive... sometimes so expensive everyone feels it would take too long to implement so it isn't done.

11
vasilipupkin 8 hours ago 1 reply      
I actually disagree with this. Of course, building the wrong product and having poor sales/marketing and all that other stuff matters a lot, BUT if the startup's technical people are weak and cannot quickly iterate, then this startup will bleed money really fast, while not being able to quickly modify the product as needed. So, I would say, you need to do technical diligence on the founders, but not necessarily on their code
12
arcanus 10 hours ago 2 replies      
Anecdotally, I'll mention that for the two founder level pitches/seed raises I've been involved in, there was absolutely no technical diligence at all. Nor was there really any technical discussion at the YC pitch (we did not get selected, admittedly).

I've always conjectured that this is why vetting the team is considered such a critical element. For instance, I have a Ph.D., and so I've found people just assume I know what I am doing in terms of tech and algorithms. This can be a disadvantage when I want honest feedback, actually.

Most VCs are much better equipped to discuss the business related aspects than anything technical. In fact, I would be hard pressed to evaluate most start-up technology outside my fields of study, because it becomes so specialized so quickly.

13
claudiusd 4 hours ago 0 replies      
As a technical founder having gone through a few VC rounds, I was initially surprised by the lack of technical diligence but eventually came to the same understanding as the author: product-market-fit and sales/marketing execution make-or-break most companies.

I would add though that the impact the technical founder has on the design of the product and the quality of execution is immense and often hidden behind the non-technical founder selling the company to VCs. Good technical founders do a lot more than code. If I were a VC, I would look for teams with technical founders that have skills extending into product management, people management/recruiting, and operations.

14
thomasrossi 13 hours ago 0 replies      
I relate to most of the points. Also, from my experience with due diligence (bought by the VC from an external consulting company): they cannot say all is good, since if the company fails they could be blamed; and they cannot say all is bad, because then you would only need one successful sale to prove them wrong. So pretty much the outcome is always in between. That due diligence was really not worth the money spent.
15
heme 8 hours ago 0 replies      
In my experience as a software developer... Technical diligence, and technical debt, only matter in the way they affect revenue & growth. Technical people are usually more acutely aware of how they will be crushed by the current debt they are accumulating. Business is more acutely aware of how the business will fail when growth goals are not met. Both things should be everyone's concern. Striking that balance, and communicating serious pitfalls and shared goals should help give perspective.
16
wyc 5 hours ago 0 replies      
Another supporting argument for this is from The Innovator's Dilemma: disruptive technology (startups tend to leverage this) tends to be constructed from pre-existing and commercially available components. Think Snapchat, Instagram, tablets, digital pianos, and all-in-one PCs. All the major building blocks had already existed at the time of invention.

This is in stark contrast to sustaining technology, which tends to require expensive teams of scientists and engineers to make incremental improvements to an existing business line, making it palatable only to large organizations with deep pockets. Samsung can afford to have its physicists find tighter SSD data packing. Google can afford statisticians to tune their search algorithm with complex models. Startups simply can't, nor should they!

17
dkarapetyan 7 hours ago 1 reply      
Sigh. This article just sets up a bunch of false dichotomies. You can ship good code and put a product in the hands of people at the same time. You can also ship shit code and do the same. It obviously takes more expertise and diligence to ship the better quality code but it's not like taking 5 extra minutes before every commit is gonna break your business if you didn't have a proper business model to begin with.
18
wslh 8 hours ago 0 replies      
Probably most of the time technical due diligence is not necessary, but that is not my experience working with startups where commercial success is strongly connected to a technical solution. Some examples:

i/ A startup wants to act as an automatic security escrow in a Bitcoin multi-signature wallet (please forget for a moment that this is a sector with too much hype). If you do a basic analysis you will realize that multisig wallets are well supported, but there is a missing link: there is no standard out-of-band way to set up these wallets, so many manual steps are required. These manual steps add a lot of friction to user onboarding and undermine the startup's success.

ii/ A security startup wants to integrate their app into Microsoft Outlook via an add-on. They are confident because they looked at the API and it seems you can extend Outlook, but they discover very late that a specific feature can't be achieved.

iii/ A startup wants to virtualize Windows applications (e.g. VMware ThinApp) using binary instrumentation and/or filter drivers. They will find that a few critical applications (e.g. legacy Java applets) require ad-hoc solutions for specific issues, which significantly increases development costs.

19
gersh 7 hours ago 0 replies      
He is looking at obituaries of startups for advice. If you look at successful startups, did they face early technical risk?

Google won early by having the best algorithm. Facebook's early competitor Friendster died due to technical issues. It seems less clear at what stage companies like Uber and Airbnb faced substantial technical risk. I'd guess Slack and Dropbox had significant technical risk in their early stages. I suppose WeWork probably had less technical risk.

I suppose the early prototype's code may not be totally indicative of whether the company can overcome the technical risk. Still, I think you need to evaluate whether the founding team can deal with the technical risk.

Technical due diligence isn't about looking for beautiful code. It is about looking for an ability to manage technical risk. This includes an ability to hire and manage good engineers.

20
andrewfromx 7 hours ago 0 replies      
But there is some minimum required. You want to avoid the situation where you are investing in a team that literally does not know what they are doing and is scamming you for money. Zero tech diligence can't be the answer. But it should be pass/fail, not graded for an A+.
21
jacquesm 5 hours ago 0 replies      
The main reason why investing in start-ups is hard is not the ones that die; it's the ones that remain afloat, but only barely.
22
c-smile 7 hours ago 0 replies      
The truth, as usual, is in between the two polar opinions - in the golden middle.

Just hire at least one professional for the startup. He/she will simply not be able to produce complete garbage. That's the meaning of "professionalism" in the end, right?

A good, experienced, professional engineer also understands real-life compromises. For him/her it is not that hard to make an initial architectural decision that allows implementing something fast but has a foundation for future expansion. It usually takes about a couple of days for a person who has "been there, seen that" to make a good architectural decision.

Startups: "We are not so rich to buy cheap things". Well, at least CTO / architect.

23
LordHumungous 4 hours ago 0 replies      
>only ~5% of post-mortems referenced a lack of technical ability/execution

I'd be curious to see how he arrived at that number. In my experience poor technical execution manifests itself as other problems, like slow time to market, poor user experience, or even building the wrong product.

24
chris_va 9 hours ago 0 replies      
There isn't a distinction made here between product+technical diligence (e.g. is something possible to build given the funding sought) and execution+technical diligence (e.g. is the team building it the right way).

I would argue that the former is always important.

I agree that the latter is less important, but with some caveats. The number of engineering hours put in during the first year is going to be a tiny fraction of the second year, etc, so tech evaluations will change drastically over time. The author argues that this makes them a slight waste of time.

However, I've been very successful in using rough technical diligence in the early rounds as a proxy for measuring the decision-making abilities of founders. This is hard to measure, so I still find technical diligence useful. You have to know the constraints they are working under, and not hold technical debt or differences of opinion (like homebrew vs stock) against them; the purpose is different.

Just my 2 cents.

25
fillskills 7 hours ago 0 replies      
Anything that starts with "For the most part, I was wrong" gives me a boost of trust and respect for the author of the post. It is hard to admit your own fault, and it is much harder to admit it publicly.
26
vinceguidry 6 hours ago 0 replies      
Solid engineering may not be needed from a business standpoint, but good engineers want to work on solidly-engineered products. If you've got a pile of hacks, and you keep telling developers they can't take the time to clean it up because business needs come first, then speed of feature implementation is going to slowly degrade and eventually grind to a halt. The good ones you'd have tapped to clean everything up will be long gone by the time this happens.
27
eldavido 9 hours ago 0 replies      
What about cases where "poor technology execution" is the root cause behind "we didn't move quickly enough" or "we didn't build the right features", because the product architecture made change too difficult?
28
theseatoms 9 hours ago 1 reply      
Because odds are high that you'll fail, in which case you default on your technical debt.
29
emblem21 10 hours ago 6 replies      
How to run a startup in 2016:

* No homebrewed tech allowed. It's too "complex"

* We should only use off-the-shelf libraries and SDKs because bar-to-entry isn't a real investor risk ("Just trust me, I'm a charismatic CEO who played golf a few times with the best friend of the college roommate of the guy who helped the guy close Series A for Baidu")

* Soft skills, soft skills, soft skills! Smiling > coding

* I don't know what technical debt is, but I'm sure we can safely rewrite our core product from scratch between Series B and Series C. Re-onboarding product owners, customer support teams, and quality assurance is a money problem, not a leadership problem.

* Security audits? I have no idea what those are, but let's incorporate IoT into our product suite somehow. That seems hot right now to VCs.

Bubbles. Bubbles everywhere.

30
tobyc 8 hours ago 2 replies      
Technical due dil doesn't just need to look for poorly architected systems and bad code. It should also look for code that's prematurely optimised and over-engineered.

If an early stage team is building a massively complex microservices-based product, using a new type of database and deploying it via Kubernetes, where a Rails CRUD app on Heroku would easily satisfy requirements for years, then someone seriously needs to ask whether using bleeding edge tech is necessary, and whether it will impact the ability to iterate and hire.

31
mgrennan 7 hours ago 1 reply      
Steve Jobs rule 4 should come in here somewhere.

I agree, the first software produced is just a proof of concept and will be re-written. I even believe the only way to write good code is to build it fast in some interpreted language, then throw it away and do it over in something compiled.

So I agree with this post, except that to become great you need to know upfront that you are just proving the idea, and that you will do technical diligence on the final product and NOT SELL CRAP.

32
nabla9 6 hours ago 0 replies      
It seems that the Internet and mobile technologies are established technologies at this point. Most internet startups are not technology driven. They are all about marketing, customer relations, service, and business as usual. MBAs are more important than engineers.

High-tech startups still exist but they are not the norm.

33
20andup 9 hours ago 0 replies      
"The ability to scale with success is important, but designing products for high scalability from Day 1 is usually a mistake. ("Premature optimization is the root of all evil." -- Donald Knuth)"

He is using the quote wrong. A scalable product and an MVP are built entirely differently. Both can have technical debt, but I wouldn't say the scalable product is the optimized version of the MVP.

34
lima 9 hours ago 0 replies      
And then you get hacked and your customer data is leaked.
35
mizzao 7 hours ago 0 replies      
Is the article arguing that ability to execute is not a distinguishing factor for startups? Or just that non-technical execution is more important than technical execution?
36
biot 7 hours ago 0 replies      
What's the typical cost of bringing in someone to investigate and write up a technical due diligence report?
37
kalman5 6 hours ago 0 replies      
Flawed logic all around. First 4 points:

1. Early versions of a product, especially when rushed on feature delivery, are going to be "temporarily definitive" implementations. As soon as a feature reaches production, the code has to be considered "legacy code".

2. Technical debt is a choice, and when you have too much of it you enter a death spiral.

3. The quote from Donald Knuth is only a poor excuse to slack and to justify bad development habits. Writing efficient code most of the time doesn't take longer than writing inefficient code (choosing the right container or the right algorithm, for example).

4. It's such a generic statement that it adds no info at all.

"For >95% of startups [...] That means technical resources are rarely the cause of success or the reason for failure."

That's the key statement of the whole post, and it more or less sounds like this:

"In Africa 90.3% of people die due to nutritional causes, heart disease, cancer, and diabetes; why send good doctors over there, then?"

38
hywel 9 hours ago 0 replies      
"Most startup failures were caused by building the wrong product, or lacking strong sales skills, or not having a viable business model" <- the first one of these is the biggest cause of startup failure, and is technical.

Startup technical diligence needs to check for a minimum level of ability and, beyond that, the ability to find the right direction to build in.

The idea that what product you build is not the responsibility of the technical team is almost certainly why you think there's no point in doing technical diligence.

39
kang 8 hours ago 0 replies      
Very soon the world is going to be the opposite of this, where you just release software and wait. An anonymous person launches a company without any capital, registration, employees, marketing, or sales, which is effectively IPO'd from day 1, and the founder is a billionaire just by releasing a piece of code. With blockchains it is possible to do so.
40
zxcvvcxz 9 hours ago 0 replies      
Provocative title! Substantive points to back it up?

> Early versions of a product are often prototypes that are intentionally meant to be rewritten or heavily refactored in the near future.

Cool, but surely a check of basic physics is in order for certain ideas? (see: uBeam)

> Because getting a product in the hands of users is a top priority, even great engineers will intentionally take shortcuts and accumulate technical debt in order to launch sooner.

Agreed for the most part. I see technical debt as incurring a price to get to market faster. Sometimes the price is too high, though I'd wager 9 times out of 10 it's the product-market fit that would kill a company faster.

> The ability to scale with success is important, but designing products for high scalability from Day 1 is usually a mistake. ("Premature optimization is the root of all evil." -- Donald Knuth)

Sure, but I don't think this is technical diligence necessarily. To me, the diligence would be researching whether or not the product/service could scale, either by seeing other examples of similar companies, understanding how AWS works, understanding manufacturing processes, etc.

> "CTO" is an ambiguous title at small companies.> Because code is likely to be temporary and the CTO's role is likely to change quickly, it's not obvious what should be measured during technical due diligence.

Ok... but can one not measure or research the person's (and team's) ability to create technology or manage those who can?

> (An aside: in addition to tech diligence having questionable value, many seed stage founders consider it overbearing and passively resist it. Given that engineering resources are scarce and that there are plenty of investors who write $100k+ checks after one or two short meetings, founders often prioritize "lighter-diligence investors" instead of submitting to an hour or two of technical grilling.)

Jesus Christ I need to move to California already.

> in today's world of SaaS tools, APIs, and cloud infrastructure, most startup ideas don't have significant technical risk. That means technical resources are rarely the cause of success or the reason for failure.

That's actually a very fair point. In the specific domain of web product companies that are leveraging the heavy lifting of others, a few Business Bros can get very far.

But does this cover ">95% of startups" ?

> A quick skim of CB Insights' collection of 150+ startup post-mortems reveals that only ~5% of post-mortems referenced a lack of technical ability/execution.

Yes but did none of these startups have any technical due diligence? Maybe they did, and that's why the rate is so low.

> But for a typical consumer app or SaaS tool, technical risk is low enough to be ignored.

Sure, just double-check that the engineers on the team did more than a coding bootcamp.

> On the recruiting side, a technical founder needs to be someone who can do recruiting and someone whom other engineers want to work for. This is especially true in today's hiring market, which is extremely competitive. The best ideas won't go far if a company can't attract enough talent to launch a product.

This sounds like a great reason to vet the technical prowess of the team.

> Notably, these tests don't require having a technical background.

Oh come on now. A GitHub filled with shit code and poor technical decisions on past projects would completely escape you folks.

> Instead, they require the ability to read a resume, common sense, EQ, and a few reference checks.

Reference checks, maybe. On everything else you can get duped. Even on reference checks.

> If you're a seed investor and you're doing tech diligence on a company, you're most likely wasting your time and the founders' time.

Perhaps for your narrowly defined web product startups, which apparently are 95% of startups. But when someone starts talking to you about drones delivering packages, all of a sudden it might be nice to be able to draw some free body diagrams and verify the feasibility of a payload claim.

> A well-built product that doesn't solve a problem will always be inferior to an ugly, buggy product that addresses a burning need.

Sure, but I don't think this logically leads to the conclusion "technical diligence is a waste of time".

Overall a pretty meh post. It focuses on a narrow subset of startups for which, it's mostly true, the tech is a commodity. Quite frankly, I don't even consider web products to be "technology" anymore... not until you need to scale some sort of real-time computing with millions of users and build your own datacenters.

41
graycat 3 hours ago 0 replies      
It appears to me that the OP is making a very serious, fundamental mistake: He keeps talking about "most" startups.

Alas, in shockingly stark terms, as a VC he is quite definitely, necessarily, not looking for "most" startups.

Instead, he is looking for exceptional startups. How exceptional? A few or so each decade.

What is true for most startups is no doubt close to irrelevant for the exceptional startups he is looking for.

Sure, due diligence is not important for lemonade stands, but it was just crucial for, say, the Lockheed SR-71. And for any startup that is using new, advanced, original, powerful, valuable technology, proprietary intellectual property, barrier to entry, technological advantage, due diligence stands to be crucial for, say, estimating the exit value of the company. Sure, a lemonade stand may have traction growing rapidly right away, but it will never have much exit value.

Moreover, don't do due diligence on just the code or the architecture but also on the crucial core technology, e.g., some original applied math. We're talking theorems and proofs here, guys, maybe with some pure math grad school prerequisites missing among nearly all chaired full profs of computer science. Did I mention technological barrier to entry?

Uh, again, we are looking for the exceptional, and that may look different from the "most".

18
An Atlas of Fantasy wikipedia.org
19 points by benbreen  3 hours ago   2 comments top 2
1
pvaldes 1 hour ago 0 replies      
It is interesting how incomplete this is now, and also how difficult it would be to update this small atlas, bearing in mind that 1) our generation has a much expanded sense of what fantasy is; and 2) most new places are defended by an army of lawyers.

Raccoon City, the Matrix, LV-426, Mario island, Nublar, Hogwarts, Irontown, Far Far Away, Gotham...

2
benbreen 3 hours ago 0 replies      
Blog post featuring a few of the maps from the atlas here:

http://basementgeographer.com/an-atlas-of-fantasy/

19
React: Mixins Considered Harmful facebook.github.io
184 points by tilt  7 hours ago   127 comments top 22
1
janci 4 hours ago 3 replies      
I like how core features of React are being considered harmful. First it was component internal state, now it's mixins, and the next thing will be the lifecycle methods.

React components will then boil down to pure render functions. React will then be replaced by a simpler VirtualDOM implementation. JS function-declaration boilerplate will be removed from render functions, so they will be more HTML with some JS than the other way around. Also, they will be called templates.

We are getting back to good-old-days PHP-style webcoding, but with a few HUGE improvements:

1. no globals, mostly pure functions

2. no business logic in templates, but in easy-to-reason-about redux-style state reducers (see the sketch after this list)

3. client-side rendering / isomorphic apps possible
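
A minimal sketch of that end state, with all names hypothetical: a redux-style reducer holds the business logic, and the "template" is just a pure function from state to markup.

```
// Redux-style reducer: every state transition lives here, easy to reason about.
function counterReducer(state, action) {
  state = state || { count: 0 };
  switch (action.type) {
    case 'INCREMENT': return { count: state.count + 1 };
    case 'DECREMENT': return { count: state.count - 1 };
    default: return state;
  }
}

// "Template": a pure function from state to markup. No globals, no lifecycle.
function counterTemplate(state) {
  return '<div>Count: ' + state.count + '</div>';
}

// Usage: compute the next state, then re-render from scratch.
var state = counterReducer(undefined, { type: 'INCREMENT' });
console.log(counterTemplate(state)); // <div>Count: 1</div>
```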

2
jameslk 28 minutes ago 0 replies      
That's too bad; mixins were simple to grasp and worked really well. The oft-suggested alternative to mixins is higher order components. Here's one of my favorite quotes[0] on the matter:

You lose so much with [higher order components], especially with es6:

- You lose the (original) class, and with it the ability to compose it, to extend it, and reflection. All your components are [the higher order component]. For example, you cannot rely on Class.name in es6.

- You lose the ability to extend the component and expose extra members. Your higher order component will not pass that through.

- The above two basically render it impossible to compose several decoupled higher-level components that are agnostic of one another.

- It is verbose and non-declarative, and basically much less readable and maintainable.

If there's a better pattern than mixins, I would say it's traits[1]. For some reason, I don't find them used very often in the wild. I'm not sure they could be practically applied to React anyway without ending up with mixins again, but at least the concept would be clearer: there should be no shared state, and dependencies are explicit. (The Class.name point from the quote is sketched below.)

0. https://medium.com/@danikenan/you-lose-so-much-with-your-sol...

1. https://en.wikipedia.org/wiki/Trait_%28computer_programming%...
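
To make the Class.name point concrete, here is a minimal sketch (hypothetical component names, ES6 classes) of how the wrapper returned by a higher order component hides the original class:

```
var React = require('react');

// A toy higher order component: it wraps any component in an anonymous class.
function withLogging(Wrapped) {
  return class extends React.Component {
    componentDidMount() {
      console.log('mounted: ' + Wrapped.name);
    }
    render() {
      // Pass props straight through to the wrapped component.
      return React.createElement(Wrapped, this.props);
    }
  };
}

class UserProfile extends React.Component {
  render() {
    return React.createElement('div', null, this.props.name);
  }
}

var Enhanced = withLogging(UserProfile);
console.log(UserProfile.name); // "UserProfile"
console.log(Enhanced.name);    // "" -- the original identity is hidden by the wrapper
```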

3
mikegerwitz 4 hours ago 0 replies      
I'm not familiar with React's mixins, but I get the impression from the suggestion to use higher order functions that it doesn't allow overriding methods from other mixins, e.g. Scala's concept of stacking.

For those interested in the concept in JS, I implemented a full trait system in GNU ease.js designed around Scala's approach, with support for stackable traits. The system also supports protected/private properties and methods, which can also be taken advantage of in traits to provide public and protected APIs, and keep state encapsulated in traits. I don't have documentation available yet (soon), but I do have a great many test cases that demonstrate various aspects:

https://www.gnu.org/software/easejs/#traits

https://www.gnu.org/software/easejs/news.html#d9b86c1

Hopefully others find this to be interesting even if they don't agree with ease.js itself.

4
ThePhysicist 6 hours ago 3 replies      
The decorator-based approach sounds interesting, but (in my understanding) it will require moving the data-fetching logic away from the main component into the decorator, which also creates a level of indirection that is opaque to the component user, and I imagine stacking several of these decorators on top of each other provides plenty of room for unforeseen side effects as well.

Personally, I think almost all components should be "dumb": They should just receive the stuff that they display as properties from their parent components and report changes back through callbacks. They should not perform any "controller logic" beyond form validation and input serialization. Only a few components at the top should be responsible for fetching and distributing resources from the backend. Typically, it should suffice to have an "Application" component that takes care of fetching stuff that every other component should see (e.g. data about the user), while components one level below the application should fetch stuff that is specific to the given page/view that is being displayed.
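
A minimal sketch of such a "dumb" component (hypothetical names): data comes in as props, changes go out through a callback, and there is no controller logic inside.

```
var React = require('react');

// Stateless component: renders exactly what it is given, nothing more.
function TodoItem(props) {
  return React.createElement(
    'li',
    { onClick: function () { props.onToggle(props.todo.id); } },
    props.todo.title
  );
}

// The parent (e.g. a top-level Application component) owns the data and the
// fetching logic; the child only reports the user's intent back up:
//   React.createElement(TodoItem, { todo: todo, onToggle: handleToggle })
```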

Unfortunately, compared to the humble beginnings a few years ago, I feel that the whole React.js stack gets more and more bloated these days. As an example, managing state through Redux requires writing actions, reducers, and a subscriber pattern in my components, just to fetch some data from the server. I mean, if we're writing a version of "Photoshop" for the browser this level of complexity might be warranted, but in most cases we just want to fetch some JSON, display it nicely formatted to the user, let him/her click on it, and possibly send some data back. If we need 500 kB of Javascript libraries to do that, while having to patch/reinvent many other things that we took for granted before, like simple hyperlinks (I'm looking at you, react-router), chances are we're doing it wrong.

5
bcherny 6 hours ago 3 replies      
Implementing mixins correctly in JS (which React does not) is already a well-explored problem space.

I like https://leanpub.com/javascript-spessore for great explorations and derivations of various mixin patterns.

It's not that mixins are bad in general, it's that React doesn't implement them well in particular.

6
ludwigvan 7 hours ago 8 replies      
I believe that as time passes, we are moving away from the simplicity that made React win. Flux, Redux, and higher order components generate too much complexity most of the time. React used to be simple, and it still is, but the ecosystem and the mentality have gotten needlessly complex.
7
n0us 7 hours ago 3 replies      
I personally moved all my projects away from mixins a while ago when I first heard they were deprecated. At first I was frustrated because of JS churn but this certainly was the right move.

For anyone who is apprehensive, the shift in thinking from using mixins to HOCs was not so difficult even if it's initially puzzling.

Quick edit: forgot to mention that this shift made my code way easier to understand in some places and thus the initial investment saves dev time in the long run.

8
eiriklv 7 hours ago 2 replies      
Great post Dan. Just out of curiosity - what's your go-to approach for replacing examples like the SetIntervalMixin (https://facebook.github.io/react/docs/reusable-components.ht...) with a HOC? I can't seem to find something that feels very elegant for these cases.
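
For what it's worth, one possible shape for this (a sketch only, not the author's answer) is a HOC that owns the timers, clears them on unmount, and injects a safe setInterval as a prop:

```
var React = require('react');

// Sketch: the wrapper owns all intervals and clears them on unmount,
// mirroring what SetIntervalMixin did.
function withSetInterval(Wrapped) {
  return class extends React.Component {
    componentWillMount() {
      this.intervals = [];
      this.setIntervalSafe = function (fn, ms) {
        this.intervals.push(setInterval(fn, ms));
      }.bind(this);
    }
    componentWillUnmount() {
      this.intervals.forEach(clearInterval);
    }
    render() {
      return React.createElement(Wrapped, Object.assign({}, this.props, {
        setInterval: this.setIntervalSafe
      }));
    }
  };
}
```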
9
sergiotapia 2 hours ago 2 replies      
React dropping mixins is the #1 reason why I stopped using Meteor. Sure Meteor has Blaze, but nobody uses it.

Before we had a nice mixin called getMeteorData:

```
var HelloUser = React.createClass({
  mixins: [ReactMeteorData],
  getMeteorData() {
    return { currentUser: Meteor.user() };
  },
  render() {
    return <span>Hello {this.data.currentUser.username}!</span>;
  }
});
```

Short, simple, you knew exactly what it was doing without breaking your wrists.

Now it's some createContainer lunacy that brings no benefit to most projects except dogmatic "properness" - the developer UX is just mangled beyond belief, and it makes me so sad that Meteor lost so much.

I guess my gripe in general with Javascript now is how complicated simple things are. Its entire ecosystem is intertwined with complexity and verbosity.

I wonder if typescript alleviates some of these pain points.

10
Cshelton 5 hours ago 2 replies      
For those of you complaining about "javascript complexity":

I think one of the biggest misunderstandings about JS, and the reason a large portion of the community complains about "churn rate", is that JS does not have a churn problem; it has an inexperienced-developer problem. Which is not to say that is a bad thing: JS/web is the first language for MANY programmers now.

When you get stuck on learning this framework vs that framework, or on antipatterns within a framework and "complexity" being added to a framework, take a step back and understand the WHY first.

Many ideas and the fundamental design of React are not new. In fact, they are very old, from even before there was such a thing as the internet. Functional programming patterns, also very old. Immutability, a very old concept. Composing rather than inheriting, an old concept. Eliminating all side effects (through immutability and composition), a very old concept.

The idea of a "higher order component", is not new, nor does it have any direct relation to React. It is a design pattern. This article is simply teaching you a design pattern that aligns with the design patterns React was built upon and works very well. Instead of complaining about React and "added complexity", I encourage you again, to ask why. Learn why this blog post, from the React core team, recommends doing it this way. It is in no way a requirement. React is just a tool. It is just Javascript. It is just programming. Welcome to the development world.

Also, understand the problems you are trying to solve. Facebook uses these design patterns because they have thousands of components at massive scale. The majority of users will not be working on something of that size. Don't feel like you have to use everything that comes out and is available in the React ecosystem. In fact, I would urge you never to start using something UNTIL you come across a problem and need to find a solution. It is common for inexperienced developers to feel the need to incorporate everything they have read about, using it to "solve" problems they don't understand, or don't even have. This applies to problem solving in general, not just React, not just programming.

TL;DR: Understand WHY something is used the way it is before throwing it out as "too complex". Don't solve problems you do not have until you actually need to solve them. If you do use a pattern/tool/etc., understand the WHY. (Don't use higher order components or redux if you have not had the need for them and do not understand the problem they solve.) Also understand that many design patterns coming to JS are not a result of JS churn; they have been around a long time and have a very good reason for existing. JS churn is not real; it is a misunderstanding of using a solution in a world that has many solutions available.

11
abalone 3 hours ago 0 replies      
"Considered Harmful" Essays Considered Harmful: http://meyerweb.com/eric/comment/chech.html
12
iandanforth 4 hours ago 1 reply      
I worry this suggests an opposition to inheritance in general. Sharing code through a class hierarchy is incredibly useful and common. I'd hate for ES6 to add decent classes and then have React push people away from 'extends'. But perhaps I'm reading too much into this.
13
wmccullough 5 hours ago 1 reply      
I'll likely catch some hell, but there's a reason mixins are considered an anti-pattern in most languages...
14
BinaryIdiot 6 hours ago 1 reply      
But mixins are so easy and are used in thousands of JavaScript libraries. It seems odd to deprecate their usage in React; why wouldn't you embrace a commonly used JavaScript construct? The syntax without mixins just seems overly complex. I mean, sure, it's still simple, but more complicated than before and certainly not intuitive (in my opinion anyway).

I just started exploring React not long ago. It's interesting but I'm not sure it's my cup of tea just yet. Most of the issues mentioned exist everywhere in JavaScript because, well, it's a dynamic language. I would suspect many of the issues lie outside of using mixins and more in how everything is architected, but without seeing their codebase I don't actually know that. I just know mixins can be used relatively easily and cleanly as long as your design is appropriate for them.

15
tlrobinson 5 hours ago 0 replies      
I thought this was pretty well known by now, given the lack of mixin support in ES6 React components (and functional stateless components), but this is a nice summary of the problems and solutions.
16
rayshan 7 hours ago 1 reply      
I see some React code using decorators but this article doesn't mention them. I'd love to get everyone's opinion on whether decorators are also an anti-pattern, even if decorators become a JavaScript standard.
17
pjmlp 3 hours ago 0 replies      
Apparently React is already moving from the "Technology X will solve all problems" pattern to the "Technology X considered harmful" pattern.
18
kimmshibal 6 hours ago 1 reply      
I ported my app from React to Vue in 2 days. Couldn't be happier
19
anaclet0 4 hours ago 0 replies      
Coming from ember 1.x, mixins are pure evil
20
Double_Cast 6 hours ago 2 replies      
So uh, HOC's are basically monads. Right?
21
daliwali 7 hours ago 7 replies      
It doesn't bode well that a framework that is only 3 years old already has an extensive list of anti-patterns and tons of statements on what not to do. Also, having to do manual performance optimizations with the framework is a hassle for application developers and can be a major pitfall (e.g. PureRenderMixin, shouldComponentUpdate).
22
hardwaresofton 7 hours ago 1 reply      
Did Facebook just rediscover that mixins are an anti-pattern? I would have expected them to know that going in; I figured they had just thought it was fine as long as they convinced people to use them very sparingly.
20
The Habitat of Hardware Bugs embeddedrelated.com
7 points by ChickeNES  1 hour ago   1 comment top
1
ChuckMcM 0 minutes ago 0 replies      
That is a pretty reasonable way of looking at it. One of the things that made NetApp interesting when I was there was ONTap, a completely custom OS with one memory space and no user mode. When you thought about it, it made sense: all you need for a NAS box is a really feature-rich ethernet driver :-). Anyway, what it meant was that NetApp would uncover problems in CPUs and chipsets that nobody in the "PC" world would ever see: race conditions on the frontside bus, PCI Express traffic that would freeze up the chipset, etc. It was also true of drive firmware. Drives have all these commands which look good in the manual, except no PC ever calls them in production. As a result they don't get a lot of testing. We discovered that 'write zeros', a command for zeroing out a disk, on some firmware revs was "write mostly zeros, except when you don't." Never good when you're trying to initialize RAID stripes. As a result there was always a "NetApp version" of the drive firmware which had been qualified, but customers always believed it was just a way of preventing them from using commodity drives[1].

Any time you step off the beaten path and try to use a complex technology in an "unusual" way, you are blazing a trail which may not have been traveled before. Always good to be on the lookout for undocumented bugs.

[1] It did have that effect but it wasn't the motivation.

21
Harmony Explained: Progress Towards a Scientific Theory of Music arxiv.org
177 points by colund  11 hours ago   137 comments top 33
1
dahart 10 hours ago 4 replies      
I love the dissection of harmony from a physics/computation point of view; I'm going to read this in more detail.

But the wrapping is quite off-putting to me; the author seems to misunderstand both music theory and science, and rather deeply. I don't see a scientific theory here at all. To be called science, it needs to have a hypothesis and be falsifiable, and after that get tested experimentally and proved - I don't see that, nor even a stab at a metric that is verifiable. What I see here is note relations described using science terminology instead of music terminology. That's not science, that's just someone using the terminology they're comfortable with rather than learn the accepted paradigm.

"The more factored a theory and the more emergent the observed phenomena from the theory, the more satisfying the theory." - Feynman.

It's ironic that this better explains existing music theory than the one presented here, even though Feynman wasn't talking about music theory.

The author accuses music theory of being pseudo-science, when music theory is not and never was attempting to be science, and yet the author proposes a theory of music claiming to be science that in fact isn't science.

Unfortunately, this 'theory' here only seems to contend with the first two of the 32 chapters in my copy of "Harmony and Voice Leading", and this mainly talks about what makes notes sound good, not what makes music sound good.

Also unfortunately, the author doesn't seem to be aware of what's going on in modern music theory, which is exploring things like how to compose music out of the beat frequencies that exist as harmonic dissonances between two or more notes. A lot of the physics of harmonics is currently already being incorporated into music theory.

2
metaxy2 6 hours ago 3 replies      
The best work happening in this area is by a Princeton professor named Dmitri Tymoczko [1]. He's discovered some pretty fundamental results that tie together a lot of the previous theories of harmony based on geometry. He was the first music theorist ever published in Science [2] and he wrote an amazing book called A Geometry of Music [3] (technical but readable, a must for anyone trying to really understand why music sounds good).

He also teaches a 2-semester intro level music class at Princeton that is infused with his ideas, and makes the lecture notes available online [4]. Those notes are so good they're better than any intro music textbook I know of, especially for us geeks who like to hear the scientific explanation for everything.

[1] http://dmitri.mycpanel.princeton.edu/bio.html

[2] http://dmitri.tymoczko.com/sciencearticle.html

[3] https://www.amazon.com/Geometry-Music-Counterpoint-Extended-Practice/dp/0195336674

[4] http://dmitri.tymoczko.com/teaching.html

3
mazelife 8 hours ago 3 replies      
Other commenters here have done a good job picking out specific problems with the author's arguments. It's always frustrating when someone who is capable and knowledgeable about one subject decides to try their hand in a different field without having bothered to do any real research or study in it. And it's doubly so when they proceed to then dismiss, literally, the entire field as "superstitions," and "unscientific" even as everything they've written betrays that they haven't bothered to familiarize themselves with basically any of the work that's been done in music theory or psychoacoustics in the last 100 years.

I usually love it when articles about the intersection of music and computation show up on HN, but in this case it's really unfortunate. At least it's sparking some good discussion, I guess.

4
squeaky-clean 10 hours ago 1 reply      
I don't like the tone of this book; it gets too hung up on the literal word "theory" in music theory, and it also badly explains a lot of concepts in music theory while insulting them.

For example, it doesn't explain at all why the circle of fifths is cool or useful for chord transitions. It's not a coincidence at all: by rotating around the circle, your key changes by only one note at a time.
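
A quick check of that property (a sketch, with pitch classes as integers 0-11):

```
// Build a major scale from a tonic using the whole/half step pattern,
// then compare neighboring keys on the circle of fifths.
function majorScale(tonic) {
  var steps = [0, 2, 4, 5, 7, 9, 11];
  return steps.map(function (s) { return (tonic + s) % 12; });
}

function shared(a, b) {
  return a.filter(function (n) { return b.indexOf(n) !== -1; }).length;
}

var c = majorScale(0); // C major
var g = majorScale(7); // G major, a fifth up
console.log(shared(c, g)); // 6 -- the two keys differ by exactly one note
```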

It's also just waaaay too objective about subjective things. Other cultures' use of limited or different musical scales or temperaments is because they are missing out on real harmony? A major triad sounds best because factors of 3 and 5 are the "most interesting" intervals?

Section 4.3 is where I'm just quitting.

> "Play C and F# on a piano; it sounds awful" [...] " because someone has even written a piece of music based entirely in the most un-harmonic of intervals, the Augmented Fourth, and gotten away with it." [...] "There are people who can abuse themselves to the point of re-calibrating their expectations to all kinds of strange inputs, including thinking that getting beaten with whips is fun or that McDonald's tastes good. That doesn't mean that those inputs are natural or good or beautiful or true. "

So I don't really like dissonant music; I've only tricked myself into thinking it's good? The Ben Franklin quote after it just sounds like someone comparing pop music to anything else. Just because it's Ben Franklin doesn't make it definitive.

It starts by insulting conventional music theory for assuming that the diatonic scale and triads are a given. But then it goes on to make even greater assumptions as to why some harmonies sound "better" than others. Please prove it, or it's just as meaningless as the work you're insulting.

5
beat 9 hours ago 5 replies      
"Music theory" tends to be incredibly narrowminded and Eurocentric. Harmonic theory? Outside of Europe and European-derived forms, harmony is very rare. There are melodic traditions that do not use harmony, yet are just as traditional and sophisticated (Arabic, Hindustani, and Carnatic music all come to mind), and there are purely rhythmic traditions (various African, Arabic, central Asian, etc). And in the modern world, there is ambient, music built out of tone rather than rhythm/harmony/melody.

Having studied and played several different traditions to varying degrees, one thing I've learned is that music is equally sophisticated everywhere. There's this idea that classical and jazz are more sophisticated than "folk" forms. It's BS.

6
dmritard96 4 hours ago 1 reply      
"music: * the Major Scale, * the Standard Chord Dictionary, and * the difference in feeling between the Major and Minor Triads. "

Doesn't this already carry a ton of biases, in that these are largely constructs of Western music? Certain pieces are based on physics: an octave is a doubling of frequency, and the way a major chord fits together is to some degree constructive interference. I studied music performance and compsci, and it's amazing to me how much magic and mysticism the music schools believe in. It was also pretty eye-opening to see how much they believed that Western music was the center of the human sound world. It wasn't until taking a world music class that you realize how myopic even these researchers can be.

7
tobr 9 hours ago 0 replies      
The writer starts out complaining that the distinction between "chord" and "scale" is arbitrary. He then proceeds to derive the major chord and major scale using a bunch of arbitrary decisions. For example, why does he stop adding notes to the major chord when he has three of them? And look at what he does when he starts constructing the major scale from the major chord:

> Well, we like the Major Triad, so let's make another one, but starting with a different note as the fundamental. To preserve as much theme with the previous triad, let's start with the "closest" notes to the C that we have in our first triad: The first note other than C that we hit was 3/2 times the Root, also called the Perfect Fifth; therefore let's build a triad using 3/2 times C4 = G4 as the fundamental.

And then

> Ok, that was so much fun let's go in the other direction as well.

Why not just continue with the harmonic series, rather than construct a new chord from the fifth? Why continue with a fifth below the root, and not continue in the established direction? Because these changes would lead to a completely different answer. He knows which answer he wants to end up with, so he picks a path that will lead there. This is numerology.
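
For concreteness, a small numeric sketch of the derivation under discussion (just-intonation ratios, with C4 assumed at 261.63 Hz):

```
// Major triad in just intonation: root, major third (5/4), perfect fifth (3/2).
function majorTriad(fundamental) {
  return [fundamental, fundamental * 5 / 4, fundamental * 3 / 2];
}

var C4 = 261.63; // Hz, an arbitrary reference

console.log(majorTriad(C4));             // C-E-G
console.log(majorTriad(C4 * 3 / 2));     // G-B-D: the triad built on the fifth
console.log(majorTriad(C4 * 2 / 3 * 2)); // F-A-C: a fifth below, raised an octave

// The union of these three triads yields the seven notes of the just major
// scale -- the particular path whose arbitrariness is being questioned above.
```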

8
jkingsbery 11 hours ago 4 replies      
"You can't start a science textbook like that" ... actually, lots of math and science textbooks assume a certain level of knowledge, and start more-or-less that way.

There also seems to be confusion about what Music Theory actually is - it's (usually!) not an attempt to axiomatize music; rather, it's an attempt to take a musical corpus and explain how pieces in it tend to work (which is why one sees different music theory textbooks for Jazz and Classical, for example).

9
pcsanwald 7 hours ago 1 reply      
I've mentioned this before, but for me, the definitive work on this vast subject is "The Harmonic Experience" by W.A. Mathieu. He really gets into low prime ratios, why they sound good to us, and also why the building blocks of modern western harmony are approximations of those ratios, rather than the real thing.

A fascinating read for anyone interested in the subject.

10
jng 7 hours ago 0 replies      
I urge anyone interested in this topic to read Helmholtz's "On the sensations of tone", a fascinating 19th-century tour-de-force trying to understand music sensation as coming from physical principles, deriving the standard western intervals and consonance/dissonance from first principles (beats between upper partials), and for which he is considered the father of acoustics. He spent 8 years researching in order to produce it, and is to me a great example of a 19th century hacker.
11
apalmer 10 hours ago 1 reply      
Ehhh... fundamentally I think this argument is flawed. It assumes harmony is the underlying driver of music. The majority of music across history is not harmonically oriented. Really, only in Western music is harmony the driving force.

Now, Western music is extremely popular, but that is really driven by the historical coincidence that Europe colonized the world, rather than by appreciation for the music itself.

I do think the points are well thought out and seem reasonable... but the piece ignores way too much empirical reality to be taken very seriously as a 'scientific' explanation of music...

Also, it's more or less the standard explanation of Western music theory; I'm not sure what is novel here...

But I like it.

12
mrob 9 hours ago 0 replies      
William Sethares has already done some excellent work towards formalizing consonance and dissonance. See:

http://sethares.engr.wisc.edu/consemi.html

This new paper looks worse than Sethares' as it only considers timbres based around the harmonic series, ignoring important sounds such as tuned percussion, and ignoring the possibilities for genuine harmonic novelty with synthesisers. It cites Terhardt (1974) as "modern" while completely ignoring more recent work.

13
gardano 10 hours ago 2 replies      
Given that in the first few pages of this article, the author never mentioned temperament (tuning) systems, I had a heck of a time parsing the argument.

Well-tempered? Equal-tempered? Meantone? Pythagorean?

I had a hard time even concentrating on the argument when the basic treatment of horizontal (melodic) vs vertical (harmonic) rested on some badly unstated assumptions.

Should I have kept reading?

14
sampo 9 hours ago 0 replies      
The harmony of popular music and jazz is based on the diatonic major scale. Each of the twelve scales is a frame forming the harmonic system.

Translation: Western music is based on the diatonic scale.

Diatonic harmony moves in two directions: Horizontal and Vertical.

Translation: Music consists of notes played one after each other, and at the same time as each other.

By combining these two movements... we derive the scale-tone seventh chords in the key of C

Translation: We can build the scale-tone 7-chords (root + third + fifth + seventh played at the same time) on top of any note in the scale.
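
A small sketch of that construction (mine, not the textbook's):

```
var cMajor = ['C', 'D', 'E', 'F', 'G', 'A', 'B'];

// Scale-tone seventh chord on a given degree: stack every other scale note
// (root, third, fifth, seventh), wrapping around the scale.
function seventhChord(degree) {
  return [0, 2, 4, 6].map(function (step) {
    return cMajor[(degree + step) % cMajor.length];
  });
}

console.log(seventhChord(0)); // [ 'C', 'E', 'G', 'B' ] -> Cmaj7
console.log(seventhChord(1)); // [ 'D', 'F', 'A', 'C' ] -> Dm7
```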

Wanting to sound unnecessarily grandiose can be a problem with textbooks in almost any field.

15
sunnyps 1 hour ago 2 replies      
Not exactly on topic but I wanted to ask:

I have never learnt how to play a musical instrument (sadly) but I would like to dabble in computer generated music and music theory. What's the best way to get started with this?

16
AnthonyNagid 8 hours ago 0 replies      
I have a theory that papers seeking a true breakthrough in music theory should be exploring Christopher Alexander's work 'The Nature of Order'. We need music theorists / scientists / programmers to be fleshing out Alexander's theories in relation to music. Shouldn't the scientific spirit take us where we haven't looked yet?

To illustrate my point with a musical example, I'd like to refer to Keith Jarrett's recording 'Hymns / Spheres'. This is a musician who has become as intimate as a human can with classical and jazz. In a C. Alexandrian fashion, Jarrett takes all of these elements, applies them on an ancient cathedral organ in Germany, and transcends both genres and his usual way of playing. The lowest-hanging fruit in terms of understanding what's going on here more completely is not in music theory or acoustics but in Christopher Alexander's 'Nature of Order'.

https://www.youtube.com/watch?v=lXymPInuMkM

https://en.wikipedia.org/wiki/The_Nature_of_Order

https://en.wikipedia.org/wiki/Keith_Jarrett

17
dbranes 10 hours ago 1 reply      
There exists a related and impressive body of work towards understanding the geometric structure of elements of music theory; see e.g. https://www.google.com/url?sa=t&source=web&rct=j&url=http://... and references therein.
18
pjdorrell 2 hours ago 0 replies      
I take this opportunity to re-advertise my own alternative approach to developing a scientific theory of music: http://whatismusic.info/.
19
kingkawn 2 hours ago 0 replies      
Anything to say about music is better said by making it.
20
denfromufa 7 hours ago 0 replies      
You should perhaps watch this incredible video from PyCon 2016 given by Lars Bark about music, road trips, software development, and fractals!

https://www.youtube.com/watch?v=bSfe5M_zG2s

21
nabla9 8 hours ago 0 replies      
Here is a much better theory:

Music And Measure Theory - a connection between a classical puzzle about rational numbers and what makes music harmonious: https://www.youtube.com/watch?v=cyW5z-M2yzw

22
bigjcr 5 hours ago 0 replies      
The questions the author asks in the first chapter make me wonder whether I want to keep reading or not; his comments on those questions look full of arrogance, or just sarcasm. You don't start learning music theory from a jazz theory book, so you wouldn't be asking those questions if you had started learning music from a music theory book. Or at least you wouldn't make that kind of comment. Or are they just intended as jokes?

Seems interesting though. I might keep reading; I would like to see another approach to harmony.

23
bluetwo 9 hours ago 1 reply      
As a life-long lover of music and a programmer who decided in his 30s to try learning an instrument, I share his frustration with traditional music theory instruction and his desire to explain it using more physics-based models.

I'm curious if he's spent any time developing tools that might leverage his theories. I ended up developing a MIDI toolset for composition based on my attempts at understanding the space.

24
nirajshr 9 hours ago 0 replies      
This article helped me grasp how the major/minor scales are constructed by using the idea of consonance and dissonance.

It is sort of like the derivation of a mathematical formula. So many other books just present the scales as given. For that alone, it was an insightful read.

25
holri 8 hours ago 1 reply      
"...phenomena of any machine that must make sense of sound, such as the human brain"

A machine is a man-made artifact. A brain is a living thing; we do not understand it, and it cannot be built by us.

26
hkailahi 9 hours ago 0 replies      
For anyone interested in the intersection of rhythm and harmony: https://www.youtube.com/watch?v=_gCJHNBEdoc
27
costcopizza 7 hours ago 1 reply      
Does anyone else prefer not to know what makes music sound good, and instead enjoy the mystery and emotional complexity of it?
28
GFK_of_xmaspast 9 hours ago 2 replies      
It's a discredit to the arxiv that this paper is still up, and I have to assume it's because there are just many fewer 'music theory nuts' out there than 'quantum physics nuts.'
29
wry_discontent 7 hours ago 0 replies      
Maybe I'll finally be able to understand music.
30
multinglets 8 hours ago 0 replies      
Pulses and waves compounded in whole-number ratios form periodic and stable, i.e. consonant, waveforms.
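
A quick numeric check of that claim (a sketch; integer frequencies in Hz assumed):

```
// Two tones whose frequencies are in a whole-number ratio repeat together
// with period 1 / gcd(f1, f2).
function gcd(a, b) { return b === 0 ? a : gcd(b, a % b); }

var f1 = 300, f2 = 200;       // a 3:2 ratio -- a perfect fifth
console.log(1 / gcd(f1, f2)); // 0.01 s: the combined waveform repeats every 10 ms
```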

But oh yeah some people just play all random frequencies or like a dog. A dog can be music. It's such a huge mystery, music.

31
simbalion 7 hours ago 0 replies      
I've read a number of music theory books and guides, and not one of them contained: "unjustified superstition, non-reasoning, and funny symbols glorified by Latin phrases". In fact all were based on mathematics and at least some science.
32
privong 10 hours ago 1 reply      
Note that the post currently links to version 1, which was uploaded in 2012 (https://arxiv.org/html/1202.4212v1). There is an updated version from 2014 (https://arxiv.org/html/1202.4212v2).

For arXiv posts, unless a specific version is under discussion, it's probably best to link to the abstract page without the version tag:

https://arxiv.org/abs/1202.4212

This lets folks see if there are multiple versions of the preprints and multiple formats to view those preprints.

33
toomim 7 hours ago 0 replies      
Wonderful paper. And wow, the author has really managed to piss a lot of people off!

People get really upset when you present work that points out flaws in their own thinking, using a different style than they've come to accept. It comes across as an attack from a different tribe, and they get all tribal at you. You can tell because they attack the style as much as (or more than) the content.

This paper provides one of the first theories at the foundations of music! How exciting is this!

23
Gluon: A static, type inferred and embeddable language written in Rust github.com
112 points by jswny  11 hours ago   48 comments top 12
1
cfallin 9 hours ago 1 reply      
This is an impressive effort!

It seems that Gluon has taken inspiration from Haskell right down to the monadic solution to mutable state: https://github.com/Marwes/gluon/blob/master/std/state.glu
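
(For readers unfamiliar with the pattern, here is a rough sketch of the idea in JavaScript rather than Gluon syntax: a stateful computation becomes a function from a state to a value/state pair, and these functions are chained.)

```
// State "monad" sketch: a computation is state -> [value, newState].
function get() { return function (s) { return [s, s]; }; }
function put(s2) { return function () { return [null, s2]; }; }
function bind(m, f) {
  return function (s) {
    var r = m(s);         // r = [value, newState]
    return f(r[0])(r[1]); // feed the value and the new state onward
  };
}

// Increment a counter without any actual mutation:
var step = bind(get(), function (n) { return put(n + 1); });
console.log(step(41)); // [ null, 42 ]
```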

I wasn't able to find any 'cell' type or mutable records like in OCaml, so this does seem to be a pure language -- is that right? I wonder how that works in practice for small embedded scripts/logic -- my intuition is that the discipline enforced by monadic types and purity is really great at large scale but can be frustrating when you just want to hack something together.

Anyway, it would be interesting to see typical use-cases for Gluon!

2
sitkack 9 hours ago 1 reply      
3
kenOfYugen 8 hours ago 2 replies      
I really like how the parser is implemented [1]. Very straightforward and readable, even to a Rust amateur like me.

Are there any more open source languages of a similar nature written in Rust?

1. https://github.com/Marwes/gluon/tree/master/parser/src

4
cm3 8 hours ago 0 replies      
Another project from the author which is now abandoned, but might have provided the spark for gluon: https://github.com/Marwes/haskell-compiler

It's a partial Haskell compiler written in Rust.

5
majewsky 9 hours ago 1 reply      
Is it just me, or do all the new languages this year look the same (like a hodgepodge of Haskell and Rust)?
6
cm3 7 hours ago 0 replies      
I like the separate heaps a la Erlang and GHC. I hope they're small enough by default and that the initial size can be tuned to actively avoid GC.
7
StevePerkins 10 hours ago 7 replies      
Very unfortunate name since "Gluon" is already a commercial entity, sort of a Java counterpart to Xamarin, and almost certainly has trademarks that would overlap. Even if they didn't, it's still a pretty uncool move.

http://gluonhq.com/

Did nobody run a simple Google search before picking the name?

8
cm3 7 hours ago 1 reply      
In the sources there's a component called VM, despite Gluon being statically compiled. Is that more akin to a runtime system or is it a separate VM for the REPL?
9
baq 4 hours ago 1 reply      
If the language is statically typed and types are inferred, are there blockers against adding a JIT like LLVM, or something simpler, to the VM?
10
ubertaco 6 hours ago 0 replies      
Ooh, this is nice! Always a fan of more ML-alikes out there.
11
sdegutis 9 hours ago 2 replies      
How does this interoperate with the Rust program it's embedded in? Is making functions or values available to it as tedious and verbose as in Lua? Or is there some way of calling Rust functions or accessing Rust values automatically from inside Gluon (né embed_lang)? I'm assuming it's just as tedious as in Lua, since otherwise it would need powerful runtime reflection, which seems like a feature that Rust wouldn't really have or want.
12
ilaksh 7 hours ago 2 replies      
It should tell you something when the first thing a competent Rust programmer decides to code is a better language.
24
Scottish Tech Map docs.google.com
120 points by jbms  11 hours ago   33 comments top 14
1
jarofgreen 10 hours ago 2 replies      
Hello, I'm the founder of https://opentechcalendar.co.uk/ mentioned in the events tab.

We are an open site listing tech events around Scotland. Open means anyone can add an event, like a wiki. We can import data from suitable open sources, and most importantly we have many Open Data feeds. Our data is reused many times by many different people and sites.

We are about to celebrate our 4th birthday, and we are the main listings site in Scotland. There is a great community around it, and we are proud to be part of that (we are based in Edinburgh). Our software is Open Source too!

Give me a shout if you have any questions.

Thanks,
James

2
jbms 11 hours ago 1 reply      
What it is:

Freely editable google spreadsheet with information on where tech is happening in Scotland. Currently lists >300 companies and 90 groups/events.

What it's not:

- A startup/VC map.

Challenges:

- getting people to add/curate existing content (current theory: I think people browse mostly on phones, and can't edit without installing the Google Sheets app), as I can't put a lot of time into it.

- better quality categorisation of companies

- finding dev teams within organisations like local government

- identifying what's missing (my gut feel is there's easily another 300-1k companies to add, but most existing entries have come through business directories or searches on twitter for "developers/python/java/etc near me").

How it began:

The wikipedia article on Silicon Glen is short and out of date.

I was finding out about companies geographically near me in similar industries that I just wasn't coming across unless they were hiring.

I heard of Eden Shochat, who successfully crowdsourced the Israeli startup scene in a Google spreadsheet.

3
mdhayes 9 hours ago 1 reply      
Really cool list. I'm founder of RookieOven - https://rookieoven.com - a coworking space for tech startups in Glasgow based in a Victorian shipyard.

If anyone is ever in Glasgow and looking to meet the local community feel free to pay us a visit.

4
s_dev 9 hours ago 1 reply      
Not sure this is relevant, but it might be helpful to people looking for a similar project to work on:

Here's a curated tech calendar for Dublin, Ireland:

https://www.startupdigest.com/digests/dublin

The Dublin startup commissioner launched this just this week as well:

http://www.techireland.org/

Basically a database of the Irish tech scene.

5
abstractbeliefs 9 hours ago 0 replies      
Very proud to see the medical company I work for listed here, and not even by me.

Optos was acquired last year by Nikon, and we're going strong and expanding into new tech for devices we're releasing this year and next.

We're hiring for a few positions right now, if shooting lasers into people's eyes to save their sight is your kind of thing: http://www.optos.com/en-GB/About-us/Careers/Europe/

6
jplahn 4 hours ago 0 replies      
Very cool! I've been looking for something like this for a while in hopes that I could get a better feel for the tech industry through Scotland. I lived there for 7 years during middle school and high school and I'm always flirting with the idea of heading back. Sadly my EU residency lapsed, making it a little more difficult (not that it would help for much longer..).

It would be great to have more sorting options, particularly if I wanted to get back to the Granite city :).

7
gergderkson 10 hours ago 0 replies      
This is great. We love being part of the Scottish tech scene. There are some really great meetups around Edinburgh and Glasgow areas too.
8
RubyWrangler 10 hours ago 0 replies      
Love it! Scotland has a fantastic tech scene which I'm super happy to be part of :)
9
maaarghk 5 hours ago 0 replies      
On this topic, is anyone in the Glasgow area doing anything cool with Golang? I'm looking to dive in and wondering if there is any kind of local community around it - travelling to Edinburgh for meetups is inconvenient to say the least!
10
urbik 5 hours ago 0 replies      
Calendar of events for Belgium: https://hackeragenda.be/?section=all (source code: https://github.com/psycojoker/hackeragenda, fork and enjoy o/)
11
archieb 5 hours ago 0 replies      
There's also a list of Edinburgh-based software companies on http://www.nobugs.org/deved/, though the last update was 2015-09.
12
smpetrey 7 hours ago 0 replies      
Is there a list of tech companies/agencies for NYC out there? Asking for a friend.
13
dandrino 6 hours ago 1 reply      
Wouldn't a wiki be a better format for this than a Google sheet?
14
sparkzilla 9 hours ago 3 replies      
How do we add to the list? The Companies page doesn't seem to be editable.

Perhaps you should consider adding the data to a simple WordPress site. I'd be happy to set that up for you on my server.

25
Q&A with Aaron Levie themacro.com
48 points by dwaxe  8 hours ago   13 comments top 6
1
doctorpangloss 5 hours ago 2 replies      
> startups with zero baggage

Well, a megalomaniacal founder who wants to hire 10,000 people and run a huge company has the exact same baggage as his large competitors. The baggage is just cultural, rather than business baggage or technical baggage. I think investors like Aaron Levie, who's an all-around brilliant and highly accommodating guy, don't recognize that cultural baggage exists at day 0.

> Don't hedge your bets.

That makes sense from the point of view of an investment portfolio manager, where you would want all of your companies to do the thing you actually invested in them to do, in order to stay diversified. But from a business point of view, bifurcating strategy could make a ton of sense.

Maybe we're talking about different companies. I really admire places like Valve, which definitely regards culture as its main source of enterprise value (rather than some notion of legacy-free competitiveness) and treats its bifurcation (game development and store management) as a success. You could find lots of people who wouldn't care to found a Box.net. But who wouldn't want to found Valve?

2
tacon 5 hours ago 2 replies      
> Ten years from now, how have you improved yourself?

> The list is pretty much endless. To name a few: I wish I were better at chess, I wish I could juggle five balls instead of barely four, I wish I were better at piano, I wish I were a speed reader, and I wish I could sleep fewer hours.

This is a very disappointing list. He is the CEO of a major corporation, but everything he wants to improve has nothing to do with how he interacts with other human beings, be they family, employees, customers, or anyone else.

3
misiti3780 7 hours ago 1 reply      
Am I the only one here who has never heard of the book Blue Ocean Strategy?
4
cocktailpeanuts 5 hours ago 0 replies      
Lame answer to "What do you believe that few people agree with you on?". I don't think it's only a "few" people who believe AI will create more jobs than it destroys. This type of controversy has accompanied every technological innovation throughout history.
5
Aelinsaar 6 hours ago 0 replies      
I generally agree with UntilHellbanned, but I think the closing point about spending time with, and getting to know, your customers makes it all worth it. Too many people lose sight of that, or worse, develop adversarial relationships with their customers.
6
untilHellbanned 7 hours ago 1 reply      
Fairly useless platitude-fest/groupthink. Kinda surprised.
26
Nightcode 2.0.0 released github.com
72 points by jonathonf  5 hours ago   20 comments top 8
1
S4M 5 hours ago 2 replies      
For those who are wondering, some screenshots are on the project's page: https://sekao.net/nightcode/
2
mintplant 4 hours ago 2 replies      
> Nightcode is written with Swing, a deprecated UI framework. We're gonna replace it with Java FX. What do you got Rock?

Wait, Swing is deprecated? That's news to me. I just rewrote a small GUI app in Java with Swing.

Any good resources for getting started with JavaFX? Looks like it can't use native controls...

3
ljoshua 4 hours ago 0 replies      
Best. release notes. ever. :)
4
afhammad 5 hours ago 1 reply      
How would you compare it to Cursive feature-wise?
5
daveloyall 4 hours ago 1 reply      
Let's see if it can build itself...
6
sdegutis 4 hours ago 3 replies      
This write-up is amazing! Dear everyone reading this: from now on, please have more of a sense of humor and playfulness like Zach, instead of the self-important and dry stuff that typically gets posted here. This is literally the only post on HN that I've read in its entirety in years.
7
atomicbeanie 4 hours ago 0 replies      
Woot!
8
MrBra 2 hours ago 0 replies      
link to download -> go to project home -> clojure -> bye
27
A Saudi Arabia Telecom's Surveillance Pitch (2013) moxie.org
46 points by JoshTriplett  5 hours ago   11 comments top 4
1
drewcrawford 2 hours ago 2 replies      
Shameless self-promotion, as this article is one of several that inspired me to work on this.

I have been secretly working for several years on a drop-in iOS technology based on the latest peer-reviewed crypto research, pinned certificates, PFS, and implemented in a safe language that doesn't buffer overflow.

It's invitation-only right now but many customers are running it in production. We have to position it as a performance technology (and it's very very fast) since security doesn't make sales. But my personal goal is to Make Things Dark Again.

Unfortunately, the hardest part of security work is funding it: performance and business concerns drive a lot of the conversations, which limits how much time is left to work on making bulletproof transport security dead simple for app developers. And it needs to be dead simple to get deployed in enough apps to protect the average user from pervasive surveillance.

But if you believe in the problem, and your employer can create a budget for the problem, I would love to chat with you (email in my profile). There is still time to rewrite the future and keep totalitarian governments out of our communication. We just have to decide we actually care enough to make it happen.

2
Mao_Zedang 2 hours ago 1 reply      
https://en.wikipedia.org/wiki/United_Nations_Human_Rights_Co...

The UN has always been a joke in my eyes because of this.

3
therealasdf 3 hours ago 1 reply      
What bothers me most is that most Saudis are aware that they are being monitored. They assume that if you have nothing to hide, you shouldn't care about privacy.
4
jaryd 3 hours ago 0 replies      
Please add 2013 to the title.
28
Show HN: Turn a markdown document into an interactive tutorial stacktile.io
106 points by danfromberlin  12 hours ago   59 comments top 17
1
spdustin 9 hours ago 2 replies      
If the browsing user selects one of the samples on the site (which can be verified easily with a quick hash on the client side), why on earth are you processing it again? Load shouldn't be an issue here.
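A sketch of that shortcut (the hash function and table entries here are illustrative, not the site's actual code):

  #include <stddef.h>

  /* djb2: a simple, well-known string hash; any stable hash would do. */
  static unsigned long djb2(const char *s) {
      unsigned long h = 5381;
      for (; *s; s++)
          h = h * 33 + (unsigned char)*s;
      return h;
  }

  struct sample { unsigned long hash; const char *cached_html; };

  /* Precomputed (hash, output) pairs for the demo samples; hypothetical. */
  static const struct sample samples[] = {
      { 0xDEADBEEFUL, "<h1>precomputed output</h1>" },
  };

  /* Serve a cached result if the submitted markdown is a known sample. */
  const char *cached_result(const char *markdown) {
      unsigned long h = djb2(markdown);
      for (size_t i = 0; i < sizeof samples / sizeof *samples; i++)
          if (samples[i].hash == h)  /* real code should also compare the
                                        full text to rule out collisions */
              return samples[i].cached_html;
      return NULL;  /* unknown input: process it normally */
  }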

Honestly, users shouldn't even be able to ruin your sales pipeline by interrupting your pitch. Why let us enter any markdown? It's an example; you have a few representative samples, so just take their output, hard-code it, and move on. As it stands now, I probably won't remember to come back to see what this is all about, and waiting for an email isn't really going to increase the odds that I come back.

Just some honest feedback for you. TL;DR: I was intrigued by the first page, put off by the unnecessary wait when you could've hard-coded the response when a user chooses an existing sample.

2
nijiko 7 hours ago 1 reply      
1. Clicked "try it now" for demo.2. Went straight to a "give us your email" page.3. Left and will never go back.
3
pmontra 12 hours ago 2 replies      
I'm getting

  500 error
  [root]$ :(){ :|:& };:
  -bash: fork: Resource temporarily unavailable
Hopefully that second line and bash don't hint at a command execution vulnerability.
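For anyone puzzled by that second line: it's the classic bash fork bomb, a function that pipes into itself in the background until the process table fills. A rough C analogue (for illustration only; don't run either one) makes the mechanism clear:

  /* Rough C analogue of the bash fork bomb `:(){ :|:& };:`. Parent and
     child both keep forking, so the process count grows exponentially
     until fork() fails with EAGAIN, which is exactly the "Resource
     temporarily unavailable" error shown above. */
  #include <unistd.h>

  int main(void) {
      for (;;)
          fork();
      return 0;
  }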

4
BoudewijnE 11 hours ago 2 replies      
> The Data Controller reserves the right to make changes to this privacy policy at any time by giving notice to its Users on this page. It is strongly recommended to check this page often.

I laugh.

5
lima 6 hours ago 0 replies      
The HN title is somewhat of an understatement... This is really cool.

https://stacktile.io/org/ansible/workflows/670a1fda-3372-400...

6
joosters 11 hours ago 1 reply      
The real killer app would be the complete opposite, i.e. turning an interactive tutorial into a README file...
7
afshinmeh 11 hours ago 2 replies      
Is this an email collecting app or what?

I'm clicking on the button to see what this app does, and then it says sign up and we will notify you later.

8
fuzzythinker 1 hour ago 0 replies      
{detail: "CSRF Failed: CSRF token missing or incorrect."}
9
nkjoep 4 hours ago 1 reply      
The usual scam: they ask for your email and try to gather more leads.

> Unfortunately, we are out of free slots at the moment. We're Sorry! If you would like to be notified as soon as we have a free slot available, we invite you to sign up.

I'm really disappointed.

10
erjjones 6 hours ago 1 reply      
#fail on the demo, and then they add the arrogant "We seem to be too popular at the moment ... give me your email".
11
rickycook 11 hours ago 0 replies      
That is one hell of a jump in pricing: free to 239 euro p/month.
12
Freaky 9 hours ago 1 reply      
The markdown text input is borderline unreadable in Chrome on Windows 10: https://i.imgur.com/vTu2J34.png
13
fiatjaf 11 hours ago 2 replies      
Markdown or HTML?

From the example:

 1. <a href="https://app.storj.io/#/signup" target="_blank">Sign up</a> for a Storj account.

14
hbz 12 hours ago 0 replies      
An interactive guide to fork bombing
15
herbst 12 hours ago 0 replies      
Dat fast scroll jacking.
16
speps 12 hours ago 1 reply      
500 then 404...
17
diegorbaquero 11 hours ago 0 replies      
Good idea. Not working though :(
29
Alexa: A dating bot for Facebook Messenger meetalexa.com
15 points by tomikk  2 hours ago   11 comments top 4
1
guelo 42 minutes ago 0 replies      
I really don't understand why chatting is supposed to be a better interface for something like this. Tinder blew up because it innovated the simplest possible UI. The UI here is broken: notice that after the second "like" the user would have had to scroll up to like another one. Or they would have to memorize chat commands like "main menu" or "more women".
2
randylubin 1 hour ago 3 replies      
Branding it with the same name as Amazon Echo's chat agent (also Alexa) seems like trouble...
3
cellis 44 minutes ago 0 replies      
Misspelled "increase". Looks interesting though.
4
jonathankoren 1 hour ago 0 replies      
So it's a menu system.
30
Fast Transaction Log: Windows ayende.com
99 points by jswny  11 hours ago   32 comments top 11
1
rusanu 10 hours ago 1 reply      
> Like quite a bit of other Win32 APIs (WriteGather, for example), it looks tailor made for database journaling.

Indeed, WriteFileGather and ReadFileScatter are specifically tailored for writing from and reading into the buffer pool. The IO unit is the sequential layout of an extent (8 pages of 8KB each), but in memory those pages are not sequential, so they have to be 'scattered' at read and 'gathered' at write.
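A minimal sketch of a gathered write with that API, as I understand the documented requirements (error handling omitted; the no-buffering/overlapped flags, page-aligned buffers, and NULL-terminated segment list are the parts WriteFileGather insists on):

  /* Gather NPAGES scattered in-memory pages into one sequential
     on-disk write. VirtualAlloc returns page-aligned memory, which
     FILE_FLAG_NO_BUFFERING requires. */
  #include <windows.h>

  #define NPAGES 8

  int main(void) {
      SYSTEM_INFO si;
      GetSystemInfo(&si);  /* si.dwPageSize is the system page size */

      HANDLE h = CreateFileA("journal.dat", GENERIC_WRITE, 0, NULL,
                             CREATE_ALWAYS,
                             FILE_FLAG_NO_BUFFERING | FILE_FLAG_OVERLAPPED,
                             NULL);

      FILE_SEGMENT_ELEMENT seg[NPAGES + 1];  /* NULL-terminated list */
      for (int i = 0; i < NPAGES; i++) {
          void *page = VirtualAlloc(NULL, si.dwPageSize,
                                    MEM_COMMIT | MEM_RESERVE,
                                    PAGE_READWRITE);
          seg[i].Buffer = PtrToPtr64(page);
      }
      seg[NPAGES].Buffer = NULL;

      OVERLAPPED ov = {0};  /* write at file offset 0 */
      WriteFileGather(h, seg, NPAGES * si.dwPageSize, NULL, &ov);

      DWORD written;
      GetOverlappedResult(h, &ov, &written, TRUE);  /* wait for the IO */
      CloseHandle(h);
      return 0;
  }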

You also have to keep in mind that the entire IO stack, Windows and SQL Server alike, was designed in the days of spinning media, where sequential access was ~80x faster than random access. SSD media has very different behavior, and I'm not sure the typical 'journaling' IO pattern is capable of driving it to the upper bound of its physical speed.

As a side note, I was close to some folks who worked on ALOJA (http://hadoop.bsc.es/), and I had a very interesting discussion with them: the default configuration for Java/Hadoop provided, out of the box, the best IO performance on Linux. The same configuration was a disaster on Windows, and basically every parameter had to be 'tuned' to achieve decent performance. This paper has some of their conclusions: https://www.bscmsrc.eu/sites/default/files/bsc-msr_aloja.pdf

2
PaulHoule 10 hours ago 3 replies      
Windows is fast at some things and slow at some things.

For instance, metadata reads on the filesystem are much slower on Windows than on Linux, so it takes much longer to do the moral equivalent of "find /".

Circa 2003 I would say the Apache Web Server ran about 10x faster on Linux than Solaris, but that a Solaris mail server was 10x faster than the Linux server.

Turns out that Linux and Apache grew up together to optimize performance for the forked-process model, but that Linux's fsync()-and-friends performance was much worse than Solaris's at the time, if you wanted to meet specifications for reliable delivery.

3
Jedd 10 hours ago 1 reply      
Been a while since I used AWS in anger, but EC2 instances were massively (hair-pullingly) variable from one moment to the next. I can't see any detail in either blog post (GNU/Linux or Microsoft Windows) about how they catered for this, how many runs of their custom benchmark code they did, or what kind of variance they were seeing in each iteration.
4
markbnj 9 hours ago 0 replies      
As a former Windows/C++/C# dev who has been working on linux for five years now, I have never automatically assumed Windows was slower than linux. The main advantages of linux over windows are not in the performance area, imo, but in any case I think you'd have to average a lot of runs to make sure of getting reasonably meaningful numbers from an ec2 instance.
5
zsombor 4 hours ago 2 replies      
The Linux version was benchmarked with gettimeofday() while the Windows one used QueryPerformanceCounter. The former has a much lower resolution (on the order of 10 microseconds), so the benchmarks are not directly comparable.
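If so, the Linux side is easy to fix. A sketch using clock_gettime(CLOCK_MONOTONIC), which is the closer analogue of QueryPerformanceCounter (link with -lrt on older glibc):

  /* Monotonic, nanosecond-resolution timing on Linux. */
  #include <stdio.h>
  #include <time.h>

  static double elapsed_ms(struct timespec a, struct timespec b) {
      return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
  }

  int main(void) {
      struct timespec start, end;
      clock_gettime(CLOCK_MONOTONIC, &start);
      /* ... workload under test ... */
      clock_gettime(CLOCK_MONOTONIC, &end);
      printf("%.6f ms\n", elapsed_ms(start, end));
      return 0;
  }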
6
ComodoHacker 7 hours ago 0 replies      
The site is having problems. Google cache: Windows benchmark[1], Linux benchmark[2].

1. http://webcache.googleusercontent.com/search?q=cache%3Ahttps...

2. http://webcache.googleusercontent.com/search?hl=en&q=cache%3...

7
noja 11 hours ago 1 reply      
An 80% performance difference? Something doesn't seem right here.
8
MrZipf 5 hours ago 1 reply      
Windows also has the cool and little-known Common Log File System (CLFS) if you need logging:

https://en.wikipedia.org/wiki/Common_Log_File_System

https://msdn.microsoft.com/en-us/library/windows/desktop/bb9...

9
justinsaccount 10 hours ago 1 reply      
Either I am reading this wrong or something is not right here.

Buffered:

windows = 0.006
 linux = 0.03

An 80% win for Windows?

But where do those numbers come from?

The time in ms for Linux was 522; the time for Windows was 410. That's not an 80% win.

Where does the "Write cost" number come from?

In general, for the other numbers, I don't think they are comparing the same things. I don't think it is a coincidence that the two systems both had write times of about 10s and 20s across the different tests. Where Linux took 20s and Windows took 10s, I'd bet they were comparing different behaviors.
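A quick check of the arithmetic, using only the figures quoted above, suggests where the 80% comes from:

  /* The 80% figure matches the per-write "cost" column, not the
     wall-clock times. */
  #include <stdio.h>

  int main(void) {
      double linux_cost = 0.03,  windows_cost = 0.006;  /* buffered write cost */
      double linux_ms   = 522.0, windows_ms   = 410.0;  /* reported run times  */

      printf("cost: %.0f%% lower on Windows\n",
             (1 - windows_cost / linux_cost) * 100);  /* prints 80% */
      printf("time: %.0f%% faster on Windows\n",
             (1 - windows_ms / linux_ms) * 100);      /* prints 21% */
      return 0;
  }

So the headline 80% appears to be the gap in per-write cost (0.006 vs 0.03 is a 5x difference), while the elapsed times differ by only about 21%.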

10
UK-AL 11 hours ago 0 replies      
It's well known that Windows performs better in these sorts of situations, probably because of the reasons mentioned: they have their own products that rely on good performance along these code paths.
11
zihotki 11 hours ago 1 reply      