Hacker News with inline top comments - 29 Aug 2016
1
Hunter S. Thompson On Finding Your Purpose (1958) tranquilmonkey.com
134 points by zeeshanm  3 ago   30 comments top 11
1
AndrewKemendo 1 ago 1 reply      
I remember reading this in college and coming away with the feeling that what he was proposing was impossible.

I didn't know what my Abilities were or how they related to those of the others around me such that I could use them effectively.

Not only that, I didn't even know what I desired aside from food and sex.

Luckily I was on a career trajectory and had quite a bit of experience ahead of me, so I took solace in the idea that world travel, responsibilities etc... would flesh a lot of those abilities and desires out for me.

It's been 13 years since, and I still have no idea where my Abilities lie or what I Desire aside from food and sex.

I like his suggestion to read Sartre, though I suggest Camus instead.

2
terryf 3 ago 0 replies      
TL;DR: Finding the correct path to take in life is the correct path to take. Cleverly self-referential, yet not impossible. Succinct.
3
giis 2 ago 2 replies      
"In short, he has not dedicated his life to reaching a pre-defined goal, but he has rather chosen a way of life he KNOWS he will enjoy. The goal is absolutely secondary: it is the functioning toward the goal which is important."

I find this quite new & profound.

4
unexistance 2 ago 3 replies      
"no one HAS to do something he doesnt want to do for the rest of his life. But then again, if thats what you wind up doing, by all means convince yourself that you HAD to do it. Youll have lots of company."

Conformity

5
Tadlos 28 ago 0 replies      

 Your time is limited, so don't waste it living someone else's life. Don't be trapped by dogma - which is living with the results of other people's thinking. Don't let the noise of others' opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition. -- Steve Jobs

6
e15ctr0n 30 ago 0 replies      
For those not familiar with Hunter S. Thompson and looking to get an understanding of his outsize personality and cult following, I highly recommend watching the 2 movies in which Johnny Depp played his character:

* Fear and Loathing in Las Vegas (1998) http://www.imdb.com/title/tt0120669/

* The Rum Diary (2011) http://www.imdb.com/title/tt0376136/

If you are not already a fan of Johnny Depp, these movies should do it.

7
amist 1 ago 1 reply      
I prefer Mike Rowe's opinion on that subject.

http://youtu.be/CVEuPmVAb8o

8
SCAQTony 49 ago 0 replies      
"Man is condemned to be free; because once thrown into the world, he is responsible for everything he does" Jean Paul Sartre (Existentialist philosopher)

Following your passion[s] may or may not be bad advice, but predicting the consequences of doing so should be measured. Life has a way of making that self-evident.

9
S_Daedalus 2 ago 0 replies      
"All advice can only be a product of the man who gives it"

That's on a par with, "Buy the ticket, take the ride." I think.

10
louprado 2 ago 0 replies      
"But beware of looking for goals: look for a way of life".

So says every philosopher that ever lived.

11
chebastian 2 ago 1 reply      
Was about to repost this but then I saw "10 foods that most def will give you cancer". That host page gave me cancer.
2
Motion capture of recorded parkour using rotoscoping techniques moral.net.au
33 points by ropable  1 ago   1 comment top
1
bitwize 15 ago 0 replies      
Rotoscoping has been used extensively by Neill Blomkamp to deliver realistic 3D animation in his films: CG characters such as Christopher Johnson (insectoid alien) and Chappie (robot) were animated by rotoscoping an actor's on-set (or on-location) performance rather than suiting the actor up in one of those ping-pong-ball suits and having them gesticulate on an imaginary set in front of a green screen.
3
Principles for Programming Languages for Learners acm.org
104 points by akkartik  5 ago   46 comments top 5
1
teach 4 ago 6 replies      
I've been teaching kids to code for 19 years now, and I still make them do it the hard way.

Just two days ago this year's crop of 83 students wrote their first Java programs in Notepad and compiled and ran them from the command line.

And I have a waiting list to get into my class.

I do work with older students (aged 14-18) and the key is TREMENDOUS support. I'll happily show them the command-line stuff over and over again for a month if need be.

And my curriculum goes through the basics of Java very slowly. As I've said on HN before, I make my students code FizzBuzz, but they will have literally done 106 complete programming projects before it.
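
(For readers who don't know the exercise: a minimal FizzBuzz sketch, written here in TypeScript purely for illustration, although the class described above uses Java.)

  // Print 1..100, replacing multiples of 3 with "Fizz", multiples of 5 with
  // "Buzz", and multiples of both with "FizzBuzz".
  for (let i = 1; i <= 100; i++) {
    let out = "";
    if (i % 3 === 0) out += "Fizz";
    if (i % 5 === 0) out += "Buzz";
    console.log(out || String(i));
  }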

2
parfe 3 ago 3 replies      
If Excel isn't the first item on your list for teaching programming, you're already out of touch.

Let's pretend you're a high schooler volunteering at a charity.

"Donors who attended our last five fundraisers?"

"Email list for our whale donors from this fiscal year?" (>100, 1000, 100000+ whatever)

"Who are our best fundraisers?"

"Was the dinner last week a success?"

What are the odds the student who can answer these questions is helped by a teacher who ticks off any of the five principles in the submission? Logo? People aren't drawing basic geometric shapes. "Cognitive load" referencing types? I'm not even sure how to address this. Any beginner expects 1/2 to display 0.5. That entire paragraph is nonsense. Be honest: OK, learn how to excel in your field, with programming/Excel/SQL/etc. as tools to succeed at your job. "Computer Science" isn't programming and it isn't a job. Never confuse any of those. #5 is just generic filler for being a not-bad teacher.

3
bluetwo 3 ago 2 replies      
Where do students learn how to understand a problem, break it down into parts, document requirements, and plan development?
4
gameofdrones 3 ago 0 replies      
Ahh, fondly remembering LOGO for the Apple II in grammar school when I was about 6.
5
known 2 ago 1 reply      
4
Zig: a system language which prioritizes optimality, safety, and readability ziglang.org
134 points by vmorgulis  8 ago   92 comments top 21
1
kzhahou 3 ago 1 reply      
Almost half the comments here are just to remind everyone that C can/will never be replaced. Is that really the most insightful we can be? Is HN so pedantic now that we can't analyze an interesting language idea, but just nitpick at its existence?
2
girvo 5 ago 0 replies      
Interesting language. Somewhat reminds me of "ooc"[0], though I believe ooc is ref-counted/garbage-collected (? I think. I can't remember, been a while since I've touched it); but being able to easily use C libraries from a relatively low-level language that has nicer high-level constructs is still a beautiful idea, in my opinion.

The language I'm using these days for that exact feature set though is Nim[1] -- being able to use {.compile.} pragmas and bring in header/source files, along with the great C type support is wonderful; but again, garbage collector (albeit one that can be tuned and/or turned off). Zig seems to be targeting the "true C replacement" niche, which I'm going to have to keep an eye on!

[0] http://ooc-lang.org/

[1] http://nim-lang.org/

3
ocschwar 6 ago 7 replies      
Once you've made the decision to leave C for a systems programming project, why would one not just go over to Rust?
4
CaliforniaKarl 5 ago 3 replies      
I don't know. C is very close to the hardware, and it's pretty portable, too.

I'd suggest doing what some other languages do, and get yourself to the point where almost all of the code for Zig is written in Zig.

In other words, you'd first write a `microzig` that's written in C++, and microzig knows enough to make `minizig`, which knows enough to make the rest of zig. This is what Perl does, and I expect other languages do the same thing.

On the other hand, zig is supposed to be able to cross-compile really well, so maybe you can skip that: Have version 0.9.9 be the last version written in C++. Then, for version 1.0, re-write the entire toolchain in Zig, and use the 0.9.9 compiler to compile Zig 1.0. At that point you are in full dogfood mode.

Finally, since it's called Zig, I get to close with this sentence: "You have no chance to survive make your time."

5
pmontra 1 ago 0 replies      
How is the goal of readability being addressed? I didn't find it particularly readable. Basically it's C with better types.

A few suggestions about the syntax, based on what many languages did in the last 25 years.

1) No mandatory () around conditionals. Make {} mandatory even for one liners instead. They remove bugs and are a common suggestion in the style guides of several languages.

2) The multiline string syntax is verbose and uninviting. Use heredocs or a string delimiter reserved for that purpose.

3) Try to do without the ; line terminator

As a huge bonus, to reduce clutter, try to implement automatic import. I don't know of any language doing it, it's IDE territory right now (or editor's [1]). Still it's very useful because there is little as boring as going through a long list of imports at the beginning of each file. They should be there only when there is an ambiguity to resolve.

About readability, those {var v; while (cond) {}} blocks in the examples are puzzling. Finding a }} alone on a line was an "is this a syntax error?" moment, or just bad style.

Anyway, it seems to have generics so it's already ahead of Go, which seems to be stuck in the 60s regarding this feature (generics were pioneered by ML in 1973 according to [2]). For the rest, it's in the average of what we've seen in the past decades, so not shiny but not bad. Plenty of successful languages are like that and maybe it's a reason for their success: being average, they don't scare away people. I'm thinking especially of Python, which smells oddly of C with its __underscores__ and the explicit self argument, reminding me of how we used to do OO in C by passing around struct pointers (those structs were the objects, holding data and function pointers to methods).

[1] http://www.vim.org/scripts/script.php?script_id=325

[2] https://en.wikipedia.org/wiki/Generic_programming

6
dsego 1 ago 0 replies      
7
adrianratnapala 2 ago 1 reply      
Well I am impressed. And I am very glad to see more and more languages in this space. I particularly like that Zig seems to be minimalistic -- a lot more fun to look at (for me) than C++ or Rust.

But I didn't see anything in Zig similar to Rust's lifetimes. Well it's nice to be rid of that complexity, but I don't see how you are going to do C-like pointer stuff safely without them.

Can anyone explain what these languages (e.g. Zig, Myrddin, Nim) do instead?

8
Gankro 4 ago 1 reply      
I can't seem to find any details on safety other than

> Safe: Optimality may be sitting in the driver's seat, but safety is sitting in the passenger's seat, wearing its seatbelt, and asking nicely for the other passengers to do the same.

Is Zig memory-safe? How? (Specifically, is there some useful safe subset ignoring the obvious FFI and "explicitly unsafe operations" exceptions every language gets?)

If it's not, is that even a goal?

9
qwertyuiop924 6 ago 5 replies      
The thing is, you will never replace C. Supplement it, extend it, yes, but not replace it.

The fact is, C is still probably the lowest-level HLL still in use. It's the lowest level you can go while still having the HL constructs we're all used to. There is always going to be code that wants or needs to be there. Code where anything further up either can't give the performance, or provides impedance to the design or actual goal.

You still want to replace C? Fine. What will you write your bootstrapping compiler in? :-D.

10
web007 1 ago 0 replies      

 return if (value >= radix) error.InvalidChar else value;
That is just painful to parse. Suffix-If sometimes makes sense and simplifies things, but it always makes code flow more difficult to understand.

11
z0xcd 6 ago 1 reply      
One question: Why is Zig implemented in C++? What was the reason behind the choice?
12
gragas 6 ago 1 reply      
Obligatory: why is Zig meant to replace C? Why is it a better replacement than Rust (when Rust is considered a C replacement)?

Interesting work nonetheless.

13
Koshkin 4 ago 0 replies      
C is beautiful in its way and it is rather minimalist from the syntax point of view (especially the pre-C99 variants). Judging by the code examples shown on the page, this new language has a pretty noisy (compared to C, downright ugly) syntax. Granted, it may be dictated by some logic and possibly even its inner beauty, but on the surface it does not seem to be much of an improvement compared to the languages we already know and love (or love to hate), including Perl and JavaScript.
14
fspeech 2 ago 1 reply      
The first thing I would want to use something like this for is to write C API for a C++ library. Is that possible with the current state of Zig?
15
wyager 5 ago 1 reply      
> Maybe type instead of null pointers.

It's often a bad sign when a language advertises this high up on its features list. It means that they didn't really get the true takeaway of the Maybe type (which is that you should support algebraic data types to properly constrain data to valid states), but instead saw a single specific use case of ADTs (Maybe) and thought that's all there was to it.

I've run into this with Java 8. Optional is pretty common now and has eliminated the need for the null pointer in much of everyday code, but they still don't have something like Either to eliminate the need for exceptions. Maybe is extremely useful, but it's a small fraction of the usefulness you get with true ADT support.
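
To make the ADT point concrete, a minimal sketch (TypeScript here purely for illustration, not Zig or Java 8): Maybe is one algebraic data type, and Either is another that can stand in for exceptions. The parseIntSafe helper is hypothetical.

  // Two ADTs as tagged unions: Maybe models "value or nothing",
  // Either models "result or error" without throwing.
  type Maybe<T> = { kind: "nothing" } | { kind: "just"; value: T };
  type Either<E, A> = { kind: "left"; error: E } | { kind: "right"; value: A };

  function parseIntSafe(s: string): Either<string, number> {
    const n = Number.parseInt(s, 10);
    return Number.isNaN(n)
      ? { kind: "left", error: `not a number: ${s}` }
      : { kind: "right", value: n };
  }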

16
parenthephobia 6 ago 1 reply      
Syntax looks very Rust-inspired, but it lacks Rust's OCD. I also catch hints of other syntaxes, like Ruby/Smalltalk-style block parameters, and a defer very much like Go's.

I find it interesting that it implements generics by passing types as normal arguments. Say, `list(i32, 1, 2, 3)` rather than `list<i32>(1, 2, 3)`.

17
ajarmst 5 ago 0 replies      
Intend in one hand, and ... well, you get the picture.
18
sotojuan 6 ago 0 replies      
Cool to see a fellow Recurser in HN!
19
matthewhall 5 ago 1 reply      
Not with that logo it won't.
20
necessity 6 ago 3 replies      
Don't fix what's not broken.
21
yyhhsj0521 6 ago 1 reply      
Is Zig compatible with C? No? End of the story.

Edit: I know that you can include a C header in Zig, and cross-compilation is possible and made easy. But you can't continue developing current C projects if you switch to Zig. I guess that you have to change the Makefiles too.

5
The World's Largest Pyramid Is a Hidden Mountain in Mexico bbc.com
43 points by Cozumel  4 ago   13 comments top 5
1
goldscott 2 ago 2 replies      
I was here back in February. You can walk through some of the excavated tunnels, but most are closed off.

Wikipedia has a good image showing the sizes of different pyramids compared: https://upload.wikimedia.org/wikipedia/commons/3/3e/Comparis...

Note that Cholula has the largest base, but isn't the tallest.

2
insulanian 44 ago 1 reply      
> ... it's the largest pyramid on the planet...

There seems to be a larger one in Europe - Bosnia: http://piramidasunca.ba

3
ap3 2 ago 3 replies      
- the pyramid in Chichén Itzá (Mayan) also has a smaller pyramid inside

- the Spanish continuously built church temples on top of the natives' pyramids. Like a cruel switch of religions. The Mexico City Basilica sits on top of Aztec ruins.

4
b34r 1 ago 0 replies      
I think I remember reading a fictional book about this. Something about the pyramid being a necropolis and there being a machine taking you up to a different area that was supposed to be the "heaven" in the duality. Alien God thing with blue skin... I think it was related to the "Excavation" series.
5
hackaflocka 1 ago 1 reply      
I just want to clarify that we are talking about mass here. In terms of height, it's much much lower than the pyramids in Giza.

When I first heard about that pyramid, I was a little confused because it took me a while to realize this.

6
Pythran: a Python to C++ compiler with a focus on scientific computing pythonhosted.org
51 points by vmorgulis  6 ago   29 comments top 5
1
m_mueller 5 ago 2 replies      
What is the advantage of doing this compared to writing Fortran (90+) kernels, use F2Py bindings and combining them with NumPy? For HPC you generally need to be able to have full control over compiler flags. Plus, Fortran already has way better multi-dim array handling built-in - NumPy is basically just a wrapper around that.
2
nerdponx 5 ago 1 reply      
What does this do that I can't already do with some combination of Numpy, Cython, and Numba?
3
akandiah 5 ago 3 replies      
The problem with a lot of these optimizing tools for Python is that they only support Python 2.7, not Python 3.
4
_ZeD_ 1 ago 1 reply      
What's the difference between this and nuitka?
5
vonnik 5 ago 0 replies      
7
Housing Plans in California & New York face resistance from construction unions wsj.com
33 points by jseliger  5 ago   30 comments top 4
1
twblalock 4 ago 2 replies      
Unions shot themselves in the foot here. Some of the new construction projects would have been run by union shops. Now, they won't happen at all. Net result: fewer union jobs than there could have been.

At some point, the unions will need to realize that stuff is going to get done with or without them. They can be part of the process, or not. If they are part of the process, they get some jobs out of it. This should have been a no-brainer.

2
gameofdrones 3 ago 2 replies      
This one instance might be so (unions are imperfect human endeavors), but this article comes across as thinly-veiled MSM/establishment strawman/false equivocation via the pernicious, irrational, data-free worldview which completely ignores the net positive force unions had in the bloody struggle for worker pay and working conditions in the 19th and 20th centuries.

See also: "Inequality for All" and "Where to Invade Next?"

3
cpncrunch 4 ago 4 replies      
Impossible to read without subscribing. Incognito doesn't work. Clicking on the Google search link doesn't work.
4
joe_the_user 3 ago 2 replies      
"In California last week, legislators and interest groups declared dead a measure pushed by Gov. Jerry Brown to allow certain apartments with some low-income units to sidestep the states environmental review process. "

It sounds like unions objected to using "low income housing" as an excuse to "sidestep the state's environmental review process." I would also.

Environmental review shouldn't be an excuse to engage in NIMBYism, but environmental review is important to prevent projects that are environmentally destructive.

It sounds like every interest group involved here is using "low income housing" to ram through the "reforms" they're after.

8
There are no particles, there are only fields (2012) arxiv.org
213 points by monort  13 ago   108 comments top 19
1
atemerev 10 ago 5 replies      
And in another 100 years or so, this will finally make it to textbooks...

Quantum field theory is weird, but there are much more compelling analogies in the classical world than particles. (Feynman was a fan of particles, but I presume he was aware of the problems with this representation).

When you speak of fields and wave packets, you eliminate the uncertainty principle, and the double-slit experiment is no longer a paradox - no small feat to achieve.

2
okket 12 ago 2 replies      
Sean M. Carroll always mentions this fact when he talks about QFT; his talks can be very entertaining, like this one from 2013:

"Particles, Fields and The Future of Physics"

https://www.youtube.com/watch?v=gEKSpZPByD0

(Audio starts at 19 sec, Lecture starts at 2:00)

3
ScottBurson 11 ago 1 reply      
Interesting that the paper starts by attacking "quantum mysticism". Seems to me that the argument it's making renders quantum mysticism easier to believe rather than harder. The concept of particles, after all, appeals to our Newtonian "billiard ball" intuitions; particles are the essence of locality, and our intuitions suggest that a particle universe should be deterministic.

On the other hand, if particles are epiphenomenal, and everything is really infinite fields which only have a certain probability of interacting in certain ways, it seems like, intuitively, there's a lot more room for consciousness to influence those fields in a nonlocal manner. No?

Just playing devil's advocate here :-)

4
kkylin 2 ago 1 reply      
The abstract already lost me: "Thus the Schroedinger field is a space-filling physical field whose value at any spatial point is the probability amplitude for an interaction to occur at that point." But the wave function lives on the configuration space of the system: if you have $N$ particles, the wave function lives on $R^{3N}$. In what way is this a "space-filling physical field"? Admittedly I haven't had time to do more than skim the article; perhaps it's explained more carefully later on.

(Off-topic, but since this has come up a number of times on HN: this point is also where Bohmian pilot wave theory has never been wholly satisfying for me. If you accept the pilot wave picture, then the double slit loses a little bit of its mystery, but many-body theory still seems just as weird as before.)
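
In symbols, the contrast being drawn (a standard textbook statement, not taken from the paper itself) is between an N-particle wave function on configuration space and a field defined on ordinary space:

  $$\Psi(\mathbf{r}_1,\ldots,\mathbf{r}_N,t)\colon \mathbb{R}^{3N}\times\mathbb{R}\to\mathbb{C} \qquad\text{vs.}\qquad \phi(\mathbf{x},t),\quad \mathbf{x}\in\mathbb{R}^{3}.$$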

5
datihein 12 ago 2 replies      
This article did get published in the American Journal of Physics, and there was some back and forth discussion also published in the Journal. Unfortunately, the published version and the ensuing discussion is effectively inaccessible ... they want 30 USD from me to read each published response.
6
gpsx 5 ago 2 replies      
From the paper, at the top of page 10 of the PDF, at the end of section A:

"Some authors conclude, incorrectly, that the countability of quanta implies aparticle interpretation of the quantized system. Discreteness is a necessary butnot sufficient condition for particles. Quanta are countable, but they are spatiallyextended and certainly not particles. Eq. (3) implies that a single mode's spatialdependence is sinusoidal and fills all space, so that adding a monochromaticquantum to a field uniformly increases the entire field's energy (uniformlydistributed throughout all space!) by hf. This is nothing like adding a particle.Quanta that are superpositions of different frequencies can be more spatiallybunched and in this sense more localized, but they are always of infinite extent. Soit's hard to see how photons could be particles."

As mentioned above, you can take linear combinations of these different single particle states at different energies and come up with various energy/location spreads. Doesn't one such combination have a spatial spread of zero? This would correspond to a single quanta at a single location in space.

My physics may be a bit rusty since I have been out for a while. Combining the different frequency components from the different field configurations is not _exactly_ the same as simple Fourier analysis, on the face of it. However, the individual contribution from a given field configuration (meaning a single frequency) is very small since there are so many different field configurations contributing (an infinite number). I believe the Fourier result does apply to the expectation value of the particle's location here.

If I am thinking correctly this seems to be a very critical error in the paper. Someone correct me if I am wrong.

* * *

EDIT: I believe I said something incorrectly. Where I said "I believe the Fourier result does apply to the expectation value of the particle's location here," I meant to make a stronger statement: "I believe the Fourier result does apply to the effective value of the particle's wave function at this location in this case." (The expectation value being zero would not mean the field does not extend to that location.)

7
andrewflnr 11 ago 0 replies      
That was eye-opening, and not just with regard to the titular subject. I'd never thought of energy stored in fields as a consequence of energy conservation.
8
WhitneyLand 4 ago 0 replies      
Here another physicist challenges Hobson for not respecting realism, and he has a pretty good comeback:

http://physics.uark.edu/hobson/pubs/13.09.a.AJP.pdf

9
Ono-Sendai 8 ago 5 replies      
Generally I think the field idea is more plausible than the particle idea. But I think there are some things that the field can't explain yet (to my satisfaction at least). Why, whenever we measure the charge of an electron, do we measure the same value? Why not one half, or one third of that charge sometimes? After all, if an electron is just a disturbance in a field, why might we not capture just part of that field in our measuring apparatus?

This is of course trivially explained by the particle idea.

10
platz 9 ago 1 reply      
As water waves are 'epiphenomena'/emergent from the underlying form, are the 'waves' that are used to describe light also epiphenomena (i.e. emergent), or are light waves the EM field exactly? If the latter, I don't see how to interpret the photoelectric effect.
11
kmm 12 ago 3 replies      
I always enjoy a discussion about semantics, but only when both parties are very clear about the fact that it's semantics they're talking about, and not fundamental nature of existence. I'm very wary of people trying to use physics to further an ontology, as physics almost by definition allows for multiple, completely equivalent descriptions of reality. That's not to say I think physics teaches us nothing about how the universe really works, but I don't think you can conclude from his interpretation of the mathematics of QFT that particles (whatever they are) don't exist, just as much as the Fermat principle[0] doesn't imply that light has a sentient mind which seeks out the shortest path. There exists a consistent, fully equivalent interpretation of (non-relativistic or relativistic) quantum mechanics that includes particles at the core of its ontology, Bohmian mechanics[1]. I'm personally not an adherent of it, but it shows that by nature, it's very hard to use physics to show what something fundamentally is.

Besides, the article doesn't define clearly what it means by particle, which is a priori just an English word, nor does it justify it well. I don't share the authors' problem with the excitations of a field being spread out all over the universe (by virtue of them being momentum-eigenstates). It's discrete, has a mass, has a momentum, and energy and interacts as a whole. The article calls these properties necessary but not sufficient, but doesn't explain why this doesn't suffice.

Particles are at least a useful abstraction. They emerge naturally at the classical level, interactions between fields are even at extremely high energy levels still very localised, electrons "scatter" a lot like they're bouncing off other particles, they leave neat tracks in bubble chambers, excitations of fields are discrete even at the lowest level, ... Feynman diagrams[2] are extremely handy, even if they don't "actually" happen, but are just a term in the series expansion of an interaction Hamiltonian between two fields.

What's the use of contorting oneself to the limit to fit every observation in a single mold, a field? Sure, classical particles are nothing like what we see at the quantum level, but classical fields are absolutely nothing like the fields in quantum field theory either. Why pick one term over the other?

0: https://en.wikipedia.org/wiki/Fermat%27s_principle
1: https://en.wikipedia.org/wiki/De_Broglie%E2%80%93Bohm_theory
2: https://upload.wikimedia.org/wikipedia/en/f/fb/Feynman-diagr...

12
Animats 11 ago 1 reply      
There is only the mathematics. "Shut up and calculate", as one physicist put it.

Here's Feynman talking about it.[1] "We interpret the intensity of the wave as the probability of finding a photon".

[1] https://www.youtube.com/watch?v=_7OEzyEfzgg

13
Gnarl 8 ago 0 replies      
Nothing new. This guy published a book about it in 2006: http://transfinitemind.com/tapestryindex.php
14
dschiptsov 1 ago 0 replies      
Particles are just the result of [self-centric] human perception bias and the resulting naive concept of matter or a substance as perceptual conditioning, evolved in a certain physical environment (everything that we are, including our mind and consciousness, is shaped by the environment, and our perception of reality, in turn, is shaped by our mind, conditioned by perception).

We think that there are solids and atoms - grains of matter - basic building blocks. This model corresponds to what our sense organs give us. It is hard to convince oneself that there is no matter at all, only energy and our perception grossly zoomed out of what is really going on.

Matter is an appearance to perception. A wrong model due to limitations of the sensory system. There is no matter when there is no observer, only states of energy, or fields, which is a better concept, but still a mere concept. We could say that atoms are "stable" fields, but it is much better not to apply "human" predicates to nature.

Particles are a good-enough model, which allowed us to sequence a genome or build a CPU, but a crude model nevertheless.

15
dmfdmf 6 ago 0 replies      
Platonism and Kantianism are still alive and strong in Western thought.
16
S_Daedalus 12 ago 0 replies      
It's either this, or something like objective collapse really does happen, physically, which seems less likely.
17
rrggrr 8 ago 0 replies      
Can someone ELI5 or possibly 15 on this for me?
18
fu9ar 8 ago 0 replies      
the map is not the territory.

the model is not Absolute, but it gives us a really, really good idea.

19
sbussard 9 ago 4 replies      
Anyone who's studied beginning grad level physics should know that. I don't get why it's trending on HN
9
RE:DOM - Tiny DOM library js.org
86 points by pkstn  9 ago   53 comments top 16
1
ams6110 7 ago 5 replies      
Honest question, how is:

 el.email = input(props({ type: 'email' }))
better or easier than:

 <input type="email">
I think the example needs to be more compelling.

2
rocky1138 6 ago 3 replies      
Why are we creating HTML from within Javascript?

It would be much better to use HTML for that and simply toggle the display/visibility of DOM elements via CSS or JS.
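
A minimal sketch of that alternative (plain TypeScript/DOM, not RE:DOM; the element id and class name are hypothetical):

  // Keep the markup in HTML; the script only flips visibility.
  const form = document.getElementById("signin-form");
  if (form instanceof HTMLElement) {
    form.classList.toggle("hidden"); // paired with a .hidden { display: none; } rule in CSS
  }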

3
tjallingt 8 ago 5 replies      
I've never seen this before:

 children(el => [
   el.email = input(props({ type: 'email' })),
   el.pass = input(props({ type: 'pass' })),
   el.submit = button(text('Sign in'))
 ])
Can someone explain what this does and why it is used here? As I understand it, this function both modifies the `el` object (whatever that is) and returns an array containing the input elements, but why would you want to do both those things at the same time?
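
For what it's worth, the mechanics rely on assignment being an expression in JavaScript: each `el.x = ...` evaluates to the assigned value, so it both stores a reference on `el` and contributes an entry to the returned array. A minimal sketch (plain TypeScript; the string values are hypothetical stand-ins for the real elements):

  const el: Record<string, string> = {};
  const children = [
    (el.email = "input[email]"), // an assignment evaluates to the assigned value
    (el.pass = "input[pass]"),
  ];
  console.log(el.email);   // "input[email]"
  console.log(children);   // ["input[email]", "input[pass]"]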

4
voiper1 2 ago 1 reply      
RiotJS lets you keep the "normal" HTML and just add easily-called functions. Looks much simpler and more "normal" to me than all this React stuff and HTML generation from JavaScript.
5
rajangdavis 7 ago 0 replies      
Reminds me a little bit of Mithril...

Edit: Not sure why I got downvoted, but if you look at the Mithril API (sample code on http://mithril.js.org/), it's very similar to this RE:DOM library with the major difference being that there is a controller function in the Mithril example and that Mithril is more of a functional language.

6
lexicality 7 ago 3 replies      
I'm torn.

On the one hand this seems far too clever for its own good and in the same vein as CoffeeScript, where it's really easy for you to write efficient & compact code which somehow turns into complete garbage when you try to read it two weeks later.

On the other hand it looks super neat, has 0 dependencies and looks like React but without requiring any kind of build system.

I think I might need to try it in a side project.

7
tasnimreza 1 ago 1 reply      
Still I don't understand why people write simple HTML in a complex way!
8
TazeTSchnitzel 7 ago 0 replies      
I've made a similar sort of DOM DSL: https://github.com/TazeTSchnitzel/jInsert

e.g.

 document.body.appendChild($('form', {action: '/submit', method: 'POST'}, [
   $('input', {type: 'text', name: 'username'}),
   $('br'),
   $('input', {type: 'password', name: 'password'}),
   $('br'),
   $('input', {type: 'submit'})
 ]));
RE:DOM looks cool too.

Since using Haskell's Blaze, I've wanted something close in other languages.

9
syntex 6 ago 1 reply      
There is a trend in the JavaScript community to use ES2015 for everything, and sometimes developers forget to keep a very fragile balance between expressiveness and simplicity.
10
RubyPinch 5 ago 0 replies      
Are there any compile-to-JavaScript languages which would be termed "disgusting" and "impure"?

https://github.com/dropbox/pyxl for python comes to mind as something more enjoyable than manually doing tree-shaped OOP

 login = : //"read variable from indented block" symbol of magicness <form> <input type=email> <input type=pass> <button>{_("Sign in")}<button> //gotta have an excuse to inline code! </form> login.events({ onsubmit (whatever, i dunno) { blap } });
seems a lot more fun!

11
korynunn 4 ago 1 reply      
12
snehesht 7 ago 0 replies      
The website's UI looks great. Simple and to the point.
13
chuangbo 8 ago 0 replies      
Good try. Although the jQuery chaining API seems better to me.
14
aurelianito 4 ago 0 replies      
Why is this better than working with d3 selections?
15
codedokode 2 ago 1 reply      
Unreadable.
16
daviddahl 6 ago 0 replies      
Nice! reminds me of hyperscript-helpers
10
Farmers Plant Beehive Guard Posts to Repel Elephants 99percentinvisible.org
72 points by samsolomon  9 ago   5 comments top
1
threesixandnine 8 ago 1 reply      
I kept beehives years ago and also tried Kenyan top-bar style beehives. The bees in those were the most productive and most gentle. I'd love to see those again, and at this very moment I am itching to get at least 2 beehives and start beekeeping again.
11
Major red flags to look out for when choosing to work for a startup medium.com
330 points by davidkhess  5 ago   124 comments top 38
1
harwoodleon 1 ago 0 replies      
Penny, if you do end up reading the comments on here, be sure of a few things:

You have generated more traffic to their site than Jess ever will.

You are not alone in your experiences.

You are obviously a great professional and you will probably do well.

You have shamed the company into probable closure (which is a good thing) as the guy would get worse with more money and bigger teams.

For everyone involved in a startup there are huge risks. Most of the risks are borne by the founders, they get handsomely rewarded if things go well.

But they often don't. I wish the startup community would take heed at this great story and be more honest about the risks, instead of following the hype train.

Macho ego bullshit is to blame for a lot of wasted effort.

But it's really good to hear that you are back with your cat.

2
GuiA 3 ago 5 replies      
Welcome to the club. It's pretty much a rite of passage here to spend some time with a psychopath VC, a completely self absorbed CTO with a rich investor dad that fuels his fantasies, or an idiotic CEO with an ego problem, and to pay the price for it (just time if you're lucky, time+money if you're not).

I've encountered this myself several times (down to the CEO using "hire fast, fire fast" as a mantra), and judging from my friends' stories, most people have or will. When there are large amounts of money at stake, I guess it makes sense for charlatans and sharks to flock.

There are very few ways to tell accurately from the outset at first who's going to screw you over - I've heard horror stories of the sort from friends at startups backed by high profile entities like YC, famous startups that are often in the media for being "the best places to work at", companies with celebrity founders who have reputations for being "the nicest guys in the world", etc.

It's the kind of thing that you can only learn through a few painful experiences, I think. You do learn your lessons: never pay in advance for anything, don't put your own savings or core livelihood on the line for someone else's dreams, get everything in writing, talk to former employees of a company/colleagues of a founder before getting involved, never ever assume that what you have is anything more than an employer/employee relationship, etc.

I have to say that I'm impressed by how the poster handled it - keeping documentation, filing wage claims, etc. - the only thing she could have done better was not staying on so long when she wasn't getting paid, but it's an understandable mistake when you're in the moment.

I for one am glad I learned my lessons at 22 rather than at 45 with a family to care for and a mortgage to pay. The upside is that there are plenty of genuinely nice, passionate people - when you find them, stay close to them.

3
Animats 54 ago 0 replies      
The fake wire transfer receipt is fraud. That's a criminal offense. Have a talk with the local DA. California has a large prison system, with cell space available.

I've never heard of a startup going that far.

5
cyberferret 4 ago 1 reply      
Clicked on this expecting to see another millennial self-entitled "why I left" rant, but found a well-written, horrifying article on deception and underhanded shenanigans.

The CEO of this startup qualifies to be listed under the other discussion on here about psychopaths running companies.

6
uiri 2 ago 1 reply      
I think the title should be changed back. The author glosses over the single, real biggest red flag:

She never received any paystubs, and the company was already late on payroll before she was hired

I have never, ever been paid without either a paystub (as an employee) or generating an invoice which the business then paid (as a freelancer). The outright refusal to share how they arrived at the figure on her cashier's cheque should have sent her looking for a new job in the Bay Area or back to Texas.

The real lesson from this story is to always have a backup plan when making a big move: what do you do if you arrive and there is no job/no money? what do you do if you arrive and there is no apartment/room/house/etc.? Scammers exist and there is only so much you can do from a distance to avoid them.

7
mkoryak 2 ago 0 replies      
I once worked for a startup here in Boston that didn't pay me for a few months until I threatened to file a wage claim (didn't work) and to quit (worked). I got the hell out of there as soon as I got paid - some of my coworkers weren't so lucky.

It seems crazy to me now that I wound up in such a situation, but when it is happening to you, it is usually accompanied by a healthy helping of lies, misinformation and hope.

It won't happen again. But it was fun getting ~3 months' salary in cash and going to Jordan's to buy furniture with my girlfriend like a gangsta'.

8
desdiv 2 ago 3 replies      
I've dealt with late payroll before and the fear, uncertainty, and stress really sucked. I was even considering offering the company a 50% haircut on the wages that I rightfully earned already just so the stress would go away.

Is there any bank or payroll company that offers a wage escrow service? As in, the company pays X months of wages in advance into a per-employee escrow account that's FDIC insured, and the employee can log in at any time to verify its balance.

9
dboreham 3 ago 0 replies      
Interesting, because I saw a very similar scenario play out at the very first job I had after college. A coworker's paycheck was delayed then when the check did arrive it bounced. Coworker complained to CEO. Coworker was then fired. As a sibling comment says: this kind of thing happens often enough that everyone has seen it at least once.

Definite bonus points for the fake wire transfer receipts though. Above and beyond!

10
8KjRu5VAAeMBIZm 1 ago 0 replies      
I've been burned repeatedly by startups recruiting me for one role which matches my experience/skillset, then after a month or so radically changing the role to one which I'd had no experience with. I get pivoting, I get needing to be flexible, but why would you hire someone skilled in networking operations and four weeks later decide that person need to design/develop your Windows application instead?

I've always been paid, however I now check to see if startups have actually filed the SEC paperwork when they claim to have raised a round, and verify with the investors they claim to have backing from that they've actually invested in the company. One startup I worked for led the employees to believe we had a solid 18-month runway, when in fact the founders were covering payroll from home equity lines of credit. They didn't actually close the round until a year after most of the initial employees left as payroll became erratic.

Another startup I worked for on the basis of a handshake...never do that. After a year of developing the company's MVP, the founder formalized the structure and equity of the company, cutting the four early employees out as cofounders and reducing our equity from 2% to 0.5%. As we were all working on handshakes, none of us had legally committed to working for him, so we all walked away. He lost the MVP since I had the only copy.

The last startup I worked for (and will ever work for), I was recruited by the CEO to come in and build a mixed-discipline technical team in a supporting role. Within a month it became fairly clear that I'd been hired over the objections of pretty much the entire management team, which, had I known, I wouldn't have taken the role. I was clearly pegged as "a bad hire", which would not have happened had anyone I'd interviewed with spoken up.

Throwaway account

11
halestock 1 ago 1 reply      
How did a company with 17 employees manage to get 9 H1Bs?
12
ara24 6 ago 0 replies      
Well written story.

With all the troubles, I wonder how the founder could just keep going. Maybe it is as the story says, the "default human condition to not give up", but at what cost.

13
foxylad 1 ago 2 replies      
> The names have been changed to protect the innocent and guilty.

Assuming the writer is as scrupulously honest as they seem to be, how legally exposed would they be in actually naming names? And assuming they are exposed, are astute commenters who use clues from the article to reveal the company also liable?

It seems to me that it would be a HUGE public good to name the names. Employees and investors absolutely need the right to know about the people doing this; being able to safely expose them would go a long way to stopping such scum.

14
SakiWatanabe 2 ago 2 replies      
>Michael said under his breath in our second language, "Look at those Chinese kids. They're pretty happy they got paid, huh?"

Kim (the writer) is a Korean last name; is the company CEO Korean too?

15
pyre 2 ago 1 reply      
I'm curious what the "H1Bs cannot be used to control employees because H1Bs have a process to move to another company" crowd thinks of this situation.
16
throway29816129 1 ago 0 replies      
As a cheapo freelancer born and working in a country that ranks quite low in transparency, trust and justice system ratings, I'm used to getting shafted by people who hire my services, fail to pay, and leave me with no option to turn to for help.

But I have to say I'm shocked something like that happens even in Silicon Valley, and based on other comments here, quite often too. I don't know whether to cry for my fellow shaftees or laugh at them out of cynical schadenfreude. All I can ask is please do something to fix it effectively, because the broken window effect only makes things worse with time.

17
sp527 2 ago 0 replies      
This must have been a traumatizing experience but the one thing I would want to add to it based upon my own experience is that things don't get that much better at the unicorns. Having worked for one that's still chugging along, I've seen the fake-it-til-you-make-it paradigm persist well into the 'mature' phase of a company. The industry at large feels like such a house of cards. It's not that value isn't being generated, but rather that it has been so dramatically overvalued.
19
gozur88 3 ago 0 replies      
If I'm understanding the article correctly this goes beyond not getting paid because the CEO is juggling accounts - the Wells Fargo thing is probably criminal.
20
iamleppert 54 ago 0 replies      
On Isaac Choi's LinkedIn page he also has the following "jobs platform": http://1for.one/
21
jimmywanger 3 ago 1 reply      
"Sunshine is the best disinfectant."

This might be counterproductive in some instances, but if something doesn't smell right, I'll try to blow it up and make all the dealings very public.

I mean, there are no personal feelings involved. You are paying me to be a worker. No pay? No work.

22
BradRuderman 3 ago 1 reply      
The biggest red flag for me is when no one at the company (even the founders) have experienced the problem their product is solving!
23
chejazi 3 ago 1 reply      
Crunchbase company profile, as determined from the comments on HN: https://www.crunchbase.com/organization/1for-one
24
h4nkoslo 2 ago 2 replies      
Has there ever been an instance of late payroll (as opposed to e.g. sales commission checks being calculated wrong) where the company actually pulled out of the nosedive? It seems like it's almost always the death knell.
25
liquidise 2 ago 3 replies      
I found the bit about the negotiated salary of particular note. I have never included severance in my negotiations, mostly because I have always left my jobs willingly. Is that something more common than I know of? What is a reasonable target to aim for?
26
35bge57dtjku 2 ago 1 reply      
Is it a bad sign that all SW engineers were on H1B visas? Why wouldn't they hire any from the US?
27
a_small_island 2 ago 0 replies      
I feel bad for the employees. This song will repeat itself many times over. Selfish deluded egomaniac taking advantage of people and their livelihood. Good luck to the rank and file.
28
derek1800 3 ago 1 reply      
How common is this behavior at startups in Silicon Valley? Can anyone share similar stories or point me at other posts?
29
swang 3 ago 0 replies      
Okay, I can understand why the title got changed, but why to what it is currently? Feels like a "missed the lede" kind of title, which isn't any better than the original title.
30
mrhektor 3 ago 0 replies      
Singaporean startup founder here who moved from the US. I have yet to meet a psychopathic CEO here in Singapore, but I've met and heard about quite a few in Silicon Valley / Bay Area. It may just be selection bias (a lot more companies are founded in the Bay Area) but I wonder if there's something more. Any other insights into Bay Area startups vs. international startups here?
31
bambax 50 ago 2 replies      
> Jessica would ignore my best practices recommendations (...) and promote her gonzo style writings that were often filled with typos and grammatical errors.

> I dont when or how the money became an issue

Typos happen.

32
tbrooks 3 ago 1 reply      
What's the real identity of the startup?
33
taneq 3 ago 0 replies      
Wow, that whole tale of woe is giving me flashbacks to a startup game studio I worked at once upon a time. They never resorted to outright fraud in the 'let's not pay them, but say we did' sense but there was plenty of the other sketchy behaviour.
34
MichaelBurge 3 ago 1 reply      
Isn't the photoshopped wire transfer literally outright criminal fraud? The government should move fast and break him.
35
st3v3r 2 ago 0 replies      
Why was the company not named? Everyone needs to be warned against working with these scumbags, both employees and clients.

I hope the state prosecutes the hell out of them, and the founders are left living under a bridge.

36
zappo2938 2 ago 0 replies      
If all correspondence is done on the phone and not through email, that's a red flag. I communicate through writing much better than on the phone so I wrote long and explicit emails. I don't feel like I have anything to hide. The phone-only people are the ones who go back on their word.

The other thing is a one-size-fits-all NDA / non-compete contract. Maybe it makes sense for the sales people, but for writing code every day, inventing things, it doesn't make sense for the employee. If people don't want to take the time to write an appropriate NDA / non-compete contract, I have no time for them.

37
hackaflocka 2 ago 1 reply      
38
programmer_guy 2 ago 12 replies      
Hey guys, just wanted to clear the air on some of the speculation: I'm actually with WrkRiot and can guarantee you we're not the company in question: I've got many weeks of paid on time pay stubs sitting right here. Plus we didn't have anyone matching any of the descriptions in this article.

Interesting article though, it's really shitty some have bad experiences like this.

12
The French Number Connect to a random French person and talk about anything thefrenchnumber.fr
217 points by davinov  15 ago   103 comments top 27
1
tuna-piano 11 ago 5 replies      
When I first heard about Chat Roulette, I thought the idea was amazing. Imagine hearing about Chat Roulette in 1970, or better yet 1800! How cool. Get connected with random people anywhere in the world! The possibilities! I could talk about war with someone in Pakistan, healthcare with someone in Britain, Chavez with someone in Venezuela, learn about the cuisine they eat in Uruguay! Maybe I could make a new friend who I'd end up visiting some day.

But of course when I say Chat Roulette now, all of you probably just chuckle inside - because it's a good example of how anonymous things on the internet turn out (for those who don't know, Chat Roulette is pretty NSFW, with many nude men on it... breaking down borders, but not in the way I'd have hoped).

I hope services like this become successful as a way to break down borders and form connections across the world - but I'm not holding my breath.

Edit: Maybe I'm part of the problem because I guess I could say the same thing about having a digital personal assistant (Siri) - and I just end up just asking her things like "How many calories are in a cubic light year of butter"? (5.83 x 10^54 kcals for those on MyFitnessPal)

2
brbsix 14 ago 3 replies      
This reminds me of #CALLBRUSSELS, a Brussels tourism initiative wherein anyone could call public phones throughout Brussels via https://call.brussels and chat about anything. There were live video feeds as well. IIRC this was in January, right around the time Trump and others were making negative comments, so it was meant to dispel concerns about safety in the city.

Here's an article discussing it in detail: http://www.ibtimes.com/callbrussels-new-tourism-initiative-d...

There's also some video content: https://www.youtube.com/watch?v=eRnybwEvQsU

3
danieka 15 ago 3 replies      
Seems inspired by the Swedish number: http://theswedishnumber.com/
4
richardw 2 ago 0 replies      
Reminds me of this. Brazilian kids learning English by talking to US seniors, who love the company:

https://www.cna.com.br/speakingexchange/

5
Fiahil 13 ago 1 reply      
I'm almost tempted to try becoming an "ambassadeur téléphonique pour la France", but I would need an extra temporary number "à la Google Voice". I really don't trust them enough to let them have my real number, as they would probably sell it to the highest paying advertising company they can find.
6
Normal_gaussian 15 ago 1 reply      
This used to exist for Sweden, at

http://theswedishnumber.com/

7
vardhanw 14 ago 1 reply      
> this fonctionnality is offered to you only if you call the plateform via a special number - see if your country is in the list

Classic French English "mistakes" on the page.

8
vbsteven 12 ago 1 reply      
I would love a service similar to this one but with a matchmaking algorithm based on interests and/or self-selected topics.

This way I could make a lonely commute a lot more interesting by calling a random person also interested in for example Ruby programming

9
calimac 3 ago 0 replies      
The French are doing anything they can to revive their stalled and dismal tourism industry, which has suffered from the Islamic terrorist attacks and overbearing waves of "Muslim refugees" who have harassed tourists - actually, Muslim terrorists, who mowed sightseers down with a semi truck on Bastille Day.
10
WheelsAtLarge 13 ago 0 replies      
Tried this, it was lots of fun. Talked to a 19 yr old student learning English. His English was excellent. Everyone should try it. I used Google Voice. It cost me 8 cents for about 5 mins, not sure of the time.
11
OJFord 13 ago 2 replies      
134 calls since July 18? (And how many of those, I wonder, occurred in the last hour for which it's been on HN and accrued 81 points..)

So far I'd say it's probably not justifying its cost/effort. I don't really understand why the Swedish one (which I understand to be the original) was made - but I understand clones even less. Is it just PR; does the tourism company behind it really think it will be effective?

12
microcolonel 15 ago 0 replies      
It really is impressive to see people making productive use of human loneliness, a sadly abundant resource.

They missed the opportunity to call it the Francophone, though.

13
guessmyname 15 ago 4 replies      
This will be helpful for people like me who want to improve their French language skills. I started studying French four years ago or so, and during the first two years the learning curve was painful because this language is not very popular in my country, so socializing in order to practice the skills gained in class was mostly between the same classmates. I wish this project had been launched during that time, but better late than never.
14
lisper 14 ago 0 replies      
Do I have to speak French?

[UPDATE] Ah, I see the answer is on the site: no.

15
truth_sentinell 8 ago 0 replies      
This is pure gold. Thanks for giving me this. I will enjoy talking to random people and improve my conversation skills, plus I will get to know more French. Again, this is gold.
16
weberc2 13 ago 1 reply      
Presumably calling this number from a U.S. number (Verizon in my case) would incur lots of long distance charges, no?

EDIT: Just read the disclaimer:

> The call will be charged as an international call. Please check with your phone operator what your calling rates for France are. You may prefer to call from a special number (see if your country is on the list).

17
katzgrau 15 ago 0 replies      
I'm definitely going to do this and have a little fun, although I'm not sure that's the intended purpose #cestlavie
18
hiimnate 9 ago 0 replies      
According to the site, they've only had 150 calls since July 18th
19
coldcode 14 ago 0 replies      
I've always wanted something like this with full two way translation that worked. That would be a great way to learn how other people live their lives without the wall of language.
20
xiii1408 11 ago 0 replies      
Neat idea!

Added ten bucks of credit to my Google account, then feverishly dialed the number... "You are about to be connected to a random French." Ooh, how charming. Upbeat hold music, then silence. "Hello?" I call out. Then, my call is dropped. DISAPPOINT.

21
welanes 14 ago 2 replies      
"Hello, France? Which way does the water turn in your toilet?"
22
a-l-c-o 11 ago 0 replies      
"Your call may be recorded."

Non merci.

23
trevorg75 15 ago 2 replies      
Whoever named this should be fired for missing out on calling it "The French Connection"
24
nkjoep 14 ago 0 replies      
Cool initiative.

Btw, that blue (#000099) with white is so disturbing.

25
throwaway000002 15 ago 6 replies      
26
ConroyBumpus 14 ago 2 replies      
"Um, bonjour. Est en cours d'excution de votre rfrigrateur?"
27
campuscodi 6 ago 0 replies      
That would imply talking in French... so no, thanks.
13
How connected cars are turning into revenue-generating machines techcrunch.com
24 points by prostoalex  7 ago   14 comments top 9
1
wsetchell 4 ago 2 replies      
This article was really hard to read. Here's my attempt at a summary:

* The author expects new cars to be networked and to have beefy CPUs.

* There are lots of cars, and each car will use lots of data. To network all these cars, you'll need investment from telecom companies like carriers.

* Today the car's systems don't talk to each other, or the internet that much. Once you network them, it'll be good for companies who make cars to see how their product is used in the wild.

* Now that you have smart cars, people can write apps for them. The author thinks this will be a large market with a few killer apps.

* Cars are really complex, and were not designed to be networked. You should expect a lot of security issues during this transition.

2
manyxcxi 4 ago 2 replies      
Just saw a Buick commercial touting their in-car wifi and my first thought was, why is this something I want to pay extra for? Whatever crappy mobile network device it has is already older than my iPhone/Hotspot and I'm tied to whoever their provider is for however much they want to charge.

Let me run it in reverse. Let me connect my phone using my frequently updated hardware and the network that I want to use. I've got power so I don't care about the battery drain, and I guarantee I will have a better speed/data cap/price relationship than whatever tiers they will have.

Better yet, instead of the crappy infotainment system make it Android and/or AirPlay capable and let the phone run the head unit and you just have a dumb display.

3
bonniemuffin 1 ago 0 replies      
I feel like such a luddite for saying this, but this article makes me want to go torch a car dealership just to sabotage the idea of connected cars. This just sounds like marketers looking for new ways to shove ads where they don't belong.

Even well-established connected car technology like built-in navigation still sucks. I know people with built-in nav, but they all use google maps on their phones instead, because it works better and is more up-to-date. What does that say, if car companies haven't even gotten navigation systems right yet?

4
Animats 1 ago 0 replies      
That article lacks any useful ideas on how to generate revenue with a connected car. It's suggested that payment at gas stations could be automated, but that was first deployed in 1997 as Mobil Speedpass, and still has only about 3 million users.

Ads on the dashboard maps are likely, but you can get a map overlay with fast food restaurants now.

Monetizing tracking data on where the car goes seems almost inevitable. GM has such data from Onstar, but they're hesitant to do it. "If we can monetize that connection at some point then there is revenue potential, but selling a $20,000 or $30,000 vehicle, that's where the value to GM is" - GM exec. They don't want to jeopardize the car business by trying to get a little extra revenue by selling tracking data.

5
nerdponx 5 ago 1 reply      
That saying about how if you're not paying for the product, then you are the product bothers me. Because it implies that if you do pay for the product, you somehow get to avoid being the product.
6
Spooky23 1 ago 0 replies      
If you think of it from an old-school, consumer viewpoint, "connected cars" are a solution looking for a problem.

This seems unusual, since automakers will eagerly save <$5/unit to place a deadly Takata airbag or that faulty GM ignition switch in a car. Why spend $50 to give me some shitty functionality that I don't need?

So obviously these investments are being made to address some tangible problem -- and "let's make more money" is a problem that everyone has.

7
S_Daedalus 5 ago 0 replies      
This is going to be a dumpster fire, at least for a while until the best overtakes the best-funded and marketed.
8
bitwize 5 ago 0 replies      
Does it speak TCP/IP? Someone is figuring out a way to use it to sell you to advertisers.
9
joe_the_user 1 ago 0 replies      
The last American Airlines flight I took, the plane had no built-in screens at all. No TV, no movie, nothing. It was great. I think they realized that sinking money into their screens was just dumb - everyone who wanted to watch a movie or TV just used their own device, etc.

The idea of cars doing more and more to bombard their users with "infotainment" is just awful. I want a car that tells me less and less because it needs to tell me less and less. Speed and maybe which lights are on and maybe a fuel/energy gauge. I shouldn't need rpms or fuel mileage or whatever because the car should take care of this normally - emergency information is a different matter, but the car should avoid emergencies.

14
The Fall of a High-End Wine Scammer bloomberg.com
77 points by kspaans  11 ago   45 comments top 7
1
TazeTSchnitzel 5 ago 1 reply      
> Meanwhile, the store was becoming even slower to deliver. In February 2012 a participant on Wineberserkers.com, a popular discussion board, started a thread titled "Why does Premier Cru take so long" that would continue unspooling into outrage for the next four years.

If you want to amuse yourself, here's the thread: http://www.wineberserkers.com/forum/viewtopic.php?t=61257

It's interesting to me that even the mere hypothetical suggestion that Premier Cru was a Ponzi scheme produced such outrage.

2
S_Daedalus 5 ago 3 replies      
It's almost as though the world of fine wine is full of rich suckers who truly believe they have a rare and golden palate.

Delightful.

3
h4nkoslo 4 ago 1 reply      
Any business that has a long delay between collecting revenue and delivering product is vulnerable to this - it's one of the reasons, for instance, that life insurance is so highly regulated. Grey-market auto sales (eg of high-end sports cars that need to be imported & converted), real estate development (Galt's Gulch in Chile), and sales of silencers or other highly regulated firearms stuff (lots of paperwork to wait for) are other examples. Hell, even Kickstarters are notorious for this.

One problem is that it's really difficult to distinguish business incompetence from never intending to follow through, so often the guy pulling the scam won't be prosecuted and pops back up pulling the same scheme in a different market. It's totally legal to solicit orders and only then try to fulfil them, and depending on the market sometimes there is an issue actually getting the product.

4
hownottowrite 6 ago 2 replies      
Watch Red Obsession for a little background on this market. http://m.imdb.com/title/tt2419284/

Should be on Netflix - edit: but it is not :(

5
biocomputation 7 ago 1 reply      
Sounds like I'm not the only one who gets headaches from wine.
6
ImTalking 6 ago 2 replies      
If the guy is a scammer, then why is his fall disastrous?
7
revelation 8 ago 1 reply      
This seems like a generic scammer. He didn't even have the brains to run it as a pyramid scheme; you certainly don't want your big clients to only ever receive 1 bottle.
15
KickSat: open source spacecraft project github.com
162 points by jimsojim  17 ago   47 comments top 19
1
dolske 15 ago 1 reply      
Hey, Kicksat! I backed the first launch on Kickstarter a few years ago -- it was exciting when they finally launched, and a bit disappointing when it didn't deploy the Sprites.

But it was a super fun opportunity to get involved with tracking the carrier satellite and decoding telemetry. And with a really inexpensive setup based around a $20 software-defined radio dongle.

I blogged about here: https://dolske.wordpress.com/2014/04/21/satellite-radio/

As a bit of a followup, I was successful in capturing and decoding a number of its passes over the following weeks. Including what seems to have been the last received signals from it, on 5/13/2014: https://groups.google.com/d/msg/kicksat-gs/U_svX4f2xY8/StfEl... Shortly after that it burned up upon reentering the atmosphere, likely over Africa.

AIUI the Kicksat-2 re-flight is close to launching; the last Kickstarter update said it missed the upcoming (September) OA-5 launch to ISS due to a last-minute issue with a radio license. fingers crossed soon!

2
harperlee 16 ago 2 replies      
I was wondering if it really was a good idea to spread so many little things in orbit, and was subsequently very happy to read this in the paper linked by jimsojim:

 Due to their extremely low ballistic coefficient, the Sprites are expected to remain in orbit for only a few days before reentering and burning up in the atmosphere, alleviating debris concerns.

3
kartikkumar 15 ago 1 reply      
Lots of open-source initiatives going on in the (small) satellite world at the moment. We're hopefully going to be releasing an online platform for preliminary design and trade-off analysis in the next few months. I know of a few other companies that are working on open-sourcing different parts of the technology stack. Predicting New Space to look quite different in another 5 years, with hopefully next to no barriers to entry.
4
nbadg 11 ago 0 replies      
On a tangential note, I was happy to see that they're using Solidworks files within git. There are precious few examples of this "in the wild", and after guiding my previous (small, <20-person) company to using git for hardware version control, I always get a bit happy seeing other projects doing the same.

Git gets a bad rap with binary blobs like Solidworks files because merge conflicts are extremely opaque, but that's really not much worse than anything else you've used as a mechanical engineer (at least not in my experience). And, unlike other options -- like the litany of built-in solutions that are "integrated" (if you can call it that) with the CAD program itself -- you get the benefit of branching and sandboxing. Really the only downside (from my perspective) is that you have to be very careful about communicating who is working on what, and being absolutely meticulous about sizing subassemblies in a way that minimizes work conflicts. Beyond that, the biggest hiccup is that Solidworks assembly files are rather... poor abstractions... and that changes to individual files within them will result in changes to the assembly file, even if the assembly itself never changed [1].

Tools like openscad get a lot of love from software engineers, but programmatic definition of geometry like that is just orders of magnitude less productive than, for example, the Solidworks UI. I think the primary reason they get as much appreciation (aside from using a familiar interface to software devs, namely code) is their compatibility with source control and collaborative work, which is in just a profoundly abysmal state with mainstream CAx tools. The CAx world is in dire need of software-independent, merge-friendly formats.

Note that STL, IGES, STEP, etc don't count; they're package independent, but only transfer the "compiled" final geometry of the part, and not the history ("source code") you used to create it, so they're essentially immutable snapshots. That makes them great for sending a release to manufacturing, but utterly unusable if there's any design left to do.

[1] This is a result of the assembly file also storing a "compiled" version of the final assembly geometry, in addition to the various rules -- Solidworks calls them "mates" -- that define the relationships between the parts. So if you change the geometry in any of the parts, the assembly file changes, even though none of the mates did. Very frustrating.

5
chrissnell 5 ago 0 replies      
I wrote a payload controller in Go for a high altitude balloon. Like KickSat, it included a native KISS/AX.25 implementation along with an APRS library for sending and receiving most packet types. It was my very first Go project and as such, it's pretty dreadful and embarrassing to review but it did work:

https://github.com/chrissnell/GoBalloon

I even wrote a curses-based flight control console for it:

https://github.com/chrissnell/gophertrak

6
jimsojim 17 ago 1 reply      
7
ef4 16 ago 0 replies      
Fitting into a bulk launch with hundreds of other tiny craft makes the hardest part -- getting into low earth orbit -- relatively affordable. Once you're there, by being small and patient you can gradually lift your orbit using slow-and-steady methods like solar sails or electrodynamic tethers.
8
algorithm314 16 ago 1 reply      
Also there is an open source cubesat: https://upsat.gr/
9
JoeDaDude 5 ago 1 reply      
Those folk interested in receiving telemetry signals from this and similar satellites may be interested in SATNOGS [1], the DIY Satellite Ground Station, likewise open source.

[1] https://satnogs.org/
10
zacinaction 8 ago 0 replies      
Hi Everyone,

This is Zac from KickSat. I'm glad there's interest in our project. We have KickSat-2 almost ready to go. It was supposed to launch this summer but got held up by the FCC (long story - not our fault). We're currently working with NASA to get on another launch, hopefully in the next 6 months or so. If anyone has any questions, feel free to reach out by email (not hard to find if you look for me on GitHub) or find me on twitter (@zacinaction).

- Zac

11
deanclatworthy 16 ago 1 reply      
What is the point of having thousands of these things floating around? (Serious)
12
optforfon 15 ago 2 replies      
So are there any cool applications?

Maybe I'm having a lack of imagination... but I have no idea what I'd do with one even if I wasn't constrained by size. Even on the ISS they seem to have run out of interesting things to do (last I heard they were growing lettuce in space...). Just wondering if anyone has got any cool ideas

13
JshWright 16 ago 0 replies      
KickSat was launched in 2014 (piggybacking on the SpaceX CRS3 ISS resupply mission). The launcher failed to deploy the sprites and eventually re-entered the atmosphere.
14
wslh 16 ago 2 replies      
The main cost of this type of initiative is sending them to space. In this case they were awarded a launch by NASA, but it would cost millions otherwise.

Other open source software (not updated for two years) that was used for three nano satellites is here: https://github.com/satellogic/canopus

15
nshm 14 ago 1 reply      
Arduino probably won't stand radiation for long. In case people are interested, in Russia there is a crowdfunded project running to send a space probe to the moon.
16
fitzwatermellow 16 ago 0 replies      
Congrats! Stunning animation of the orbital deployment ;)

What data are you planning to collect from this cubesat network?

17
mkagenius 12 ago 0 replies      
Why is there a gyro? for proper sunlight to the solar receiver?
18
iamgopal 13 ago 0 replies      
How do you launch nano sats?
19
andrewfromx 13 ago 1 reply      
This is going to let us listen to signals that have traveled light years much better than ever before. I think we'll find there is life (intelligent) everywhere you look - that the universe is literally infested with life everywhere you look once you look correctly. And the notion that humans are the only intelligent ones around will end up like the idea that the earth is flat.
16
Docker not ready for primetime goodstuff.im
400 points by erlend_sh  13 ago   241 comments top 47
1
markbnj 11 ago 1 reply      
I have run docker in production at past employers, and am getting ready to do so again at my current employer. However I don't run it as a native install on bare metal. I prefer to let a cloud provider such as Google deal with the infrastructure issues, and use a more mature orchestration platform (kubernetes). The author's complaints are valid, and the Docker team needs to do a better job on these issues. Personally I am going to be taking a close look at rkt and other technologies as they come along. Docker blew this technology open by making it more approachable but there is no reason to think they are going to own it. It's more like databases than operating systems.
2
siliconc0w 11 ago 7 replies      
We use it in production.

It generally works if:

* you don't use it to store data

* don't use 'ambassador', 'buddy', or 'data' container patterns.

* use tooling available to quickly and easily nuke and rebuild docker hosts on a daily or more frequent basis.

* use tooling available to 'orchestrate' what gets run where - if you're manually running containers you're doing it wrong.

* wrap docker pulls with 'flock' so they don't deadlock

* don't use swarm - use mesos, kube, or fleet(simpler, smaller clusters)

3
dawnerd 12 ago 4 replies      
I spent close to 12 hours yesterday trying to get a fairly simple node app to run on my mac. Turned out I had to wipe out Docker completely and reinstall. Keep in mind this is their stable version that's no longer in beta. I've just run into too many documented bugs for me to consider it stable. I wouldn't even say it should be out of beta.

The issues here are the real telling story. https://github.com/docker/for-mac/issues

I love docker, it's amazing when it works. It's just really not there yet. I get that their focus is on making money right now, but they need to nail their core product first. I honestly don't care about whatever cloud platform they're building if their core app doesn't even work reliably.

4
jcoffland 11 ago 5 replies      
One of Docker's biggest problems is that internally they have fomented a culture of "users are stupid" which is immediately apparent if you interact with their developers on GitHub.
5
kozikow 12 ago 3 replies      
I am using kubernetes instead of docker swarm to orchestrate docker images and all points mentioned in this article do not apply. My cluster is small - I have <100 machines at peak, but so far it feels ready for prime time.

There are parts of docker that are relatively stable, have many other companies involved, and have been around for a while. There are also "got VC money, gotta monetise" parts that damage the reputation of stable parts.

6
SilverSurfer972 1 ago 0 replies      
I think we should stop falling for the marketing greed of Docker to manage containers at large scale. They want to get a grip on the corporate market and catch up with Kubernetes at the cost of their containerization quality. Unfortunately, with their current strategy they are losing credibility as a containerization tool AND as the container orchestration tool they try to become. Using it solely as a containerization tool, with k8s/ECS for the heavy lifting, is the relevant way to go as of today IMO.
7
urvader 11 ago 0 replies      
The title should say: Docker Swarm is not ready for primetime. We have used Docker in production for more than two years and there have been very few issues overall.
8
okket 12 ago 0 replies      
Two days ago there was a fairly long discussion about a similar argument ("The sad state of Docker")

https://news.ycombinator.com/item?id=12364123 (217 comments)

9
bjt 1 ago 0 replies      
I think the underlying issue here is that no two people agree on what "Docker" is. Is it the CLI? Is it Docker Machine? Is it Docker Swarm?

The container part of Docker works well. And they've ridden that hype wave to try to run a lot of other pieces of your infrastructure by writing another app and calling it Docker Something. Now everybody means a different subset when they say "Docker".

10
morgante 11 ago 1 reply      
This article should really be about Docker Swarm not being ready for production. It's much newer technology than Docker and is predictably brittle.

The only points made against Docker proper are rather laughable. You shouldn't be remotely administering Docker clusters from the CLI (use a proper cluster tool like Kubernetes), and copying entire credentials files from machine to machine is extremely unlikely/esoteric.

Docker, with Kubernetes or ECS, is totally suitable for production at this point. Lots and lots of companies are successfully running production workloads using it.

11
perturbation 12 ago 1 reply      
Most of the complaints I've seen recently about using Docker are about the immaturity of Docker Swarm. Can this be mitigated by using Docker with Kubernetes / Mesos / Yarn?

If it's truly a problem with the containerization format / stability with the core product, I'm not sure what a good alternative would be. I see a lot of praise for rkt but the ecosystem and tooling around it are so much smaller than that for Docker.

12
_jezell_ 7 ago 1 reply      
The Docker team is doing more than anyone to move container technology forward, but orchestration is a much harder problem to solve than wrapping OS APIs. I wish they would stick to the core, and let others like the Kubernetes team handle the orchestration pieces. Swarm is hard to take seriously right now. I'm not sure bundling it into the core was the best way to handle it.
13
forktheweb 7 ago 1 reply      
I would say that my experience with Docker has been fantastic. I run over 10 Ubuntu Trusty instances on EC2 as 8G instances, mounted with NFS4 to EFS. This makes it super simple to manage data across multiple hosts. From that you can run as many containers as you like, and either mount them to the EFS folder, or just spawn them with data-containers, then export backups regularly with something like duplicity.

I use rancher with it, and it's retarded simple using rancher/docker compose.

For a quick run-down see: https://github.com/forktheweb/amazon-docker-devops

More advanced run-down of where I'm going with my setup: https://labs.stackfork.com:2003/dockistry-devexp/exp-stacker...

14
joshka 12 ago 4 replies      
> Each version of the CLI is incompatible with the last version of the CLI.

Run the previous version of the cli in a container on your local machine. https://hub.docker.com/_/docker/

 $ docker run -it --rm docker:1.9 version

15
dcosson 11 ago 0 replies      
> Each version of the CLI is incompatible with the last version of the CLI.

I'm pretty sure that as long as the CLI version is >= the server version, you can set the DOCKER_VERSION env var to the server version and everything works.

I haven't used this extensively, so maybe there are edge cases or some minimal supported version of backwards compatibility?

16
opHASnoName 53 ago 0 replies      
You can set the CLI via environment variables to use newer clients with older machines:

export DOCKER_API_VERSION=1.23

You can export and import machines with this handy node js tool: https://www.npmjs.com/package/machine-share

17
kev009 10 ago 1 reply      
I wonder how much suffering would be alleviated in most mid-level IT organizations if they just used Joyent/SmartDataCenter, and I say this as a FreeBSD developer with no affiliation.
18
coding123 7 ago 0 replies      
Just read through most of the thread, seems like a very large disconnect between people that are happy with Docker and those that are not. Personally, I've been extremely happy with it. We have one product in production using pre-1.12 swarm (will be upgrading in the next couple months) and most of our dev -> uat environments are now fully docker. It's been stable. On my personal projects I used Docker 1.12 and yes, after a few days things kerploded, but after upgrading to 1.12.1 things have been incredibly stable. For Nodejs apps I have been able to use Docker service replicas instead of Nodejs clustering, and been very happy with the results.
19
callumjones 7 ago 1 reply      
If you're truly running Docker in production you're probably not affected by either of the issues taken with Docker here. No one would dare interact with a production cluster via the basic Docker CLI; instead you should be interacting with the orchestration technology like ECS, Mesos or Kubernetes. We are running ECS and we only interact with the Docker CLI to query specific containers or shut down specific containers that ECS has weirdly forgotten about.

It definitely sounds like Swarm is not ready but I wouldn't say this is representative of running Docker in production: instead you should be running one of the many battle tested cluster tools like ECS, Mesos or Kubernetes (or GCE).

21
sergiotapia 8 ago 0 replies      
We've tried to use Docker a couple of times, but it was always much more trouble than it was worth. It was always some edge case that caused it to not work as expected on one developer machine or another.

After about 2 years of giving it another shot on and off, we just gave up. And it's not like we were doing something crazy, just a typical documented "run this rails app" type thing. I would definitely not use this in production for anything based on my experience.

22
sandGorgon 11 ago 3 replies      
it seems that none of the container frameworks are generally ready.

Take for example k8s - I just started exploring it as something we could move to. https://github.com/kubernetes/kubernetes/issues/24677 - logging of application logs is an unsolved problem.

And most of the proposals talk about creating yet another logger...rather than patching journald or whatever exists out there.

For those of you who are running k8s in production - how are you doing logging? Does everyone roll their own?

23
acd 9 ago 0 replies      
Here are alternatives to Docker

One can use Ubuntu LXD which is Linux containers built on top of LXC but with ZFS as storage backend. LXD can also run Docker containers.
http://www.ubuntu.com/cloud/lxd

One can also use Linux containers via Kubernetes by Google.
http://kubernetes.io/

24
rjurney 8 ago 0 replies      
Docker Swarm is definitely not production ready. Try to run any service that requires communication among nodes and you will agree. It works fine for web servers, but that is about it.

DC/OS is emerging as the go-to way to deploy docker containers at scale in complex service combinations. It 'just works' with one simple config per service.

25
mrmondo 7 ago 0 replies      
We've been running in production across thousands of containers for well over a year now and it's been fantastic - not only a life saver of a deployment method, but it's allowed for reliable, repeatable application builds.
26
mkagenius 12 ago 6 replies      
Would love to hear thoughts from people who use it in production?
27
jasoncchild 10 ago 2 replies      
"Just imagine if someone emailed you a PDF or an Excel file and you didn't have the exact verion of the PDF reader or Excel that would open that file. Your head would blow clean off."

Obviously this person has never spent a good deal of time dealing with AutoCAD...

28
zwischenzug 11 ago 1 reply      
Can anyone using Swarm seriously in production post any account of their experiences here? Thanks.
29
ktamiola 12 ago 0 replies      
Fair enough! There is also a performance penalty and annoyingly complicated setups in certain cloud environments.
30
damm 5 ago 0 replies      
Docker's far from ready from primetime. I'm sure everyone likes taking down their site to upgrade Docker to the latest release.

Their mindset is very tool driven; if there's a problem let me just write a new tool to do that.

Ease of use or KISS isn't a part of their philosophy

31
stevesun21 8 ago 0 replies      
When I worked for my ex-employer, we used Docker-based Elastic Beanstalk to serve millions of requests per second across three services.
32
StreamBright 11 ago 0 replies      
I have the same experience. I am trying to set up a new node where I accidentally installed 1.10 and the CLI does not work. Looking for something on the internet about how to do X with Docker, only articles for previous versions are available. I mean seriously, command line development is not supposed to be this hard: pick a set of switches and stick to it unless something major forces you to change. If you shuffle around CLI switches between minor releases nobody is going to be happy.
33
hellofunk 10 ago 0 replies      
I agree in general, and find Docker one of those technologies that does not (yet) live up to the hype.
34
brint 10 ago 0 replies      
For the versions issue, check out dvm: https://github.com/getcarina/dvm

While versions are an issue, it's at least a reasonable way to work around it.

35
exratione 11 ago 1 reply      
So far I'm feeling pretty good about the decision to skip the first generation containerization infrastructure.

At the outset it had the look of something that wasn't an advance over standard issue virtualization, in that it just shuffled the complexity around a bit. It doesn't do enough to abstract away the ops complexity of setting up environments.

I'm still of the mind, a few years later, that the time to move on from whatever virtualization approach you're currently using for infrastructure and development (cloud instances, virtualbox, etc), is when the second generation of serverless/aws-lambda-like platforms arrive. The first generation is a nice adjunct to virtual servers for wrapping small tools, but it is too limited and clunky in its surrounding ops-essential infrastructure to build real, entire applications easily.

So the real leap I see ahead is the move from cloud servers to a server-free abstraction in which your codebase, from your perspective, is deployed to run as a matter of functions and compute time and you see nothing of what is under that layer, and need to do no meaningful ops management at all.

36
jokoon 9 ago 0 replies      
I watched again an explanation of what Docker really is; it just seems to be this awesome thing that solves the very hard problem of inter-compatibility between systems. I always tend to question how and why a developer had to use Docker instead of making choices that would avoid it.

It's not surprising Docker can't always work, but it's nice to see that programmers are winning. I guess future OS designers and developers will try to encourage more inter-compatibility if possible. That's really a big nerve.

37
skrowl 5 ago 0 replies      
Is lxc any better? Are any of the issues in OP solved in lxc vs Docker?
38
ycombinatorMan 11 ago 0 replies      
Aye, there's a lot of important bugs that are just sitting there
39
ldehaan 10 ago 0 replies      
I have several past clients running docker in production just fine.

At my current job we run nearly all our services in docker.

I've replied to this type of comment on here at least a dozen times, it has nothing to do with docker, it is a lack of understanding how it all works.

Understand the history, understand the underlying concepts and this is no more complex than an advanced chroot.

Now on the tooling side, I personally stay away from any plug-ins and tools created by the docker team, they do docker best, let other tools manage dockers externalities.

I've used weave since it came out, and it's perfect for network management and service discovery.

I prefer to use mesos to manage container deploys.

There is an entirely usable workflow with docker but I like to let the specialists specialize, and so I just use docker (.10.1 even), because all the extra stuff is just making it bloated.

I'm testing newer versions on a case by case basis, but nothing new has come out that makes me want to upgrade yet.

And I'll probably keep using docker as long as it stays separate from all the cruft being added to the ecosystem.

40
ilaksh 7 ago 0 replies      
Creating and maintaining a service cluster is hard. I don't think you should just take it back to the store if your magic wand has a hiccup.
41
20yrs_no_equity 9 ago 2 replies      
Network partitions really do happen! They are often short, but if you can't recover from them, then you shouldn't call yourself a distributed system.

I am shocked at how fragile etcd is in this way. I was hoping docker swarm was better, but I'm not surprised (alas) to find out that it has the same problem.

I'm about ready to build my own solution, because I know a way to do it that will be really robust in the face of partitions (and it doesn't use RAFT, you probably should not be using RAFT, I've seen lots of complaints about zookeeper too. I've done this before in other contexts so I know how to make it work, but so have others so why are people who don't know how to make it work reinventing the wheel all the time?)

42
bogomipz 10 ago 1 reply      
Docker not being ready for primetime and Swarm not being ready for primetime are two different things no? As for the cli compatibility issues, don't most people use an orchestration tool like ansible/chef/puppet etc to manage their fleet? I'm not sure I think the title of the post is accurate.
43
asitdhal 12 ago 1 reply      
Do they speak English or is the caller expected to know French?
44
jacques_chester 11 ago 3 replies      
Make no mistake: Docker Inc has a lot of excellent engineers.

But there's also a landrush going on. Everyone has worked out that owning building blocks isn't where the money is. The money is in the platform. Businesses don't want to pay people to assemble a snowflake. They want a turnkey.

CoreOS, Docker and Red Hat are in the mix. So too my employers, Pivotal, who are (disclosure) the majority donors of engineering for Cloud Foundry. IBM is also betting on Cloud Foundry with BlueMix, GE with Predix, HPE with Helion and SAP with HANA Cloud Platform.

You're probably sick of me turning up in threads like these, resembling one of the beardier desert prophets, constantly muttering "Cloud Foundry, Cloud Foundry".

It's because we've already built the platform. I feel like Mugatu pointing this out over and over. We're there! No need to wait!

A distributed, in-place upgradeable, 12-factor oriented, containerising, log-draining, routing platform. The intellectual lovechild of Google and Heroku. Basically, it's like someone installed all the cool things (Docker, Kubernetes, some sort of distributed log system, a router, a service injection framework) for you. Done. Dusted. You can just focus on the apps and services you're building, because that's usually what end users actually care about.

And we know it works really well. We know of stable 10k app instance scale production instances that are running right now. That's real app instances, by the way: fully loaded, fully staged, fully logging, fully routed, fully service injected, fully distributed across execution cells. Real, critical-path business workloads. Front-page-of-the-WSJ-if-it-dies workloads.

Our next stretch goal is to benchmark 250k real app instances. If you need more than 250,000 copies of your apps and services running, then you probably have more engineers than we do. Though I guess you could probably stretch to running two copies of Cloud Foundry, if you really had to.

OK, a big downside: BOSH is the deployment and upgrade system. It's not very approachable, in the way that an Abrams main battle tank is less approachable than a Honda Civic (and for a similar reason). We're working on that.

The other downside: it's not sexy front-page-of-HN tech. We didn't use Docker, it didn't exist. Or Kubernetes, it didn't exist. We didn't use Terraform or CloudFormation ... they didn't exist.

Docker will get all this stuff right. I mean that sincerely. They've got too many smart engineers to fail to hammer it down. More to the point, Docker have an unassailable brand position. Not to be discounted. Microsoft regularly pulled off this kind of thing for decades and made mad, crazy cash money the whole way along.

45
jbverschoor 11 ago 1 reply      
Docker is the new MongoDB. Let's just use FreeBSD jails + PostgreSQL.
46
MrFurious 10 ago 0 replies      
Docker: containers for hipsters that don't know how to deploy a simple Linux virtual server.
47
anotherdpk 12 ago 2 replies      
> Each version of the CLI is incompatible with the last version of the CLI.

Sure, but I don't think this is a show stopper. You can and should only carefully upgrade between versions of Docker (and other mission-critical software). The process is functionally identical to the process you'd use to perform a zero-downtime migration between versions of the Linux kernel -- bring up a new server with the version of Docker you want to use, start your services on that new server, stop them on the old server, shut down the old server, done.

I don't mean to suggest this is trivial, only to suggest that it is no more complicated than tasks that you/we are already expected to perform.

17
User-friendly language for programming efficient simulations mit.edu
55 points by muhic  9 ago   16 comments top 6
1
sanxiyn 5 ago 1 reply      
It is worth noting that Simit shares an author with Halide: http://halide-lang.org/

I'd say Halide was definitely a success as a domain-specific high-performance language, so hopefully Simit is too.

2
hclgckxjtxjfxur 7 ago 2 replies      
It seems like it would be much more straightforward to write a graph-based FEM library for MATLAB than to make an entirely new language around a feature with a very specific and very narrow use case. Still, the high performance aspects of simit seem interesting, assuming that they weren't too badly cherrypicked.
3
faizshah 7 ago 0 replies      
Much more info on their site/github: http://simit-lang.org/language

Also, the paper is available here: http://simit-lang.org/tog16

4
santaclaus 7 ago 1 reply      
Also coming out this year with the same goal is Ebb from Stanford: http://ebblang.org

Both languages look cool, but seem fairly limited at the moment -- they are basically DSLs for assembling and solving PSD systems.

5
lowestwhisper 4 ago 0 replies      
A presentation of this work is available at: https://www.youtube.com/watch?v=raPkxhHy5ro
6
pasbesoin 7 ago 0 replies      
The server housing the publication PDFs appears to have problems at the moment.

Archived copies:

http://wayback.archive.org/web/20160828234821/http://groups....

http://wayback.archive.org/web/20160828234958/http://groups....

18
An Interesting SETI Candidate in Hercules centauri-dreams.org
144 points by phreeza  15 ago   35 comments top 10
1
f2f 14 ago 0 replies      
Great, another wow signal! Now we can rest easy that the first one wasn't a fluke... Regardless of whether this is the making of sentients or a physical phenomenon, we have one more reason to keep looking up in the sky :)

Here's a good description of the wow signal and why it's important. If this one has similar characteristics, we just increased the chances of observing a really interesting phenomenon by 100%:

http://www.universetoday.com/93754/35-years-later-the-wow-si...

2
adrianratnapala 12 ago 2 replies      
The article talks about the frequency and strength of the signal but I didn't spot anything about its structure.

What is it about this signal that is supposed to make it seem (possibly) unnatural?

3
mercurialshark 5 ago 2 replies      
This is possibly exciting. Its metallicity is almost identical to that of the Sun.

If hypothetically it were the best case scenario, something broadcast by a sentient being, then the position that the universe simply isn't old enough to be densely populated becomes a lesser issue.

Considering that our sun is a third or fourth generation star, based on its metallicity/age and if heavier elements are necessary for advanced civilizations to evolve - it's possible that the intergalactic space race is only just beginning.

4
jloughry 10 ago 0 replies      
TIL the unit of amplitude, mJy, refers to Janskys [1].

[1] https://en.wikipedia.org/wiki/Jansky

5
hoodoof 12 ago 2 replies      
It says it could be a Kardashian Type II civilization. Scary thought.
7
sevenless 6 ago 2 replies      
I always thought optical SETI makes more sense. With relatively inexpensive lasers we can send an extremely bright message, vastly outshining our sun, to every star within a few thousand light years. Surely that's how you'd say 'We're here' to aliens.

http://seti.harvard.edu/oseti/

8
okket 13 ago 3 replies      
Don't get carried away like with Tabetha's Star and the alien megastructures a few months ago...

http://www.skyandtelescope.com/astronomy-blogs/cosmic-relief...

9
obvio171 10 ago 1 reply      
I was expecting SETI to find intelligence on Earth before space. They've been looking at the entropy of bee hives and other disembodied signal-passing collectives for a while now.
10
eggy 13 ago 0 replies      
I want to be first to pilot the machine Herculeans are sending in code for us to build just like in the movie 'Contact'!
19
Mapping the Mercantilist World Economy ericrossacademic.wordpress.com
58 points by colinprince  12 ago   2 comments top 2
1
JacobAldridge 20 ago 0 replies      
Great read. I've studied a fair amount of history (by the end of university, I realised I'd done at least one formal course in every time period from the sacking of Rome in 410 to the sacking of Nixon in 1974), so I always enjoy these 'larger context' pieces.

One thing this helped me understand better was the Dutch Tulip-mania. Tulips, as I learnt when they were blooming everywhere on a visit to Istanbul, are native to modern-day Turkey, historically the Ottoman Empire. The ongoing trade / power struggle between the Muslim / Arab world and the emerging European Powers would thus have restricted trade from one to the other, and driven up the scarcity of Tulip bulbs. I had wondered how something that grows so naturally on one side of a continent could be so valuable for as long as it was on the other side - this helped me piece more of the story together.

2
jmickey 38 ago 0 replies      
Thank you for the insightful article! Sadly it only covers historical trade routes. Are similar maps available for present day? I.e. What are the main trade routes for different types of goods?
20
FreeSense: Indoor Human Identification with WiFi Signals arxiv.org
134 points by brakmic  17 ago   30 comments top 11
1
noobiemcfoob 15 ago 0 replies      
This is a type of passive identification I hadn't imagined before. It's pretty impressive to see 90% identification for a set of 6 users.

I can't imagine it's accurate enough to use for secure verification. I could see its application for a shared entertainment system (PS4, Netflix, etc) where identification is primarily for configuration purposes, not security.

2
SEJeff 4 ago 0 replies      
Note that this isn't all that dissimilar to Xandem's tomographic motion detection. Their "Xandem Home" product makes a Harry Potter Marauder Map style overlay on a map of your home showing where all moving people are in realtime. It is really cool stuff that I'm about to have installed in my own home:

http://www.securityelectronicsandnetworks.com/articles/2014/...

http://www.xandem.com/motion-detection

Compared to crappy PIRs from companies like ADT, it is great stuff.

3
BetaCygni 16 ago 0 replies      
Very cool, and somehow very creepy! This is how we will be hunted when the machines rise up ;)
4
infodroid 12 ago 0 replies      
There was a good/creepy article in The Atlantic about FreeSense and WiKey, which was covered a few days ago: https://news.ycombinator.com/item?id=12353605
5
droopybuns 15 ago 2 replies      
The only good application of this work I can come up with is to reduce the danger that comes from surprised cops in no-knock warrants.

Still kinda evil though.

6
7
lwis 14 ago 1 reply      
Is this much different to FIND?
8
dynofuz 5 ago 0 replies      
I'm building a business around this stuff in Boston. If anyone's interested, send me an email (in my profile).
9
EGreg 16 ago 5 replies      
How would a person be able to avoid this?
10
notduncansmith 13 ago 0 replies      
Can this be executed from phones (which can act as WiFi router, for tethering purposes) by this ubiquitous baseband RCE vulnerability I always hear about on HN?
11
danielmorozoff 15 ago 0 replies      
Wasn't this what wifi slam worked on at Stanford?
21
Major next steps for fusion energy based on the spherical tokamak design pppl.gov
17 points by jonbaer  4 ago   7 comments top 2
1
ChuckMcM 1 ago 1 reply      
This may be cynical but every time I read the Princeton Lab's press I am reminded of how different fusion science is from fusion engineering. In the former the words are like "explore ways of doing x ..." or "Compare different forms of y ...". In the engineering press it is more like, "Once we achieve x, which we expect to achieve by p, q, or worst case r, we will move on to y, leaving us one step away from fully operational fusion plants."

As an engineer I much prefer the latter, something with a bit of urgency, like "we need to develop this as quickly as possible because it unlocks a lot of solutions to problems which are threatening billions of people." Not "Ohh look at all the shiny ways we can turn two small atoms into one slightly bigger atom!"

2
paws 2 ago 1 reply      
Interesting to see the trend continue towards spherical tokamaks.
22
The origins of the Nazis' secret horse breeding project longreads.com
40 points by Hooke  11 ago   24 comments top 5
1
smnscu 10 ago 2 replies      
Slightly off topic, my wife just sent me this link after a friend mentioned today how Bayer had links to the Nazis. The last paragraph about IBM is particularly funny.

http://www.11points.com/News-Politics/11_Companies_That_Surp...

2
gravypod 3 ago 0 replies      
I wish there was a place I could go to read translated versions of notes kept by German scientists. They did horrible things but some of the other research was really cool like what I've learned about their attempts in creating nuclear power.
3
sandworm101 9 ago 4 replies      
>> Gustav Rau pulled a pistol from his hip and pointed it directly at the SS officer. "You have no authority here," he said. "This horse farm is under the jurisdiction of the German Army." [...] Neither man moved until, with a curt nod, the officer stepped back. He agreed to remove his men.

I like this story. It emphasizes the error in the title. These seem to have been German horses, not Nazi horses. I see so many articles and books reference everyone and everything in the German armed forces of the time as "Nazi". The reality was complicated. There was much conflict, as many in the armed forces felt they should remain politically neutral. This idea is not uncommon today. Many members of various armies go so far as to not vote while in active service. Others, notably in the US forces, see political detachment as unpatriotic. This boils down to an officer within an older, politically detached force pulling a gun on an officer of a new and fanatically active force. There must have been many such stand-offs.

https://en.wikipedia.org/wiki/Nazism_and_the_Wehrmacht

4
aab0 9 ago 1 reply      
Cuts out at the most interesting point.
5
cloudjacker 10 ago 1 reply      
Secret Nazi attempt lol

You guys act like you wouldn't jump at the chance to have your pet project subsidized by ANYBODY, literally any government or organization with any ideology if it meant the chance for you to pursue your dream

Aren't most of you guys here interested in VCs, for example.

Interesting story though, any more details about the guys that successfully scammed Nazi taxpayers?

23
What It Takes for an Independent Record Store to Survive Now pitchfork.com
57 points by pmcpinto  14 ago   19 comments top 8
1
noobermin 8 ago 1 reply      
So, I pass by Used Kids and the "High Street" they mention every day. A little bit of back story that was hinted at here. Essentially, High Street borders Ohio State, and developers are looking to cash in and build more apartments for the growing university. Recently, about four or so blocks near the southern corner, called "the Gateway", were demolished...along with a number of lower income houses...in order to build large apartments specifically geared towards students. I was assuming the end of Used Kids (and all the shops along that block) is related to this development. I am not sure if they have the same owners, but after seeing Used Kids and the other record shop (which often played music on the street...added a nice flair) I assumed this is what was happening when the block was sectioned off by concrete dividers.

Minimum wage and rent and demand ain't the only issue; it's developers feeding a growing university which is eating that neighborhood alive. My paycheck comes from that university and I am for OSU's advancement, but I can't deny the reality of what is happening.

2
techsupporter 2 ago 0 replies      
Meanwhile, there are two record stores almost directly across the street from each other in the Ballard neighborhood of Seattle (and two more in the U District, as I recall). After reading this article--actually, I finished reading this article on the bus towards Ballard--I went over to visit them both. Happily, both had quite a few people inside and almost everyone bought at least one record.

According to The Stranger, one of those stores, Sonic Boom Records, recently sold to a longtime customer[0] and "is on solid ground having year over year record breaking sales figures at [their] Ballard location." Hopefully they continue. (Though I did take a quick glance at property records and neither store is in owned space.)

0 - http://www.thestranger.com/slog/2016/07/11/24331367/sonic-bo...

3
icantdrive55 8 ago 0 replies      
I worked at a Rainbow Records in the 90's, after graduating with a useless BA degree in Business, and recovering from a nervous breakdown. Rainbow records was not nearly as big as Tower Records, but big for the Bay Area.

I saw the writing on the wall. This was slightly before Napster, and downloading. Cd's were a big deal. The business just seemed destined to close.

I look back, and it was one of my better jobs. It only paid minimum wage, but the people, and friendships I cultivated were priceless. There was one employee who gave me a bad time, but I still liked her. She would berate me, in a joking manner, but I honestly didn't care. I missed her on her days off. She used to remember holidays, and buy employees gift baskets/little gifts. I was always the last person she would give a gift to, and it was always the same verbal banter. Me, "Now I know you don't want to give me this?" Her, "Well, I couldn't just leave you out?" She would walk away, with that punk rock hair, look back at me, and say, "Stop looking at my ass." Me, "Sorry, but you just have good genes--meaning I like your denim--Levis?" She would laugh, and think about a comeback. I really hope she's happy now-

I look back, and don't know what could have saved that store. I couldn't imagine opening any store these days, especially around here.

(I do like the idea of a nonprofit business model for used record stores, and book stores. I think there's a few nostalgic guys who might donate a store to the right group of people? I would--if I was a landlord.)

4
noonespecial 11 ago 3 replies      
Minimum wage and rent (high in the very places we most want such stores to exist) seem to set a lower bound on performance below which such things simply can't exist today.

Perhaps if one operated as a non-profit or a co-op so people who loved this stuff could realistically volunteer and keep it going?

5
greggman 4 ago 0 replies      
I went to Amoeba in LA 2 years ago. It had been a while since I was in a record store.

It seemed sooooo dated. No way to listen to the CDs to see if I want one. No way to check reviews like if I wanted to see which of 5 albums I should consider. Plus I don't even own a CD player at the moment. My laptop doesn't have one so I'd have to ask someone to rip it.

I used to love record stores and some of my favorite music is stuff I discovered at the store, but it feels like snail mail to email at this point. Sure, album covers and liner notes are awesome, but I've been online since 2003 with Rhapsody and other stuff since, and just being able to play any album and then look at the influences lists and follow those totally killed doing it at the store for me.

In Japan there are a couple of CD stores where all the CDs are open and there are 20-30 CD players for you to listen to them in. It's fun as nostalgia but even then it's not as convenient as online.

It all feels like it's from another era, like a livery or something.

6
hughperkins 11 ago 3 replies      
On a related note, I went into a brick and mortar shop to buy earphones the other day. Turns out they won't let you try them, for various reasons. In the end, I looked on the internet for reviews, and bought some headphones on Amazon instead, and they are perfect, awesome. So, the brick and mortar electronic equipment store adds what value?
7
sotojuan 11 ago 1 reply      
Sorta off topic: Other Music closed/is closing? Wow, I thought its location (close to NYU in a fairly "hip" area of Manhattan) would keep it in business. I remember visiting it often in my freshman year of college when I was really into music; it was rarely empty. NYC rents may be tough, but I thought they were doing fine.
8
daodedickinson 11 ago 0 replies      
A minimum wage hike just killed the main one near UC Berkeley.
24
Optimizing matrix multiplication in C attractivechaos.wordpress.com
61 points by attractivechaos  13 ago   17 comments top 8
1
makmanalp 8 ago 1 reply      
Attractivechaos's stuff blows my mind. Shameless plug - I've started dissecting his header-only hashmap library (khash.h) bit by bit, and I've been documenting my adventure here:

https://medium.com/@makmanalp/dissecting-khash-h-part-1-orig...

edit: and part 2:

https://medium.com/@makmanalp/dissecting-khash-h-part-2-scou...

2
Const-me 2 ago 0 replies      
Apparently, the main reason for your results is that the GCC optimizer ain't good.

Here's a Visual C++ port: https://github.com/Const-me/matmul/

Eigen is still faster than naive implementations, but not that much faster - just 30-40% compared to SSE+tiling sdot.
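
For anyone unsure what the tiling above refers to, here is a minimal, generic sketch of cache blocking for matrix multiplication. It is an illustration of the technique only - not code from the post or from the port linked above - and it assumes square row-major float matrices with n a multiple of BLOCK:

  /* Cache-blocked (tiled) C = A * B for n x n row-major floats.
     Generic illustration only; assumes n is a multiple of BLOCK. */
  #include <stddef.h>

  #define BLOCK 64

  void matmul_tiled(size_t n, const float *A, const float *B, float *C)
  {
      for (size_t i = 0; i < n * n; ++i) C[i] = 0.0f;
      for (size_t ii = 0; ii < n; ii += BLOCK)
          for (size_t kk = 0; kk < n; kk += BLOCK)
              for (size_t jj = 0; jj < n; jj += BLOCK)
                  /* working on one BLOCK x BLOCK tile at a time keeps the working set cache-resident */
                  for (size_t i = ii; i < ii + BLOCK; ++i)
                      for (size_t k = kk; k < kk + BLOCK; ++k) {
                          float a = A[i * n + k];
                          for (size_t j = jj; j < jj + BLOCK; ++j)
                              C[i * n + j] += a * B[k * n + j];
                      }
  }

The i-k-j ordering keeps the innermost accesses to B and C sequential, which is what makes the loop friendly to vectorization, whether by the compiler or by hand-written SSE intrinsics.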

3
apathy 9 ago 0 replies      
Good write up. It is very rare to outperform decades of numeric analysts (and also avoid nasty machine precision issues) by shooting from the hip, and Eigen is amazingly easy to use (plus it is a headers-only implementation: no DLL).
4
santaclaus 8 ago 0 replies      
I'd like to see how MKL stacks up -- if you are on Intel hardware MKL often beats out Eigen.
5
em3rgent0rdr 2 ago 0 replies      
I don't like how the author has labeled tables as "Linux" and "Mac", when really most of the differences between those columns are the result of the compiler used, and to a lesser extent, the fact that the Mac was a local machine while the "Linux" tests were done on a remote server.

A much more useful comparison would keep everything constant except the single variable that is different. This could have been done by utilizing the same hardware, and only using a different compiler. Since both GCC & Clang work on both linux & mac, there is no excuse.

6
rurban 8 ago 3 replies      
The latest scatter/gather vectorization tricks are missing. The SSE improvements are only minimal.

Maybe a very new compiler, like Polly or ICC can vectorize this automatically.

ICC has a special -qopt-matmul option. https://software.intel.com/en-us/node/524953

7
mamcx 2 ago 0 replies      
Similar tricks in a managed language like .NET?
8
sickboy 8 ago 1 reply      
FMA may help to double performance if you control memory r/w well
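
For context, here is a minimal sketch of what explicit FMA use looks like with AVX/FMA intrinsics. It is a generic illustration, not code from the post; it assumes len is a multiple of 8 and an FMA-capable build, e.g. gcc -O2 -mfma:

  /* y[i] = a * x[i] + y[i], one fused multiply-add per lane of 8 floats.
     Assumes len is a multiple of 8; build with e.g. gcc -O2 -mfma. */
  #include <immintrin.h>
  #include <stddef.h>

  void saxpy_fma(size_t len, float a, const float *x, float *y)
  {
      __m256 va = _mm256_set1_ps(a);
      for (size_t i = 0; i < len; i += 8) {
          __m256 vx = _mm256_loadu_ps(x + i);
          __m256 vy = _mm256_loadu_ps(y + i);
          vy = _mm256_fmadd_ps(va, vx, vy);   /* fused multiply-add */
          _mm256_storeu_ps(y + i, vy);
      }
  }

The catch, as the comment says, is that a loop like this is usually memory-bound, so the extra FLOPs from FMA only show up once the data is already in cache (for example inside a tiled kernel).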
25
Using Apache Spark to Analyze Large Neuroimaging Datasets dominodatalab.com
19 points by gk1  7 ago   1 comment top
1
cottonseed 3 ago 0 replies      
If you're doing neuroscience image analysis, you probably want to take a look at Bolt, Thunder, Lightning:

http://bolt-project.org/
http://thunder-project.org/
http://lightning-viz.org/

and associated work going on at the Freeman lab at HHMI:

https://www.janelia.org/lab/freeman-lab

26
Show HN: A Bot to Deploy to AWS, Digital Ocean Etc. deploybot.com
99 points by LukeFitzpatrick  16 ago   31 comments top 11
1
vemv 9 ago 5 replies      
There's a number of startups doing some variation of this.

What many don't seem aware of is that plain pull requests, in combination with CI, entirely kill the need for a deploy app/bot.

This is how I do it at my current company:

 * use plain git flow (master/develop, hotfixes, etc)
 * use additional explicit branches per deployment target (e.g. master-spain for http://myapp.es, master-mexico for http://myapp.mx).
 * Protect these branches using github/bitbucket 'protected branches'.
 * open a PR from master to master-spain for performing a deploy of said target, detailing nicely what is being deployed and why.
 * instruct CI to deploy my app on each build of master-spain. master and develop are never deployed.
This setup has the same benefits (and then some more) as competitors:

 * Explicit deployment authors, reasons, timestamps * Impossible to deploy red code * Impossible to deploy code not in master * Impossible to deploy concurrently to the same target
Hope it helps someone!

2
avtar 12 ago 3 replies      
Seems like a pretty neat service. To save others some time, they don't have a free tier, you can't host it yourself, and they use Docker for builds before deployments:

http://support.deploybot.com/article/1028-plans-and-pricing

3
schappim 5 ago 0 replies      
We[1] use DeployBot every day and we can't endorse them enough!

The combination of DeployBot, Github and AWS Elastic Beanstalk is awesome and is the closest thing to having Heroku in Australia.

We used to just use Elastic Beanstalk, but when AWS moved their deploy method away from git to using zips of S3 bundles, it meant that you needed to reupload the entire app whenever you made a change (not just the delta). This can take a long time on ADSL. DeployBot saved the day here, and allowed us to pull the code from Github.

[1] http://littlebirdelectronics.com

4
obisw4n 6 ago 0 replies      
Migrated a complex Jenkins setup to Deploybot in 2015, saves our company a ton of time managing deploys. I'd highly recommend deploybot to anyone.

If I could critique even just one thing it would probably be its pricing structure for personal use, I can't justify $15/m just for deployments. I'd love if they had some kind of personal "developer" tier with support for more repos. On the business side, $15/m is ridiculously cheap for what service we're getting.

5
riffic 3 ago 0 replies      
Happy user of DeployBot here. Does exactly what it's advertised it does.
6
lsiebert 5 ago 0 replies      
Heh, my company uses this. I didn't realise it was so new that it deserved a HN post to its front page.
7
jszymborski 9 ago 0 replies      
How does this compare to something like Laravel Forge[0]? Is it just that Forge focuses on PHP?

[0] https://forge.laravel.com

8
parasanti 8 ago 2 replies      
Any suggestions on reading material/designs for deploying a complete CI process for a new development team using these newer processes/applications?
9
joshmn 11 ago 1 reply      
Kudos to WildBit. They're undeniably great in all the ways.
10
vs2 12 ago 0 replies      
I have used deploybot for over a year, great engineering. My favourite deploy tool
11
sandstrom 10 ago 0 replies      
Any suggestions on similar open-source tools?
27
Ask HN: What kind of projects should I build for a front-end portfolio?
63 points by Calist0  7 ago   29 comments top 15
1
lopatin 5 ago 2 replies      
I can only speak from personal experience but here's what worked for me. Build one thing that will truly impress the senior engineers at whatever companies you're applying to. In order to be impressive, it has to solve an interesting engineering problem. Try to build it yourself. Fail (probably). Now go pick up a library that does the hard thing for you. What's great about this is that you're now very familiar with the hard problem and have ideas/opinions about it. That will make you a valuable contributor to that particular open source library. Your project will be the centerpiece of your resume, you have the lower level knowledge that will give you points during eng interviews, you've got an open source contribution that is actually impactful, and a war story for how you failed at some very hard problem (engineers love this). That is, go deep. Not wide. For me, the hard problem was real time sync of a multiplayer game but yours can be anything you're interested in. Drop your portfolio, call it a resume. Ditch the "wireframes" and don't even think about adding parallax to your site.

Edit: Your idea of recreating popular websites is also good though, I just think it's more of a shot in the dark. If I saw a resume with a couple projects recreating some sites, even if they're good, it just doesn't tell me much, just that you're basically competent and most companies are looking for more than that. But if I saw a re-creation of Slack or Gmail that is comprehensive and actually looks and feels like the real thing, and handles errors correctly, and handles off-line mode, and has the performance of the original and is open source ... I might just literally throw money at you.

2
bigiain 6 ago 1 reply      
What are your showcase-able skills?

Who are you intending to showcase them to?

These "junior positions" you'll be applying for - what sort of companies are they with and what sort of work are they likely to ask a new junior FEDev to do?

Since you sound like you're just starting to get this portfolio together - and it seems like it's major objective is to land you your first FE Dev role - target it like crazy at the actual roles you're applying for, keeping in mind the sort of work they'll expect and permit a first-time junior FE Dev to do.

First thought there, if you're working anywhere bigger than a startup or 3-5 person agency, you're probably not going to be asked or even allowed to "change things so you think they're improved", you're much more likely to be required to "build another page for an existing site that fits in with all the other pages - both in styles/designs, as well as using the same framework/js-libraries/css files". Example: if you're applying to an agency that runs an automaker's website, as a junior you won't be asked to redesign their flagship vehicle's page, you'll be asked to add a new model or variant to something in their mid-level or entry-level range. You'd be better off (if I were interviewing you) having something in your portfolio showing an imaginary new Corolla model that uses the existing Toyota website's css/js-includes/bootstrap-files/whatever and would _obviously_ fit in with the existing site - compared to a innovative and game-changing new marketing strategy for the top of the line Hilux or Landcruiser - because that's _not_ what junior FE Devs get asked to do...

3
technojunkie 6 ago 1 reply      
You should decide what type of front-end position you're interested to get into; the front-end is now so broad that it's tough to know everything.

First, prove you are proficient with the following:

http://learn.shayhowe.com/html-css/

http://learn.shayhowe.com/advanced-html-css/

Here's a contest today that can test your skills

https://a-k-apart.com

Learn javascript and show you know ES5 inside and out. If you already know ES6/ES2015, awesome, show that too.

Any project where you've written the code from scratch (not using Bootstrap), where you teach others what you did, will show you're on the right track.

Want to contribute to Github? Look for a language, project, framework, library you're interested in, fork the project and improve the code. Doing this regularly, every day if you can, will show you're eager to learn and contribute.

If you're already at this level, pick up a specialty. It could be templating within WordPress, .NET or Java, or it could be MVC based coding using React+Redux, Angular, or Ember. Pick your favorite from these, get super proficient and even blog about your progress.

Finally, once you've gotten this stuff done, you will set yourself apart by learning cutting edge tech like Service Workers, Offline first, progressive web apps and just about anything the Google Chrome Developers are talking about here:https://www.youtube.com/channel/UCnUYZLuoy1rq1aVMwx4aTzw

My favorite is Totally Tooling Tips (all three seasons are gold).

4
restlessdesign 6 ago 2 replies      
From a JS perspective, I would be interested to know that you understand how to make requests, parse them into a data structure, and manipulate the DOM. Preferably, one project which demonstrates that you can do this without the help of a framework; another project, that you can.
5
aaronbrethorst 5 ago 0 replies      
Things that you care about. Things that you are passionate about. Things that you can talk about literally for hours with anyone who will simply give you the opportunity.
6
rbrcurtis 2 ago 0 replies      
FWIW, I'm a hiring manager.

Build ANYTHING. It's mildly more interesting to me if you build something because you were interested in the problem, but if you take the time to learn a framework (or 3) and build something with it, you're proving to me that you are capable of learning and are interested in doing so as opposed to just showing me the projects you worked on in college.

7
natnai 5 ago 0 replies      
I was in a similar position to you just a year ago. I landed a job at a great start up as an FE dev. I think the best thing you can do to help yourself is really to build solutions with technology you're interested in. There's no point in learning framework X just because it's the hot thing in the industry -- learn to solve problems and demonstrate that you can do so with the appropriate tools. Solve problems you're interested in solving and the right job will come to you, because solving problems that interest you means you'll do it better, and companies interested in solving those types of problems will also be interested in hiring you. In view of this, I say, do whatever the hell you want, but make sure you DO IT WELL.

Always remember that developers are first and foremost problem solvers. We solve problems and code is our primary tool. That's all there is to it. No more, no less. Good companies hire problem solvers, not developers who can remember the redux and webpack docs verbatim.

8
nfriedly 5 ago 0 replies      
You could try taking a few jobs on freelance sites. The upsides are money and real-world experience. The downside is that you (usually) can't put the code on GitHub (although you can put screenshots and links in your portfolio.)

Feel free to reach out to me for advice if you try this but need a little help.

9
skraelingjar 1 ago 0 replies      
I am also just beginning to build my portfolio as a FEDev. I volunteer at a local non-profit and when I found they were having trouble finding an affordable solution so the public could fill out and sign forms digitally, I jumped at the chance to build it. Maybe you can find a way to volunteer and create something that will highlight your skills?
10
xeniak 6 ago 0 replies      
The best contributions you can make on GitHub are legitimate ones: start using various libraries etc. and when you find bugs or missing documentation, open an issue or try and provide a PR.
11
ja27 3 ago 0 replies      
Could reach out to local non-profits and offer to update their sites.
12
tootie 5 ago 0 replies      
I judge developers on their ability to solve problems. Aimless coding does nothing to satisfy that. Get some PRs approved for useful open source projects and I'll be impressed. Even adding test coverage or documentation would be good.
13
thomasedwards 6 ago 1 reply      
I would love to see how much you can do with not much at all. Anybody can make a great-looking site, but can they do it in under 500kb? See https://a-k-apart.com/ for tips!
14
m1sta_ 1 ago 0 replies      
Are you stronger as a designer or a programmer?
15
DTrejo 6 ago 2 replies      
@Calist0 You would like my book on this, it answers all three of your questions: https://gum.co/CSGETMONEY - if you're not a white man you can email me and I'll send it to you for free!

Cheers!

D

28
Ursula K. Le Guin, the emissary from Orsinia, challenges expectations loa.org
58 points by lermontov  12 ago   39 comments top 4
1
bmer 10 ago 8 replies      
For someone who was/is excited by reading the Orsinia tales: can you share why, besides just telling me repeatedly that you were excited by it?

I am someone who has been often disappointed by vaunted authors of the past (Vonnegut, Clarke, etc.), because their work suffers from the "it's not novel (anymore)" feeling one often has when watching old movies. I get that these were ground breaking when they first came out, but they don't seem to shine anymore, because I was first exposed to works that have iterated upon them...no nostalgia factor to sweeten the deal for me.

Put another way: I totally get that muskets were ground-breaking for the time, but gosh-darnit, I have seen nuclear submarines.

When reading the linked article, I felt a lot of the same: a lot of the "amazing" things I have come across in other novels, and they feel like par for the course for a good book.

Also the whole spiel about Ursula wanting to change the world, and needing a lever and a place to stand...well, she didn't change the world, in hindsight. Did she?

------------------

A really bothersome quote from the article: "By speaking from Orsinia, as its only authorized emissary, Le Guin reminds us of everything we take for granted and everything we have neglected."

What the heck does that sentence mean? I can't imagine everything I have taken for granted and neglected as a single meaningful thought or concept. In fact, if I tried to imagine everything I have taken for granted and neglected, it becomes a thought stretched so thin that it loses all meaning. It becomes nothing because it's too many things.

So am I fair in discounting this sentence (and most of the article) as fan gibberish?

2
Animats 11 ago 1 reply      
"The Ones Who Walk Away from Omelas" is perhaps her most biting short story. It's not one of the Orsinian tales, although it could have been set in that world.

Anyone who writes fantasy should read her essays in "The Language of the Night". (Also Poul Anderson's "On Thud and Blunder".)

3
jcoffland 11 ago 2 replies      
One of my favorite authors. The Dispossessed is also a fantastic read which goes far beyond the simple classification of SciFi.
29
Why bad scientific code beats code following best practices (2014) yosefk.com
243 points by ingve  13 ago   212 comments top 52
1
modeless 13 ago 10 replies      
I think there is a growing rebellion against the kind of software development "best practices" that result in the kind of problems noted in the article. I see senior developers in the game industry coming out against sacred principles like object orientation and function size limits. A few examples:

Casey Muratori on "Compression Oriented Programming": https://mollyrocket.com/casey/stream_0019.html

John Carmack on inlined code: http://number-none.com/blow/john_carmack_on_inlined_code.htm...

Mike Acton on "Data-Oriented Design and C++" [video]: https://www.youtube.com/watch?v=rX0ItVEVjHc

Jonathan Blow on Software Quality [video]: https://www.youtube.com/watch?v=k56wra39lwA

2
ThePhysicist 12 ago 2 replies      
What most people seem to forget is that "best practices" are not universal: Depending on the size and scope of the software project, some best practices are actually worst practices and can slow you down. For example, unit testing and extensive documentation might be irrelevant for a short term project / prototype while they will be indispensable for code that should be understood and used by other people. Also, for software projects that have an exploratory nature (which is often the case for scientific projects) it's usually no use trying to define a complete code architecture at the start of the project, as the assumptions about how the code should work and how to structure it will probably change during the project as you get a better understanding of the problem that you try to solve. Trying to follow a given paradigm here (e.g. OOP or MVC) can even lead to architecture-induced damage.

The size of the project is also a very important factor. From my own experience, most software engineering methods start to have a positive return-on-investment only as you go beyond 5.000-10.000 lines of code, as at this point the code base is usually too large to be understandable by a single person (depending on the complexity of course), so making changes will be much easier with a good suite of unit tests that makes sure you don't break anything when you change code (this is especially true for dynamically typed languages).

So I'd say that instead of memorizing best practices you need to develop a good feeling for how code bases behave at different sizes and complexities (including how they react to changes), as this will allow you to make a good decision on which "best practices" to adopt.

Also, scientists are -from my own experience- not always the worst software developers as they are less hindered by most of the paradigms / cargo cults that the modern programmer has to put up with (being test-driven, agile, always separating concerns, doing MVP, using OOP [or not], being scalable, ...). They therefore tend to approach projects in a more naive and playful way, which is not always a bad thing.

3
jcoffland 13 ago 2 replies      
This article is anecdotal and ranty but I will respond anyway. I've spent the last 15 years working on various projects involving cleaning up scientific code bases. Messy unengineered code is fine if only a very few people ever use it. However, if the code base is meant to evolve over time you need good software engineering or it will become fragile and unmaintainable.

That said, there are many "programmers" who apply design concepts willy nilly with out really understanding why. They often make a bigger mess of things. There is an art to quality software engineering which takes time to learn and is a skill which must be continually improved.

The claim in the article that programmers have too much free time on their hands because they aren't doing real work, like a scientist does, is obviously ridiculous. Any programmer worth their salt is busy as hell and spends a lot of thought on optimizing their time.

Conclusion, scientists should work with software engineers for projects that are meant to grow into something larger but hire programmers with a proven track record of creating maintainable software.

4
jeffdavis 12 ago 0 replies      
"try rather hard to keep things boringly simple"

Good engineering does mean keeping things boringly simple. You should only make things complex to hit a performance target, match complex requirements, or avoid greater complexity somewhere else.

Some types of complexity are subjective. If you need to parse something, bison/yacc is often a great choice; but for a simple grammar I could see how someone who doesn't know it could say it introduces needless complexity.

Programming is writing, and like all writing, you are communicating with some audience (in the case of software, it's other developers). If you lose track of who you are writing for, you'll not succeed.

5
whorleater 12 ago 5 replies      
Disclosure: I'm a recent astronomy grad who specialized in computational astrophysics. Definitely biased.

The issue is that at least for many scientists and mathematicians, mathematical abstraction and code abstraction are topics that oftentimes run orthogonal to each other.

Mathematical abstractions (integration, mathematical vernacular, etc) are abstractions hundreds of years old, with an extremely precise, austere, and well defined domain, meant to manage complexity in a mathematical manner. Code abstractions are recent, flexible, and much more prone to wiggly definitions, meant to manage complexity in an architectural manner.

Scientists often times have already solved a problem using mathematical abstractions, e.g. each step of the Runge-Kutta [1] method. The integrations and function values for each step are well defined, and result in scientists wanting to map these steps one-to-one with their code, oftentimes resulting in blobs of code with if/else statements strewn about. This is awful by software engineering standards, but in the view of the scientist, the code simply follows the abstraction laid out by the mathematics themselves. This is also why it's often times correct to trust results derived from spaghetti code, since the methods that the code implements themselves are often times verified.

Software engineers see this complexity as something that's malleable, something that should be able to handle future changes. This is why code abstractions play bumper cars with mathematical abstractions, simply because mathematical abstractions are meant to be unchanging by default, which makes tools like inheritance, templates, and even naming standards poorly suited for scientific applications. It's extremely unlikely I'll ever rewrite a step of symplectic integrators [2], meaning that I won't need to worry about whether this code is future proof against architectural changes or not. Functions, by and large in mathematics, are meant to be immutable.

Tl; dr: Scientists want to play with Hot Wheels tracks while software engineers want to play with Lego blocks.

[1]: https://en.wikipedia.org/wiki/RungeKutta_methods

[2]: https://en.wikipedia.org/wiki/Symplectic_integrator
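
To make the one-to-one mapping above concrete, here is a small sketch of a classical RK4 step in C; the names and the dy/dt = y example are illustrative assumptions, not code from any particular paper:

 #include <stdio.h>

 /* One classical fourth-order Runge-Kutta step for dy/dt = f(t, y):
    each k1..k4 line mirrors the textbook formula directly. */
 typedef double (*ode_fn)(double t, double y);

 double rk4_step(ode_fn f, double t, double y, double h)
 {
     double k1 = f(t, y);
     double k2 = f(t + 0.5 * h, y + 0.5 * h * k1);
     double k3 = f(t + 0.5 * h, y + 0.5 * h * k2);
     double k4 = f(t + h, y + h * k3);
     return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
 }

 /* Example: dy/dt = y with y(0) = 1; y(1) should be close to e. */
 static double f_exp(double t, double y) { (void)t; return y; }

 int main(void)
 {
     double y = 1.0, t = 0.0, h = 0.01;
     for (int i = 0; i < 100; ++i) { y = rk4_step(f_exp, t, y, h); t += h; }
     printf("y(1) ~= %.10f\n", y);
     return 0;
 }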

6
mcguire 12 ago 2 replies      
"Crashes (null pointers, bounds errors), largely mitigated by valgrind/massive testing"

Once upon a time I had lunch with a friend-of-a-friend whose entire job, as a contractor for NASA, was running one program, a launch vehicle simulation. People would contact her, give her the parameters (payload, etc.) and she would provide the results, including launch parameters for how to get the launch to work. Now, you may be thinking, that seems a little suboptimal. Why couldn't they run the program themselves; they're rocket scientists, after all?

Unfortunately, running the program was a dark art. The knowledge of initial parameter settings to get reasonable results out of the back end had to be learned before it would provide, well, reasonable results. One example: she had to tell the simulation to "turn off" the atmosphere above a certain altitude or the simulation would simply crash. She had one funny story about a group at Georgia Tech who wanted to use the program, so they dutifully packed off a copy to them. They came back wondering why they couldn't match the results she was getting. It turns out that they had sent the grad students a later version of the program than she was using.

Anyway, who's up for a trip to Mars?

7
sseagull 12 ago 1 reply      
His first list really, really hand-waves the problems that style of coding can cause. Just use better tools or run valgrind? It never is that simple.

One aspect of scientific coding is that it can have very long lifetimes. I sometimes work on some code > 20 years old. Technology can change a lot in that time frame. For example, using global data (common back then) can completely destroy parallel capability.

The 'old' style also makes the code sensitive to small changes in theory. Need to support a new theory that is basically the same as the old one with a few tweaks? Copy and paste, change a few things, and get working on that paper! Who cares if you just copied a whole bunch of global data - you successfully avoided the conflict by putting "2" at the end of every variable. You've got better things to do than proper coding.

Obviously, over-engineering is a problem. But science does need a bit of "engineering" to begin with.

Anecdote: A friend of mine wanted my help with parsing some outputs and replacing some text in input files. Simple stuff. He showed me what he had. It was written in fortran because that's what his advisor knew :(

Note: I'm currently part of a group trying to help with best practices in computational chemistry. We'll see how it goes, but the field seems kind of open to the idea (ie, there is starting to be funding for software maintenance, etc).

8
The_suffocated 11 ago 0 replies      
I think some of the author's criticisms are misplaced.

Long functions: Yes, functions in scientific programming tend to be longer than your usual ones, but that's often because they cannot be split into smaller functions that are meaningful on their own. In other words, there's simply nothing to "refactor". Splitting them into smaller chunks would simply result in a lot of small functions with unclear purposes. Every function should be made as small as possible, but not smaller.

Bad names: The author gives 'm' and 'k' as examples of bad variable names. I think this is a very misplaced criticism. Unless we are talking about a scientific library, many scientific programs are just implementations of some algorithms that appear in published papers. For such programs, the MAIN documentation is not in the comments but in the published papers themselves. The correct way to name the variables is to use exactly the symbols in the paper, but not to use your favourite Hungarian or Utopian notations. (Some programming languages such as Rust or Ruby are by design very inconvenient in this respect.) As for long variable names, I think they are rather infrequent (unless in Java code); the author was perhaps unlucky enough to meet many.

9
mamon 13 ago 3 replies      
This is so true:

"Many programmers have no real substance in their work the job is trivial so they have too much time on their hands, which they use to dwell on "API design" and thus monstrosities are born"

It also explains proliferation of "cool" MVC and web frameworks, like Node.js, Angular, React, Backbone, Ember, etc.

10
adrianratnapala 12 ago 1 reply      
Mostly I agree, bad naive code is better than bad sophisticated code.

Also science very frequently only requires small programs that are used for one analysis and then thrown away. It's OK to have a snarl of bad Fortran or Numpy if it is only 400 lines long.

BUT: scientific projects are often (in my old field, usually) also engineering projects. Such experiments are complex automated data gathering machines (hardware) and take roughly similar data runs tens of thousands of times.

There should be some engineering professionalism at the start to design and plan such a machine. Especially the software, since it is mostly a question of integrating off-the-shelf hardware.

But PIs think:

(A) engineering is done most cheaply by PhD students -- a penny pinching fallacy.

(B) that their needs will grow unpredictably over time.

B is true, but it actually is a reason to have a good custom platform designed at the start, so that changes are less costly. Your part-time programmer is going to develop many thousands of lines of code no one can understand or extend. (I've done it, I should know.)

11
ska 13 ago 0 replies      
I believe this post is fundamentally misguided, but I can see how the author got there. In fact I see it as a sort of category error. When you talk about a style of programming being "good" or "bad", I always want to ask "for what?". I wonder if the author has thought about what would happen if everyone adopted the "scientific" style they are alluding to.

Most of what the author describes as the problems of code generated by scientist are what I would call symptoms. The real problems are things like: incorrect abstractions, deep coupling, overly clever approaches with unclear implicit assumptions. Of course this causes maintenance and debugging to be more difficult than it should but the real problem is that such code does not scale well and is poor at managing complexity of the code base.

So long as your code (if not necessarily its domain) is simple, you are fine. Luckily this describes a huge swath of scientific code. However, system complexity is largely limited by the tools and approaches you use ... all systems eventually grow to become almost unmodifiable.

The point is, this will happen to you faster if you follow the "scientific coder" approaches the author describes. Now it turns out that programmers have come up with architectural approaches that help manage complexity over the last several decades. The bad news for scientific coders is that to be successful with these techniques you actually have to dedicate some significant amount of time to learning to become a better programmer and designer, and learning how to use these techniques. It also often has a cost in terms of the amount of time needed to introduce a small change. And sometimes you make design choices that don't help your development at all. They help your ability to release, or audit for regulatory purposes, or build cross-platform, or ... you get the idea. So these approaches absolutely have costs. You have to ask yourself what you are buying with this cost, and do you need it for your project.

The real pain comes when you have people who only understand the "scientific" style already bumping up against their systems' ability to handle complexity, but doubling down on the approach and just doing it harder. Those systems really aren't any fun to repair.

12
raverbashing 13 ago 1 reply      
It's an interesting discussion, and as the article points out, "Software Engineer" code has some issues as well

There's also an issue that code ends up reflecting the initial process of the scientific calculation needed, which might not be a good idea (but if you depart from that, it causes other problems as well)

Also, I'm going to be honest, a lot of software engineers are bad at math (or just don't care). In theory a/b + c/b is the same as (a+c)/b; in practice you might be near some precision edge that you can't deal with directly and hence you need to calculate this in another way

Try solving a PDE in C/C++ for extra fun
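
A small sketch of that precision point in C (the random inputs and the exact != comparison are arbitrary choices for illustration): the two algebraically identical expressions can round to different doubles.

 #include <stdio.h>
 #include <stdlib.h>

 /* Count how often a/b + c/b differs from (a+c)/b in double precision.
    The expressions are equal algebraically, but each division and the
    addition round separately, so results can differ in the last bits. */
 int main(void)
 {
     srand(42);
     int mismatches = 0;
     const int trials = 1000000;
     for (int i = 0; i < trials; ++i) {
         double a = (double)rand() / RAND_MAX;
         double c = (double)rand() / RAND_MAX;
         double b = (double)rand() / RAND_MAX + 1e-3;  /* keep b away from zero */
         if (a / b + c / b != (a + c) / b)
             ++mismatches;
     }
     printf("%d of %d trials differ\n", mismatches, trials);
     return 0;
 }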

13
joseraul 12 ago 0 replies      
In his excellent book [1], Andy Hunt explains what expertise is with a multi-level model [2], where a novice needs rules that describe what to do (to get started) while an expert chooses patterns according to his goal.

So, "best practices" are patterns that work in most situations, and an expert can adapt to several (and new) situations.

[1] https://pragprog.com/book/ahptl/pragmatic-thinking-and-learn...

[2] https://en.wikipedia.org/wiki/Dreyfus_model_of_skill_acquisi...

14
dibanez 13 ago 0 replies      
I'm 80% "software engineer" and 20% "researcher" and have to play both roles to write supercomputer code (I'm the minority, most peers are more researchers). These issues are important right now, as the govt is investing in software engineering due to recent hardware changes that require porting efforts. We recognize the pitfalls of naive software engineering applied to scientific code, and would like to do things more carefully. I don't think we should have to choose one or the other; with proper communication we can achieve a better balance.
15
nolemurs 10 ago 0 replies      
The title of this article should really be "Why bad scientific code beats bad software engineer code."

It contrasts a bunch of bad things scientific coders do, and a bunch of bad things bad software engineers do. There's no "best practices" to be seen on either side.

16
gcc_programmer 7 ago 0 replies      
I am sure that the person who wrote this article did it for a reason and has been frustrated by "programmers". However, this is very anecdotal and, to be honest, doesn't deserve more than a mere acknowledgement - yes, blindly applying software practices and adding more indirections is not always good, but creating robust, maintainable, non-ad-hoc software requires abstractions, indirections, and programmers.
17
pyrale 12 ago 0 replies      
The article overlooks a massive source of problems: the problems he describes in engineers' code usually start to become annoying at larger scale. The problems he describes in scientists' code rarely happen at scale, because it can't be extended significantly. I feel it's weird to compare codebases that probably count in the thousands, and codebases that count in the hundreds of thousands or millions of lines of code.

Also it is worth noting that every single problem he has with engineers' code is described at length in the literature (working effectively with legacy code, DDD blue book, etc). Of course, these problems exist. But this is linked to the fact that hiring bad programmers still yields benefits. I believe this is not something that we can change, but if the guy is interested in reducing his pain with crappy code, there are solutions out there.

18
xapata 12 ago 0 replies      
The meat is in the footnote, as always.

> (In fact, when the job is far from trivial technically and/or socially, programmers' horrible training shifts their focus away from their immediate duty - is the goddamn thing actually working, nice to use, efficient/cheap, etc.? - and instead they declare themselves as responsible for nothing but the sacred APIs which they proceed to complexify beyond belief. Meanwhile, functionally the thing barely works.)

It seems the author has been plagued with programmers who avoid taking responsibility. One strategy for creating job security is to build a system too complex for anyone else to maintain it. Perhaps the author's colleagues are using this strategy.

It's hard to take complaints about "best practices" seriously when the practices described are not best.

19
Rainymood 12 ago 1 reply      
I recently followed a course on "Principles of programming for Econometrics" and although I knew a lot about programming already I learned a lot about being structured and documentation. The professor ran some example code which he wrote 10 years ago! He wasn't really sure what the function did again and BAM it was there in the documentation (i.e. comment header of the function).

I used to just hack stuff together in either R or Python but that course really got me thinking about what I want to accomplish first. Write that down on paper. And then and only then after you have the whole program outlined in your head start writing functions with well defined inputs and outputs.

20
cdevs 12 ago 0 replies      
I know a lot of math majors thrown into c++ jobs that write unreadable code almost forgetting they are allowed to use words and not just single letters (though they would probably be fine in the functional programming scene). There's a learning curve either way, write like your co-workers unless you have the experience to know your co-workers suck.
21
thearn4 12 ago 0 replies      
Working in this area (and coming from a math background), the biggest issues that I have with most scientific and engineering code are:

1) lack of version control

2) lack of testing

Everything else (including the occasional bad language fit) is usually a distant 3rd.

22
taeric 9 ago 1 reply      

 > Simple-minded, care-free near-incompetence can be
 > better than industrial-strength good intentions
 > paving a superhighway to hell.
Love this line.

I think the thing about bad scientific code that makes it good is that you can often get really good walls around what goes in and what comes out. To the point that you can then mitigate the danger of bad code to just that component.

Software architects, on the other hand, often try to pull everything in to the single "program" so that, in the end, you sum all of the weak parts. All too often, I have seen workflows where people used to postprocess output data get pulled into doing it in the same run as the generation of the data.

23
lilbobbytables 10 ago 2 replies      
> Long functions

This isn't the worst thing. As long as it gets refactored when there is a need for parts of that function to be used in multiple places.

> Bad names (m, k, longWindedNameThatYouCantReallyReadBTWProgrammersDoThatALotToo)

I can live with long winded names; while slightly annoying, they at least still help with figuring out what's going on.

What I can't stand are one or two letter variable names. They're just so unnecessary. Be mildly descriptive and your code becomes so much easier to follow, compared to alphabet soup.

What annoys me about stuff like this is that it just feels like pure laziness and disregard for others. Having done code reviews of data scientists, I find they just don't want to hear it. They adamantly don't care - compared to my software engineer compatriots who would at least sit there and consider it.

But this is just my own anecdotal experience.

24
BurningFrog 12 ago 1 reply      
I think the article makes a decent case for "simple bad code" for small projects. In a bigger project, this approach collapses, but in small to medium sized ones, you can do fine, and the ugliness of the code is "shallow", as I like to call it. That is, the problems are local and done in simple straightforward ways.

The "software engineer" code he describes sounds like the over engineered crap most of us did when getting out of the clever novice stage and learned about cool and sophisticated patterns which we then applied EVERYWHERE.

I guess some never come out of that phase, but the code of real master programmers is simple and readable, only uses complex patterns when truly needed, and has no need to show off how clever the author is in the code.

You know, the people who made it necessary to invent "POJO" (http://www.martinfowler.com/bliki/POJO.html).

25
firethief 13 ago 0 replies      
The inexperienced-CS-grad errors he describes are a maintenance nightmare, but those non-programmer errors cast a lot more doubt on the accuracy of the results. The importance of correctness depends on the problem I guess.
26
okket 13 ago 1 reply      
Previous discussion: https://news.ycombinator.com/item?id=7731624 (2 years ago, 168 comments)
27
ef4 9 ago 0 replies      
This is describing two stages in the growth of programmer skill.

The researchers are at beginner stage and make classic beginner-stage mistakes. The developers are at intermediate stage, and they make classic intermediate-stage mistakes.

There is a later stage of people who can avoid both, but the author probably hasn't worked with anyone in that stage. Which is not surprising, because once you're that experienced there are big financial incentives to get out of academia.

28
collyw 13 ago 0 replies      
The examples he gives seem like using complex features of programming languages for the sake of it rather than best practices.
29
overgard 12 ago 0 replies      
I think the fundamental problem is that programmers have been taught that "abstract = good" in all things.

How often do you hear someone say they "abstracted" a piece of code or "generalized" it, without anyone asking why? Or how often do people "refactor" things by taking a piece of code that did something specific, and giving it the unused potential to do more things while creating a lot of complexity? The problem with "abstracting" things is it means behaviors that were previously statically decidable can now only be determined by testing run-time behavior, or the key behaviors are now all driven from outside the system (configuration, data, etc.)

Also by making things more flexible, your verbs suddenly become a lot more general and so readability suffers.

Kind of an aside, but whenever I see code where a single class is split into one interface and one "impl" I've taken to calling it code acne (because Impl rhymes with pimple). If you're only using an interface for ONE class it's a huge waste of time to edit two files! The defense is always something like "well what if we need a mock version for tests". Fine, write the interface when you actually do that.

30
bastijn 12 ago 0 replies      
This article itches me on so many levels. It is not wrong directly but it is definitely not the truth either. I expected more from someone who claims to be a scientist.

The main issue I have with the piece is the oversimplification of the equation to such an extent that important variables of the equation are removed without mention or explanation of their removal.

An example would be project size. Yes for FizzBuzz globals are fine probably and FizzBuzz enterprise shows beautifully that overengineering is a thing (https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...). All of the authors statements would hold here. We all agree and smile. But the same architectural choices make sense in many large enterprise projects. Take the comment on large number of small files for example. This gives less merge conflicts (amongst many other things). Yes you working alone on your tiny project won't notice but try working in a single large file with 100 devs committing to it. Good luck with the merge conflicts! Large methods? Same issue. Everybody has to change code in that one method, merge conflict. Inheritance? Nice thing if you build an sdk for others to use and want to hand them a default base version. They can extend and override your virtual methods to get custom behaviour. No code duplication which you have to maintain and keep in sync! Wow!

Next up I would like to address the difficult naming. Everybody nodding that that was bad. Nice to write it down. However, from a scientist I would expect a disclaimer that this was based on personal experience with programmers and not the ground truth for programmers, or cite a credible source. I'd say there is only a small fraction who do that. Disclaimer on this is that both programmer and scientist should work together if one side does not understand the naming conventions for the project.

Simple-minded care-free can give you a prototype which is the scientist job. Enterprise programmers (who often are computer scientists) give you your product.

Tl;dr stop comparing apples and oranges. Or as a true scientist at least describe context and omittance of various variables. O, and share your gdamn ugly code so we don't need to read your papers and implement it ourselves from scratch. That's the true waste here ;).

31
levbrie 12 ago 0 replies      
I think you can reframe this debate pragmatically and widen its applicability significantly: At what point is "bad" code more effective than the alternatives. If you get down into a debate about "best practices" you'll have to concede that anyone writing the code the author is talking about might be using "best practices" in some explicit way, but isn't "following best practices", which are designed to avoid precisely the difficulties he outlines. On the other hand, it's true that most code out there is bad code, and that heavily architecting a system with bad code can be even more of a nightmare than more straightforward bad code. The real question is, when should scientists favor bad code? I'm a huge fan of best practices and of thoughtful and elegant coding, but I could see an argument being made that in most circumstances, scientific code is better off being bad code, as long as you keep it isolated. I'd love to see someone make that argument.
32
zby 11 ago 0 replies      
OK - not to defend all professional programmers - but it seems quite reasonable that perhaps the tasks where people are hired specifically to write code are perhaps bigger and more complicated than programming tasks that are completed by people who do programming only as a small part of their job.
33
dankohn1 10 ago 0 replies      
They're talking about a different kind of best practices, but I highly recommend taking a look at the Core Infrastructure Initiative's Best Practices Project [0], which was created partially in response to the Heartbleed disaster. It's a list of 66 practices that all open source software, including scientific software, should be following.

[0] https://github.com/linuxfoundation/cii-best-practices-badge/...

(Disclosure: I'm a co-founder of the project. It's completely free and open source, and the online BadgeApp itself earns the best practices badge.)

34
makecheck 8 ago 0 replies      
Remember that algorithms, data structure design and API experience are also crucial parts of coding. These are not necessarily things that will be learned by iterative hacking.

Scientific data sets can be huge, and there are all kinds of ways to write code that doesnt scale well.

If the scientific code is trying to display graphics, then you really have to know all the tricks for the APIs you are using, how to minimize updates, how to arrange data in a way that gives quick access to a subset of objects in a rectangle, etc.

35
engine_jim 10 ago 0 replies      
This is a debate I engage in often. You can write "prototype" code to solve an "algorithmic" or "scientific" problem and it can be sloppy, but if you are planning on integrating it into a large project your team will run into problems unless the code is extremely contained.

It's true that there is a growing rebellion against best practices and design patterns, and I think in many cases some practices are dogmatic. However, the part that disturbs me is that inexperienced programmers are using it as an excuse to not apply basic principles they don't understand in the first place.

I've seen experienced software engineers that are lazy and spend more time criticizing the work of others than actually producing anything themselves, and I've seen novices that have poor fundamentals but grind for weeks to solve difficult "scientific" problems albeit with horrendous code that proves to be not maintainable in the long run. I find that in the latter case (I'll call them "grinders"), the programmer takes much longer to solve their problem because they have such limited coding experience (I've been asked many times to help debug trivial problems that result from not understanding basic concepts like how recursion works).

The author of this article does a good job at identifying the characteristics of this low quality "scientific" code, especially that it uses a lot of globals, bugs from parallelism, and has other bugs and crashes that are not understood. The author seems to insinuate that testing is the way to mitigate the bugs and crashes, this is partially true but it's better to write code you understand in the first place instead of relying on testing to fix everything so you don't continually introduce new bugs.

Grinders can benefit from understanding best practices and learning programming and computer science fundamentals. That way they can make their code more robust, code faster, and truly understand when they should and shouldn't apply a best practice. Software engineers can improve by matching the work ethic of the grinders and explaining where the grinders are making mistakes.

36
hyperion2010 10 ago 0 replies      
> the products of my misguided cleverness.

To me this is the take home. For a long time I would try to find clever solutions to problems, or just try to be clever in general, and it is not just other people but your own future self that has to deal with it. This also applies to other parts of academic life as well such as grant writing. Code is also about communication with other people and if you are clever then you had better be able to explain your cleverness in a way others can understand. KISS.

37
mwest 7 ago 0 replies      
Flaws uncovered in the software researchers use to analyze fM.R.I. data: https://news.ycombinator.com/item?id=12378791

Wonder whether or not the software followed "best practices"...

38
fitzwatermellow 12 ago 0 replies      
Might have been true a decade ago. When simulations performed on a laptop in Matlab were enough for dissertation quality research. But data set size has exploded. If you are currently in school, learn how to move your research to the cloud, and learn some best cloud practices. Best prep for the future to come. And if you decide to leave academia you can possibly nab an interview at Netflix ;)
39
alecbenzer 11 ago 1 reply      
From the comments:

> Of course, design patterns have no place in a simple data-driven pipeline or in your numerical recipes-inspired PDE solver. But the same people that write this sort of simple code are also not the ones that write the next Facebook or Google.

> post author: Google is kinda more about PageRank than "design patterns".

wut

40
Smaointe 9 ago 0 replies      
It's because the programmers aren't involved in the science being undertaken. They're put in a position where they are just programming for programming's sake
41
cerisier 12 ago 0 replies      
Not sure how this article brings constructive critique... Comparing the hardly avoidable issues brought by specific scope and priorities of scientific work vs dumb "bad practices" has little value to me...
42
mkagenius 12 ago 1 reply      
> so they have too much time on their hands, which they use to dwell on "API design" and thus monstrosities are born.

In free time, they mostly go for refactoring the code, don't they?

43
Sylos 11 ago 0 replies      
Well, yeah, because Software Engineers are trained for building large projects and those "best practices" are aimed at exactly that, too.

Long functions, bad names, accesses all over the place and using complex libraries, those are errors which are acceptable at a small scale, but become horrendous when you build a larger project.

Many abstraction layers and a detailed folder structure, those might add a lot of complexity in the beginning, but there's not much worse than having to restructure your entire project at a later date.

44
moron4hire 12 ago 0 replies      
The worst thing to ever happen to "best practices" was when managers found out about them. Suddenly, we were not allowed to think for ourselves and solve the problem at hand, we also had to figure out what "best practice" to use to implement our solution.

And it's not like you can argue against "best practices". They're the "best" after all. So that makes you less than best, to oppose them!

45
hifier 12 ago 0 replies      
This person has obviously never worked on a project of any scale. See where your ad-hoc practices get you when you have millions of LOC.

Can we all agree that there is good code and bad code and the difference between the two is often contextual, then move on. Geez.

46
darksky13 12 ago 0 replies      
As someone who feels like I always complain about quality, I feel like I don't know how to actually write quality code. All code eventually turns into a nightmare. A lot of the code I see by coworkers and myself is super hacky. I really wonder if we're all just terrible programmers or if that's the natural evolution of code.

Apart from having a mentor, what are the best ways to learn about code quality? Books to read for example that I can then use to look at my own code and fix it? I really have no idea when making decisions what ends up being the best over the long run.

47
p4wnc6 11 ago 0 replies      
This is a mess of an essay and does little to persuade me that allowing domain experts to have free reign to make software messes is in any way a good idea.

One of the criticisms applied to software engineers -- the one about bad abstractions like "DriverController" and "ControllerManager" etc. -- is a huge pet peeve of mine because it's basically a manifestation of Conway's Law [0]. It indicates that the communication channels of the organization are problematically ill-suited for the type of system that is needed. The organization won't be able to design it right because it is constrained by its own internal communication hierarchy, and so everyone is thinking in terms of "Handlers" and "Managers" and pieces of code literally end up becoming reflections of the specific humans and committees to which certain deliverables are due for judgement. This is not a problem regarding best practices at all -- it's a sociological problem with the way companies manage developers.

Domain specific programmers aren't immune to this either. You'll get things like "ModelFactory" and "FactoryManager" and "EquationObject" or "OptimizerHandler" or whatever. It's precisely the same problem, except that the manager sitting above the domain-specific programmers is some diehard quadratic programming PhD from the 70s who made a name by solving some crazy finite element physics problem using solely FORTRAN or pure C, and so that defines the communication hierarchy that the domain scientists are embedded in, and hence defines the possible design space their minds can gravitate towards.

There is definitely a risk on the software development side of over-engineering -- I think this is what the essay is getting at with the cheeky comments about too much abstraction or too much tricky run-time dispatching or dynamic behavior. But this is part of the learning path for crafting good code. You go through a period when everything you do balloons in scope because you are a sweaty hot mess of stereotyped design ideas, and then slowly you learn how only one or two things are needed at a time, how it's just as much about what to leave out as what to put in. The domain programmers who are given free reign to be terrible and are never made to wear the programming equivalent of orthopedic shoes to fix their bad patterns will never go through that phase and never get any better.

[0] < https://en.wikipedia.org/wiki/Conway%27s_law >

48
CyberDildonics 11 ago 0 replies      
This has nothing to do with scientific programming and everything to do with "best practices" being mind blowingly awful. Coupling execution and data is good for data structure initialization, cleanup, and interface. Everywhere else they should just be kept separate. Data structures should be as absolutely simple as possible, not as full and generic as possible.

Where people get into trouble many times is thinking that every transformation or modification of data should be inside one data structure or another, when really none of them should be except for a minimal interface.

49
shitgoose 10 ago 0 replies      
I know what you mean by 'messy scientific code', hairy stuff. Deal with it almost on a daily basis. 10 element tuples, weird names etc. Makes you wanna puke at the beginning. But then, as I get to understand what they are trying to say (i.e. Business Purpose) things get easier. Somehow I remember what 6th element in the tuple is and where approximately in 2000 LOC function should I look for something. BUT... When it comes to 'properly engineered' piece of infrastructure OOP shit filled with frameworks and factories, I have no idea. No matter how hard I try I cannot remember nor understand what the fuck are they trying to say. My guess, this is because they have got nothing to say, really.
50
parenthephobia 11 ago 0 replies      
I'd love to hear about the tools which almost completely mitigate parallelism errors.

The author's list of things that are wrong with "software engineers"' code is 50% "things that are just language features" and 50% "bad ways to use language features that nobody thinks is best practice in software engineering".

Part of the irony is that lot of the more hairy software engineering techniques that he decries are used by the people writing platforms and libraries that scientist programmers use, to make it possible for their "value of everything, cost of nothing" code to actually run well.

There is a big difference in attitude between scientist programmers and software engineers.

Often, a scientist already has the solution to the problem, and is just transcribing it into a program. The program doesn't need to be easy to understand in isolation, because a scientist doesn't read programs to understand somebody else's science, she reads the published peer-reviewed paper. After all, if you wanted to understand Newtonian dynamics, you wouldn't start by reading Bullet's source, even if it's very well written. (I don't know if it is.)

Conversely, for a software engineer the program is a tool for finding the solution. Even though they're in a scientific field, if it's accurate to call them software engineers they'll be from a background where the program itself is the product, rather than the knowledge underlying the program.

51
jlarocco 11 ago 0 replies      
Both sides of this argument are correct because both sets of practices are used for different purposes.

A mid-sized or large software project (say 100k+ LOC) with single letter variables all over, global variables, etc. would be an absolute maintenance nightmare. So the software engineering perspective is correct there. And in large projects it really is helpful to split projects up into multiple directories, use higher level abstractions, etc.

At the same time, most scientific code bases are not in that category. They don't have dozens (or hundreds) of people working on them, they're not going to be expanded much beyond their original use case, and they're mostly used by the people writing the code and/or a small group around those people.

52
DanielBMarkham 13 ago 2 replies      
This is funny because the author is exactly right, but I think he's misidentified the poor coders. The folks he's complaining about are academic coders without a lot of commercial experience, which tend to make all of those errors.

He also nails it when he says "idleness is the source of much trouble"

In the commercial world, you code to do something, not just to code (hopefully). So you get in there, get it done right, then all go out for a beer. You don't sit around wondering if there's some cool CS construct that might be fun to try out here (At least hopefully not!) Clever code is dangerous code.

Good essay.

30
Decoding the Civil War zooniverse.org
51 points by Hooke  15 ago   3 comments top
1
dmix 10 ago 2 replies      
So, they are looking for free labour from history nerds?

I'd be curious to know more about the ROI here. What they hope to gain. Or even just examples of what decoding this will offer society's understanding of the war that we don't already know.

       cached 29 August 2016 07:02:02 GMT