Hacker News with inline top comments - 30 Aug 2016
1
California bans ITT tech from accepting new students latimes.com
113 points by emeraldd  2 ago   84 comments top 18
1
crazy1van 1 ago 2 replies      
I agree with a lot of the complaints I've heard about ITT. They seem to broadly fall into three camps:

1) They are expensive and rely on their students getting gov't loans,

2) They don't adequately prepare students for the work place, and

3) The teachers aren't very good.

I think those are all great reasons to not go to ITT. But, I think those reasons could also apply to a whole lot of other schools, profit, non-profit, private, and public.

Certainly, I think #1 applies to a whole lot of schools, and #2 applies to any school that has majors with poor job prospects. And I think nearly every college student has experienced #3 with at least some of their professors -- sometimes because they don't know the material well and other times because they are completely uninterested in the teaching aspect of their job.

2
rdtsc 1 ago 5 replies      
Are there any good ITT / University of Phoenix type schools? They all seem like scams.

I don't count community colleges here; I know those can be very good. My wife went to one, got a great education, and was able to pay for it just by budgeting money every month out of her part-time work. Then she switched to a four-year university, transferred credits, and graduated with honors after two more years (with minimal loans).

Would it be hard for any of these for-profit schools to also do a good job?

3
electic 1 ago 7 replies      
What about coding bootcamps? Aren't they worse? They try to make you a "coder" in a couple of months and promise people jobs if they finish the course. It sounds eerily similar to ITT...
4
jorts 1 ago 2 replies      
At my last company we had a slew of ITT graduates apply for a role at one time. I talked to probably 10 or so of them. They seemed to have received little to no educational value from their time at ITT. I assume they were all promised great jobs, but none of them was remotely qualified for even a basic entry-level technical role. I felt terrible for them, paying money for the "education" that they received.
5
WhitneyLand 6 ago 0 replies      
Btw to avoid confusion I've seen happen when a new guy shows up in the workplace: ITT != IIT

ITT = ~University of Phoenix, everyone gets in

IIT = Elite programs in a variety of sciences, almost no one gets in

6
imh 1 ago 1 reply      
I think this is a fantastic step in the right direction, but what is the legal justification here? ITT is in no way alone in being evil and predatory, so I'm surprised I keep hearing about crackdowns for them but not for anyone else. I'd love to see some new rules to keep all of these predatory places from operating instead of what kinda seems like picking out a scapegoat. (Or is all this recent news more general, and everyone just mentions ITT?)
7
gdwatson 1 ago 6 replies      
Does anyone have references to more specific accusations against ITT? There are lots of vague claims in the article, but the only semi-specific ones -- misleading students about program quality and pushing them into irresponsible loans -- could just as easily be laid at the feet of public and private nonprofit colleges.

I have no particular reason to trust or distrust ITT. But it strikes me as a trade school that presents itself as a college, and that seems like one viable approach to our credentialism issues, so I want to know if it's being attacked for legitimate or political reasons.

8
grej 36 ago 0 replies      
ITT seems like an easy first target because of their shady rep, but I think the risk here is that well-entrenched interests will ultimately give bootcamps and other online training courses the same treatment if they start eating into the educational establishment's pie too much. We have to be careful of a slippery slope.
9
beenfired 16 ago 0 replies      
According to the ITT Tech Website they have now stopped accepting new enrollments nationwide.
10
wallace_f 24 ago 0 replies      
This is just the tip of the iceberg with problems in education, though. Let's not act like higher education in the US is exemplary.
11
electriclove 15 ago 0 replies      
Bravo, now let's ban the rest of the fake schools!
12
roymurdock 1 ago 0 replies      
For those interested you can find reactions from students, teachers, trolls, and the peanut gallery here:

https://www.thelayoff.com/itt-educational-services

On the front page of the website you'll notice that Education Management Corp, University of Phoenix, Corinthian, Zenith Education, etc. are receiving an increased amount of attention.

13
serg_chernata 1 ago 2 replies      
Does anyone know which other national institutions something like this may affect?
14
h4nkoslo 53 ago 0 replies      
One aspect of ITT and its ilk is that the students often only sign up so they can take out "education" loans for living expenses, with the class work as a very secondary concern.
15
twblalock 29 ago 0 replies      
I hope DeVry is next.
16
beedogs 1 ago 0 replies      
Great. Now how will I learn TV/VCR repair?
17
cloudjacker 1 ago 0 replies      
wow, that's horrible news for all those motivated people who "were the first in their family to go to college"
18
trengrj 1 ago 1 reply      
This site is absolutely unusable without an ad blocker..
2
How I Built a Custom Camper Van (2015) syntheti.cc
530 points by pvsukale3  10 ago   247 comments top 71
1
grecy 9 ago 9 replies      
I did something similar.

I wanted a vehicle I could explore the world with, so I turned my Jeep into a house on wheels with fridge, drinking water and filtration, solar and dual batteries, interior cabinets and a custom modified pop-up roof so I can stand up and walk around in the Jeep.

I joked about applying for a home owners grant :)

The full pictures and story are in this album - http://imgur.com/a/OLK3o

I'm driving it around Africa now.

EDIT: I'm a Software Engineer too, and I decided there is more to life than sitting at a desk - a few years back I drove Alaska->Argentina, now it's around Africa for 2 years.

EDIT2: I've hit my posting limit.

Yes, I'm still alive!

Follow along if you want to see if I stay that way!

Facebook: https://facebook.com/theroadchoseme

Instagram: https://www.instagram.com/theroadchoseme

Twitter: https://twitter.com/dangrec

YouTube: http://youtube.com/c/theroadchoseme

And my website: http://theroadchoseme.com

2
patcheudor 6 ago 2 replies      
A couple comments for your own safety and the safety of the vehicle:

1) Those batteries should be in battery boxes. You can find them at any marine supply store. Note that for boats where batteries are commonly stored like you have them there, it's the law. For RV's it's a good practice and may be required by some insurers and in some states.

2) H2S, also known as hydrogen sulfide: it's explosive, and it's possible for even the best sealed batteries to have a problem whereby H2S is released. If those batteries have vent ports, you need to ensure they are connected to a vent tube that runs out of the vehicle. If they don't have vent tubes, don't assume they won't vent. I run sealed batteries in my boat, and it came with an H2S detector connected to a bilge ventilator. If the H2S detector senses a build-up of the gas, it sets off an audible alarm and kicks the ventilator on. I've seen the aftermath of battery compartment explosions. Trust me, it's not something you want to experience. The cheapest option here is to get batteries which allow for the connection of a vent tube.

UPDATE: here's a decent article on the issue with a picture of a vented battery box (I didn't know those were a thing - cool!):

http://blog.rvshare.com/6-things-need-know-rv-battery-box/

UPDATE #2: just went out and looked at my boat. This is what's in the battery compartment attached to the bilge fan:

https://www.zoro.com/macurco-fixed-gas-detector-h2s-4-12inhx...

3
gnarcoregrizz 8 ago 1 reply      
This resonated with me: "Life is easy. Humans are fucking badass -- we absolutely dominate our environment and are so smart and powerful."

I really understood that in the desert in Utah, where I got the feeling that I wasn't supposed to be there, far away from any semblance of civilization, but there I was surviving just fine with the help of our machinations.

I bought my RV for the same reasons you did, and it's a perfectly comfortable home... a home that goes 80mph! I've been to almost every state now, and lived on hilltops with "million dollar" views, been in the desert under the stars, and worked from deep in the rainforest in the Pacific Northwest, all for less money than the rent for my apartment was. We can live comfortably for about a week completely off the grid. I would have bought a smaller, more offroad-capable van, but I live in it with my fiancé, so that was untenable.

I don't know how long you've been doing it, but there are definitely stressors and downsides that accompany the lifestyle. My RV was broken into once and I had everything stolen, and since then I've been constantly on edge when away from my vehicle, so I often wish it looked beat to shit, to deter people from messing with it. Also, staying in parking lots sucks and is sad if you're doing it for any extended period of time. I definitely miss a sense of community and permanence, but it's been a great journey!

4
jdpigeon 9 ago 2 replies      
This would have appealed to me about two years ago, but not that much anymore, and I'm still close to a decade away from paying off my student debt.

I'm more interested in 'settling down' and 'getting to work' these days, realizing that my sense of personal success is mostly dependent on quality relationships, productivity, and a sense of community belonging. Now, I've done my fair share of living life on the road, and I always enjoyed the experience, but just like the comedown from a psychedelic drug high I was always grateful at the end to be back home squared away in my "real world."

My issue is not with the self-determinism or the low-impact tiny house living, just with the transience of it. Is he certain that he'll be able to be productive working out of the back of a van or in random cafes around the country? What about stimulating interactions with colleagues? Girlfriend??

5
jordanlev 8 ago 4 replies      
I absolutely loved reading this. I liked how he went into it cautiously, testing out whether he could get by with a small fridge, small bed, less possessions, etc. And I also appreciate the web page design itself -- one long vertically-scrolling piece, very easy to read through!

One thing I find ironic though is the attitude towards other people who make a different decision about the worth of a home and the mortgage. Does he not realize that his van was only possible because his parents owned a home, raised him there, and let him park the van in their carport for 40 days while building it out?

6
aresant 9 ago 5 replies      
His "Can I live without my precious possessions?" answer is the most engineering LOL thing I've read all day:

"Pile up my crap. Anytime I need something in the pile, take it out of the pile and save it for later. Monitor usage."

Thank you for posting this.

7
syntex 13 ago 0 replies      
I am really jealous; it's my unfulfilled dream. But right now it's kind of difficult to sustain such a life with a wife and little kid. Offtopic: the guy would like to write games, yet he spent the first 6 months writing his own programming language, then some time writing his own scripting language, "sink" (why not Lua?). I would love to hear from the author what the motives were for writing all these tools.
8
jws 6 ago 0 replies      
Great article, small technical issue:

First, cut all the 0 AWG wire. Why 0 AWG? Because I had a 1500 watt inverter, which meant I could be pulling 150 amps (1500 watts output / 120 volts output * 12 volts input = 150 amps input).

That's nearly the right answer, but "watts / volts * volts" is not going to end in "amps" as an answer. I'd suggest: 1500 watts / 12 volts = 125 amps.
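(A worked version of that correction, with one assumption of mine that neither the article nor the comment states - a typical inverter efficiency of roughly 85%. The input current is the output power divided by the efficiency times the input voltage:

    I_in = P_out / (eta * V_in) = 1500 W / (0.85 * 12 V) ~ 147 A

which suggests the author's 150 amp figure, and hence the 0 AWG wire, was about right in practice even though the formula he gave was dimensionally off.)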

I also wholeheartedly agree with him on statements like "By far the most beautiful place I've driven through has been the drive from Butte, MT to Idaho Falls, ID." I drive across most of the country twice a year. I avoid interstate highways. The evening routine is to look at satellite imagery for interesting terrain, look at something like Panoramio to see where people take pictures and of what, then piece together some travel for the next day. Pull over and take a mini-hike if anything looks interesting.

9
cko 7 ago 2 replies      
I'm a full-time pharmacist working 50 hours a week, with several investment rental properties.

Since April of this year I've been sleeping in my 2002 Toyota 4Runner in the parking lot at work. Shower at the gym, infrequent laundry runs, hang out all day at the library with all the other strange people. Pros: feeling of simplicity and freedom; enough said. Cons: a mid-sized SUV is too small and not private enough. I want privacy when I first wake up and put on my contacts and get dressed. I want to wake up, sit up and meditate for 30 minutes without anyone seeing me.

I'm getting a Ford E-150 van for $1500 next week. Going to put in hardwood flooring, maybe insulation and plywood on the walls. Excited.

10
scarecrowbob 5 ago 1 reply      
As cool as this is, like a lot of folks here I don't see how a pickup and a gooseneck wouldn't be a better (probably cheaper) option, even if you had to renovate / shop around for the gooseneck.

I know a whole lot of folks who live this way, mostly itinerant musicians.

While this is a much nicer build -- I think it's quite beautiful -- it is a lot closer to a custom conversion van. Most folks I know have much different, less successful experiences with DIY RVs.

To the folks who cite "stealth" as a rationale here, there are a lot of reasons why you might get kicked off a patch of ground... one person's "hack" is another person's criminal trespass. There are a lot of great places where you can camp out without getting hassled and without relying on other folks footing the bill for your plumbing and pavement.

To the folks citing mobility, I still don't see how that kind of van is more mobile than a pickup.

So while I think that it's really cool-- I gotta say that I think it would have to be cheaper / easier / more reliable to buy a pickup and 5th wheel or similar.

11
scotty79 6 ago 2 replies      
Or you could just move to Poland. $33,750 could buy you a studio in any medium-sized town in Poland, even in sub-million-population cities.

You'd get: no mortgage, an apartment with a toilet, clean running water, wifi, and all the electricity that you'll ever need; 5-10 times more area for your stuff; a monthly cost of utilities, tax, and building-maintenance fees of about $150 in total; and access to a lot of young, English-speaking people you could hire for cheap to help you with your projects.

12
prawn 29 ago 0 replies      
I remember reading of someone else doing something like this. They went to huge amounts of effort with a custom timber interior, fan, lighting, cooktop, water pump, etc. In the end, they said it probably would've been better to just have a blank-slate truck with portable cooking and water.
13
dominotw 8 ago 3 replies      
> I have a pee bottle and a 5 gallon jug. Line the jug with two trash bags, and cover the poop with kitty litter. Then toss it in a dumpster.

1. Is it legal to dump trash in someone else's dumpster without the owner's permission?

2. Is it OK in the US to dump poop in dumpsters? I know nursing homes incinerate poop, but I'm not sure if there is a law specifically against dumping human waste.

14
cassidyclawson 7 ago 0 replies      
Awesome build!

I am a product designer working in tech in San Francisco. I also live in a stealth camper van, mostly by the Whole Foods in Potrero. I ride a folding bike to work downtown. Life is very good and I wouldn't trade this setup for anything.

Here is my build out: http://wonderbywonder.tumblr.com/tagged/chrono/chrono

And here I am: http://i.imgur.com/s4ZpdaO.jpg

15
CPLX 9 ago 2 replies      
That was pretty awesome, I enjoyed reading all the way to the end.

I wonder how long it'll take him to regret the fact that his bed only fits one person in it.

16
dexterdog 6 ago 1 reply      
I'm actually curious about the insurance situation. If you are living in your van, I would expect the insurance either to cost a lot more or, worse, the insurer to decide not to cover you because you didn't get a special policy. Then there's the issue of what happens if you are in an accident, because now your wheels and your bed are in the shop -- a shop which is not going to be able to restore your situation properly.
17
rubicon33 8 ago 1 reply      
I am having a hard time with this article. On the one hand, it resonates with me DEEPLY.

"Sure, it's clich, but it's clich for a reason -- this subconscious drive for freedom is hard-wired in our DNA. No modern comfort or toy can take the place of true autonomy."

On the other hand, I can't deny certain life comforts. Relationships come to mind when considering a life like this. Sure, living frugally on the road while coding your own project sounds exhilarating. But I wonder how I'd feel without my significant other?

I guess what I want more than a life in a van, is economic freedom with a home.

18
andyidsinga 8 ago 0 replies      
wow - dad is really good with the angle grinder. I would have used a jigsaw. Cheers to those skills!
19
andreasklinger 9 ago 1 reply      
Similar but less extreme version: "Just" a "mobile office"

http://davidmckinney.com/blog/2013/12/29/redesigning-the-off...

By mr david awesome mckinney :)

20
kylixz 6 ago 1 reply      
I am about to embark on a similar journey. I started off buying a 1993 33' diesel pusher motorhome with the intent to travel the US full-time while working remotely. It was awesome fixing it up and making it livable, modern, and beautiful, and adding solar. Working with my hands was extremely rewarding! That said, I soon learned that 33' is a huge vehicle, which I did not feel comfortable driving regularly over mountains, and it severely limited the locations I could camp at. Now that big rig is for sale... instead I've found a really cool travel trailer with loads of solar, ready to go! I plan to pull that behind my 4Runner, equipped for overland adventures, and cannot wait to get started! Great article, and I hope others can try this lifestyle. I hope to share some of my experiences with others as well.
21
Paul_S 6 ago 1 reply      
This is a heart-warming story but he is definitely conflating 2 separate issues.

If you want to have a gap year and drive around the country then do that and it's clearly what he wanted. If you want to cut down on expenses there are far better ways of doing it without buying a van. It makes as much sense as saying the only way to cross a river is to build a giant sling (fun - yes, but mundane options are available).

22
lesdeuxmagots 8 ago 0 replies      
I did exactly this! Bought a used NV2500, went to town. Took 7 months to build. Have closets, cabinets, cooktop, sink, wood floors, butcher block counters, fridge, electricity via solar, bed, etc. etc.

I knew nothing about insulation, wiring, woodwork, power tools, etc. and learned everything as I built it.

Was not cheap, because I didn't want to give up any luxuries, so breakeven is a matter of years, not months. However, it's been treating me well. I have spots that I prefer in the South Bay and in San Francisco, depending on where I'm working.

23
codecamper 5 ago 0 replies      
"use the public facilities"

Yeah, right. That's the plan for the first little while & then you'll be just pooping in the woods.

You see, we're in Europe in a motorhome. Every time we see a little camper, we know two things are going to happen. They are going to be sliding their doors open and shut at all hours of the day.

And they are going to go poop in the woods.

And there are hundreds of them.

So be sure to get yourself a proper porta potty. Nobody wants to see your toilet paper.

24
mcone 8 ago 4 replies      
I love the idea of doing something like this, but I'm wondering about how to get dependable, high-speed internet access. Anybody have any ideas?
25
kowdermeister 2 ago 0 replies      
Nice story, I could relate with a beach bamboo tent, but there's a level up :)

Action Mobil Desert Challenger motor home

https://www.quora.com/If-a-plague-wiped-out-most-people-on-t...

https://www.google.hu/search?q=Action+Mobil+%E2%80%9CDesert+...

26
ryandrake 6 ago 2 replies      
I always read these stories with a sense of awe and wonder. "I took 2 years off of my totally boring office job to X" where X is something that is 1. expensive and/or 2. not generating income or not nearly as much income as Boring Office Job. How the hell does one live without their salary for 2 years without going into debt or depleting savings? Don't you people have student loans to pay off, medical bill payments, or other financial obligations that can't be delayed? I don't think I could last much more than 3 months, and I'm quite proud of my meager emergency savings. What the hell do you people do for a living that you can save such a vast amount of money (and presumably blow it during said 2 year activity)?

I'm not criticizing--just very curious. Most of the time when this kind of question is asked, the response is a vague and coy, "Well I got a little savings..." Awesome--how on earth?

27
jordache 22 ago 0 replies      
meh.. his dad is skilled for sure, but the inside looks like the stale interior of a house. Not a fan of the build.
28
Hondor 2 ago 1 reply      
Having a campervan without a toilet might bite you in some places. New Zealand used to be a great place for this but a couple of years ago they made it illegal to sleep in such a vehicle just about everywhere except designated pay-per-night campgrounds and certain districts each with their own special rules. Even then you're usually not allowed to linger more than a few days at a time in one place.

I doubt America will go that way with so many independent states and so much wilderness though. I'm amazed he can sleep in Wal-mart's carpark.

29
wallace_f 3 ago 0 replies      
These are amazing, and incredibly underrated.

In the US these are not seen as romantic and adventurous as they are in Australia, New Zealand, and Europe.

One thing I'll say: a pop up conversion can be done while maintaining the possibility of incognito mode, and it is really lovely when you are in proper campgrounds to have the pop up!

So happy to see this post on HN, but also kind of sad, because if this becomes a thing it will no longer be as unique, and these vans will start drawing more attention. Also, people in these campers are the coolest, nicest, most down-to-earth, happiest, most respectful, adventurous, amazing people (in my experience), and if this becomes 'cool,' then we'll start having the cool kids driving around in these.

30
overcast 9 ago 0 replies      
My school loans will be paid off in less than a year, and the thought of this has certainly crossed my mind. I've got the house, and I'm sick of all the shit in it.
31
markbao 4 ago 1 reply      
For another absolutely stunning van build, check out this one: https://imgur.com/a/RijZM

If you see only one photo, it should be this one: https://i.imgur.com/kTtWZ3f.jpg

32
WhitneyLand 3 ago 0 replies      
How do you date, have a relationship, a significant other? The bed doesn't look big enough for two...
33
factotum 2 ago 0 replies      
Kudos to this guy. I'm in my early 30s. My wife and I sold our house almost 2 years ago, bought an RV, and we've been traveling debt-free ever since. Feels good, man. But it's not without its drawbacks. Loneliness can be a constant battle when you're away from family, friends and coworkers. It took about a year to get comfortable with the travel routine. And then there's the maintenance. If I knew all of this ahead of time, I'd still do it.
34
kqr2 6 ago 0 replies      
One of my favorite conversions is this two-story camper built by Japanese students:

http://www1.ttcn.ne.jp/~gyo/English/index.htm

35
Dowwie 8 ago 0 replies      
Come on, voidqk. We all know your dad built this camper van while you took the selfies. His workshop says it all.
36
SwellJoe 4 ago 1 reply      
I've spent 6 of the past 7 years living in an RV (motorhome first, now an old Avion travel trailer with a big old truck to tow it). I recommend it for anyone who is unencumbered enough of other people and responsibilities to do so (i.e., it may not be the right thing for a family with kids, though I know some families with kids who do it and seem happy).

The freedom to travel is magnificent. It precludes many kinds of opportunities, but if you can work remotely, why not do it at the beach or in the mountains or in the desert or wherever you like? It's not dramatically less expensive than living in fixed housing (though that depends on where you were living in the house and where you're parking your RV; when I first started I moved out of a tiny rental house in Mountain View, CA, which cost $2145/month, so I'm not spending anywhere near that now), or at least it hasn't been for me, but there are many benefits outside of cost.

37
gameofdrones 2 ago 0 replies      
While the website is down, these are also neat:

Hank bought a bus: http://hankboughtabus.com/

Castle truck: http://www.doityourselfrv.com/house-truck-castle/

38
kzisme 9 ago 0 replies      
I ended up reading the whole post - awesome story! Not something I could see myself doing, but damn does he look happy.
39
mavdi 4 ago 0 replies      
I can really relate to him throwing most of his precious stuff out. My life turned upside down a few months ago. Now all I have is a backpack with a laptop and some essentials, and I Airbnb life as it comes. I've never been happier.

Owning things obeys something similar to Newton's 3rd law: they also end up owning you. They need constant care, attention and maintenance. I'm not saying this is the right way to live, but do give it a try if you've been thinking about it.

40
fixxer 6 ago 0 replies      
This is an awesome idea assuming one does not want kids or expect to have sex with anything too discerning.
41
toomanybeersies 5 ago 0 replies      
Speaking of minimal living, I've just recently moved to a new city for work, and I've shacked myself up in a backpackers, and plan on staying at the backpackers probably until the end of the year.

It has a lot going for it. It's cheaper than rent (by a significant amount), and it's literally 3 minutes from work. I also get to meet lots of interesting people.

I have my backpack and a laundry bag of kit, and that's it. It's about as minimal as you can get, which has been an interesting experience for me as I usually have stacks and stacks of stuff.

It does have some disadvantages, such as being rather noisy, and the fact that you have to carry all your valuables around with you wherever you go, since things tend to go missing.

42
nickhalfasleep 7 ago 1 reply      
I think this is the cusp of a big change in America. As the physical industrial base evaporates, many people may not buy into the classic "buy property" plan for their lives.

This is good for them. This may not be so great for all the people who bought property and expect it to always increase in value as there may not be as great a demand for it.

43
jonah 9 ago 0 replies      
My officemate is a cyclist and photographer and built out a Sprinter van as a mobile production/adventure mobile. It's got a couch that converts to a bed, fold-out tables, water tank, sink, electric chest fridge, PV panel and battery, inverter, and roof platform. Super functional. All hand built and I can't imagine he spent more than a couple grand outfitting it.
44
misterbishop 6 ago 0 replies      
I like this guy's spirit and ingenuity, but his attitude is not much different from the Infowars bunker people. There's no room in his van for society. You can tell because he only built a bed for one.

I'd rather live on a hippie commune than this.

45
virtuexru 7 ago 1 reply      
What about getting laid?
46
Yhippa 6 ago 0 replies      
This is my favorite thing I've read on HN so far this year. I loved his pictures (especially of the plains) and the descriptions. I probably enjoyed those pictures more than highly edited photos taken on a full-frame DSLR.

I hope it works out for him. The main thing I would miss would be having a companion and pets. Not sure I could do without those right now. He's in an excellent time and place for this.

47
kazinator 8 ago 0 replies      
In the 1970's TV series Trapper John, M.D. (https://en.wikipedia.org/wiki/Trapper_John,_M.D.), one of the characters, "Gonzo", is a doctor working alongside Trapper John while living in a motor home ("The Titanic") in the hospital's parking lot.

Man, think of all the money you can save if you have a good income, and live in a motor home virtually for free.

Gonzo legitimized the whole concept. :)

48
cylinder 8 ago 3 replies      
Did you consider buying a camper van? They are quite common for this lifestyle traveling around Australia; in the US people use giant RVs, but these are not practical at all and not a conscientious choice.
49
musesum 5 ago 0 replies      
Inspiring! Have been wondering when I can tweak a Tesla Van: https://electrek.co/2016/07/31/tesla-all-electric-cargo-van-...
50
sofaofthedamned 8 ago 1 reply      
I would love to do this.

Last year, after getting made redundant from Cisco, I was looking for work, but for 4 months there was nothing for a DevOps guy near where I live, while there was plenty in London. I was actually considering either getting a van to sleep in, or a narrowboat, and working in London at London rates, then coming home at the weekend.

I'd love to know a cheap way of converting something into a liveable space, bearing in mind that most offices have showers so I don't need one -- this would just be to provide for my family.

51
tdobson 5 ago 0 replies      
I do something similar in the UK.

Stealth Digital Nomad Sysadmin/Sales Engineer in a converted Mercedes Sprinter LWB

http://instagram.com/tdobsonnet

If you're interested in this kind of thing, /r/vandwellers is the place to be!

Happy to answer any questions. :)

52
jameslk 7 ago 2 replies      
I've been curious about living out of a camper or RV in the Bay Area just to arbitrage the higher salaries that are needed to offset the cost of housing. I've heard of some Google employees doing this for a couple years to save up enough to buy a house. The hard part is finding a place to park the camper. Anyone have any experience or knowledge about doing this in the Bay Area?
54
binarray2000 8 ago 0 replies      
Great writeup! Very enjoyable read but, at least for me, the last part "Thoughts on the Van Life" was the best. All the best!
55
anoplus 7 ago 0 replies      
Beautiful and inspiring read about exploring one's individual freedom. May society find its freedom through collaboration and a sense of community.

We as a society have the resources and technology to achieve much more freedom. Freedom enables the creation of even more freedom.

56
ErikAugust 8 ago 0 replies      
I did something similar a couple years back - but much simpler. I just bought a cap for my truck and stuck my sleeper futon mattress in it:

https://www.instagram.com/p/tLu_9wAR4p/

57
20yrs_no_equity 4 ago 0 replies      
I've spent 11 of the past 20 years "homeless" by choice following various practices from living on a boat, to living in a truck camper, to traveling the world living in AirBnBs, to occasionally renting apartments but never really living there. But I'll come back to that.

I want to address several people's concerns about this guy's lifestyle and the presumed limitations:

0. First off: loved that he was using Soylent. That solves a big problem of needing dried food but not liking freeze-dried food. If I were to go back to vehicle living I would use a combo of Soylent and sous vide. Sous vide cookers like the Anova are very small, and you can do it just with boiled water, zip-lock bags and a thermometer if you want. The results are really fantastic. 30 seconds searing steaks on the grill, then 40 minutes in the bath, and you have better steaks than you can get at any restaurant for less than $50 -- and you can do that on top of a mountain if you wanted! So the food situation is much better than the days of crates of ramen.

1. Sex. Sex is totally possible, and it's not creepy at all. When you get on the road and you're traveling, you will run into people who are going the same route multiple times. In this way there's a virtual community. This varies regionally of course; travel by train in Europe or in Alaska for the summer and it becomes pretty tight-knit. The women and men you meet there are not exactly going to turn their noses up at your van, because that's how they are traveling too. There's a whole vagabond subculture in the USA that ranges from kids hopping trains, to techies in vans like this guy, to oldsters in RVs. And there's nothing sexier than a guy who will break with convention and go do interesting things. FTR, my partner and I picked up a woman in the UK who then travelled with us and lived with us for a couple years in a poly triad. It only lasted three years, but I don't think the definition of a successful relationship should only be ones that end in death!

2. Cost - you really can save a lot of money. It's amazing that you can live around the world traveling full time for less than the cost of living in a major west coast city. If you're doing a startup, that's really nice - be in Berlin, then go to London, etc. We ran a three-person startup (the triad above) going from England to Romania to Chile. While we didn't live as cheaply as we should have or could have (it's a skill), we didn't live more expensively than we would have if we'd stayed in Seattle (and we never would have met the woman in the UK). When it costs less, or doesn't cost more but you have a better experience, isn't that a much better value?

3. The major factor is movement. When you're still - say at a campground or an AirBnB, or anchored at a dock - you save your movement energy, and thus cost, and you spend time working and enjoying. When you're underway, sailing requires attention, as does driving; taking trains and planes costs money; boats and cars take gas. The ideal situation is one where you can stay places for a period of time (we used to stay in a country 90 days - the visa limit) to maximize your productivity on the road. This is a lifestyle, not a vacation from life. You earn money as you go, but you earn less money on travel days.

4. Settling in - another part of the cost of travel is the settling-in time. I need to have a good work chair, and in each country we would spend the first week or so getting our spot set up so we could be productive on our startup.

5. The best thing about traveling is meeting the locals - especially outside the USA. This is the reason for the 90-day visa stays too. You can build real relationships. 4 countries in a year is much better than 9 countries in 4 days! And it's cheaper per day, because you can be working during the day, and thus it's sustainable.

6. There are many ways to do it. I like the boat the best- it was only 30 feet but it was center cockpit and huge. If I had the balls of a blue water sailor I never would have left and would be traveling around the world in it. But it takes a rare breed to cross an ocean in a 30 foot cruiser!

This van is very much like my experience in the truck camper. The truck camper cost me $5,500 all in - an old Toyota pickup and a $3,500 SKAMPER. You have to crank it to raise the roof. I travelled all the way to Prudhoe Bay in that truck, spending a couple weeks north of the Arctic Circle.

You can never forget an experience like that!

7. Eventually I vowed to never stop. I decided this was a philosophy, and whatever the methodology, it doesn't really matter. Am I still traveling full time? I'm on a lease, so many of you would say no, but I think I am. You could be too.

What's the difference in lifestyle between crashing in a French student's flat in Romania for 3 months and being on a lease in the USA for 6? In Romania 90 days is the max visa stay, and maximizing productive time was ideal. A 6-month lease in the USA isn't that different from the 6 months we lived in the UK (they have a longer visa for US residents).

I now think in terms of the GPWR - Gross Personal Weight Rating. That is the total weight of me and all my possessions. When I was on the boat it was around 13,000 pounds - most of it boat. For the truck it was about 7,000 pounds, most of it truck.

When we were backpacking it was all in the pack - about 60 pounds. Now I am staying in apartments, but I restrict myself to only what can fit in my car (so I can move across the country at a moment's notice if I want). I don't live in the car, so it's a tradeoff: I have to rent a sleeping space.

But I'm still mobile. I don't have a bed frame, for instance; I bought a bunch of Akro-Mils plastic crates. Turn them upside down and they make a really damn solid bed frame (best one I've ever had, actually). The mattress fits in the back of my car with the seats folded down. I have a mid-sized SUV, and camping is easy - just put the mattress in the car. Better than a tent (stays warmer). But when I need to move, I can turn the crates right side up and all my possessions go into them.

So, where should I live next? Once my lease is up, I'm going. (and knowing that also puts the kibosh on silly buying.)

Start thinking of every possession as weight added to your GPWR. Do you want to live in a backpack? Pare down. Do you want to live in a van? You don't have to be as careful, but you should think about how many TVs you buy.

58
Jemaclus 8 ago 1 reply      
I love this idea in theory, but my wife would never go for it. Ah well... Maybe get an RV for longer camping trips...
59
johngalt 5 ago 0 replies      
I guess what I don't understand is why not use one of the ready made builds already out there? Something like a class B RV, or truck camper?
60
ars 7 ago 3 replies      
He needs a diode between the two batteries in parallel. Otherwise slight differences in voltage between them cause them to cyclically charge and discharge each other, wearing them out and wasting energy.
61
oxryly1 6 ago 1 reply      
I love stories like this. Well documented, well thought out, and with a 6 month update... excellent.

Now I'd love to read one about someone who's done this with a family...

62
marknutter 8 ago 3 replies      
So why didn't he just buy an RV? Not to take away from his accomplishment, but isn't this just the most engineery thing to do? Instead of leaning on another industry that has spent decades perfecting exactly what he is trying to build, he spent all the time he could have used actually exploring the world building what is certainly an inferior solution in every regard.
63
serge2k 8 ago 0 replies      
> Does anyone actually enjoy being in a cubicle, all day

No. But trading it for a van doesn't sound more pleasant.

64
shitgoose 2 ago 0 replies      
thank you.
65
nxzero 6 ago 0 replies      
>> "I thought the idea was genius. Not for me, I said, but genius."

Always find it interesting when people say this to me. I mean, you can see the awe in their eyes, the longing to "just do it" - and then reality settles back in, and they resign themselves to living the same life over and over until the end of time.

66
mudil 6 ago 1 reply      
I send my son emails with links to different interesting projects. He is ten. Too bad I can't send this one out. Why do people use foul language everywhere and in between? It's like a disease.
67
estrabd 7 ago 0 replies      
1. sell house

2. buy van

3. get someone to customize your van

4. ???

5. profit

68
ocdtrekkie 7 ago 0 replies      
I am "happy" in my mortgage-limited slave life, but I've always wanted to extend my vehicle a bit. My car is essentially like a little piece of my home I take with me from place to place. I feel as comfortable in my car as I feel at home.

I've been looking into a second battery and solar setup just for the main goal of running a computer in my crossover. But I'll admit, that job does take up a lot of time I might otherwise use for doing it.

69
gambiting 8 ago 1 reply      
I'm genuinely curious - why did he do all the repairs on his house before selling it? Was the housing market that bad that he couldn't sell it as-is for the new owner to do the repairs?
70
puppetmaster3 6 ago 1 reply      
Why not get an RV - a pre-made thing?
71
Qantourisc 9 ago 1 reply      
The fact that one (in a lot of countries) has to do this in a van, because of regulations, is kind of tyranny: either you get imprisoned in debt/rent, or you get to live in a van or on the streets.
3
Hardening Compiler Flags for NixOS mayflower.de
48 points by ivank  3 ago   6 comments top 4
1
oconnore 6 ago 0 replies      
If there are any other roadblocks keeping you from using Nix in your infrastructure: keep in mind you can overlay Nix on top of another distribution. This way you can get a more mature base system and still have all the benefits of Nix available when you want it.
2
AstralStorm 39 ago 1 reply      
Talking about stack protector providing any sort of guarantee makes me chuckle. Both it and fortify are relatively easy to work around. PIC/PIE are the real protection, but still nowhere near foolproof. Also, the new Address Sanitizer is not mentioned.
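(To make concrete what these mitigations do and don't catch - a minimal, hypothetical sketch of my own, not taken from the article: the classic overflow below is exactly what -fstack-protector instruments. Built with -fstack-protector-strong, a long argv[1] makes glibc abort with its "stack smashing detected" message instead of silently corrupting the return address; an overwrite that avoids the canary, e.g. through a corrupted pointer, goes unnoticed, which is the kind of workaround referred to above.)

    /* overflow.c - deliberately unsafe, for demonstration only.
       Try: gcc -fstack-protector-strong overflow.c && ./a.out AAAAAAAAAAAAAAAAAAAAAAAA */
    #include <string.h>

    int main(int argc, char **argv) {
        char buf[8];                  /* small stack buffer */
        if (argc > 1)
            strcpy(buf, argv[1]);     /* no bounds check: a long argument overflows */
        return 0;
    }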
3
cstrahan 1 ago 1 reply      
Awesome work, Franz et al!
4
colemickens 46 ago 0 replies      
Excellent! Really, anything to propel Nix(OS) or Guix(SD) forward gets a huge thumbs up in my book. Nix is one of those things that I explain and then have people telling me I'm wrong and/or an idiot because they can't believe that you can have a distribution based on building packages from source, backed by a binary cache. If you think Snappy or Flatpak/Xdg-App are cool, you owe it to yourself to look at Nix/Guix, which provide a superset of the features provided by Snappy. And then some.

NixOS + NixOps means I can have a single declarative file that describes the state of my cloud VMs, cloud load balancer, cloud Traffic Manager... plus the exact software on the VMs (all the way down to the exact checkout of gcc used to compile everything and the nginx configurations)... literally everything is snapshotted and revertable.

If it weren't for a systemd-related bug, I'd have a one-command way of getting a Kubernetes cluster booted in Azure via NixOS, where upgrading a cluster would simply require editing a single file and re-running the nixops tool.

I wish more people knew about NixOS. You know how everyone writes pie-in-the-sky blogposts about the "next-gen" Node.JS package manager? Yeah, well, Nix already packages NodeJS libraries/apps, plus Rust libraries/apps, plus Go libraries/apps, plus Python libraries/apps, etc, etc.

4
The Myth of RAM (2014) ilikebigbits.com
467 points by ddlatham  11 ago   217 comments top 45
1
reikonomusha 9 ago 2 replies      
I think there's some good info in this article, mixed with varying degrees of misinformation. For some reason, the article starts off with a totally wrong definition of big-O, and proceeds to draw conclusions from this wrong definition. Let me provide the accurate definition:

The statement "f is O(g)" means there exists some input, call it t, such that for every x >= t, it only takes some constant multiplier M (i.e., constant in x) to always have g absolutely no smaller than f. In notation:

|f(x)| <= M * |g(x)|, where x is at least t.

This bit about "x is at least t" is very important and notifies us that this is "asymptotic behavior".

It does not make a difference how wacky or weird f is compared to g below t. It can contain all these crazy memory hierarchy artifacts, it could contain a short burst of exponential slowdown, it could contain anything.

Furthermore, according to the above definition, big-O has nothing to do with any tangible quantity whatsoever. It's a method for comparing functions. The functions may represent whatever is of tangible or intangible interest: memory, time, money, instructions, ...

Big-O analysis usually posits that the details below t aren't the details that matter. (Of course, there are situations where they do, but in such you would not use big-O.) If you want to have some analysis that is global, you don't need asymptotic analysis (though it might help as a start). You can just talk about functions that are strictly greater than or less than your function of interest everywhere. But these analyses are difficult because a much higher level of understanding of your function of interest is required.
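(A concrete instance of the definition, with numbers of my own choosing: f(x) = 3x^2 + 5x is O(x^2), because taking M = 4 and t = 5 gives |3x^2 + 5x| <= 4 * |x^2| for every x >= 5 - the 5x term is absorbed once x^2 >= 5x. Nothing about how f behaves below x = 5 matters to the claim, which is exactly the "asymptotic" part.)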

2
Tojot 6 ago 0 replies      
It so happens that a large part of my PhD was on this very subject. The result I got was N log(N); this is more visible when you get to larger RAM (I had 0.5 TB of RAM at the time). We have an empirical result, a justification, and a rigorous predictive model.

The reason has to do with hashing, but a different type: TLB.

I posted more details as https://news.ycombinator.com/item?id=12385458

3
aaronbwebber 10 ago 10 replies      
The problem with this analysis is that in the graph in the very first part he shows that memory access IS O(1) for pretty substantial scaling factors, and then when you hit some limit (e.g. size of cache, size of RAM) access times increase very rapidly. Sure, if you draw a line across 6 orders of magnitude, it ends up looking like O(n^1/2), but how often do you scale something through 6 orders of magnitude?

The "memory access is O(1)" approximation is pretty good, certainly good enough for almost all every day use. The median size of a hash table I allocate definitely fits in L1 cache, so why shouldn't I think of it as O(1)? If you are reading off of disk, the O(1) approximation holds as long as your dataset stays between 1 MB and 1 GB. That's quite a bit of room to play around in.

Yes, you need to be aware of access times and the changes in them if you are really scaling something way up. But I'm not convinced that I shouldn't just keep thinking of "hash access is O(1)" as a convenient, generally accurate shortcut.

4
ChuckMcM 10 ago 6 replies      
Since it is a topic I'm interested in I took the time to read all 4 parts, the author manages to summarize it in a paragraph which would have been helpful at the beginning:

When somebody says "iterating through a linked list is an O(N) operation", what they mean to say is "the number of instructions needed to be executed grows linearly with the size of the list." That is a correct statement. The argument I'm trying to make is that it would be a mistake to also assume that the amount of time needed would grow linearly with the size of the list as well. This is an important distinction. If you only care about the number of instructions executed, that's fine, you can use Big-O for that! If you care about the time taken, that's fine too, and you can use Big-O for that too!

Sadly, he doesn't take this knowledge to its conclusion. Let's introduce the notation Oi() for the Big-O notation in instructions, and Ot() for the Big-O notation for time.

Lemma: For all f and g, if Oi(f(N)) > Oi(g(N)), then Ot(f(N)) will be > Ot(g(N)).

Or put another way, it's important not to confuse complexity scaling with time scaling, but the more complex the computation, the longer it will take.

5
wscott 10 ago 0 replies      
Great series of articles, and the lessons are very important to someone writing performance-critical systems programs.

Here is another chart I like to show people: https://dl.dropboxusercontent.com/u/4893/mem_lat3.jpg

This is a circular linked list walk where the elements of the list are in order in memory. So in C the list walk looks like this: while (1) p = *p;

Then the time per access was measured as the total length of the array was increased and the stride across that array was increased. The linked-list walk prevents out of order processors from getting ahead. (BTW another huge reason why vectors are better than lists)

(This is from an old processor that didn't have a memory prefetcher with stride detection in the memory controller. A modern x86 will magically go fast.)

From that chart you can read L1 size, L2 size, cache line size, cache associativity, page size, TLB size. (It also exposed an internal port scheduling bug on the L2. A 16-byte stride should have been faster than a 32-byte stride.)
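(For anyone who wants to reproduce a chart like that, here is a minimal sketch of the measurement - my own reconstruction with arbitrary size/stride/iteration choices, not wscott's original code. It lays the list out in order in one flat buffer and times the dependent-load walk; per the parenthetical above, a modern stride prefetcher will hide most of the latency for this in-order layout, and shuffling the link order brings it back.)

    /* pointer-chase timing sketch: one dependent load per iteration */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        const size_t size   = 64 * 1024 * 1024;  /* bytes spanned by the list */
        const size_t stride = 64;                /* bytes between nodes */
        const size_t iters  = 10 * 1000 * 1000;  /* loads to time */

        char *buf = malloc(size);
        if (!buf) return 1;

        /* link node i -> node i+stride; the last node wraps to the first */
        for (size_t i = 0; i < size; i += stride) {
            size_t next = (i + stride < size) ? i + stride : 0;
            *(void **)(buf + i) = buf + next;
        }

        void *p = buf;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < iters; i++)
            p = *(void **)p;                     /* serialized load chain */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.2f ns/load (p=%p)\n", ns / iters, p); /* printing p keeps the loop live */
        free(buf);
        return 0;
    }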

6
jcoffland 10 ago 1 reply      
Math is pure and not constrained by the real world. Big O analysis begins with the assumption that you have unlimited uniform memory. The author points out that memory is not uniform in the real world. It's equally untrue that we have infinite memory at our disposal. The limits of the real world are good to remember but that does not invalidate Big O analysis.
7
corysama 8 ago 0 replies      
A lot of people are pointing out that BigO is a purely theoretical, mathematical model that should be understood and used properly without regard to silly details like physics.

That is theoretically correct. But, the difference between theory and practice is that in practice there exists a large percentage of programmers writing code for the real world without understanding and using BigO properly. Their mental model of performance begins and ends with BigO. As far as they are aware, its model is reality.

Source: I've been giving a large number of programmer job interviews lately. It's a rare day when I encounter an engineer (even a senior one) who is aware of any of the issues brought up in this series. And, I work in games!

8
MaulingMonkey 7 ago 0 replies      
The article is still wrong - iterating through a linked list is O(N log(N) sqrt(N)). You can't have infinite nodes in a 16-bit, 32-bit, or even a 64-bit address space - to deal truly with N, one must consider the more generic case of a variable address encoding, which has a variable size (log(N)) and associated lookup etc. costs as the number of nodes grows.

This is the motivation behind e.g. the "x32 ABI" in Linux: All the power of x86-64 instructions, with none of the additional cache pressure/overhead of 64-bit pointers - log(32) being cheaper than log(64).

...ahh, being this explicit in your Big-O notation is probably not that useful, usually, although I've seen it occasionally in papers (where they're quite explicit about also counting the number of bits involved). Maybe they're dealing with BigNums, which would make it a practical concern? The key takeaway is this:

> That I use Big O to analyze time and not operations is important.

Time depends on compiler settings, allocation strategy, and a whole host of other factors that are outside the purview of your algorithm. Counting operations is a lot easier for contrasting and comparing different algorithms, which is the meat of what you're trying to do most of the time. Both are valid choices, just know which one you're dealing with.

The time factors are good to be aware of, to be sure - the performance pitfalls of (potentially) highly fragmented, hard-to-prefetch linked lists over unfragmented flat arrays should be well known to anyone charged with optimizing code - but it's probably easier to think of them as some nebulous large time constant (as even array iteration is going to hit the same worse-than-O(N) behavior, although with proper prefetching the bottleneck may become memory bandwidth rather than memory latency) and deal with those differences with profiling and other measurements, instead of Big-O notation.

9
jimminy 10 ago 2 replies      
I find this really odd; it's not wrong, but it doesn't invalidate O(1). It's mashing together two things that don't need to be combined, which can cause misunderstanding.

Big-O provides a decent tool for generic analysis and an understanding of access times of memory hierarchies. Since memory hierarchies can vary, they shouldn't be considered while doing generic analysis, much anyways.

Both are important to understand. The key thing is setting your Big-O access expectations to the slowest level of your hierarchy. In that way, your expectation remains generic and still proximally accurate across the average cases.

When you consider them together, think of the hierarchy as a series of piecewise functions that modify the value of the constant time based on the speed of the bounds that fit your data.

This square-root-of-N notation falls apart in other cases. 128GB of RAM would have roughly the same access speed as the 8GB he had available, if he had that much in his system. But having 128GB of RAM would completely destroy the squaring by flattening an entire order of magnitude from his hypothesis.

But it is a nice display of memory hierarchies, IMO.

10
DanWaterworth 11 ago 1 reply      
> You'll know that iterating through a linked list is O(N), binary search is O(log(N)) and a hash table lookup is O(1). What if I told you that all of the above is wrong?

It's not wrong, it doesn't have enough contextual information to be right or wrong.

11
michaf 10 ago 0 replies      
Interesting read. Researchers in the HPC community have developed a number of performance models to predict real-world performance in more detail than is possible through a simple Big-O count of operations. E.g., while the OP concentrates on latency, the Roofline model (https://en.wikipedia.org/wiki/Roofline_model) mainly considers limited memory bandwidth.
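(For reference, stating that model in the same plain notation used elsewhere in this thread: the Roofline model bounds attainable performance P as P = min(P_peak, I * B), where P_peak is peak compute throughput, B is peak memory bandwidth, and I is the kernel's arithmetic intensity in operations per byte moved. Below the ridge point I = P_peak / B, a kernel is memory-bound no matter how well it is tuned.)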
12
hacknat 9 ago 2 replies      
Nah. Sorry, cache misses don't count as part of a theoretical analysis of complexity. Why? Because you're getting into specific access-pattern performance. Complexity is about "all things being equal". Is it the only thing you should consider? At first it should be; then, if you run into a problem with a specific structure that has remarkable scale or access patterns, go ahead and consider what the underlying hardware might be doing with the specific access patterns your structure is encountering.

It's interesting to see a linked list as his example, because it is the most likely to have cache misses as you move through it, since the allocations are very fragmented. I'd be very curious to see the same chart for a warmed-up hash table.

Also, if we're considering the hardware, can we take into account prefetching and branch prediction? What are your numbers then? Yeah, RAM is farther out than the local caches, but the CPU is also not completely ignorant of what it has to do next.

13
kenjackson 9 ago 0 replies      
There's been a fair bit written on the topic. One of the better papers that has a parameterized model is here: https://www.computer.org/web/csdl/index/-/csdl/proceedings/f...

I should note that this paper is more than 25 years old. :-)

14
whack 2 ago 0 replies      
It's a very interesting experiment/conclusion, but it rests upon one assumption: The assumption that the entire dataset has been preloaded into the L1/L2/L3 caches.

This assumption is a shaky one to make, and is easily violated. Imagine if you have a hashmap that is small enough to fit entirely in L3 cache. However, most of it has been evicted from the L1/L2 caches, by other data that the core has been reading/writing to as well. Eventually, the thread returns to the hashmap and performs a single lookup on it. In this scenario, the time required will indeed be O(1).

So what you really have is a best-case complexity of O(sqrt(N)) if your data has been preloaded into the closest possible caches, and a worst-case complexity of O(1) if your data is stuck in an outer-level cache/DRAM. Given that we usually care more about the worst-case scenarios than the best-case scenario, using the O(1) time complexity seems like a reasonable choice.

Going back to the author's premise that the time-complexity of a single memory access is O(sqrt(N)), not O(1), this is true only where N represents all/most of the dataset being processed. If N represents only a small fraction of the dataset being processed, and your caches are going to be mostly filled with other unrelated data, then the time complexity is closer to O(1).

Clearly O(sqrt(N)) is more accurate than O(1) under some circumstances, but even so, it's not clear what benefit this accuracy confers. All models are inaccurate simplifications of reality, but simple, inaccurate models can still be useful if they help in decision-making. Big-O analysis isn't used to estimate the practical running time of an application. For that, you'd be better off just running the thing. Big-O analysis is used more to compare and decide between competing algorithms/data structures. And in that sense, whether you choose to model linked lists/binary search/hash maps as O(N*sqrt(N)) / O(log(N)*sqrt(N)) / O(sqrt(N)), or O(N) / O(log N) / O(1), the recommendation you end up with is the same.

15
StillBored 10 ago 0 replies      
I guess the author is trying to simplify, but it's way more complex than that. Simply assuming a few layers of cache completely misses all the other layers that have effects, starting with:

Cache lines, RAM read vs. write turnaround, DRAM pages, the number of open DRAM pages, other CPUs interfering with the same RAM channel, remote NUMA nodes, and probably some I'm forgetting. All this is very similar to secondary storage access rules (even for SSDs)...

16
lorenzhs 7 ago 0 replies      
> At this point some of you may argue that the whole idea of Big-O analysis is to abstract architectural details such as memory latency. This is correct - but I argue that O(1) is the wrong abstraction.

No, your model is wrong. Others have already pointed out some issues with the author's understanding of Big-O notation. However, this is a fundamental misunderstanding. Big-O is a tool to analyse some function's asymptotic behaviour, i.e., how it behaves when the input parameter grows toward infinity. You have to put your model of cost into that function. If your measure is time, and memory access doesn't take constant time in your model, then you have to account for that in your cost function. You can just as well use Big-O notation to describe the asymptotic space complexity of an algorithm (how much memory does it need?). O(1) has no special meaning - it's just the set of all unary functions whose value stays below a constant, no matter how large their input parameter gets.
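
For reference, the textbook definition being appealed to here, in LaTeX (a standard fact, not something from the article):

 f \in O(g) \iff \exists\, C > 0,\ n_0 \text{ s.t. } \forall n \ge n_0 : |f(n)| \le C \cdot g(n)

Nothing in the definition says what f measures - seconds, instructions, cache misses, bytes - that choice is the cost model you bring to it.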

The author is literally blaming his tools for his own misunderstandings.

17
scott_s 9 ago 1 reply      
> For the purpose of this series of articles I'll be using the O(f(N)) to mean that f(N) is an upper bound (worst case) of the time it takes to accomplish a task accessing N bytes of memory (or, equivalently, N number of equally sized elements).

That's not really valid; it's not how algorithmic analysis works. The author's conclusion for what is happening and why is correct, but I believe he is confused about how to get there.

Simply, when doing complexity analysis on an algorithm, one must always count an operation. It's not okay to point to the time taken for an implementation and say "That's our function." It is a function, but it's a function of time, not a count of how many operations are performed at given sizes of N.

However, he is correct that naive analysis of arrays and linked lists will result in this odd behavior: arrays will tend to outperform lists on real systems. The problem with the naive analysis is in what it counts. For example, on an insert, a naive analysis will count the number of elements accessed in the structure. That's naive because it assumes all accesses are the same - which is what he's getting at with the "myth of RAM". Because of the memory hierarchy, they are not all equal.

But the correct response is not to give up counting operations and look at time, the correct response is to find the right thing to count. And the right thing to count is basically going to be last level cache misses - the operations that force one to go to memory. If you do that, then you will find that the operations you are counting will correlate much better to the actual time spent.

In some places, the author gets this mostly correct: "You can also use Big-O to analyze the time it takes to access a piece of memory as a function of the amount of memory you are regularly accessing." That's fine, as you're counting memory accesses.

In other places, it's not correct: "That I use Big O to analyze time and not operations is important." You can't count time, only operations. You want to count the operations that correlate with your actual running time, but the entire point of good analysis is to find those operations. You can't just shortcut it, only measure time, and then call it algorithmic analysis.

The author gets a lot right, but despite the lengthy discussion, I think he still has some confusions about algorithm complexity analysis.

For the record, these lessons should be familiar to anyone who has done serious performance analysis of computer systems, either on their own, or in the context of a course that focused on systems or architecture.
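
(Practically, on Linux you don't even have to guess at what to count: perf can report it directly. A hedged example - the generic event aliases below exist on most setups, though exact event names vary by CPU, and ./your-benchmark is a placeholder:)

 $ perf stat -e instructions,cache-references,cache-misses ./your-benchmark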

18
captainmuon 9 ago 0 replies      
I think this way of looking at the problem is misleading. O(1) or O(N) always stays O(1) or O(N), just the constant changes. You can always access any element in RAM (on a SSD, HDD) in a bounded amount of time. Use that pessimistic time as the time of one step.

Viewed in this way, O(N) is still O(N), and a processor with caches is a magic device that somehow computes faster than O(N)... or for O(1) computes in sub-constant time (if that can be even well-defined).

19
tailrecursion 7 ago 0 replies      
The author argues that a random access to memory is not O(1) but instead O(root N) because of distance.

The easy reactive response is that with respect to algorithm design the size of RAM, N, is a constant.

On the other hand for very high scaling factors, as input size rises the size of RAM must also rise. In this way N can be thought of as a variable and that seems to be what the author is thinking. Different algorithms will behave differently as they are scaled to infinity and beyond.

I think the author's argument is interesting but maybe it's better to make new models for time complexity analysis. I think Bob Harper's students have done good work on this.

In addition to distance there is also the cost of selection, namely the muxes and decoders, which would multiply the cost of access by log N.

20
falcolas 11 ago 2 replies      
I'm not sure the cost of accessing the storage medium belongs in the complexity of the algorithm, since that cost will change based on the storage medium, not the algorithm itself. It strikes me as more of a constant (even though it isn't strictly constant).

Still, an interesting read nonetheless.

21
Double_Cast 2 ago 0 replies      
Why is information within a sphere bound by m * r? Naively, I'd expect it to be bound by r^3 or m * r^3.
22
jandrewrogers 10 ago 0 replies      
Closely related but unfamiliar to most software geeks, Bélády's work in the 1960s and later on the theoretical limits of operation throughput when using cache hierarchies is very relevant to high-performance software design. The theory generalizes nicely to any topology where you can control how access latencies are distributed, and carefully designed software can get relatively close to the throughput limits (though it is somewhat incompatible with the way most software engineers design systems these days, e.g. multithreaded concurrency is a non-starter).
23
Symmetry 8 ago 0 replies      
Thanks to the prefetcher a low-entropy access to memory, like reading the next value in an array, will tend to happen in constant time. For a linked list, tree, or other data structure where the location of the next access can't be predicted easily by something like stride analysis then the author is correct.
24
bryanlarsen 11 ago 0 replies      
Great article. It gets better, too, so make sure you read all 4 parts.
25
truantbuick 9 ago 0 replies      
What the graph really seems to indicate is that time is only linear when working within a cache size on the author's computer (remember that iterating a linked list accounts for the gradual increase in between the cache jumps). If the theoretical upper bound of RAM access was really the important factor at this scale, I wouldn't expect it to be almost flat and to suddenly jerk up every time we have to go to the next cache.

Assuming the author's O(sqrt(n)) is correct, it seems only relevant on much, much larger scales.

In light of that, it really doesn't make sense to pollute the typical use of Big O notation. It should always be understood to be just one metric to understand an algorithm.

26
vvanders 10 ago 0 replies      
Related, Herb Sutter's fantastic talk about arrays:

https://channel9.msdn.com/Events/Build/2014/2-661 @ 23:30

27
joseraul 9 ago 0 replies      
The theoretical discussion is interesting, especially the circular library that gives some intuition of the square root law.

But in practice, you usually know the order of magnitude of your data, so access is rather O(1), for some constant that depends on the size of the data. Jeff Dean's "Numbers Everyone Should Know" quantifies this constant.

http://highscalability.com/numbers-everyone-should-know

28
bjd2385 3 ago 0 replies      
Now I wonder what would happen to our time complexities if we were near a black hole...
29
chris_va 10 ago 1 reply      
The black hole piece in part II was amusing, if you keep reading.
30
geophile 9 ago 0 replies      
Why is this wrong-headed discussion top-rated on HN?

And why is there so much misunderstanding on HN of big-O notation wrt cache misses lately?

All you kids, get off my lawn.

31
faragon 10 ago 0 replies      
If I understood it correctly, the author links cache misses in the memory subsystem hierarchy to asymptotic complexity (big O), so if the operation that services a cache miss has a higher time complexity, he charges that instead of O(1).

Something similar happens when you write an O(1) algorithm while relying on malloc(), which is usually O(n log n), thus your algorithm is not really O(1), but O(n log n).

32
greggyb 10 ago 1 reply      
I think there is a key point in the FAQ (article four, all linked through the series):

> You are conflating Big-O with memory hierarchies

> No, I'm applying Big-O to memory hierarchies. Big-O is a tool, and I am applying it to analyze the latency of memory accesses based on the amount of memory you are using.

As some others have pointed out, the line is crossing hierarchies of cache, and he is not looking at the big O of instructions. Both of these observations are accurate, and the author is aware of this.

He is using the tool of big O analysis to measure a performance characteristic. That characteristic is not the traditional number of instructions or amount of memory utilized in the computation of an algorithm. It is the latency for access to a random piece of data stored on a system.

There are two cases considered, the practical, and the theoretical.

At the practical level, we do not have a unified physical implementation of the address space in a modern computer. This means that accessing a random address in memory is an action that will most likely cross levels of the cache hierarchy. It is well known that there are order of magnitude jumps crossing these levels. Perhaps it is uninteresting to you, and the importance of cache locality in an algorithm is something that you already have a very strong handle on. That makes his observation of time-to-access a random address trivial, but not wrong.

Big O tells us that a binary search is the most efficient search algorithm for an array (constraint - the array must be sorted), but in practice a linear search with a sentinel value across an unsorted array will be faster if the array fits in cache. Keeping in mind the big O latency of random memory access across cache hierarchy levels would be the theoretical analysis to tell us this. The traditional big O looks at number of instructions. These are both valid tools in choosing an optimal algorithm.
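
(For the curious, a minimal C sketch of that sentinel trick - my own illustration, not from the article. The key is temporarily written over the last element, so the scan needs one branch per element instead of two:)

 #include <stddef.h>

 /* Linear search with a sentinel; assumes n >= 1. */
 static ptrdiff_t sentinel_search(int *a, size_t n, int key) {
     int last = a[n - 1];
     a[n - 1] = key;              /* plant the sentinel in the last slot */
     size_t i = 0;
     while (a[i] != key)          /* cannot run off the end */
         i++;
     a[n - 1] = last;             /* restore the original value */
     if (i < n - 1 || last == key)
         return (ptrdiff_t)i;     /* genuine hit */
     return -1;                   /* only the sentinel matched */
 }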

The second point the author makes is the theoretical limit. Assume the ideal storage medium with minimum access latency and maximum information density. This storage medium is matter. The limit of packing is the point at which you would create a black hole.

With this ideal storage medium, you cannot pack an infinite amount of data within a distance that can be traversed at the speed of light within one clock cycle. For this colossal storage array, there are some addresses which cannot be physically reached by a signal moving at the speed of light within the amount of time that a single clock cycle (or single instruction) takes. Accessing a random address is not a constant time operation, though the instruction can be dispatched in a constant time. There is a variable time for the result of that instruction to return to the processor.

At this theoretical limit, we would still end up with a cache hierarchy, though it would be 100% logical. With a single storage medium and unified address space, the cache hierarchy would be determined by physical distance from CPU to physical memory location. Those storage cells (whatever form they take) that can be round-tripped by a speed of light signal in one clock cycle are the first level of cache, and so on. You could have very granular, number-of-clock-cycles cache levels stepping by one at each concentric layer of the sphere, or you could bucket the number of clock cycles. Either would effectively act as a cache.

This theoretical exercise is an extreme limit, but it bears out what our current physical implementations of the cache hierarchy exhibit in practice.

Again, perhaps these observations are trivial, but I believe they do stand up to scrutiny. The key insight is that the performance characteristic being described by big O is time, not the more traditional space or number of instructions.

I think time is a valuable metric in terms of algorithm selection. If we think about end users - they don't care that one instruction or 1,000,000,000 are being executed. They care about how quickly work is done for them by the computer. Instruction-based analysis can be a huge help in this consideration, but so can time-based analysis.

Neither should be ignored, and neither invalidates the other.

33
bastijn 10 ago 0 replies      
Only after reading the last article of the series did I check the link to share it. Only then did I notice that I had misread the heading on the blog. I read "I like big tits" and thought, is this page hacked or something? The url corrected my dirty mind :).

Great series. Even if you don't agree with the notation, it still has valuable information. Thanks author!

34
lsh123 7 ago 0 replies      
The graph in the article shows the impact of the L1, L2, and L3 caches. If the array fits into the L1 cache, access is fastest; it then degrades with the L2 cache, then L3, then main memory.
35
Skunkleton 10 ago 0 replies      
To me, all this article has shown is that depending on the size of a data structure, you will need slower and slower memory. We already know that. The article shows that within the bounds of a particular type of memory the access time is mostly constant, which is exactly what O(1) means.
36
justAlittleCom 10 ago 7 replies      
I am sorry... but no. The article is interesting and well written, but it has nothing to do with big O notation. Random access in memory is still O(1); it doesn't depend on the size of the data structure (I am assuming that is the "n" the author talks about when pretending that a memory access is O(sqrt(n))). Even if you have a very complex memory architecture with 15 caching levels, spread all over the world, if you have a maximum 5-day delay for accessing your memory through the mail, it is still O(1), because 5 days is a constant; it does not depend on the size of the data structure.

The "n" the author is really talking about may be the depth of the cache hierarchy.

37
solarexplorer 7 ago 3 replies      
Something that the author seems to be missing is that traditional complexity analysis (with mathematical proofs etc.) is done for Turing Machines, which have one-dimensional memory (an abstract tape), so reachable memory grows linearly with time. Current microchips are two-dimensional, so reachable memory grows with the square of time. If we had three-dimensional memory (stacked chips?), then reachable memory would grow with the cube of time.

It all depends on what kind of machine you are talking about...

38
haddr 10 ago 1 reply      
I think that at some point this O(n * sqrt(n)) is actually not precise. Maybe it works for the first few GB, but then other mechanisms come into play.

For example, processing 100GB of data doesn't have to be O(n * sqrt(n)), because if you process it on a cluster, then other machines are also using their L1, L2, L3 caches and RAM. Then the whole process can be streamlined, which means that some operations can be faster than the pessimistic n * sqrt(n).

39
grabcocque 10 ago 2 replies      
The Myth of RAM is that you need to have lots of it, but it's bad to use it. Because that's 'bloat'.
40
jlarocco 10 ago 1 reply      
The article is conflating theoretical algorithm analysis and low level implementation details.

Big O analysis is a theoretical measurement of algorithm performance. By definition it ignores details like memory access speed, the exact instructions used, and other details of specific hardware architectures.

Real life algorithm implementations obviously need to deal with those low level implementation details, but that doesn't change the theoretical analysis. It's easy enough to find (or design) machines without cache where this difference in memory speed doesn't exist.

41
rdiddly 9 ago 1 reply      
The library example is a bad one, since it leads to O(N) and not O(sqrt(N)), a conclusion that contradicts the thesis.

"In general, the amount of books N that fits in a library is proportional to the square of the radius r of the library, and we write N ∝ r^2."

No, the number of books N is proportional to the area of the front face of the shelving, not the area enclosed within the circle. Assuming all libraries are the same height, that means N is proportional to the circumference of the circle, which is proportional to r, not r^2. Meanwhile, assuming that all books are reachable in the same amount of time by the librarian no matter their height on the shelf, that means T ∝ r (as before). Since T ∝ r and N ∝ r, that means T ∝ N, or T = O(N).

42
wyager 10 ago 2 replies      
"I can vaguely fit a line to this graph that's clearly nonlinear, so that line describes the asymptotic complexity of the system."

Huh? Am I taking crazy pills, or is this a horrible analysis? It looks like the behavior is O(whatever it's supposed to be) times a constant multiplier at a few different regions. The OP conveniently cuts off the graph so you can't see it level off.

43
otterley 10 ago 0 replies      
Editors, can you please date this submission? It's from 2014.
44
dingo_bat 9 ago 0 replies      
My laptop has been frozen for half an hour now after running the benchmark from the article :(
45
fractal618 8 ago 0 replies      
> And so we come to the conclusion that the amount of information contained in a sphere is bounded by the area of that sphere - not the volume!

mindblown.jpg

5
How a Technical Co-Founder Spends His Time jdlm.info
133 points by JohnHammersley  6 ago   27 comments top 10
1
bluetwo 1 ago 3 replies      
The biggest thing I learned when analyzing my time, which I wish I had recognized earlier, was that I can have two types of productive days:

I can have days where I do a million small things and hop quickly from task to task.

Or, I can have days where I work on big issues and should not be interrupted by small issues.

But if I think I'm going to have a day working on a big thing and instead get interrupted by a million little things, I end up doing nothing well and the day is very unproductive.

2
beliu 3 ago 1 reply      
Thanks for sharing this. Is the metime code publicly available? I'm the CTO of another small tech company (https://sourcegraph.com) and I'm also a bit obsessive about time tracking.

I wrote a small open-source CLI that gives you a CPU-profile-like view of time spent on your computer: https://github.com/sourcegraph/thyme. Thought I'd share since OP and others here might find it useful, and I'd love to hear any feedback.

3
esalman 3 ago 0 replies      
Off-topic: I really like Overleaf. It removes the barrier of installing all of the compiler, editor and dependencies and allows anyone to jump right in and start typing manuscripts. That's why I always recommend Overleaf to anybody looking to learn LaTeX initially.
4
syntex 43 ago 0 replies      
I work as a freelancer (Europe) and I think I work reasonably hard. But it is really difficult for me to honestly log more than 10-15 dev-billable hours during the week. The rest of the time is spent in chat and learning / playing with new things.
5
ones_and_zeros 2 ago 1 reply      
In the 5 seconds of thinking what I'd wish I'd seen before I clicked the link:

Pre MVP: 80% dev, 20% biz dev

Pre Revenue: 20% dev, 80% biz dev

Pre Profit: 80% hiring, 20% dev

Profitable: 100% biz dev

6
fosk 4 ago 1 reply      
It's interesting to note that meeting time decreased as management time increased, because often the two are strongly correlated.

In my experience meetings and management time grow proportionally, since meetings are a good way to talk with the team, set expectations and review the results. Or, in other words, some meetings == management.

7
lloydde 3 ago 0 replies      
> My app had some simple charting built in but no real analysis. It's only now, six months later, that I've had a chance to really get into the dataset

I'm interested to find out if the data influenced the OP from week to week (month to month) during the collection phase. Did it influence his keeping up his development by shifting it to weekends? Or were there other catalysts? What were they? Were there planned development milestones? Were they sized?

Fascinating article.

8
patrickgordon 4 ago 1 reply      
Interesting article, pretty remarkable to be that committed to the time tracking -- I can barely stick with using the pomodoro technique for longer than a few days in a row...

I hope OP continues to grow the dataset. Will be interested in a follow up later on!

9
cup 3 ago 2 replies      
The notion that a 130h work week is admirable, desirable, sustainable or useful is ridiculous and should be criticised every time it's raised.

Unless you're working for yourself, or in a job where your contract compensates you by the hour, I think investing such huge swathes of time is destructive.

There is a reason workers united, fought and were martyred for the 8 hour work week and the creeping clawback by industry is a problem.

That aside, very interesting to see such a consistent time keeping record.

10
dewitt 1 ago 2 replies      
*Or her time.
6
The Fall of Avalon Hill (1998) earthlink.net
25 points by shawndumas  3 ago   22 comments top 8
1
miiiiiike 2 ago 0 replies      
I've been reading the fantastic "Zones of Control: Perspectives on Wargaming" (https://mitpress.mit.edu/zones-control) recently.

Good stuff on everything from modeling counter-insurgencies in games to a history of Amarillo Design Bureau, who acquired the rights to Star Trek TOS. In 1981. When nobody cared. In perpetuity. And have been creating an alternate canon for one of the world's most popular franchises since.

2
hudibras 40 ago 1 reply      
It's interesting how this article thinks it's inevitable that wargames have had their day in the sun and will now die away almost completely, when in fact the wargame world is far stronger and the games far more interesting now than back in 1998.

Someone with more experience on the business side can explain it better than me, I'm sure, but the shift from catalog sales to pre-orders saved the industry. Instead of printing up 10,000 copies of Spices of the World and hoping they sell out, now the publishers have cash money in hand and know that they'll (probably) make a profit once the printing presses start up.

3
StanislavPetrov 1 ago 0 replies      
>Even wargamers would be hard pressed to name more than a few AH computer products, and nothing ever came close to impacting the general public like SimCity or Quake.

I think he is selling Avalon Hill a little short. As a gamer in the 80s I remember Avalon Hill very fondly. This was especially true in the early 80s when games were few and far between. I spent many hours tediously loading B-1 Nuclear Bomber on my tape cassette drive. They certainly never had a huge impact on the general public like SimCity but they were very well-appreciated and well-regarded in the Gaming community. To this day I still occasionally load up Guderian on the emulator, which remains an extremely challenging game even given the limits of AI in 1986.

4
JoeDaDude 2 ago 2 replies      
This is, of course, ancient news. If anyone misses the old wargames, I invite you to come to Board Game Geek (aka BGG) [1] and check out the scene. While they cover a lot more than wargames, the wargame community tends to be very active. War games are alive and well, though perhaps a niche hobby. The companies mentioned in the article are still alive and well; in particular, Multi Man Publishing continues to publish games for the Advanced Squad Leader rule set, however irregularly [2].

[1] https://www.boardgamegeek.com/wargames

[2] http://www.multimanpublishing.com/Products/tabid/58/Category...
5
kabdib 1 ago 3 replies      
I had a few Avalon Hill games when I was growing up, but none of my friends wanted to play them, and I was terrible at finding new friends.

So I read the rulebooks for Panzer Blitz and various other WWII games. That might have been more fun than playing the games :-)

6
wrigby 49 ago 1 reply      
I have tons of memories of playing Civilization with my brothers. We would leave the game board set up for weeks at a time and play it for a couple hours every day. It's frustrating to know that such a great board game was killed off by a naming conflict.
7
ddp 54 ago 0 replies      
FYI, SPI lives on still in https://www.hexwar.net/
8
rodgerd 2 ago 2 replies      
RuneQuest was hands-down my favourite pencil-and-paper gaming system ever; one thing I think it suffered from was that the world of Glorantha, while very much true to Bronze Age mythos, was a conceptual reach for people used to thinking of D&D's action-movie worldview.
7
Are PhD Students Irrational? lareviewofbooks.org
34 points by jseliger  2 ago   35 comments top 11
1
geebee 3 ago 0 replies      
Hard to say. The RAND institute did a study concluding that there is no meaningful shortage of STEM graduate degrees, that the aversion to these degrees, to the extent it exists, is a rational response to completion times, attrition rates, job prospects, and salaries when compared to other professional degree programs such as MBA, law, or medicine. However, this wasn't quite the same as concluding that it is irrational to pursue these degrees, just that we should stop scratching our heads and wondering why more people don't pursue them.

Payscale has an interesting ranking of graduate degrees by program, which is more useful than lumping all holders of a particular graduate degree together.

http://www.payscale.com/college-salary-report/grad?page=49

Unfortunately, it doesn't break out MS and PhD holders by subject studied. So while you do get to see a specific ranking for an MBA or JD holder from UCLA or MIT, you only get overall salary info for PhDs from MIT, not PhD in Computer Science from MIT vs Electrical Engineering from Berkeley. That would be far more useful.

As it stands, a PhD doesn't show up until spot 29 on this list, but then again, a PhD in CS might, so hard to say.

To me, the attrition rate from PhD programs is an under-appreciated aspect of this discussion. A lot of people from elite Law, MBA, and MD programs are floored when they hear the attrition rate from PhD programs. Seriously, the attrition rate from an elite law or med school tends to be well below one half of one percent. Attrition rates from elite engineering and science PhD programs range from 35%-50%.

In any case, I'm always glad to see this discussed. The only people who seem to think there is a "shortage" of STEM graduate students are people who have a financial interest in hiring STEM graduate students. Almost every other analysis concludes that people with the freedom to choose their career in the US (free of visa restrictions that limit their career choices) are largely acting rationally by pursuing other graduate degree programs (or no graduate degree program).

2
thr0waway1239 41 ago 0 replies      
When talking only about software and CS, I think there is one more way to look at this issue (and has been mentioned multiple times in this comment thread).

Let us say the typical software engineer's day is filled up with the following types of tasks:

1. What patio11 describes, although in a different context: "Don't try to make a career out of optimizing the SQL queries to display a preference page on a line of business app at a company that no one has ever heard of." [1]

2. Some kind of algorithmic work (e.g. writing a compiler)

3. Big data, machine learning etc. (consuming the results of algorithms, hence different from 2)

4. Software architecting

My view is that after a while, most people want to move from group 1 to one of the other groups. This gives a good way to explain the pursuit of a Ph.D. even when it is not economically rational (both in time and money cost) - it is a pursuit of something which is not mundane, as long as you make the effort.

A teacher of mine said: "You are going to be spending about 40 years in your career. If you take 5 out of that to do a Ph.D. you are not going to reflect on it with regret. And at the end of it you have a Ph.D. too."

When you combine this with the possibility that you will be mostly hanging out with elastic-minded students, as PG once put it, and that you might actually have the time of your (intellectual) life, the decision doesn't look all that irrational.

[1] https://training.kalzumeus.com/newsletters/archive/do-not-en...

3
throw_away_777 1 ago 1 reply      
In my experience people don't enter into PhDs expecting to make a lot of money. And STEM PhDs get paid, so it isn't as if we accumulate debt. Often times a PhD is the only route to doing what you love. It isn't possible to do academic research in industry. Personally when I entered into my PhD I really liked that the research I was doing had a noble purpose.

On the other hand, a PhD is way too long. And if you stop at year 4 or so you have basically wasted 4 years. I did not appreciate how much my values would change over time and sort of got locked into trying to finish my PhD. Trying to finish a PhD when you don't have passion for the subject anymore is very stressful. This experience is disturbingly common.

Finally, the author early on implies that the job market for STEM PhDs is not good. At least in physics, this is only true in academia. I know many people who have transitioned successfully to data science, or who have gotten a post doc. Overall, the unemployment rate of physics PhDs is low.

4
sn41 1 ago 1 reply      
I don't see why being irrational is bad. A "labour of love" is precisely something that does not obey the law of diminishing returns. Passion and interest are much more sustaining and fulfilling than a job in the finance sector just because it is highly paid.

I think "rationality" is a stupid assumption, often wrong. Of course, people need more money, but a lot of people are also willing to make sacrifices for something else - spouses, children, parents, religion or country. Money is important, but not the overriding concern for everyone. I often feel that the correct word for a "rational" individual in the sense of economics is "sociopath".

Ralph Nader once said about his organization, that "you can bring your conscience to work" every day. This actually counts for a lot.

However, the basic point of the article, that the universities are conning students, is perfectly valid (source: ex-PhD student, can relate to the frustration). This point needs to be better made, avoiding slinging mud on the intellect of the hapless students. This article is an example of how you can write a dubious article, even in the presence of good data.

5
altoz 1 ago 1 reply      
The article doesn't really touch on this, but for people embarking on a PhD program, school is what they've been doing 8 or so hours a day for 16+ years and school is what they're good at (or else they wouldn't have gotten into the PhD program). It's the least disruptive path post-college for a certain group of college grads and their choice is to trade potential financial gain for stability and familiarity. That may not be the best choice for each one of them, but it's certainly not irrational.
6
loser777 1 ago 1 reply      
In computer science, I feel that this definition of irrational only applies in the same way that doing anything that isn't maximizing my $/hour rate is irrational.

Sure, the academic job market is very competitive (though CS arguably has more opportunities than other fields), and landing a solid industry research position is no cakewalk either. But the difficulties in finding a research job don't preclude doing software engineering work to pay the bills.

It's not uncommon to decide a few years into a graduate program that you'd rather just be making more money and drop out for a more lucrative industry/industry research position. For CS PhD students, the job market only seems terrible if you have a very narrow definition of a job (which seems irrational).

7
sjg007 1 ago 2 replies      
A PhD is a great way to immigrate and this drives down PhD stipends.

A PhD is a great way to continue learning. And it's good for the overall economy, because companies rely on math, science and engineering to produce products.

8
hprotagonist 1 ago 0 replies      
I'd say it entirely depends on the assumption that to get a PhD means that you want to be a professor (and teach in the US at a reasonably decent school and get tenure one day). If that really is your goal, then yes, you should think very hard before you do it.

The Adjunct scam is really, truly atrocious. Adjuncts and grad students can unionize now, though, and it only takes one strike in the E. coli labs for a research university to cave...

However, PhDs do have valuable (and financially rewarding) roles in nonprofit research institutions that are not subject to the same challenges as universities (though I sure hope you like writing grants).

In "real" industry positions, a PhD can be a way to bulwark yourself against "the engineer trap", where you get promoted into a project management role and become unable to actually do anything except munge Excel documents. It is a gamble.

I have an engineering background, so I can't really speak to what it is like to, e.g., have a PhD in English. But, depending on your degree, there are also really cool gigs at places like microsoft research -- they do everything from cell biology to audio, now, as well as employ "pure theory" CS folks.

9
nickff 1 ago 3 replies      
Should universities not be held accountable for the fraud that they have perpetrated on their students? The tobacco companies were taken to court because their products had long-term consequences which they persistently minimized, while continuing to push product; it seems universities have done the same. If anything, the universities have better, clearer data than the tobacco companies, and the universities have far more people with clear understandings of both statistics and the observable consequences of getting a PhD. Throwing away the last years of your life because of cancer doesn't seem clearly worse than throwing away the best years of your life seeking a PhD.
10
jdoliner 1 ago 4 replies      
From my reading it seems like the crux of the argument is this:

> Weve presupposed a scenario in which there really is a massive oversupply of PhDs, and thus PhD students must be irrational for treading into an oversupplied labor market. But thats simply not true. PhD oversupply is just a euphemistic way of talking about the fact that colleges and universities havent met student-generated demand with a commensurate supply of full-time, tenure-track faculty.

PhDs aren't irrational for wasting 5 years on something that won't net them a job... universities are shirking their responsibility to provide a job to those who demand them.

8
How do we explain email to an expert? sobersecurity.blogspot.com
15 points by ashitlerferad  2 ago   16 comments top 9
1
upofadown 26 ago 0 replies      
It would be interesting to know exactly how one could get owned by running just an email server. It's been forever since there has been any server that could be attacked with just SMTP. IMAP has a login, as does that buggy PHP webmail program.

There just isn't much of an attack surface there.

The real problems are spam and convincing other mail servers to accept your email.

2
lwhalen 36 ago 0 replies      
Horsepuckey. Hosting your own data services is not dangerous, if you do your homework. Email is not that complex (compared to something like a federated Kerberos environment), and there are several good, concise, HOWTOs, books, etc on the subject. Running your own stack makes you a first-class citizen of the 'net, and more people should do it.
3
femto 18 ago 0 replies      
Be wary when something is described as "too hard" but the describer doesn't/can't give specifics on why it is too hard. If a person can't give a generally understandable description of the difficulties, one has to ask whether they understand the topic well enough to make a pronouncement of hardness.

> Then you'll understand there are no experts.

I disagree. There are, but experts have to keep learning in order to maintain their expertise.

4
zAy0LfpBZLC8mAC 52 ago 1 reply      
1. It's a massive exaggeration that it's dangerous to run your own email server. Now, he doesn't explain why he thinks that it is, but I guess there would be two major categories, both based on the assumption of vulnerabilities: (1) your server could be abused against others (yes, but so could your laptop or smartphone) or (2) your own emails could be at risk of being leaked (yes, but the solution to that obviously is not to directly give them to someone else instead).

2. None of that is actually solved by simply trusting someone else to run your email servers for you if you also can't judge whether they are doing it properly and, in particular, in your interest.

3. If you think that trusting someone else to run your email servers for you is an acceptable solution, that is functionally equivalent to trusting someone else to provide you with a software package to run on your email servers. Everything that someone else could do on an email server that they run for you, they could just as well package into, say, an install CD image, which you then could use to run your own functionally equivalent, hopefully well-configured and well-updated, mail server.

So, I would think, this is at best an argument against putting together your own mail server from distro packages, and configuring everything yourself, maybe.

Other than that, there is no fundamental reason why you would need to know email RFCs any more for running an email server than you need to know them for running an email client. In either case, knowing them can help you with figuring out some problems. But also, in either case, if the software is good, the programmers should have taken care of reading the RFCs for you, so you don't need to, for the most part.

5
aub3bhat 12 ago 0 replies      
Given the recent attempt against Kenneth Reitz, which used changed MX records to perform a password reset, I would caution against using a custom domain name, let alone a home-brew server.

http://www.kennethreitz.org/essays/on-cybersecurity-and-bein...

6
baby 26 ago 0 replies      
Came here thinking I would learn something. Learned literally nothing reading this article.

Reminded me of this other gem article: http://www.clickhole.com/blogpost/if-black-lives-matter-isnt...

Basically trying to prove a point with very... umm... poor arguments :)

7
Normal_gaussian 42 ago 0 replies      
I'm toying with setting up my own email server so that I can use all the prefixes for my domains. Ideally it would make managing my actual mail so much easier.

However. I know setting it up is a PITA. And if I move to it I am stuck with it.

The various mailinabox solutions sound great, but I can't quite tell if they do what I want and email is too damn complex to vet myself. And they need an entire bloody box because they don't play nice with containers apparently (on last google) - all to securely parse some text files. Overkill or what.

8
ikeboy 49 ago 0 replies      
That chart has some issues http://danluu.com/dunning-kruger/
9
gregatragenet3 1 ago 1 reply      
I haven't found a cloud email service which will let me keep my procmail filters. (A group-3)
9
SRL - Simple Regex Language simple-regex.com
162 points by maxpert  7 ago   67 comments top 30
1
Drup 5 ago 3 replies      
Regex combinators are a much better solution to that problem, for which I gave various arguments here[1]:

- You don't need to remember which regex syntax the library is using. Is it using the emacs one? The perl one? The javascript one? That new "real language" one?

- It's "self documenting". Your combinators are just functions, so you just expose them and give them type signatures, and the usual documentation/autocompletion/whatevertooling works.

- It composes better. You don't have to mash string together to compose your regex, you can name intermediary regexs with normal variables, etc.

- Related to the point above: No string quoting hell.

- You stay in your home language. No sublanguage involved, just function calls.

- Capturing is much cleaner. You don't need to conflate "parenthesis for capture" and "parenthesis for grouping" (since you can use the host's languages parens).

[1]: https://news.ycombinator.com/item?id=12293687

2
ajarmst 4 ago 0 replies      
Demonstrative "that" adjective-connective-subjective "seems" infinitive-marker "to" verb-existential "be" verb-passive-gerund "missing" article-definite "the" noun-subject "point".
3
colanderman 4 ago 2 replies      
The first example:

 /^(?:[0-9]|[a-z]|[\._%\+-])+(?:@)(?:[0-9]|[a-z]|[\.-])+(?:\.)[a-z]{2,}$/i
is a total strawman, needlessly obfuscated. How about writing it like this:

 /^[0-9a-z._%+-]+@[0-9a-z.-]+\.[a-z][a-z]+$/i
which, while "scary looking", is at least immediately readable by anyone who knows even the basics about REs. If the argument for "verbose REs" is valid, it ought to stand up against at least a typical standard RE.

Also, it's not clear that "letter" and "[a-z]" mean the same thing. Does "letter" include uppercase? Does it include non-ASCII letters like "[[:alpha:]]" does? Don't forget the weird collation behavior "[a-z]" sometimes encounters.
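
(If you want to check that the two patterns really agree, here's a throwaway C harness using POSIX regcomp/regexec - my own test with my own sample inputs, not from the comment; POSIX ERE has no non-capturing groups, so the first pattern uses plain parentheses:)

 #include <regex.h>
 #include <stdio.h>

 int main(void) {
     /* The "scary" pattern (POSIX-ified) and the simplified one. */
     const char *pats[2] = {
         "^([0-9]|[a-z]|[._%+-])+@([0-9]|[a-z]|[.-])+\\.[a-z]{2,}$",
         "^[0-9a-z._%+-]+@[0-9a-z.-]+\\.[a-z][a-z]+$"
     };
     const char *inputs[3] = { "foo.bar@example.com", "not-an-email", "a@b.c" };

     for (int p = 0; p < 2; p++) {
         regex_t re;
         if (regcomp(&re, pats[p], REG_EXTENDED | REG_ICASE | REG_NOSUB) != 0) {
             fprintf(stderr, "pattern %d failed to compile\n", p);
             return 1;
         }
         for (int i = 0; i < 3; i++)
             printf("pattern %d  %-20s -> %s\n", p, inputs[i],
                    regexec(&re, inputs[i], 0, NULL, 0) == 0 ? "match" : "no match");
         regfree(&re);
     }
     return 0;
 }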

4
qwertyuiop924 6 ago 4 replies      
So we're replacing a universally understood syntax with a new one that was just invented, and is painfully verbose? I understood what the first regex was doing just fine.

This is a major step up in readability, so it's nice, and you have to invent a new syntax to do that, so I'll chalk that up as unavoidable. But did it have to be so verbose? SCSH/irregex's SRE had similar readability wins, with way less verbosity. You still have to learn a new syntax, though.

5
chubot 35 ago 0 replies      
I designed a similar but terser language in 2012:

The examples give the gist: http://chubot.org/annex/cre-examples.html

More justification: http://chubot.org/annex/intro.html

doc index: http://chubot.org/annex/ (incomplete)

I showed it to some coworkers in 2013 and got some pretty good feedback. Then I got distracted by other things. One of the issues is that I learned Perl regex syntax so well by designing this language that I never needed to use it again :)

I plan on coming back to it since I'm writing a shell now, and I can't remember grep -E / sed -r syntax in addition to Perl/Python syntax.

SRL is the same idea, but I think it is way too verbose, which it appears a lot of others agree with.

If anyone is interested in the source code let me know! It was also bootstrapped with a parsing system, which worked well but perhaps wasn't "production quality". So I think I will reimplement CRE with a more traditional parsing implementation (probably write it by hand).

6
rosalinekarr 5 ago 0 replies      
This is really cool, but my brain keeps getting stuck on the word choice. Every time I see the `literally` keyword, I hear a teenage, valley girl accent in my head.

"Literally, at sign."

"Like, literally, hashtag, guys."

I can't even.

7
adamjcooper 5 ago 0 replies      
Consider changing "either of" to "any of".

The word "either" implies only two choices, making your opening example confusing when the first "either of" was really picking from three possibilities.

8
kazinator 1 ago 0 replies      
Sane middle ground:

 $ txr
 This is the TXR Lisp interactive listener of TXR 147.
 Use the :quit command or type Ctrl-D on empty line to exit.
 1> (regex-parse ".*a(b|c)?")
 (compound (0+ wild) #\a (? (or #\b #\c)))
 2> (regex-compile *1)
 #/.*a[bc]?/

9
throwanem 6 ago 2 replies      
I like how the marquee example is of how to do something you shouldn't be trying to do [1] anyway.

In more general terms, if a regex is complicated enough that something like this seems to make sense, the problem is that your regex is too complicated, and you should fix that.

[1] https://news.ycombinator.com/item?id=12312574

11
edtechdev 5 ago 0 replies      
Very nice. Would love to see versions in other programming languages.

I'm very interested in examples that extrapolate this idea to other areas of programming and even math. And also work in the reverse direction.

Most of the examples I've found are old or not open source.

another example of english to regex:
https://people.csail.mit.edu/regina/my_papers/reg13.pdf
https://arxiv.org/abs/1608.03000

English to dates:
https://github.com/neilgupta/sherlock

English to a graph (network representation):
https://github.com/incrediblesound/MindGraph

C to English and vice versa:
http://www.mit.edu/~ocschwar/C_English.html

English to python:
http://alumni.media.mit.edu/~hugo/publications/papers/IUI200...

English to database queries:
http://kueri.me/

12
glangdale 4 ago 0 replies      
We (the Hyperscan team) have spent a lot of time staring at regular expressions over the years (shameless plug: https://github.com/01org/hyperscan).

I think a better format for regex is long overdue, but this isn't it. It's way too verbose (other commentators also noticed the resemblance to COBOL). I'm picturing a Snort/Suricata rule with this format regex, and you've now doubled the amount of screen real estate per rule.

The real problems with regex readability are (1) the lack of easily grasped structure, so it's almost impossible to spot the level at which a sequence or alternation operates (PCRE's extended format and creative tabbing can help) and (2) the total lack of abstraction - so if you have a favorite character class or subregex you write it approximately a bazillion times.

13
coroutines 28 ago 0 replies      
Hey, remember that time we gave up regular expressions and went back to writing grammars? Right tool for the job..
14
Eridrus 6 ago 1 reply      
I'm no fan of regexes, but I'm not a huge fan of this either; I would be interested in seeing existing convoluted regexes expressed like this for me in an IDE, but I don't like it as an input format.

I do wonder if making an EBNF compiler like ANTLR more accessible would solve the readability & maintainability issues.

15
keithnz 4 ago 0 replies      
Maybe this works as a learning tool? I think it's much better to learn regex as is, no matter how ugly or terse you may or may not find it. It's pretty universal across languages (with some annoying variations). There are lots of online tools and programs that can help you decode or create regexes, and after a while it's not so hard to read/create them. It's also worth knowing a more comprehensive parsing tool or parsing techniques so you don't get too ambitious with regex :)
16
PieterH 6 ago 0 replies      
Looks like the COBOL of pattern matching, and frankly I like it.
17
PieterH 6 ago 0 replies      
What's missing in regexps IMO is composability so you can build larger patterns out of smaller ones, giving each a clear name. Replacing '[0-9]' with 'digit' doesn't really help much.
18
slantedview 5 ago 0 replies      
19
0xCMP 5 ago 0 replies      
I think this is really valuable. Just today I had a non-tech co-worker who needed to understand regex for some tool we were using. I did the regex for him and (very briefly) explained it. Now this might be something he can more easily grok, using a translator (+ regex101.com to verify) to create the more complex regexes he might end up needing.
20
matt_wulfeck 4 ago 0 replies      
The author pits his project against POSIX regular expressions, but personally I feel that it's PCRE that rules the day. I find pcre regex significantly less verbose and easier to read.
21
buckbova 6 ago 0 replies      
Sometimes it's difficult to reason out an involved regex. I doubt I'd ever use something like this from code but I might use the translator.

Example based on their example.

https://simple-regex.com/build/57bc5eac74c4d

22
Cozumel 2 ago 1 reply      
'Regular Expressions Made Simple'

Regular expressions are simple. It's just a matter of putting a bit of time in to learning them.

23
malkia 5 ago 0 replies      
Anyone remember: South. South. West. Look. Pick axe.

For the same reasons, I'm having mixed feelings about Cucumber and similar (BDD) testing frameworks that also rely on semi-English language to do things. It looks cool and enticing, but it's hard to sell (to others), even if I myself am super-excited to see it in action (just because of how crazy it looked the first time I saw it).

24
kazinator 5 ago 0 replies      
COBORE: common business-oriented regex.

Bonus: COBORE -> (Japanese) kobore ("spillage").

"Overflowing spillage of verbosity."

25
DonaldFisk 4 ago 0 replies      
If you're dissatisfied with the terseness of regular expressions, it's worth looking at SNOBOL4: http://www.snobol4.org/ which has been around for decades.
26
yegortimoshenko 2 ago 0 replies      
AppleScript, SQL, COBOL and others have all made the same mistake.
27
onetwotree 6 ago 1 reply      
Hmm, perhaps useful for a teaching tool.

If used as such, it'd be really nice to be able to go the other way - a regex explainer if you will.

28
JustSomeNobody 2 ago 0 replies      
I laughed so hard at this. 'Cause it _is_ a joke, right? Right?
29
parenthephobia 5 ago 2 replies      
I'm not absolutely sure this isn't a joke that got out-of-hand. This is the COBOL of regular expressions. :)

Whilst the conventional regular expression syntax is arguably overly compact, this is just too far in the opposite direction!

Something more PEG-like, or even Perl 6 regex-like, would make for more readable regular expressions whilst not completely throwing out everything we think things mean. Hell, even /x -- ignore whitespace and comments -- can make things much clearer:

 /
 ^
 [0-9a-z._%+-]+  # The local part. Mailbox/user name. Can't contain ~, amongst other valid characters.
 \@
 [0-9a-z.-]+
 \.
 [a-z]{2,}       # The domain name. We've decided a TLD can never contain a digit, apparently.
 $
 /x
Tangentially, there's no point validating email addresses with anything more complicated than /@/. If people want to enter an email address that doesn't work, they can and will. If you want to be sure that the address is valid, send it an email!

30
crdoconnor 4 ago 0 replies      
My attempt at simplifying (a subset of) regexes:

https://github.com/crdoconnor/simex

10
Academic Torrents: A distributed system for sharing enormous datasets academictorrents.com
479 points by iamjeff  14 ago   85 comments top 17
1
WhitneyLand 10 ago 3 replies      
API suggestions:

1). Don't use hard coded values for types

>GET /apiv2/entries?cat=6 -- List entries that are datasets

>GET /apiv2/entries?cat=5 -- List entries that are papers

These could be written as:

 /apiv2/entries/datasets
 /apiv2/entries/papers

2). You may not need path elements like entries, entry, collection, and collection name. For example, further simplification would leave

 /datasets
 /papers
3). Don't use capitals like this, switch to lowercase: /apiv2/entry/INFOHASH

4). Use HTTP verbs in a standard, semantic way. For example this

 POST /apiv2/collection -- create a collection
 POST /apiv2/collection/collection-name/update
 POST /apiv2/collection/collection-name/delete
 POST /apiv2/collection/collection-name/add
 POST /apiv2/collection/collection-name/remove
This could all be collapsed into one form and would in turn be more familiar to developers.
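
For instance, one possible collapsed shape, letting the verbs carry the meaning (the exact paths are my own suggestion, not part of the actual API):

 PUT    /apiv2/collections/collection-name                    -- create or replace a collection
 DELETE /apiv2/collections/collection-name                    -- delete a collection
 PUT    /apiv2/collections/collection-name/entries/infohash   -- add an entry
 DELETE /apiv2/collections/collection-name/entries/infohash   -- remove an entry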

2
tombert 13 ago 9 replies      
I'm pretty glad that torrents have started to break out of the "it's only for warez" stereotype. It's a useful technology, regardless of what made it popular.
3
rakoo 13 ago 1 reply      
If only WebTorrent (https://github.com/feross/webtorrent) worked with standard bittorrent protocol instead of a custom one on top of WebRTC, we would have live access to all the papers and "displayable" data directly instead of firing up a torrent downloader just for a small file.
4
danso 4 ago 0 replies      
So, what are the rights and licenses for this data? I see that one of them is Yelp photo data from a Kaggle contest [0]. Yelp distributes another Academic data set, but you have to fill out a form and agree to their TOS [1]. So they're OK with the data being available like this?

Another random datapoint: When EdX/Harvard released a dataset showing how students performed/dropped out, I uploaded a copy to my S3 to mirror and linked to it from HN. I got a polite email the next day asking for it to be taken down. Academics are (rightfully, IMO) protective of their data and its distribution (particularly its attribution).

One thing I would love to see on here is stuff from ICPSR, such as its mirror of the FBI's National Incident-Based Reporting System [2]. As far as I can tell, it's free for anyone to download after you fill out a form. But it also should be free to distribute in the public domain, but for all I know, ICPSR has an agreement with the FBI to only distribute that data with an academic license.

(The FBI website has the data in aggregate form, but not the gigabytes that ICPSR does)

[0] http://academictorrents.com/details/19c3aa2166d7bfceaf3d76c0...

[1] https://www.yelp.com/dataset_challenge/dataset

[2] https://www.icpsr.umich.edu/icpsrweb/NACJD/NIBRS/

5
edraferi 9 ago 0 replies      
How does this compare to IPFS (https://ipfs.io/)?

That project maintains a number of archival datasets, including arXiv: https://ipfs.io/ipfs/QmZBuTfLH1LLi4JqgutzBdwSYS5ybrkztnyWAfR...

Seems like an opportunity to combine efforts.

6
wodenokoto 10 ago 1 reply      
A lot of these "data sets" appear to be Coursera courses. I'm not sure if those are legal to redistribute. It also clutters the browse function, since a lot of results aren't data sets.
7
babak_ap 12 ago 0 replies      
Anyone looking for Wireless Data, I suggest taking a look at: http://crawdad.org/ and http://www.cise.ufl.edu/~helmy/MobiLib.htm#traces
8
_lpa_ 10 ago 0 replies      
I like this idea! In my research we deal with relatively large amounts of sequence data, all of which needs to be associated with GEO (https://www.ncbi.nlm.nih.gov/gds/). While GEO is in many ways a good thing, it is not the most pleasant to use - I would love it if we could use something like torrents instead.

I feel like there is a danger, however, that using torrents would facilitate the thousands of nonstandard (often redundant) formats bioinformaticians seem to create.

9
rkda 11 ago 0 replies      
You might also be interested in dat. They're trying to solve this problem as well.

http://dat-data.com/

"Dat is a decentralized data tool for distributing data small and large."

10
jakub_h 12 ago 5 replies      
I'm wondering if torrents as such are actually useful for this. I'd figure some kind of virtual file system (perhaps based on BitTorrent) would be very useful. You'd simply pass a file path to an open() routine in your scientific code and data would get opened transparently. You currently have this with URLs and HTTP but there's no useful caching or data distribution.
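
(To make that concrete, a rough C sketch of such a fetch-and-cache open() using libcurl - everything here, from the function name to the URL and cache path, is hypothetical, and a real version would want locking, partial-download handling, etc.:)

 #include <curl/curl.h>
 #include <stdio.h>

 /* Open a remote "virtual file": download into a local cache file on first
    use, then serve every later open from the cache. */
 static FILE *cached_open(const char *url, const char *cache_path) {
     FILE *f = fopen(cache_path, "rb");
     if (f) return f;                          /* cache hit */

     FILE *out = fopen(cache_path, "wb");
     if (!out) return NULL;
     CURL *curl = curl_easy_init();
     if (!curl) { fclose(out); return NULL; }
     curl_easy_setopt(curl, CURLOPT_URL, url);
     curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);  /* default callback fwrites */
     CURLcode rc = curl_easy_perform(curl);
     curl_easy_cleanup(curl);
     fclose(out);
     if (rc != CURLE_OK) { remove(cache_path); return NULL; }
     return fopen(cache_path, "rb");
 }

 int main(void) {
     curl_global_init(CURL_GLOBAL_DEFAULT);
     FILE *f = cached_open("https://example.com/dataset.csv", "/tmp/dataset.csv");
     if (f) { /* ... read it as if it were local ... */ fclose(f); }
     curl_global_cleanup();
     return 0;
 }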
11
degenerate 12 ago 0 replies      
I already knew about Academic Torrents, but didn't know about Kaggle, which was linked[1] from one of the 'popular' data sets on AT: https://www.kaggle.com/c/yelp-restaurant-photo-classificatio...

It looks like one of those logo-design-competition-sites, but for big data. Anyone compete in one of these?

[1]: http://academictorrents.com/details/19c3aa2166d7bfceaf3d76c0...

12
WhitneyLand 10 ago 0 replies      
It's a great initiative. One more step to help bring science collaboration into the modern Internet world.

How much data do you have? How much storage do you project is needed? I'm wondering how practical it would have been to use centralized storage, which has its own advantages.

13
jakeogh 6 ago 0 replies      
Similar project: http://911datasets.org
14
arxpoetica 9 ago 0 replies      
How does this compare with, say, noms? https://github.com/attic-labs/noms
15
danielmorozoff 12 ago 0 replies      
This is fantastic. I am so glad someone built this.
16
Philipp__ 12 ago 0 replies      
This is amazing! Thank you for sharing this!
17
mtgx 13 ago 5 replies      
.com? Is that wise? I imagine the domain will be seized as soon as the site becomes popular enough. They should at least have some contingency plan to deal with that.
11
Can smiling make you happier? slate.com
61 points by nate  6 ago   21 comments top 12
1
sowhatquestion 2 ago 0 replies      
Was anyone else disturbed by Strack and Martin's hand-waving away the null replication result? Based on my (admittedly elementary) knowledge of statistics, it seems like 17 replication attempts (samples) whose means are distributed around zero constitute some pretty airtight empirical evidence that there's no inner emotional effect from smiling. How else to read Strack and Martin's complaints but as a kind of special pleading that there was something ineffable about the experiment that the replications missed? Some of their comments gesture in the direction of claiming that replication may be literally impossible.

I walked away from this article more convinced than ever that there are big problems with this field of research. And I don't "want" to believe it, either -- I loved Kahneman's Thinking, Fast and Slow.

Speaking of which, kudos to Kahneman himself for being (apparently) a more committed empiricist than the other psychologists discussed here.

2
j2kun 5 ago 0 replies      
3
rezashirazian 4 ago 0 replies      
Having to fake a smile to feel slightly happy is fairly depressing to me.
4
ultramancool 23 ago 0 replies      
If I recall correctly, they've had major issues duplicating studies which got positive results on this.
5
bootload 49 ago 0 replies      
Interesting question. This week it was reported that Gene Wilder passed away. Yet if you look at most photos he is smiling, and as I look at them I cannot help but feel positive.
6
pcunite 4 ago 1 reply      
Smile awhile

And give your face a rest

Raise your hand to the one you love the best,

Then shake hands with those nearby,

And greet them with a smile!

7
matthoiland 4 ago 1 reply      
It seems this article is more about research techniques over the years than it is about smiling.
8
postmeta 1 ago 0 replies      
Reading hackernews and smiling can make you happier./gratuitous
9
sandworm101 4 ago 2 replies      
Forced smiles making you more happy? Talk to anyone in in-person customer service. Talk to a model or "sales associate". Talk to a waitress, or even a stripper. There are plenty of people who force smiles all day. It isn't fun. They don't feel better about themselves afterwards.

https://en.wikipedia.org/wiki/Smile_mask_syndrome

"According to Natsume, this atmosphere sometimes causes women to smile unnaturally for so long that they start to suppress their real emotions and become depressed."

10
Geee 4 ago 2 replies      
What is the answer? I'm not going to read the whole thing.
12
PieterH 3 ago 1 reply      
Sigh. People seem to always forget that we're a social species and a lot of our emotions have social roles.

Smiling as such does not make us feel happier. That is trivially proven, just as pretending to cry does not make us sadder.

Thinking of happy things can make us feel happier, and thus we may smile.

But what makes us happiest of all is when other people smile at us (without being creepy). And so if I smile at you, and you smile back, then I will absolutely feel happier.

13
Decoupled Neural Interfaces Using Synthetic Gradients deepmind.com
41 points by yigitdemirag  4 ago   9 comments top 3
1
nicklo 14 ago 0 replies      
Super cool stuff in this paper.

At its heart, this is a new training architecture that allows parameter weights to be updated faster in a distributed setting.

The speed-up happens like so: instead of waiting for the full error gradient to propagate through the entire model, nodes can calculate the local gradient immediately and estimate the rest of it.

The full gradient does eventually get propagated, and it is used to fine-tune the estimator, which is a mini-neural net in itself.

It's amazing that this works, and the implication that full back-prop may not always be needed shakes up a lot of assumptions about training deep nets. This paper also continues this year's trend of using neural nets as estimators/tools to improve the training of other neural nets (I'm looking at you, GANs).

Overall, excited to see where this goes as other researchers explore the possibilities when you throw the back-prop assumption out.
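
A toy numpy sketch of the flow described above -- update a layer immediately from a predicted gradient, then later regress the predictor toward the true gradient once it arrives; the linear predictor, shapes, and learning rate are illustrative assumptions, not the paper's architecture:

  import numpy as np

  rng = np.random.default_rng(0)
  W = rng.normal(size=(10, 10)) * 0.1   # weights of one layer
  M = np.zeros((10, 10))                # linear synthetic-gradient module
  lr = 0.01

  x = rng.normal(size=(4, 10))          # input batch
  h = np.tanh(x @ W)                    # forward pass through the layer

  # 1. Predict dL/dh from h alone and update W right away, without
  #    waiting for the true gradient to propagate back through the model.
  g_hat = h @ M
  W -= lr * x.T @ (g_hat * (1.0 - h ** 2))   # chain rule through tanh

  # 2. When the true gradient eventually arrives, use it to fine-tune
  #    the estimator M with a simple regression step toward g_true.
  g_true = rng.normal(size=h.shape)     # stand-in for the real backprop signal
  M -= lr * h.T @ (g_hat - g_true)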

2
imh 1 ago 3 replies      
Do most of the images not load for you guys too?
3
m1ck 1 ago 2 replies      
Is this a big deal?
14
106 years after tragic crash, locomotive located in Lake Superior duluthnewstribune.com
26 points by wglb  3 ago   7 comments top 3
1
miles 1 ago 1 reply      
Perhaps the most tortured and obscure opening paragraph I've ever read in a newspaper:

Guided not just by the hands of operator Tom Crossmon, but also by the past efforts of an extended network of divers and the collective memory of a community, the remotely-operated vehicle descended into the depths of Lake Superior.

2
vanattab 1 ago 3 replies      
Why can I not highlight the text? I can't be the only one who likes to highlight as they read. If it's to stop copying, they should have disabled double-click and Ctrl+A too.
3
sbuttgereit 50 ago 0 replies      
Hmmm... the black and white opening sequence in the accompanying video appears to be a model of the train in question -- like an HO-scale model -- as opposed to actual historical "before" photos. I don't recall seeing that kind of presentation in a story like this.
15
Browser Bloat (1996) miken.com
54 points by laktak  4 ago   35 comments top 12
1
wtbob 1 ago 2 replies      
So, Netscape Atlas was 6 megabytes in June 1996; Firefox for Windows 64-bit is 45.2 megabytes today (I picked the Windows download because it sounds like this guy was using Windows back then); that means the size has multiplied by 7.5 over the last twenty years, for an annual increase of about 10%. That's not a terribly large increase, actually.

OTOH, according to http://www.jcmit.com/diskprice.htm disk prices have been dropping by almost a third every year, with the result that the cost of a Firefox install in 2016 is less than 1/500th the cost of a Netscape install in 1996. That's pretty awesome!
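
A quick sanity check of the growth-rate arithmetic above (sizes in MB, using the comment's own figures):

  ratio = 45.2 / 6.0            # Firefox 2016 size / Netscape Atlas 1996 size
  annual = ratio ** (1 / 20)    # compound annual growth over 20 years
  print(round(ratio, 1), round((annual - 1) * 100, 1))  # 7.5 and ~10.6 (%/yr)
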

2
rbisewski 1 ago 0 replies      
Rather fun read, I must say.

It really does seem like browsers "need" the bloat; the browser has gotten to the point of being the most useful and most used application on any OS, for a given section of the end-user base, anyway.

At that point, can we really call those features bloat? Maybe in 1996 you didn't need to play videos much, but a web browser without HTML5 can't do what the majority would expect. What users expect from the internet has very much expanded.

I actually was curious about some of the inner workings of this stuff and made a browser using WebKit once.

https://gitlab.com/ibiscybernetics/sighte

All-in-all one of the more unusual side projects I played with.

3
zeta0134 2 ago 0 replies      
Well, we're definitely at the point where my Web Browser uses more memory than my Operating System. I'm impressed, a great deal of this talk could be applied to modern browsers with a surprising degree of accuracy.

I love looking at the feature list and seeing what actually caught on (Audio Playback) and the laundry list of features that seemed like good ideas at the time, but had no place in the browser. CoolTalk eventually got implemented as VOIP, and chat features showed up on websites once JavaScript got good enough to facilitate it, but nearly everything else has fallen by the wayside.

4
kazinator 1 ago 1 reply      

 <meta name="GENERATOR" content="Microsoft FrontPage 4.0">
Did Microsoft FP 4.0 exist in 1996?

Wikipedia (https://en.wikipedia.org/wiki/Microsoft_FrontPage) says that it was 1.1 in 1996 and 2 in 1997 (also called FrontPage 98).

This might not be an original unmodified-since-1996 page.

Beautiful page though.

5
windlep 1 ago 2 replies      
> The browser will require more memory than the operating system it runs on.

Done! And of course, many apps routinely now take more memory than the OS.

6
sonar_un 2 ago 1 reply      
Oh wow, VRML, I totally forgot about that!

I still remember hunting down all the VRML sites I could find, in all of their few-polygon glory.

7
zwetan 37 ago 0 replies      
and yet 20 years later, the situation is not that much better ...

each browser vendor is still fighting the others by adding features that make their browser the platform of choice, like Chrome APIs only available through the Chrome app store.

let's all kill Flash because plugins are bad, but let's not hesitate to add our own plugin -- oh, excuse me, the politically correct word is "addon" -- as a native extension to cast things around.

and when CPU, RAM and bandwidth are no longer an issue, let's aggressively cache everything so that the bloated browser does not feel so slow anymore.

8
dTal 3 ago 1 reply      
>A few of the real fringe-dwellers even predicted that Java would cause the end of Microsoft's dominance of the desktop market.

Android apps are written in Java. There's still time...

9
jandrese 1 ago 1 reply      
I don't remember Cooltalk at all. Did it never make it past the Alpha? Did it only ship on Windows? The blurb makes it sound fairly interesting. A built-in IRC client (or maybe ICQ?), VoIP, and a shared whiteboard facility. It really imagines the browser as part of a bidirectional communication system, not just a viewer for published content.

Knowing the timeframe it probably didn't work through NAT and died as static IPs for residential customers stopped being a thing. NAT broke a lot of promising applications back in the day.

10
xeniak 2 ago 0 replies      
This is well juxtaposed with Cast being added natively into Chrome: https://news.ycombinator.com/item?id=12383367
11
nilved 3 ago 4 replies      
The Web is bad and getting worse, not better. I still don't understand why browsers became operating systems.
12
djsumdog 3 ago 0 replies      
Oh the nostalgia! The white board! Cool talk! Man I remember all of that stuff.
16
Show HN: Architect Hardware Description and Emulation JavaScript Library github.com
9 points by mbad0la  2 ago   4 comments top
1
TD-Linux 37 ago 2 replies      
For reference, the 4-input AND gate from the third example in Verilog:

 module ForInpAndGate(a, b, c, d, o);
   input wire a, b, c, d;
   output reg o;

   always @(*) begin
     o = a & b & c & d;  // combinational 4-input AND
   end
 endmodule

17
Hacked: Investigating an Intrusion on My Server frantzmiccoli.com
48 points by frantzmiccoli  5 ago   13 comments top 7
1
schwede 11 ago 0 replies      
You should consider running this malware detection script[0]. That script is designed to catch malware php scripts. I've used it before with lots of success. You will also want to check the access logs for all php scripts hit in the last few months. Your hacker's spam script is likely being activated/run by a GET or a POST to that script. That's a pretty cheap way to screen for other compromised files.

[0] - https://www.rfxn.com/projects/linux-malware-detect/

2
viraptor 3 ago 1 reply      
This article was absolutely painful to read. Let me try a different ending:

> Conclusion:

1. Ensure the files that php is running are not writable by the same process. Different app -> different user.

2. Unless you're planning to send emails from the server, firewall output on those ports. If you do plan to, firewall everything apart from that server (you can set up alerts for when DNS changes; it's not going to happen often for email hosts).

3. Disable most unnecessary functions and modules. Anything touching eval, str_rot13, exec, and many others should be killed right away.

4. Enable open_basedir (see the php.ini sketch after this list).

5. If you can't handle regular system updates, don't run your own server. If you can't handle wordpress updates, host it with someone else.

6. Wherever you host, make sure your app can be easily redeployed. You can't rely on dates to see when anything changed.

7. If you can't handle system upgrades, don't think that docker or any software is going to solve any of your problems.
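
A minimal php.ini sketch of points 3 and 4; the paths and the function list here are assumptions to tune for your app, not a drop-in config:

  ; illustrative hardening only -- adjust the list and paths to your app
  ; (note: eval is a language construct, not a function, so it cannot be
  ; disabled via disable_functions; str_rot13 and the exec family can)
  disable_functions = exec,passthru,shell_exec,system,proc_open,popen,str_rot13
  open_basedir = /var/www/example.com:/tmp
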

3
AReallyGoodName 51 ago 1 reply      
Question about this:

It's not possible for me to track all the 0-days for every piece of software and library that my servers run. One of my long-running servers could have been backdoored by a 0-day 6 months ago and I probably wouldn't know it. The servers are kept updated but 0-days don't care about that by definition.

What's the best practice here? Should we pre-emptively have our servers rebuilt daily just in case a 0-day backdoored them?

4
rmdoss 1 ago 0 replies      
Fun read. If you ever get hacked, I recommend just destroying the server and starting from scratch and a clean backup.

If you try to find all backdoors and left over rootkits, you will end up forgetting one and being re-compromised.

5
efoto 4 ago 1 reply      
In short: an old server running unpatched software was broken into not once but at least four times since the start of the year.

The post describes some steps the author took to investigate and block the attack. It's entertaining reading, but I'd strongly advise against trying this at home: it is not worth the risk. Reinstalling the system from scratch instead would have been much more prudent.

6
Hogg 3 ago 0 replies      
Does }__ appear in your logs? All versions of all branches of Joomla prior to I think 3.4.6 had a problem with serialization that allowed arbitrary PHP execution.
7
fake-name 3 ago 0 replies      
Ahhh PHP, where `assert()` `eval()`s strings passed to it.
18
Flying Car Dreams May Soon Be a Reality pddnet.com
15 points by prostoalex  3 ago   14 comments top 4
1
sokoloff 2 ago 3 replies      
I'm a pilot and owner of general aviation aircraft. I simply don't understand the appeal of a flying car. It's going to inevitably be a compromise design and likely suck at both being a car and at being an aircraft.

Aircraft "want" to be lightweight and low-drag and crashworthiness (and expense) is not a significant design parameter. Cars need to be crashworthy as they are frequently bumping into each other.

I can fly into nearly any airport and have ground transportation easily available. Certainly at any good-sized airport, of which there are hundreds more than are served by airlines.

If that ground transportation suffers a minor collision, no matter, my airplane ride home is unaffected. If my flying car is damaged in a ground collision, I'm stuck finding another way home and coordinating repairs from a distance. I don't want to maintain my car to the standards required of aircraft. (Want an engine overhauled for my airplane? It's going to cost more than the median new car. Not bragging; mostly just complaining. Want the annual [invasive] inspection done? In most cases, that's going to be a $3-5K bill, minimum.)

I welcome the interest and hope some of that rubs off on general aviation interest, but I don't see flying cars as anything more practical than a gimmick.

Prediction: the media will be the only ones to make significant money from flying cars.

2
T0T0R0 7 ago 0 replies      
Unless they have plans to incorporate self-flying cars into the massive campaign to roll out self-driving cars, there's no way 99% of ordinary people can be trusted to safely negotiate flying a heavy piece of equipment over populated areas.

Here in the United States, people don't like 1 kg quadcopters buzzing their neighborhoods, driveways and shopping centers.

So, now a one ton piloted helicopter? Uh, seems dubious.

3
rdtsc 1 ago 0 replies      
I like the idea of a flying car as a great example of just-around-the-corner technology.

I remember reading about flying cars in the mid-'80s in a Soviet technical journal (Yuniy Tehnik or Tehnika Molodyioji, forgot which...). It was about Paul Moller's cars and how in just 5 years we'd have flying cars around. And I thought that would be so awesome.

30 years later it is still just around the corner.

My other favorite one is "new type of batteries". Every other month there is a new type of battery that will revolutionize the energy economy.

Not saying there haven't been improvements in these areas, or we'll never see it happen, but it is just an interesting observation I noticed about those 2 things.

One reason, I imagine, is that those two things are easy to sell as "popular science". It is easy for anyone, regardless of background, to imagine flying in a car or to imagine never having to change batteries again. Nanobots or faster integer factorization using quantum computers are maybe not as captivating or fun to dream about.

4
slr555 49 ago 0 replies      
I have to agree with Sokoloff below. The problem with flying cars is, well... they fly. Which means dealing with an object in 3 dimensions -- pitch, yaw, roll and all that -- in a product that's supposed to replace a Prius. Even in highly sophisticated jets, landing and takeoff require a high degree of skill. Rotary wings are easier in some regards, but if they stop you sort of drop like a stone. People can't handle driving on a three-lane road. I have little confidence they could navigate a 4- or 7-layer stack of lane sets.

Sadly, George Jetson is still a man of the future.

19
Btree vs. LSM github.com
83 points by pbhowmic  8 ago   18 comments top 8
1
hendzen 2 ago 0 replies      
I have a lot of respect for the author of that page, Alex Gorrod. However, this benchmark is quite dated (2014). The WiredTiger codebase was still immature when it was written. Since WiredTiger's acquisition by MongoDB (the company) and integration into MongoDB (the database), the btree and LSM implementations have undergone extensive changes and are now much more hardened and adapted to production workloads. Furthermore it would be worth considering other engines such as InnoDB (btree) and RocksDB (LSM), which along with WiredTiger are considered the leading open source storage engines.

For more recent and realistic benchmarks you should look at the work of Mark Callaghan.

2
3pt14159 6 ago 2 replies      
Great analysis but can someone fork this and update the graphs to improve some things? I'd do it but I'm on a really bad internet connection right now.

1. Use SI magnitudes, so that we don't have to count the zeros. (so 100k instead of 100000.)

2. Either use the same chart for LSM vs Btree for each of read and write, or at the very least use the same y axis between the two.

3. It's hard to see the difference on the limited write benchmarks. Can we make it either logarithmic or another graph with these differences highlighted?

3
armon 1 ago 0 replies      
The synthetic benchmark used here is not a great indicator of performance in most (any?) production environments. It's generally useful and interesting to understand the tradeoffs between an LSM and a B-Tree. In particular, if you have an update- or delete-heavy workload, the compaction cost of an LSM can become an issue. B-Trees don't suffer from compaction issues, so in some sense they trade consistently slower writes for never having to pay for an I/O-intensive compaction.

Some of the worst production experiences I've had came from exhausting I/O on the database, and then having LevelDB / RocksDB / LSM stores kick off their compaction. A B-Tree will generally give you a very consistent redline in terms of performance.

TL;DR: There are trade offs between the two, but this benchmark is not particularly insightful given that it doesn't really test any of the interesting boundary conditions or real world query patterns.

4
erichocean 5 ago 2 replies      
My favorite database by far today is LMDB (B+Tree).[0] Performance is insane, and very low-variance. Reads scale linearly with core counts, and it has a lot of useful index types and knobs to get maximum performance.

What am I most looking forward to using later this year? ScyllaDB[1] and CockroachDB[2], both in conjunction with LMDB.

[0] http://104.237.133.194/doc/

[1] http://www.scylladb.com/

[2] https://www.cockroachlabs.com/

5
filereaper 3 ago 0 replies      
I'd love to see databases broken down by their primary underlying storage mechanism, e.g.:

- RDBMS: B-Tree Layout (good for lookups)

- No-SQL (like) DB's: LSM (good for heavy write throughputs)

And then there's ones like Dremel which opt for high-octane full-table scans.

6
hans 3 ago 0 replies      
here's a comparison of the LSM and Fractal trees.

http://highscalability.com/blog/2014/8/6/tokutek-white-paper...

7
rawnlq 5 ago 1 reply      
Might be a stupid question, but is this performance of just the data structure in memory or with reading/writing to disk also?
8
professorm 6 ago 2 replies      
I noticed they used LD_PRELOAD=/usr/lib64/libjemalloc.so

glibc malloc not up to the task?

22
DTrace and Python github.com
92 points by myautsai  10 ago   13 comments top 3
1
bcantrill 6 ago 0 replies      
Great deck! In particular, for the incredible magic on slide 14, a debt of thanks is owed to John Levon[1] and to whomever has maintained that work and brought it forward.

On slide 28, the presentation asks "What is 'Speculative Tracing'?" It's unclear if that's a rhetorical question, but just to answer it here: speculative tracing is a DTrace facility that allows for data to be traced speculatively, and only committed to the trace buffer if and when some other (later) condition is met.[2] My original inspiration for this was a case that we had back in the day at Sun on the Performance and Application Engineering (PAE) team, when Yufei Zhu (now at Facebook) described a case she had in which one out of every 10,000 mmap()'s was failing with EINVAL -- and it was really tough to use DTrace when she was only interested in its output 0.01% of the time. For Yufei's (motivating) example, speculative tracing offered a way of capturing all of the necessary data on every mmap request, but only emitting that data when the entire operation was found to have failed. Speculative tracing is one of those you-don't-need-it-until-you-need-it features of DTrace (to which I would certainly add anonymous tracing) -- but when you need it, it's a lifesaver, and I have used it as recently as last week to nail a particularly nasty bug that very much needed it.[3]

[1] https://blogs.oracle.com/levon/entry/python_and_dtrace_in_bu...

[2] http://dtrace.org/guide/chp-spec.html

[3] https://twitter.com/bcantrill/status/769225926726918144
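
To make the facility concrete, here is a minimal, untested D sketch for the mmap/EINVAL case, patterned on the speculative-tracing examples in the guide chapter at [2]; the probe bodies and printed fields are illustrative:

  /* trace every mmap speculatively; keep the data only on EINVAL */
  syscall::mmap:entry
  {
      self->spec = speculation();
  }

  syscall::mmap:entry
  /self->spec/
  {
      speculate(self->spec);
      printf("mmap(0x%x, %d, ...)", arg0, arg1);
  }

  syscall::mmap:return
  /self->spec && errno == EINVAL/
  {
      commit(self->spec);     /* the 0.01% case: emit the traced data */
      self->spec = 0;
  }

  syscall::mmap:return
  /self->spec/
  {
      discard(self->spec);    /* the common case: throw the data away */
      self->spec = 0;
  }
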

2
wslh 6 ago 1 reply      
Shameless plug: if you are looking for something similar to DTrace but for Windows that can also be used in Python or any other COM-capable programming language... you can take a look at our Deviare Hooking Engine: https://github.com/nektra/Deviare2

Additionally, we have other open source instrumentation engines like https://github.com/nektra/Deviare-InProc (better and more secure than Detours, check [1]), RemoteBridge: https://github.com/nektra/RemoteBridge. SpyStudio will also be open sourced very soon: http://www.nektra.com/products/spystudio-api-monitor/

[1] https://www.blackhat.com/docs/us-16/materials/us-16-Yavo-Cap... and https://www.blackhat.com/docs/us-16/materials/us-16-Yavo-Cap...

3
pixelmonkey 8 ago 4 replies      
Does anyone have any inside knowledge of when, if ever, DTrace might become a standard/official part of Linux? Seems like right now you need to compile a kernel module which will taint your standard kernel in order to use it.
23
90% of software developers in the US work outside Silicon Valley qz.com
99 points by cpeterso  5 ago   58 comments top 15
1
mklim 4 ago 1 reply      
And the Bay Area is roughly 6,900 square miles vs the 3,806,000 square miles of the US as a whole. Put another way, Silicon Valley is where you can find 10% of the total number of US developers despite it being only ~0.2% of the country's landmass. I get the general point the author is trying to make, but 10% in such a small region is still an extraordinarily high number.
2
steveeq1 4 ago 4 replies      
Another way to interpret this statistic is "10% of software developers in the US work in Silicon Valley"
3
bcheung 3 ago 5 replies      
This is actually a very surprising statistic to me. I lived in LA and was not happy with the job offerings at all and moved back to the bay area where I'm originally from.

I'm also surprised that San Jose has 2x the concentration of software developers that San Francisco has. Maybe it's just because I have a startup bias and that seems to be the trend for startups. Very few startups in SJ compared to SF.

4
alanh 4 ago 0 replies      
The source for this story reports that 89% of software devs work outside the Valley. 89%. Apparently Quartz didn't find that number punchy enough.

But yeah, 11% of US software devs live in SV? That is remarkable, although not surprising.

http://www.arcgis.com/apps/MapJournal/index.html?appid=b1c59...

5
NikolaeVarius 5 ago 3 replies      
Also a majority of the world's population lives outside the US. News at 11.
6
fma 2 ago 1 reply      
I would be curious to see the percentage change over the years. With the growth of tech everywhere (note on the map there are lots of big circles) and the cost of living skyrocketing in the Bay Area, Boston, etc., what's the influx like of people moving to Atlanta, Houston, Dallas, etc.?
7
laxatives 3 ago 0 replies      
This article is terrible. It uses 90% as evidence developers are moving out without any other statistics, namely the percentage from another period of time. It's completely baseless. No shit most of the developers don't live in one city or region.
8
flukus 2 ago 0 replies      
Is this news to anyone not in silicon valley?

Also, the majority of software developers are not in the United States.

9
rhapsodic 5 ago 1 reply      
This should not be news to anyone in the industry.
10
dba7dba 2 ago 0 replies      
Am I reading a tech related article or a political hit piece that spins numbers to fit an agenda?
11
robin_reala 5 ago 0 replies      
The title confused me until I clicked through and realised it was just talking about US software developers (and someone's since edited it - thanks!).
12
st3v3r 5 ago 1 reply      
And yet, 90% of the money is still inside SV.
13
yandrypozo 4 ago 2 replies      
did you note that circle at the intersection of Colorado, Wyoming and Utah?

redneck startups ?

14
meddlepal 2 ago 0 replies      
No Shit. News at 11. Is this only surprising to SF folk?
15
beamatronic 5 ago 3 replies      
How many 10x developers work inside Silicon Valley vs outside Silicon Valley?
24
Warned of a Crash, Startups in Silicon Valley Narrow Their Focus nytimes.com
207 points by my_first_acct  11 ago   144 comments top 19
1
icehawk219 9 ago 10 replies      
For the past 10 months or so I've been part of a 3-person team that is bootstrapping. I can't count how many times I've been looked at like I have two heads when I tell someone that no, we don't have funding, and we aren't looking for it, and we aren't planning on looking for it, and we're not entirely sure we even want it. And when I tell someone that we're more worried about building a sustainable business that can stand on its own, I might as well be speaking a foreign language.
2
chatmasta 9 ago 2 replies      
It seems like a lot of the "easy money" in early rounds comes from institutional seed investors like SV Angel, who are happy to hand you a $100k note with a good story and a bit of traction.

The problem I see with this is that their willingness to do that is tied to the performance of their early investments that have turned into unicorns. So for SV Angel for example, that would be Snapchat. As long as the unicorns are riding high at valuations that are obscene multiples of the initial VC investment, the VC can afford to make more small, early stage investments. If the goal is a 10x average return on the fund, then the higher the valuation of its unicorns, the more it can invest small sums in the "long tail" of early investments.

The problem is that as soon as a unicorn sees a devaluation, the calculation of average return decreases, and therefore there is less money available for that long tail.

This is how I see it, anyway, with a fairly unsophisticated understanding of the mechanisms. I'm curious to hear other input on this perspective.

3
zizzles 5 ago 4 replies      
Silicon Valley is in NEED of a crash.

It is touted as a hub of "innovation" but I do not see it. There are exceptions that exist, scientific and medical companies perhaps, but the majority of tech-startups are not that at all, they are a fucking FUGAZI. They are speculative companies that are all about hype and getting an "exit" someday. Steve Jobs (as an example) is regarded as a "deity", a god of Silicon Valley. Because of wealth? Because the iPhone / iPod / iWhatever had a simple design? Step outside, the iPhone is used as a vessel for narcissism. Facebook and Instagram, two billion dollar companies in the valley, those two are the TEXTBOOK narcissism vessels of HUMAN HISTORY. These tech-startups are an absolute pathetic coping mechanism for humanity, they are not innovative or special.

4
Animats 8 ago 3 replies      
From the article: (failed startup) "providing valet parking with the touch of a smartphone button".

What's ending, one can hope, is the idea that attaching some low-end labor-intensive service to a smartphone app is a "tech business".

5
danieltillett 6 ago 1 reply      
There won't be a crash until the rivers of money being pumped out by the central banks of the world stops. The entire VC industry is just a tiny cork floating on a gale-whipped sea.

I do think that it is getting really hard to break through the noise with a new concept. There are just so many companies chasing the same eyeballs that it is really hard to get the traction needed to build a unicorn no matter how much money you raise. At this point it might be better to go hyper-niche and bootstrap.

6
vonnik 7 ago 1 reply      
Katie Benner has written some good stories, but I'm suspicious of articles like this, because they insist almost irrationally on finding a trend that they can make a pronouncement about.

The truth is, startups are all over the place. Some are wasting cash, and some are trying to build sustainable businesses, and some are trying to get acquihired. And all of those things are going on all the time.

VCs have raised record levels of investment from LPs this year. And that money will get pushed into the system whether Bill Gurley wants the competition or not.

The people who harp on an impending crash -- and they have been harping for quite some time now -- seem desperate for something to say about tech. In secular terms, tech's star is on the rise and everyone knows it, so the real news would be a crash. But the crash, like Godot, refuses to arrive. Actually, many parts of tech are pretty damn healthy, and moving fast, AI and robotics being just two.

7
cylinder 9 ago 5 replies      
Crash is going to happen. Most of these startups are not run by people with business sense who know how to make a company profitable. They are run by people who know how to tell stories to raise easy money. The money is not there anymore; soon enough they'll run out and won't get a lifeline.
8
izolate 9 ago 2 replies      
Evernote is not a startup any longer. I understand the grey area, but as an industry we need to reach some kind of consensus on this word.
9
jorblumesea 8 ago 2 replies      
This is just Silicon Valley growing up and joining the actual business world. It used to be a blank check; now people are asking questions about finances and valuation. VC-backed startups used to be some super risky, exotic venture. Now that it's become more commonplace, more common business controls are being put into place. Startup == small agile business.

This is a good sign, in my opinion. It means the word startup is no longer some rocket to the stars but means a small business with bootstrapped capital that may or may not make it. Like every other small business tbh. VC has become a legitimate vehicle for investment and returns and not some exotic moonshot project. Therefore, financials are now questioned.

10
rdtsc 9 ago 2 replies      
At least based on HN coverage there seems to have been a recent wave of stories about scammers and wantrepreneurs, some questions about "signs of failed startups", warnings about red flags and so on.

Is increasing scammer activity a sign of the tail-end of a market hype?

11
yalogin 8 ago 2 replies      
What defines a startup? Everything that did not IPO yet is referred to as a startup. Sure, everything starts at some point, but should Dropbox, Uber and Airbnb still be called startups? Aren't these large companies?
12
artursapek 6 ago 1 reply      
Is it really not a caricature that people in SV wear jeans and their startup's stock t-shirt with a blazer over it? People really dress like this?
13
n72 8 ago 1 reply      
Pretty off topic, but is Chris O'Neill huge? Or did they choose very small people to flank him in the photo for the article?
14
patatino 9 ago 4 replies      
What would be the best way to make money betting on a crash?
15
swingbridge 1 ago 0 replies      
Lean and profitable is the new black.
16
alanh 7 ago 0 replies      
A welcome change:

> Other entrepreneurs have a newfound air of practicality, no longer shooting for their companies to be the next tech behemoth like Facebook.

I haven't been hearing much about raising lately. Is it still the case that you are expected to lay out a path to $1Bn in your pitch deck, no matter what your company does?

17
throwanem 9 ago 6 replies      
When the New York Times can see it coming...
18
rm_-rf_slash 9 ago 1 reply      
My main concern is that there are few investment avenues these days that provide a decent return without being extraordinarily risky. Stocks are stagnating. Bond yields are so dismal you have to invest in places like Turkey to see any sort of return - if it even happens.

That pretty much makes Silicon Valley the default place to get any bang for your buck. Investors in VC funds want to see returns. VCs want to keep investing in winners, getting more investment in their funds, and making good money along the way. People want to turn their sweat into gold so there is no shortage of aspirational entrepreneurs.

So it seems to me that if there is anything that could turn a cautious slowdown into a full-blown crash, it's a scarcity of decent investment options.

19
cloudjacker 9 ago 1 reply      
How to keep the money machine going:

If one of your investors or potential acquirers is a big company and you already have contacts with them, get that company to do a Euro corporate bond issuance, and use the proceeds to buy your company

A) this is already happening

B) it isn't the strangest thing that has happened

Silicon Valley downturn talk is ignoring broader macroeconomic fundamentals, at this point in time.

Economically unsound? SURE! Are you in a privileged enough position to make a lot of money? DEFINITELY!

25
Maquette Pure and simple virtual DOM library maquettejs.org
55 points by tilt  8 ago   18 comments top 7
1
yladiz 7 ago 1 reply      
So with these virtual DOM libraries, after seeing a lot of them try to fill in this supposed niche, and seeing two this week including this one, I'm of the opinion that it should either be completely JS looking, e.g. a simple wrapper around createElement, or React looking, e.g. with JSX.

The ones that are in between, while potentially having nice syntax or a nice looking homepage, aren't enough to convince me to use them over React, mainly because 1) if I'm doing any production code, I want other engineers to be familiar with it and its pitfalls; 2) I want to make sure it'll be supported 6 months down the road; 3) engineers are familiar enough with JSX or with createElement, but some other syntax may be confusing and have corner cases that a bigger project like React hasn't thought of... I don't want a lookalike React.createElement because I can just use React.createElement; 4) many of these libraries are built with "performance" in mind, but does it really matter how many more times it can process a ToDoApp if it doesn't really affect user facing performance?

While this isn't a jab at this library specifically (although it hasn't had much substantial development in a while...), too many libraries are created to fill some supposed niche and then abandoned when their authors move to different pastures; ideally, when you need to fix one of those corner cases you will inevitably run into, you can submit a PR and get it fixed easily and with little delay.

2
doublerebel 5 ago 0 replies      
I really dig Maquette. The library is easy to use, the documentation is good, and the interactive examples make it really easy to get running.

The Hyperscript notation Maquette uses is compatible with other vDOM implementations, so it's possible to use JSX [1], Jade/Pug [2], Handlebars, or any other templating solution that compiles to Hyperscript [3].

I personally prefer really "dumb" views and have my controllers attach events and link model data. Thanks to the modular design of Maquette, instead of the Maquette `createRender()` pattern I was able to use SpineJS for the Model and Controller portions of the app.

I used Maquette recently to create an API and Web UI for Bull job queue, which I'll be releasing this week as open-source. I did not run into any bugs or difficulties in Maquette, it was as efficient and straightforward as advertised.

Previously I've used Templatizer (precompiled Jade templates), but Maquette's feature to map and track lists of objects/elements spurred me to change. I'm sold on it for now but really appreciate the thriving competition of vDOM renderers.

[1]: https://medium.com/maquette-news/maquette-2-2-now-supports-j...

[2]: https://www.npmjs.com/package/gulp-pug-hyperscript

[3]: https://www.npmjs.com/browse/keyword/hyperscript

3
dimgl 5 ago 1 reply      
I find the virtual DOM is much easier to create using regular HTML syntax. I like Vue 2.0's approach where it creates a virtual DOM from the actual DOM itself.

Edit: I left a huge comment regarding POJOs and how the example on the homepage is pretty ugly, but I just found out that there's something called Hyperscript, and it's used to create a virtual DOM. So, while the example is in Hyperscript, Maquette also supports JSX.

I still don't understand the point of all of this, I guess. Isn't it time we as JavaScript devs took a step back and thought, "why do we need this?"

4
artf 3 ago 0 replies      
Please add 'width: 100%;' to your '.homepage .row' CSS rule. In Firefox I see an annoying scrollbar at the bottom of the page.
5
qwertyuiop924 6 ago 1 reply      
So... it's mithril with a ton of the useful stuff ripped out, for a comparatively tiny reduction in size? Why would I want that?
6
Etheryte 6 ago 2 replies      
"Maquette is a virtual DOM implementation that excels in speed." Waiting for some data to back that claim.
7
erwinkle 5 ago 0 replies      
Love the site design, but will never use the library
26
Soluble corn fiber can help young women build bone and older women preserve bone purdue.edu
12 points by Mz  3 ago   9 comments top 4
1
helloworld 2 ago 2 replies      
I don't want to be unnecessarily cynical, but I do notice that this research about corn was funded by a company, Tate & Lyle, which provides corn-based ingredients:

http://www.tateandlyle.com/ingredientsandservices/pages/rawm...

Too bad that the researchers couldn't have found more neutral funders.

2
colechristensen 1 ago 0 replies      
When it comes down to it, journalists should never write articles about single health studies. In essence, none of them are valuable sources of information for the general public. Without replication and meta-analyses, they really should only get attention from the scientific community.
3
bpodgursky 2 ago 1 reply      
That's pretty funny: I was just this morning looking up Quest bars on Amazon, and the comments were all furious that they had switched the fiber source in them to soluble corn fiber.

(ex https://www.amazon.com/review/R3B99075S9546R/ref=cm_cr_dp_cm...)

Maybe a good change overall, although it clearly wasn't driven by this study.

4
GFK_of_xmaspast 2 ago 1 reply      
There was a "what about the men" comment that appears to have disappeared before I could respond to it, but here's some useful info about calcium https://ods.od.nih.gov/factsheets/Calcium-Consumer/ in particular check out those RDAs.
27
A Sneak Peek Comparison of x264, x265, and libvpx netflix.com
97 points by babak_ap  4 ago   33 comments top 9
1
jjcm 3 ago 1 reply      
I'm sad they skipped out on a 4K resolution test. I'd really like to see how they compare at those resolutions or higher, since x265 isn't really going to be ubiquitous in hardware for another couple of years. By that time 4K should (hopefully) be the standard, or at the very least a common use case.
2
sergiotapia 3 ago 4 replies      
@dang: Can you update the second x264 to x265 in the title?

---

The only bad thing I've ever heard about x265 is that it needs beefier hardware compared to x264. Otherwise it's better in every regard. Is this true?

3
corysama 3 ago 3 replies      
> x265 outperforms libvpx for almost all resolutions and quality metrics, but the performance gap narrows (or even reverses) at 1080p.

It's not clear if this means libvpx is sometimes better than x265 at resolutions > 1080p or <= 1080p. I think the author intended "occasionally better when <= 1080p"

4
broodbucket 3 ago 0 replies      
the second "x264" in the title should be "x265", was very confusing to parse
5
inthewoods 2 ago 3 replies      
I find it odd that Netflix is using YouTube and Periscope to do their broadcast. I get that that is where the audience is - but I'm surprised they aren't also broadcasting it using their own tech.
6
oDot 3 ago 3 replies      
Any news from Daala?
7
angryasian 3 ago 0 replies      
I imagine the biggest difficulty is the licensing for HEVC, the codec x265 implements. It may have changed by now, but last I knew they wanted a royalty from streaming services monetizing its use.
8
merb 2 ago 0 replies      
> x265 outperforms libvpx for almost all resolutions and quality metrics, but the performance gap narrows (or even reverses) at 1080p.

if you only read the TL;DR part, i.e. "What did we learn?", you think "wow, that's bad for libvpx." Then you read:

> 3 resolutions (480p, 720p and 1080p)

And think... well not that bad. 2/3 vs 1/2 isn't bad.

9
hackuser 3 ago 2 replies      
Firefox tells me: Secure Connection Failed

EDIT: No idea why it would redirect me to https. I only clicked the link like everyone else.

29
Lightning Strike Kills More Than 300 Reindeer in Norway nytimes.com
20 points by alizauf  3 ago   6 comments top 2
1
IIAOPSW 17 ago 0 replies      
This is the worst mass death in Norway in 5 years.
2
shshhdhs 2 ago 3 replies      
Why is this on HN?
30
Gene Wilder Has Died bbc.com
325 points by cpymchn  8 ago   88 comments top 26
1
simonsarris 7 ago 6 replies      
Willy Wonka (screenplay by the genius Roald Dahl) has one of my favorite scenes in film and I invite you all to watch it: https://www.youtube.com/watch?v=sz9jc5blzRM

> In 1970, when originally offered the lead role in Willy Wonka & the Chocolate Factory by director Mel Stuart, the great Gene Wilder accepted on one condition. "When I make my first entrance," he explained, "I'd like to come out of the door carrying a cane and then walk toward the crowd with a limp. After the crowd sees Willy Wonka is a cripple, they all whisper to themselves and then become deathly quiet. As I walk toward them, my cane sinks into one of the cobblestones I'm walking on and stands straight up, by itself; but I keep on walking, until I realize that I no longer have my cane. I start to fall forward, and just before I hit the ground, I do a beautiful forward somersault and bounce back up, to great applause." Asked why, Wilder said, "Because from that time on, no one will know if I'm lying or telling the truth."

Quote from: http://www.lettersofnote.com/2012/06/part-of-this-world-part...

2
rdtsc 7 ago 1 reply      
Young Frankenstein is my all time favorite comedy

http://www.imdb.com/title/tt0072431/

It just has the right mix of situational and sarcastic humor. I usually re-watch it every couple of years. Gene Wilder is just so good in that role.

3
fitzwatermellow 6 ago 4 replies      
My favorite scene, and it's an absolute masterclass in comedic technique, is from Woody Allen's Everything You Always Wanted to Know About Sex. The moment his Greek patient confesses: "Doctor, I'm in love with a sheep!" Without saying a single word, Wilder's expression goes from jesting to confusion to amusement to fright to intrigue and back again through the entire gamut of possible human response. He sputters and strains. It's all right there on his face! We feel the tortured struggle occurring within his mind, grasping for any semblance of assessing the situation and formulating the appropriate thing to say. Its truth is its genius!
4
woodruffw 7 ago 2 replies      
Very sad. Young Frankenstein was probably my favorite movie as a kid - the Frau Blucher scene[1] always made me laugh. He'll be remembered (and watched) for a very long time, which I suppose is the greatest honor an actor can receive.

[1]: https://www.youtube.com/watch?v=zdIID_TGwhM

5
dmd 7 ago 1 reply      
https://www.youtube.com/watch?v=kRb3u0PtEZE is how I always think of him.
6
greggman 2 ago 0 replies      
As a Gene Wilder fan I was once digging for things to watch on Amazon and stumbled on a documentary narrated by Gene Wilder. I wouldn't have even noticed it, but when I saw his name -- he'd been out of the limelight for so long -- I thought "wow, what could have made him agree to do this?" So I watched it.

I can't recommend it enough. It's called "EXPO - Magic of the White City" and is about the 1893 Chicago Exposition. It takes about 10 minutes to really get started and it's got some cheesy stuff, but it was fascinating. I've shown it to several people and they all got sucked in.

Not sure if this is a legit upload, but it's on YouTube: https://m.youtube.com/watch?v=cpOQE5KJJds or Amazon: https://www.amazon.com/Expo-Magic-White-Gene-Wilder/dp/B004S...

If it weren't for Gene I'd never had known about such an amazing topic. Thanks Gene!

7
bitwize 7 ago 0 replies      
Is the grisly Reaper mowing...? :(

Alternatively...

Do you know what happened to the man who suddenly got everything he ever wanted? He lived happily ever after.

8
1024core 7 ago 1 reply      
I'll always remember him from Blazing Saddles.
9
mattezell 6 ago 0 replies      
"From that fateful day when stinking bits of slime first crawled from the sea and shouted to the cold stars, "I am man.", our greatest dread has always been the knowledge of our mortality. But tonight, we shall hurl the gauntlet of science into the frightful face of death itself. Tonight, we shall ascend into the heavens. We shall mock the earthquake. We shall command the thunders, and penetrate into the very womb of impervious nature herself." -Dr. Frederick Frankenstein, Young Frankenstein.
10
jv22222 6 ago 0 replies      
Young Frankenstein is one of the funniest movies of all time. Every scene a classic. If you haven't watched it, I highly recommend it.

RIP Mr Wilder

11
milge 7 ago 0 replies      
"A little nonsense now and then is relished by the wisest men." One of my favorite quotes from Willy Wonka.
12
petergatsby 3 ago 0 replies      
Still my all-time favorite song in a musical: Pure Imagination https://www.youtube.com/watch?v=RZ-uV72pQKI
13
ArkyBeagle 3 ago 0 replies      
Wilder combined with Mel Brooks... that's a high-water mark.

It's nearly criminal that he wouldn't make any more movies after Gilda died, but I admire the gesture.

14
gm-conspiracy 4 ago 0 replies      
Also, a great buddy comedy w/ Gene Wilder and Richard Pryor:

See No Evil, Hear No Evil

http://www.imdb.com/title/tt0098282/?ref_=fn_al_tt_1

15
gm-conspiracy 4 ago 0 replies      
Also a good comedy, Haunted Honeymoon:

http://www.imdb.com/title/tt0091178/?ref_=nm_flmg_act_11

...with Dom DeLuise in drag.

16
amyjess 6 ago 0 replies      
The Producers will always be one of my all-time favorite movies. Gene Wilder was a fantastic actor.
17
rmason 6 ago 1 reply      
How many people remember that Gene Wilder was in Bonnie and Clyde?

Or maybe I should ask how many people here have even seen that movie with Warren Beatty and Faye Dunaway?

http://www.imdb.com/title/tt0061418/

18
Imagenuity 7 ago 0 replies      
Good night, Herr Doktor.
19
BatFastard 2 ago 0 replies      
May you rest peacefully in the land of your imagination.
20
btgeekboy 7 ago 0 replies      
He lived a long and accomplished life. I can only hope to be as successful as him.

Good day!

21
madengr 3 ago 0 replies      
Wilder and Pryor were the dynamic duo. Loved those movies.
22
sverige 4 ago 0 replies      
Love his acting and the great romance he had with Gilda Radner.
23
mikeryan 7 ago 0 replies      
dammit 2016.
24
syngrog66 5 ago 0 replies      
huge fan of him and especially Young Frankenstein. so much so that I created a character in a comedy story named Heinrich von Hexenhammer as a homage to Gene's definitive mad scientist:

https://reddit.com/r/DSPR/comments/1m4zrl/when_heinrich_met_...

25
mdevere 7 ago 0 replies      
i enjoyed his portrayal of steve jobs
26
AncoraImparo 5 ago 4 replies      
How is this relatable to Technology?
       cached 30 August 2016 04:02:01 GMT