hacker news with inline top comments    7 Jun 2016
1
Show HN: 2D field of view demo jsfiddle.net
143 points by legends2k  3 hours ago   45 comments top 12
1
thomasjonas 1 hour ago 0 replies      
Similar technique used for a nice website background: https://web.archive.org/web/20140429043206/http://www.goodga...
2
legends2k 2 hours ago 3 replies      
In case someone is interested in the design:

https://bbcdn.githack.com/rmsundaram/tryouts/raw/e06259fffad...

Devising the algorithm took around a month; did it in my free time. The implementation took a week, in HTML5 Canvas/JavaScript. Doing a 360, not-bound-by-distance FoV would have been simpler, but FoV limited by both angle and distance took time.

The idea is to find the field of view of an observer on a map with blocking buildings (polygons) and also to service a line of sight query, to know if a given point is visible to the observer.

The code contains enough comments including citations and references to articles and books. Here's the actual repo from where the above page is served:

http://bitbucket.org/rmsundaram/tryouts/src/master/CG/WebGL/...
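
To make the idea concrete, here is a rough Python sketch of an angle-and-distance-limited line-of-sight query of the kind described above. It is not the author's implementation (that lives in the linked repo, in JavaScript/Canvas); the names are made up and degenerate collinear cases are ignored:

  import math

  def segments_cross(p1, p2, q1, q2):
      # True if segment p1-p2 strictly crosses segment q1-q2.
      def cross(o, a, b):
          return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
      d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
      d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
      return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

  def in_fov(observer, facing, half_angle, max_dist, point, polygons):
      # Line-of-sight query: is `point` visible from `observer`?
      # facing: view direction in radians; polygons: lists of (x, y) vertices.
      dx, dy = point[0] - observer[0], point[1] - observer[1]
      if math.hypot(dx, dy) > max_dist:          # distance limit
          return False
      diff = (math.atan2(dy, dx) - facing + math.pi) % (2 * math.pi) - math.pi
      if abs(diff) > half_angle:                 # angle limit
          return False
      for poly in polygons:                      # occlusion by building edges
          for i in range(len(poly)):
              a, b = poly[i], poly[(i + 1) % len(poly)]
              if segments_cross(observer, point, a, b):
                  return False
      return True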

3
damptowel 10 minutes ago 1 reply      
On a tangent, this reminded me of this 2D shadow mapping technique:

https://github.com/mattdesl/lwjgl-basics/wiki/2D-Pixel-Perfe...

4
Artlav 32 minutes ago 2 replies      
So... Can anyone explain what is so special or interesting about this?

Both the idea and the algorithms are decades old and were already used in 80s and 90s computer games like Wolf3D and Doom.

5
asQuirreL 1 hour ago 1 reply      
This is some interesting work! I was musing about another algorithm to achieve the same effect, as I was reading:

* First render the scene in 3D, from the viewer's position, using a perspective transformation with the right field-of-view to match the "bounding angle".

* Save the resulting Z-buffer into a texture.

* When rendering the 2D, top-down scene, you can consult the Z-buffer (now a texture) to see whether that pixel should be highlighted or not.

I think this is a technique that's already used, but almost complementarily, to calculate shadows from dynamic lights.
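
A rough CPU-side, 2D stand-in for the depth-compare step described above: the GPU version would render from the viewer and keep the Z-buffer as a texture, whereas here a 1D array indexed by angle plays the role of the depth texture (names are made up, and the angular clamp for a bounded FoV is omitted):

  import math

  def ray_segment(origin, direction, a, b):
      # Distance t >= 0 along the ray to segment a-b, or None if it misses.
      w = (a[0] - origin[0], a[1] - origin[1])
      e = (b[0] - a[0], b[1] - a[1])
      denom = direction[0] * e[1] - direction[1] * e[0]
      if abs(denom) < 1e-12:
          return None                                          # parallel
      t = (w[0] * e[1] - w[1] * e[0]) / denom                  # along the ray
      u = (w[0] * direction[1] - w[1] * direction[0]) / denom  # along the segment
      return t if t >= 0 and 0 <= u <= 1 else None

  def build_depth_map(observer, polygons, num_rays=720):
      # One depth value per ray angle: distance to the nearest blocking edge.
      depth = [float("inf")] * num_rays
      for k in range(num_rays):
          ang = 2 * math.pi * k / num_rays
          d = (math.cos(ang), math.sin(ang))
          for poly in polygons:
              for i in range(len(poly)):
                  t = ray_segment(observer, d, poly[i], poly[(i + 1) % len(poly)])
                  if t is not None and t < depth[k]:
                      depth[k] = t
      return depth

  def visible(observer, point, depth):
      # "Depth compare": lit if the point is nearer than the stored depth.
      dx, dy = point[0] - observer[0], point[1] - observer[1]
      k = int((math.atan2(dy, dx) % (2 * math.pi)) / (2 * math.pi) * len(depth))
      return math.hypot(dx, dy) <= depth[k % len(depth)]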

6
MasterScrat 2 hours ago 4 replies      
Interesting!

How much more complicated would it be if the obstacles were not polygons but points on a grid?

I've always been curious how the FoV was computed so fast in the Baldur's Gate games, given the complex grid-based map and the weak hardware at the time.

Example grid: http://aigamedev.com/static/tutorials/FPSB_bg_rsr.png

And the FoV rendered: http://cdn.wegotthiscovered.com/wp-content/uploads/Baldurs-G... (the rooms with no one in them are slightly darker)

7
manuelflara 1 hour ago 1 reply      
This brings back memories of playing Commandos: Behind Enemy Lines: http://store.steampowered.com/app/6800/
8
foota 1 hour ago 2 replies      
Here's an interesting approach to do this from a game worked on by Shamus Young, http://www.shamusyoung.com/twentysidedtale/?p=20777
9
almightysmudge 1 hour ago 1 reply      
Very nice. Did you look at http://ncase.me/sight-and-light/ at all?
10
mrmattyboy 2 hours ago 1 reply      
I found a bit of a weird 'bug', where I could get the POV to have a straight edge (rather than the curvature), if it's of any use: http://imgur.com/a/MPzGa
11
intruder 2 hours ago 1 reply      
This is really cool, appreciate the demo, source and the writeup.

The debug view is beautiful. Great work.

12
xabi 2 hours ago 1 reply      
Old (June 2012), but still interesting related article: http://www.redblobgames.com/articles/visibility/
2
Show HN: New calendar app idea oneviewcalendar.com
494 points by petermolyneux  8 hours ago   131 comments top 52
1
Vilkku 3 hours ago 2 replies      
Looks pretty nice. Some criticisms:

* I dislike how the website hijacks scrolling; the phone doesn't fit on a MacBook 13" screen, so I tried to scroll down, but that just scrolled in the app

* When scrolling out to a view where I see many months at once, it can only show a few events; in the demo it only shows Gym, and when I zoom in a bit more it also shows Hockey with kids. I feel these recurring events should have the lowest priority when zoomed out; it's more important to see "special" one-off events.

* It's not clear that there are events hidden. When zoomed out a lot it's of course expected that this is the case, but at intermediate zoom levels it might be unclear whether all events for a time period are visible or not.

2
petermolyneux 8 hours ago 19 replies      
Hi all. Just wanted to show you my latest side project (I've been working on it for a couple of years now). It's a calendar app with a twist.

It's developed as a web app; that's why it could be integrated directly on the landing page. It can also be run on any device directly from http://app.oneviewcalendar.com

Instead of the traditional month view and day view I have used a timeline that is zoomable and scrollable. Give it a try, it's quite a different experience.

Please leave me a comment :)

3
dasil003 6 hours ago 4 replies      
There's something existentially terrifying about being able to quickly zoom out from a view of 5-minute intervals to one of several decades.
4
doczoidberg 6 minutes ago 0 replies      
Great app. I hope you will make apps from it via Apache Cordova and upload them to the app stores.
5
cha-cho 10 minutes ago 0 replies      
A very clever idea. The UX and the ease to onboard with the demo are both well executed. Congratulations!
6
kagamine 37 minutes ago 0 replies      
When you add an event you scroll down, down, down. The final step is OK/Cancel and they are at the top of the page, centered. I can't reach those buttons with my thumb and have to use my other hand. They should be down at the bottom where the flow of the action ends, left and right justified.

Edit. I think it could also benefit from a 'snap to now' button somewhere. I don't have time to use your fancy GUI for literally seconds of my life to see what's going on this morning.

Just my 2c. Really nice app. (tu) I hope this doesn't come across as negative, I just want to help.

7
andai 12 minutes ago 0 replies      
Excellent app! I too am jealous you beat me to it.

I just wanted to say one thing: you can make it open source, or wait until someone else does it.

8
yuchi 10 minutes ago 0 replies      
I had that same exact idea years ago but didn't finish it. Congrats, looks very, very promising.
9
adsche 21 minutes ago 0 replies      
That's awesome!

I have a suggestion: I love how when you tap a week number it zooms to that week. But I think it would be useful if it resets to the previous zooming if you tap it again.

10
asimuvPR 7 hours ago 0 replies      
The demo is really slick. The app does feel better than Google Calendar. What was it built with?
11
sharp11 6 hours ago 1 reply      
Beautiful UI, very intuitive!

One issue I ran into (on a MacBook Pro, OSX El Cap) was that scrolling wouldn't work on my (external) trackpad. It interpreted 2-finger scroll as pinch (down) and zoom (up). Had to use trackball to scroll.

12
xavi 1 hour ago 0 replies      
13
dheera 3 hours ago 1 reply      
My biggest gripe with Google Calendar is the lack of consolidation of duplicate events. Often I have about 3-4 copies of various events and a storm of 10+ notifications for every appointment. If you can solve this problem, it would be a killer app for me.
14
amelius 35 minutes ago 0 replies      
Looks nice! I think this could be a really successful app!

It would be nice if you could keep us (the HN crowd) updated on the business side of things.

15
itaysk 4 hours ago 0 replies      
I like the creativity, but I think for daily use I still prefer the table-inspired week view (or 3-day view for portrait mode) as it gives me a better sense of orientation. For execution you get an A+ :) really well executed.
16
yoz-y 2 hours ago 0 replies      
One thing I learnt from working on a calendar app is that the things which people want to see are seemingly very different from person to person.

In the case of your calendar I would prefer if the events that happen on the same time would "expand" much sooner. In the example if you want to see both morning-briefing and my-breakfast-day you need to zoom in to a day view. I would much rather see these events one under another in the "week" zoom level, especially since there is a lot of free space available.

17
steinsgate 2 hours ago 0 replies      
Thanks for making this app. I had a similar idea too, but never executed it. Of course, the idea was based on a certain need. Often, I would have unscheduled events such as "talk to Santosh sometime next week" or "go on a short bike trip with Francesca next month". Basically events that have not been assigned a defined date and time. These are the events that I would like to see in the week view or month view (when I zoom out) so that I know that I need to find a specific time for it. The other events (which have a set date and time) receive lower priority in the zoomed out views. Have you thought of implementing something similar? Would be nice.
18
spditner 7 hours ago 0 replies      
Reminds me of the Fisheye Calendar from UMD's HCI Lab, which was doing experimentation with Zoomable UI's: http://www.cs.umd.edu/hcil/piccolo/learn/fisheye.shtml
19
castell 1 hour ago 0 replies      
> By Peter Molyneux, Sweden

Peter Molyneux, the designer of Black & White and Fable? Or just the same name?

20
OJFord 2 hours ago 0 replies      
Nice! Reminds me of Spence & Apperley's bifocal display ('fisheye') calendar [1], but on a single axis.

I remember thinking when I saw that in an HCI course a couple of years ago that despite being over 30 years old it was a better calendar app UX than anything available that I was aware of!

[1] - see Fig. 6 of http://www.ee.ic.ac.uk/r.spence/pubs/SA82.pdf

21
omarforgotpwd 6 hours ago 3 replies      
This is a cool demo, but why would someone want to have all these long-running events on the screen at all times? It makes the UI cluttered and makes it less clear what the next thing you need to be doing is. For example, do I really need the fact that I'm on vacation to be taking up space at the side of the screen at all times? I'm not going to forget. This kind of design makes it easier for me to miss things I need to remember to do, perhaps defeating the point of using a calendar app.
22
soneca 1 hour ago 0 replies      
I synced it to my Google Calendar, but it imported all the events from my coworkers' calendars that I subscribe to.

It should have a way to remove them from view, or not even import them at all.

23
chris_overseas 4 hours ago 2 replies      
This looks very promising, with a few improvements I can easily see this as my preferred calendar app.

Feedback: Is it possible to selectively enable/disable individual calendars? I have a "birthday" calendar from a large social network that results in multiple birthdays each day completely dominating the calendar. If I can't turn that off, it's virtually unusable for me unfortunately.

I tried to sign up for the Android beta but just received a confirmation email that I'd subscribed, no link to install the app. Is that to be expected?

[Edit: I just received a download link in an additional email. Thanks!]

24
WWKong 4 hours ago 1 reply      
Beautiful. How do you build something like this? Coding language? Stack? Dev environment?
25
DigitalJack 4 hours ago 1 reply      
Very slick. But something seems wrong with the two finger zoom (or I misunderstand it). It zooms, but it also scrolls a little... so the thing I was looking at will go off screen.

For example if I have Sunday the 10th at the top of the screen, and I two-finger scroll/zoom, I'm perplexed why the date at the top shifts to earlier or later dates. This happens in both directions. Maybe it's intentional, but it's unexpected (for me). I either want to zoom or scroll, not both.

I hope you take that as constructive. I really like it otherwise :)

26
Stenerson 7 hours ago 0 replies      
Very clever design, looks great.

A few thoughts on navigation: I'm on a Mac and I felt that the zoom using the "scroll wheel" (i.e. track pad) was pretty disorienting. I was expecting it to scroll. Having to click and hold down to scroll doesn't feel right.

I also instinctively thought the arrow keys would work but they didn't. I'd suggest up/down (obvious) and left = zoom out, right = zoom in.

27
dmvaldman 5 hours ago 1 reply      
the main view is all rendered in a <canvas> element... wow!
28
cdcarter 5 hours ago 1 reply      
Give me a place for my email so you can tell me when it's iOS ready, please!
29
steventhedev 4 hours ago 0 replies      
Really awesome app! My only advice would be to move the sidebar to the top when possible because horizontal real estate is important, especially if you have lots of long term events.

Other thing I'd love to see in a calendar app is prompting me to track the exact amount of time I spend on each thing in my calendar. Having contextual reminders and a split (planned/reality) view would be really interesting. It would help spot those people who are habitually late.

30
bakul 4 hours ago 0 replies      
Very nice!

One suggestion is to auto arrange to different number of columns based on the current time unit. Though zooming would get weird -- it would have folding/unfolding effect or snaky undulations! Not sure if this is possible but such "zooming" can have other uses....

31
castell 1 hour ago 0 replies      
It reminds me of the WinFS "Journal" demo application.
32
bikamonki 5 hours ago 1 reply      
Very, very good! Works dandy on my mobile browser. May I ask what JS framework, if any, you are using?
33
whatcd 5 hours ago 0 replies      
Love the pinch-to-zoom on this touch-screen Chromebook. It reminds me of Google Photos!
34
cpeterso 4 hours ago 0 replies      
I can't scroll the page using my trackpad on Firefox because the scroll events seem to be captured by the simulated phone and not passed to the page.
35
imdsm 2 hours ago 1 reply      
Does this integrate with my current calendar, say at Google?
36
petermolyneux 2 hours ago 0 replies      
Hi everyone, thanks for all the great feedback. Anyone applying for beta testing will have to wait a few hours. Got my hands full :)

Feel free to use the contact form for any questions or if you just want the iPhone version.

Cheers, Peter

37
aruggirello 2 hours ago 0 replies      
Very nice and intuitive UI! Any thoughts about supporting ICS/VCS?
38
plainOldText 7 hours ago 0 replies      
This app looks very similar to Timepage by Moleskine, which has been around for a while.
39
sidcool 4 hours ago 0 replies      
Amazing UI fluidness on mobile. Very good idea!
40
xufi 6 hours ago 0 replies      
Very nice. I was meaning to implement something like this in JS (this summer hopefully, if I don't get busy) as a fun project for my website, but something much smaller. Got any ideas?
41
fiatjaf 7 hours ago 1 reply      

 Uncaught TypeError: Cannot read property 'populateCalendars' of undefined
 Uncaught ReferenceError: moment is not defined

42
ashitlerferad 3 hours ago 2 replies      
Anyone have a screenshot of this? (No JavaScript)
43
bobbles 5 hours ago 1 reply      
Are you planning an iOS native app? I would love this and would pay for it
44
johndifool 6 hours ago 0 replies      
Double-clicked on the left sidebar and it locked closed, and I wasn't able to re-open it.
45
kjcharles 7 hours ago 0 replies      
Really nice design! I think the simplicity compared to Google Calendar could suit a lot of people.
46
wtbob 8 hours ago 2 replies      
Really neat idea. I'm sad that the site requires JavaScript, but I don't know how this would work otherwise.
47
robbbbbbbbb 4 hours ago 0 replies      
I signed up for the email but I didn't get a download. Did anyone else have any luck?
48
davidmix 6 hours ago 0 replies      
a lot like timepage for ios https://itunes.apple.com/au/app/moleskine-timepage-calendar/...

cool idea making it a web app

49
yellowapple 5 hours ago 0 replies      
Hard to scroll down and read the stuff about this app of yours, since the "app" itself captures my scroll wheel. Had to use the scrollbar instead, which was a bit lame.

Interesting idea, though.

50
jsilence 5 hours ago 1 reply      
Awesome UI!

Does it support CalDAV?

51
jsprogrammer 7 hours ago 1 reply      
I like it. I've been working in a similar view.

How do I get rid of all your items?

52
alien3d 6 hours ago 0 replies      
Nice.
3
Understanding the Elm type system adamwaselnuk.com
83 points by jaxondu  5 hours ago   15 comments top 4
1
spion 1 hour ago 2 replies      
> For example, when writing an Elm program I might at some point decide that users should have an admin flag. I will then try to use that flag in a function at which point the compiler will tell me that I have failed to add it to the User model. I will add it to the model at which point the compiler will tell me that I have failed to account for it in my main update function.

This beautifully explains why types are so useful. I don't know if the number of bugs goes down, but it sure is useful to have an assistant that checks all the implications of a change I want to make, and does it in a second.

2
ralfd 25 minutes ago 1 reply      
> Elm was my introduction to using a static, strong type system.

I think I am now officially feeling old. A language I never heard of is the introduction to static typing for programmers??

3
z1mm32m4n 2 hours ago 0 replies      
It's refreshing to see a piece written by someone who's new to functional programming, discovering the beauty of types and pure functions for the first time.

I very much agree with the author; learning functional programming idioms has had a profound impact on my ability to model problems in code, regardless of the language I'm using.

I admire Elm so much for putting an emphasis on the user experience, recognizing that it has been one of the biggest blockers to making functional programming mainstream.

4
ghayes 2 hours ago 2 replies      
At first glance, Elm's type system looks like it borrows heavily from Haskell [0]. Learning Haskell's type system is a great mental exercise, even if you don't end up coding in the language.

[0] Basics: http://learnyouahaskell.com/types-and-typeclasses

4
8x Nvidia GTX 1080 Hashcat Benchmarks github.com
32 points by biggerfisch  2 hours ago   6 comments top 2
1
nikcub 53 minutes ago 2 replies      
I'm going to wait for the 1070 benchmarks as in other series the x70 models had a better price/performance ratio:

http://www.videocardbenchmark.net/high_end_gpus.html#value

The 1080 is $599, the 1070 will be $379, and the Titan X is ~$1,000.

$1,500 for a 4x 1070 setup could be very attractive, considering the performance should be on par with 4x Titan X (at $4k).

edit: forgot to mention power consumption as well, which should be a component of price/performance

2
hathym 36 minutes ago 1 reply      
Can you do some gaming performance in FPS?
5
Wireless charging startup uBeam accused of being the next Theranos techcrunch.com
43 points by jacquesm  2 hours ago   22 comments top 10
1
wiredfool 2 hours ago 1 reply      
There's a big difference between a venture backed tech product that doesn't work at the desired scale because of the laws of physics and a medical product that doesn't work because of phlebotomic reasons (also, procedural and calibration).

And that difference is who gets hurt.

uBeam is going to hurt the VCs, Theranos hurt everyone who relied on the results of their tests.

2
jwr 23 minutes ago 0 replies      
"uBeam could be vaporware"

You don't say?

Seriously, uBeam's claims were always ridiculous. Anybody with any kind of engineering background should be able to feel it instinctively, and be convinced after spending a couple of minutes with pen and paper.

3
raverbashing 1 hour ago 1 reply      
Funny thing is that Marc Andreessen had some mean comments on @pmarca about how HN reacted after the first news appeared here and commenters showed it was technically impossible (or at least not what most people expect).
4
hathym 1 hour ago 1 reply      
"uBeam has refused to publicly show a demo because the technology doesnt work."

in this case it's the VCs who are to blame for blindly throwing their money at it.

5
fennecfoxen 1 hour ago 0 replies      
Better the next Theranos than the next Therac.
6
aaron695 40 minutes ago 3 replies      
The CEOs of both Theranos and uBeam are women.

Totally crazy technology outside the laws of science but no one will call them on it.

This stuff has been non debatable in the science/logic community for..... years?

How the hell else are they still going?

Are there companies run with male CEOs that have done the same? Not crazy ideas, but crazy ideas that are not scientifically plausible that also get PR and VC funding.

Are they just getting more PR cause they are run by women and seem to be more pumped than they are?

Are they being picked on because the CEOs are women, while plenty of other crazy companies that get similar funding just disappear?

I don't get it, but how many stories do we need about these two companies that say they are just not possible.... It's getting boring.

7
gakada 1 hour ago 1 reply      
uBeam isn't like Theranos because uBeam isn't defrauding anybody. They have honestly described how the product works. The product just happens to be a horrifying deafness ray. Some VCs will fund that.
8
williamscales 1 hour ago 0 replies      
This reminds me of the idea of having a Tesla coil in every room for power delivery.
9
PhasmaFelis 56 minutes ago 0 replies      
The blog that the article was sourced from (http://liesandstartuppr.blogspot.com/) currently has a biting post about how Theranos' failures are endemic in the tech industry:

"Companies in the new tech area tend to be a little lax when it comes to doing things carefully - Mark Zuckerberg, founder and CEO of Facebook, tells us all to "Move fast and break things", which I have to agree is a great way to innovate and learn when your product does nothing of actual importance at all. It doesn't matter if you don't get your silly cat videos, or can't post pictures of your holidays, because your business payroll doesn't run on it, your medication isn't delivered by it, nor is your aircraft navigation based on it. Real consequences of a Facebook blackout are near zero."

"[...] In both these industries [generator manufacturing and medical ultrasound] the mentality was "we must make sure this is safe" and the idea of reducing or skipping safety is never considered, but I do not see the same thinking in many of these tech companies. I have literally heard "what's the minimum we have to do?", "we don't have proof it's a risk", "that sounds time-consuming and expensive. We should do <pointless but fast/cheap thing> instead", and "well if it goes to court our lawyers say they have good arguments"."

10
gizmo 1 hour ago 1 reply      
Are they being called the next Theranos simply because they're also a tech startup without a working product that has a woman founder/CEO? Sure looks that way. uBeam is just a dumb idea that can't work. Squandering a few million of VC money in pursuit of a dumb idea happens all the time. This article seems really mean-spirited.
6
Voice Assistant? Yes please, but not in public creativestrategies.com
32 points by Gys  4 hours ago   14 comments top 5
1
kleiba 20 minutes ago 0 replies      
> The high proportion of usage in the car would suggest it has more to do with the hands-free law that regulate driving and texting vs. a free choice by consumers to embrace this technology.

I doubt this conclusion. For one, given how many people I see every day driving while holding a smartphone to their ear, it does not seem that too many people are concerned with that particular law. Couldn't it be instead that driving is a situation where not having to use your hands is perceived as an actual benefit?

2
ilanco 2 hours ago 3 replies      
I'm surprised they didn't mention that in areas with a lot of background noise, e.g. public places, voice recognition doesn't work as well. I think that is also a reason why people refrain from using it in public. From personal experience I can say it's very frustrating to repeat the command to your phone multiple times because it's sorry it didn't get that, or it calls your ex-girlfriend instead of playing some music.
3
ocdtrekkie 2 hours ago 1 reply      
I wonder whether some day even our in-home voice assistant should be aware of this. Like, I was thinking that if a guest is there, my voice assistant should know. My voice assistant might respond differently in that context, for example providing less detailed information in voice responses.

There's probably even ways a home assistant could also specifically help my guests. It could know where I store things they might need, like cups, plates, and silverware. Or be able to tell them where my bathroom is.

4
okla 1 hour ago 2 replies      
I've always wondered: wouldn't it be better to make using a voice assistant similar to a simple phone call? Why should it be in speaker mode?
5
AstralStorm 2 hours ago 2 replies      
Pity we don't have a reasonably accurate subvocal technology ready for public use.
8
This is not a place of honor energy.gov
222 points by erubin  8 hours ago   164 comments top 38
1
tragomaskhalos 0 minutes ago 0 replies      
Pictures of clowns ... lots of pictures of clowns. In fact just use John Wayne Gacy's clown self-portraits.
2
amk_ 6 hours ago 1 reply      
Holy shit. Dieter Ast was my landlord in college. This explains so much about my basement.

To those who haven't read the article yet, the real title is Judgment on Markers to Deter Inadvertent Human Intrusion into the Waste Isolation Pilot Plant. Prof Ast was the material scientist on one of the multidisciplinary teams tasked to design warning symbols for 10,000 year nuclear waste storage sites. He used to go around collecting old lab computers that were going to be thrown out and resurrect them with Windows 2000 or Puppy Linux installs.

3
orbitingpluto 3 hours ago 2 replies      
I've never understood this marker system. Graves have been marked in a threatening manner so that they would not be plundered. It doesn't work.

The point is to hide the waste. The system should be designed to progressively unveil warnings if some future man starts digging. Otherwise, you're just asking for them to dig it up.

Another thing not mentioned is to design the container so that anyone who plans to excavate will think they have hit rock bottom... like Pharaohs' tombs or a multi-level pirate cache.

Or even better, put something horrible and poisonous twenty feet down. It might be better to obviously poison a couple people if they start digging this up. That would be easily understood and eventually avoided.

Remember the radioactive sign in the Star Trek episode where Data gets shish-kebabed? They made jewelry...

4
jtolmar 2 hours ago 3 replies      
Contrary to many of the comments here, I think they did a good job of considering what it takes to make something foreboding without sounding like "here be treasure." The pseudo-mystical messages sound hokey, but they're effectively a backup system in case the straight-up "This is atomic waste, here's a description of atomic waste" descriptions are incomprehensible to future generations. And the more primitive communications deserve more consideration, because that's the harder part.

Additionally, I don't think the "no marker, anonymous patch of ground" plan is sound. 10000 years is a long time, which will hopefully be inhabited by peoples more advanced than us, and they could do a lot of digging in that time.

That said, the approach I'd suggest would just be a big plain monument that's physically obnoxious to get around. Although the insides of the pyramids have been robbed, the pyramids themselves will last another 10000 years, and I doubt anyone will try to mine under them during that time. And experience has shown that the best way to preserve a language is to make sure there's a large enough sample for someone to brute-force it, so these pyramids could contain chambers full of detailed explanations with pictures.

5
antihero 25 minutes ago 2 replies      
Surely, if some future society discovers written warnings, they might be curious enough to have decrypted at least one of our present-day languages/methods of communication.

Just have the same concepts relayed in as many languages/ways as possible, and then make the site sufficiently difficult to infiltrate that it would take a sufficiently advanced civilisation to break into it.

You could even tier the messages, and use words that would likely be common and thus more likely to have been recognised based on discovering whatever other shit we've left around.

DEATH.

THE THINGS HERE MAKE DEATH.

THIS MATERIAL WILL KILL YOU.

And progressively more complex and complete messages, etc translated into Chinese, Spanish, Braille, French, pictograms, what the fuck ever.

And if they're too lazy/careless to try and decrypt any of the fucking obvious messages, fuck 'em.

6
kbart 3 hours ago 2 replies      
Any marks would only attract people's attention. I can't come up with a single historical example where some marks would successfully keep people away. Even now we keep digging up once-forbidden and sacred places like pyramids, graves, temples, and plague victims' burial grounds. Furthermore, the "menacing earthworks" example from the article looks like a treasure is buried inside that square. Why not bury it deep enough in an unmarked grave (for example, put it in tunnels inside a mountain, then blow all the entrances up) so no primitive civilization could dig it up? If a civilization is sophisticated enough to dig deep enough, it must be well aware of radiation as well.
7
frostirosti 4 hours ago 1 reply      
http://99percentinvisible.org/episode/ten-thousand-years/

99pi did an awesome episode about this. I thought the most interesting idea was that culture permeates much deeper than anything else. So seeding our world with stories of cats that change color near radiation, or something like that, would do best, since symbols' meanings change but oral traditions and old wives' tales last much longer.

8
valbaca 3 hours ago 0 replies      
I had an internship here during the summer of 2010. For the most part it was incredibly boring as there was SO much emphasis on safety (rightfully so).

For example, people would have to be cleared from an area in order for janitors to vacuum it, so that no one would trip on the power cord.

I did get to go down into the salt shaft which was incredibly cool (also literally cool, which was a relief because it was the summer in New Mexico).

For the most part I upgraded some software systems and helped with some hardware upgrades.

The engineers were all characters. Several of them were preppers convinced that I was silly for going into computer science and not stocking up on gold.

9
labster 5 hours ago 3 replies      
It's really interesting to me to see architecture used for its typical antithesis. It is typically used to bring out positive emotions, to inspire, to bring facility to humanity, to sanctify. Here it is being used to desecrate, to decrease utility, to ward away.

As a hacker, I'm used to the thought of "what is the worst possible way I can make this UI", but it's cool to see it applied in an entirely different field.

10
jrockway 1 hour ago 2 replies      
I like the comic where the happy man has successfully plundered the nuclear waste. It contaminates him and he is observably less happy, though still exhibiting signs of above-average satisfaction with life. Then in the final frame, his beloved treasure stolen, he sadly dies of what appears to be thyroid cancer. His dying thought is that the person that robbed him of his ill-gotten plutonium squeezings will soon be suffering the same fate. Justice.

If I were watching a movie where the protagonist goes in to get some ancient artifact and this comic showed up on the wall, I would be like "yeah fucking right, some spooooky spirit kills the tomb raider? suspension of disbelief fail!" But of course this is real and is actually what would happen. If the society in 10,000 years is as cynical as me (and has forgotten about radioactivity), this comic will just egg them on!

11
korethr 4 hours ago 6 replies      
Honestly, reading this, I kept thinking to myself, "Man, this is just waiting to be turned into a science fiction short story." It could go in different directions: future humans discover such a site, humans discover an alien analogue of such a site on $planet, aliens discover such a site crafted by humans on $planet, etc. Either way, there's potential for a good story there.
12
ktRolster 5 hours ago 3 replies      
To prevent people from entering, they should place statues of soldiers in front of the entrance. Thousands of them. Each should be sculpted individually from terra cotta.
13
apsec112 6 hours ago 3 replies      
We should just assume that nuclear waste lasts forever, like a lot of chemical waste does, and then treat it the same as the equivalent class of chemical waste. When someone says "10,000 years", people start thinking about how to wait it out. When someone says "forever", people give up on waiting it out, and start thinking about more realistic safety measures.
14
SCAQTony 4 hours ago 1 reply      
What language(s) did the earth speak 10,000 years ago? The concept is electrifying: will English or any other language written or spoken today even exist?

The monument would have to be a "Rosetta Stone(s)", quite obtrusive and large like a pyramid. It would have to be written in multiple present and ancient languages. It would have to feature math formulas and illustrations etched a foot deep into titanium, carbon fiber, or a material that wouldn't degrade in 10,000 years. Then the inside of the monument would have to feature even more information. WOW!

15
dbrower 5 hours ago 3 replies      
I've read the report several times (it keeps coming back every 5-7 years or so) since publication, and I've never felt like a reliable solution had either been found, or was in the offing. There is good thinking, but the problem itself seems very daunting. I think that is the real lesson.
16
duckingtest 2 hours ago 1 reply      
I don't want to sound evil... but why? It's not like digging up low-intensity radioactive waste is going to end the world. Just look at how alive Chernobyl zone is... So let's say some people bring it up, they get sick and die, which makes people remember that radioactivity symbol = bad for the next several hundred years or so.

On a related note, wouldn't it make more sense to turn the radioactive waste into powder and dump it over a large area of sea or desert (Sahara is HUGE)? Given a large enough area it wouldn't even be detectable.

17
stared 33 minutes ago 0 replies      
I feel that all warnings are doomed. If we discovered some ancient site (let alone an ancient alien site), each marking would prompt us to dig further and explore more.

No warning works (even an explicit one) when met with a curious being. See: Eve and the Tree of Knowledge (from Genesis), or Pandora and her box.

And when the beings are not curious, it is unlikely that they would develop technology, or be tempted to new, alien places.

18
stonogo 6 hours ago 2 replies      
If their goal was to create a hokey-sounding quasi-spiritual ward that future generations will consider naive and ignore, they've certainly hit the nail on the head.

Probably a better approach is to accept the fact that nuclear waste will either be cleaned up or destroy humanity long before ten thousand years comes to pass, and spend the money they spent on this exercise in speculative fiction instead on working toward a real solution.

19
smegel 6 hours ago 1 reply      
> Put into words, it would communicate something like the following

That initial text was not intended to be written, but communicated through the design of the message system. Interesting.

I expected more skulls and crossbones.

20
mooreds 6 hours ago 1 reply      
See also Into Eternity, a movie about how the Finns are dealing with this, with a 100,000 year timeframe.

http://www.intoeternitythemovie.com/

21
andrewflnr 5 hours ago 1 reply      
Use the heat of the waste's radioactivity to power infrasonic emitters, and induce horror and panic in anyone who comes close. Or maybe set it up so that the wind creates infrasonic vibrations. You'd stand a very good chance of convincing people that the place is literally haunted.

https://en.wikipedia.org/wiki/Infrasound#Human_reactions

22
danieltillett 6 hours ago 2 replies      
I would have thought that to an advanced civilization a radioactive nuclear waste dump may well be of value.

I do think worrying about radioactive waste given how much carbon dioxide we have dumped into the atmosphere is a bit like bandaging a stubbed toe on an amputated leg.

23
mesh 6 hours ago 1 reply      
24
joshmarinacci 3 hours ago 0 replies      
While I appreciate the effort they put into it, trying to plan anything for 10,000 years in the future is sheer folly. We have no idea what humanity will be like in 200 years, much less 10,000.

Assuming we are around at all, we'll likely have mastered fusion power (thus no longer creating nuclear waste) or have mastered space flight (thus we can chuck it into the Sun) or have discovered that concentrations of radioactive materials are incredibly useful and not waste at all. A few hundred years ago crude petroleum was a waste product as well.

25
mholt 5 hours ago 2 replies      
Interesting. I know that the idea of launching radioactive waste into the sun or out of the solar system is highly criticized for the potential for things to go wrong (and rightly so), but do the critics really think that the earth -- with humans on it -- will be more reliable in 10,000 years than space flight even 100 years from now?
26
gruez 6 hours ago 2 replies      
It appears that most of the images are missing (there are figures that are referred to but do not exist). Is there a complete copy somewhere?

edit: found a mirror http://prod.sandia.gov/techlib/access-control.cgi/1992/92138...

27
socket0 2 hours ago 0 replies      
One clear problem with using A.D. (anno Domini) as measure for time elapsed, is that in 10,000 years Domini will no doubt be taken to mean either Bender Rodriguez or Donald Trump.
28
amelius 1 hour ago 0 replies      
It seems that what they are trying to accomplish is the exact opposite of advertisement.
29
JoeAltmaier 5 hours ago 1 reply      
Seems like overkill. What's a few poisoned people in 12,000 years? I imagine that people dying around the site will be the most effective ward to keep others away.
30
Aelinsaar 6 hours ago 2 replies      
It's a fascinating problem, a way to communicate the concept that this is not a treasure trove, not a historical site, but something dangerous that was meant to be sealed away for many thousands of years due to its hazardous nature. The parting thoughts seem most telling to me, though.

They wonder if this is really worth it, since in the end coming into contact with the waste is a bit of a self-limiting problem, in that people exploring will become sick and die. It will, ergo, become a "place to avoid" anyway. If the cost is bound to be a few explorers' lives every few millennia in any case, and that's what will send the real message, then... well... you see?

Finally though, they just wonder how to construct something massive, durable, and yet not likely to be cannibalized for parts or scrap! They even raise the issue of what 400+ generations of unknowable humanity might do to the marker structure, without disturbing the rest of the site.

31
gadders 1 hour ago 0 replies      
This just makes me wonder what toxic substance the dinosaurs/space aliens buried under Stonehenge now.
32
takshak 3 hours ago 1 reply      
why not write in a language that has survived more than 10000 years?
33
homero 6 hours ago 0 replies      
Can we tour it?
34
hugh4 4 hours ago 0 replies      
The only way in which something like this will be useful is if, for some reason, civilisation collapses but humans survive. (If humans don't survive then 10,000 years is far too soon to worry about another intelligent species arising.)

If that happens, some major catastrophe has undoubtedly already occurred which makes the possible death of a bunch of future-cavemen who happen to start digging in the wrong patch of desert pale in comparison.

If we're really worried about this scenario we should worry less about marking specific sites, and more about trying to come up with a way to store all our existing scientific and cultural knowledge in a non-perishable manner, in many places, that can be dug up and hopefully eventually decoded by people of the future. Purely from an avoid-human-suffering point of view, telling people "don't dig here" isn't nearly as valuable as telling them about infectious diseases, and vaccination, and...

35
ageofwant 6 hours ago 0 replies      
Funny stuff. A product of the narrative of the day, and in less than 30 years it's already changed. Of course there will be no nuclear waste in the future; that stuff is way too valuable as fuel in modern reactors.
36
tn13 4 hours ago 0 replies      
Can someone summarize this page here? It is too long and I am unable to understand the context.
37
kaishiro 5 hours ago 2 replies      
Just wanted to say I accidentally fat fingered a downvote while trying to do the opposite. Sorry!
38
gaur 4 hours ago 0 replies      
Sad that even in 1993 the government was still producing documents on a typewriter with shitty, photocopied black-and-white line art. Photoshop had already been available for three years at this point.
9
A New Theory of How Consciousness Evolved theatlantic.com
89 points by curtis  7 hours ago   33 comments top 3
1
niccaluim 4 hours ago 4 replies      
This article was a little confusing for me. When I read "consciousness," I think of private, subjective experience: qualia. This meaning of consciousness is a very hard problem indeed, and any new theory is bound to be interesting. (See Chalmers for a great summary: http://consc.net/papers/facing.html) But the article seems to be talking about meta-cognition, not conscious experience. Meta-cognition is interesting in its own right I suppose, but far less so than what I normally think of as consciousness.
2
sillysaurus3 3 hours ago 1 reply      
Are insects conscious? Why or why not?
3
acqq 1 hour ago 0 replies      
The paper from the same author where he formally presents his ideas:

http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00...

10
A New Origin Story for Dogs theatlantic.com
39 points by tokenadult  5 hours ago   15 comments top 6
1
sigill 3 hours ago 2 replies      
> But after decades of dogged effort, he and his fellow scientists are still arguing about the answers.

> Other canine genetics experts think that Larson's barking up the wrong tree.

> Of all the problems that scientists struggle with, why has the origin of dogs been such a bitch to solve?

It seems like the author was in some sort of pun writing competition.

2
hrvbr 39 minutes ago 0 replies      
3
owyn 3 hours ago 2 replies      
This story has the vibe of a TED talk, it's got scientists with different opinions, a grand scope and scale of tens of thousands of years, plus dogs, which we all love.

I really do think humans were doing interesting things tens of thousands of years ago. Some of it was written down, and some of it only survives through oral traditions but the ideas and actions of those people in that era still impact us today.

Some of those things are animals like the cat that is trying to sit on my laptop right now.

4
tex0 3 hours ago 1 reply      
Look out. That page always redirects me to a malware page after a few seconds.
5
sandworm101 2 hours ago 0 replies      
Before getting to the domestication question, I think they need to figure out exactly what they mean by 'wolf'. The lines are blurry.

https://en.wikipedia.org/wiki/Coywolf

https://en.wikipedia.org/wiki/Coydog

6
pluma 16 minutes ago 0 replies      
I thought the common wisdom (not saying it's correct) was that dogs had been domesticated several times (not just twice) and most modern dog breeds are derived from a mix of different (but closely related) animals?
11
Urbit is now in open developer beta urbit.org
131 points by jonasrosland  8 hours ago   85 comments top 19
1
aaron-lebo 7 hours ago 5 replies      
This might be the most revolutionary technology in the world, but I really have no idea, because your landing page is full of the vaguest, most utopian promises. You might say, well, you should go read the "technical docs", but I have read about so many hyped technologies that came to nothing at this point that I am incredibly skeptical.

If you can't tell me what you are doing in a single paragraph, then you've got a problem. If you can, why doesn't your homepage reflect that? Show me! I don't want to scroll through another website with 16 point font.

We all love the Unix philosophy and puppies, too.

Please take this as the constructive criticism it is.

2
tree_of_item 6 hours ago 3 replies      
Urbit is purposefully obfuscated. If you don't believe me, just take a look at the docs:

https://urbit.org/docs/hoon/advanced/

There is no reason to gensym all of your concepts like this. It is different just for the purpose of being different: apparently you can't sell people on a "revolutionary technology" without appearing to be extremely different.

Nock is also not a good virtual machine. Recognizing blessed sequences of bytecode and replacing them with opaque blobs of code is not a valid approach to optimization. No one can actually run a pure Nock VM, so what is the point of having Nock in the first place?

Someone else on HN gave the best summary of Urbit I've seen yet: an elaborate cup and ball game, meant to give the impression of innovation and technical excellence.

3
Animats 5 hours ago 1 reply      
I dunno. This thing is weird, but they put a lot of work into it, and you can download something that runs.

What they want to build, from the user perspective, seems to be a federated social network. Like Diaspora, only with some of the problems solved. The two big user-level problems they claim to solve are 1) spam, and 2) being tied to a service provider.

The solution to 1) is that you have to buy an identity from someone. You can't create identities for free. This is a profit center for someone, although I'm not clear whom. Not clear how much a personal identity costs, but there are only 2^32 of them.

The solution to 2) is that you can pick up your ball and go home - take the entire state of your online presence and move it to another server. The routing gets fixed somehow. Sort of like cell phone number portability.

Those are both good features. Right now, they apparently power only an online chat system and the ability to host web pages driven by programs in their language. Somebody could potentially build a Facebook-like system on top of that.

The terminology and the cult-like aspects are seriously annoying. It reminds me of Xanadu and its team. (I knew that crowd. Mostly extreme libertarians. Everything is pay per view in Xanadu.)

I wonder if this could be used as a lightweight container system for server-side web applications. It has a container system, and those containers can serve web pages and talk to other containers. Unlike, say, Docker, you don't have to lug around a whole Linux environment in your container. Being able to move your container to a new hosting service very quickly would force hosting services to be competitive.

4
MaysonL 5 hours ago 0 replies      
It's a cult creation technology: anyone who expends enough effort to launch a ship will have convinced themself that this is really cool, and that they are very much smarter than the average bear.
5
bordercases 4 hours ago 1 reply      
I'm not entirely sure I buy the arguments that "made-up words" are a bad thing. All words were made up at some point. And the English language on the surface seems extremely redundant in its vocabulary, but each element of a cluster of words can contain separate nuance, so we keep them around. Perhaps this doesn't have a place in computing, but perhaps it does. Consider, for example, the most jargon-y field of mathematics of all, category theory:

https://ncatlab.org/nlab/show/computational+trinitarianism

Proofs = Types = Categories; all related, all translatable in terms of one another, yet all different. New words in a technical vocabulary let you be both concise and, on occasion, familiar.

I find arguments about obfuscation maybe a bit more credible. It's hard to say. I'll venture that getting people to pay attention to your ideas by throwing off previous convention could potentially work, but time will tell.

6
pbnjay 6 hours ago 0 replies      
From a purely academic standpoint, I find this project and its goals intriguing... And I'd be interested in playing around with the environment, if only it weren't for the "Hoon" language. I'm fine dusting off Lisp, or Erlang, or any of the myriad imperative languages I know to play around, but I have no desire to learn an esoteric language that only works in one esoteric system! If someone ever makes a cross-compiler from a sane language, let me know!
7
avip 5 hours ago 1 reply      
>We should note that in Nock and Hoon, 0 (pronounced "yes") is true, and 1 ("no") is false. Why? It's fresh, it's different, it's new. And it's annoying. And it keeps you on your toes. And it's also just intuitively right.

This kind of sums up the approach taken here. I'm out. I'm too dumb for this project.

8
arsalanb 1 hour ago 0 replies      
I met with the founders of Urbit over dinner a few years back, as part of a community meetup at the Thiel Fellowship Finalists Round in 2014 (yeah, bring out the pitchforks), and they seemed like pretty amazing guys.

What I don't understand, however, is the need to romanticize software. There's too much magic in this. I get the feeling that this is something very important, but I don't understand what it is. Anyhow, best of luck to the team!

9
imron 6 hours ago 2 replies      
> An Urbit identity, or "ship," .... is actually just a 32-bit number, like an IP address

32-bits huh? That's brave.

10
sesteel 6 hours ago 0 replies      
I want to like this, but I feel like they are trying to solve too many things at once. Which is just another kind of problem. New CLI, new language...

https://urbit.org/docs/hoon/syntax/

11
inaseer 4 hours ago 1 reply      
I'm reserving my judgement on the system. Just sharing a few videos which might help in understanding Urbit better. Start with this LambdaConf talk: https://www.youtube.com/watch?v=I94qbWBGsDs and pair with this demo video after: https://www.youtube.com/watch?v=Tp9aCEfA6ao
12
adrusi 5 hours ago 0 replies      
I haven't read the whitepaper, but from what I've gathered, urbit is a virtual machine that runs on a network, maybe with support for untrusted nodes? If that's the case then that's amazing, and it's understandable how esoteric it all seems. But that's quite an extraordinary development and I'm skeptical.

It would be nice if they would give a clear description of what it is they've made. Does this enable running a server jointly with a partner you don't trust, with neither party having physical access, and without involving any third party?

13
sumitgt 6 hours ago 1 reply      
Wow, before I read the HN comments, I felt really dumb for not understanding a word of what that page meant.

I thought it was one of those strange things all the cool kids are into these days whose appeal I cannot understand or explain.

14
state 6 hours ago 0 replies      
People may also be interested in the LambdaConf talk from a few weeks ago: https://www.youtube.com/watch?v=rkZ3GkeU9kg
15
galistoca 6 hours ago 0 replies      
Wow, it's amazing how this post became #1 on the front page just based on criticism, no sarcasm. I guess it is true that all press is good press.
16
kondro 6 hours ago 1 reply      
I'm very confused.
17
bjorndmitri 5 hours ago 0 replies      
This has a strong "Shinichi Mochizuki's proof of the ABC conjecture" vibe to me.
18
asimuvPR 6 hours ago 0 replies      
The scope of the project is big. It mentions Gmail, IFTTT and others. The API integrations for the third parties are measured in years due to breaking changes and the sheer amount of work. How is the Urbit team managing that?
19
panic 6 hours ago 3 replies      
Urbit is a republic. Its government has one task: promoting, preserving and protecting Urbit. It may take any legal action which advances this goal.

That sounds a bit scary!

12
Hylogen: Haskell EDSL for live-coding fragment shaders github.com
34 points by luu  5 hours ago   5 comments top 4
1
sleexyz 41 minutes ago 0 replies      
Here's a short demo reel with audio-reactive shaders made with Hylogen:

https://hylogen.com/

(works with Chrome, Firefox, Safari)

(includes a GPU Game of Life implementation!)

2
eggy 3 hours ago 1 reply      
Very cool. I mainly use Extempore [1] for livecoding visuals and audio, but this would be a great complement to Tidal [2], a Haskell-based livecoding system for audio.

I wonder how difficult it would be to marry the two, instead of separate windows with web sockets, so you could inline Tidal code as you build your shader in the same editor? Sort of like what Shadertone [3] does by combining the Clojure-based audio generator Overtone with a Shadertoy GLSL interface. The graphics are right behind the text in the editor window.

I would suggest Euterpea [4], the Haskell-based music coding environment, but it is not as suited to livecoding as Tidal is. Nonetheless, a great Haskell environment. Great work!

[1] http://extempore.moso.com.au/

[2] http://tidalcycles.org/

[3] https://github.com/overtone/shadertone

[4] http://www.euterpea.com/

3
pjmlp 3 hours ago 0 replies      
The examples appear to only run on Chrome (not available on this PC).

Other than that, it looks cool from what I could read.

4
pka 3 hours ago 0 replies      
Finally! Been wishing for something like this for a while, good job!
13
Visual Studio Code 1.2 released visualstudio.com
18 points by cryptos  3 hours ago   1 comment top
1
rejschaap 5 minutes ago 0 replies      
The automatically inserted whitespace always bothered me in VS Code. When writing code without an auto-formatter I always end up noticing the extra whitespace in the diff before committing and end up hunting it down. Automatically trimming the automatically inserted whitespace sounds like a somewhat complicated solution. It makes me wonder how other editors handle this, because I never had this problem with other editors.
14
Hellcat: netcat that takes unfair advantage of traffic shaping systems github.com
127 points by luu  11 hours ago   18 comments top 7
1
sleepychu 4 minutes ago 0 replies      
The commit messages for this are gold: https://github.com/matildah/hellcat/commits/master
2
MoSal 19 minutes ago 0 replies      
Is this a problem where multiple connections wouldn't help?

I can add an option to saldl[1] to use a new connection with each chunk. But I'm not sure there are real world examples where this would help.

[1] https://github.com/saldl/saldl

3
nisa 8 hours ago 0 replies      
This should work pretty well for tc/iptables shaping where you count the bytes in the connection and move to a lower class.

But I guess most of the time something like aria2 [1] works better for real downloads - if you shape only a single TCP stream, aria2 should also defeat this or at least speed up the download enough that the rate limit doesn't matter.

On the server side it's probably easy to stop this - nginx seems to have all you need [2], [3]. Just set a unique cookie for the download and deny access otherwise - not sure what the shared hosters are doing, but likely something similar (mandatory waiting time before the cookie or url-hash is set, limit access based on connections/hash):

 limit_conn_zone $uid_got zone=cookie:10m;
 server {
     mp4;
     limit_conn cookie 1;
     limit_rate_after 10m;
     limit_rate 512k;
 }
On the other hand it probably screws users for mostly no reason most of the time.

1: https://aria2.github.io/

2: http://nginx.org/en/docs/http/ngx_http_limit_conn_module.htm...

3: http://nginx.org/en/docs/http/ngx_http_userid_module.html

4
kpcyrd 9 hours ago 0 replies      
I've seen other scripts utilizing curl's --speed-limit and --continue-at for HTTP to restart the download after the throttling kicks in and resume the download on a new connection. Really nice!
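
A rough Python sketch of that curl-script approach (not Hellcat itself, which works by reconnecting at the netcat/TCP level): watch the transfer rate and resume on a fresh connection with a Range request once per-connection throttling kicks in. The URL, rate threshold, and probe window below are made up, and the server has to honour Range requests:

  import time
  import urllib.request

  def fetch(url, out_path, min_rate=512 * 1024, probe_secs=2.0):
      # Resume on a new connection whenever throughput drops below min_rate B/s.
      offset, done = 0, False
      with open(out_path, "wb") as out:
          while not done:
              req = urllib.request.Request(url, headers={"Range": "bytes=%d-" % offset})
              with urllib.request.urlopen(req) as resp:
                  start, seen = time.monotonic(), 0
                  while True:
                      chunk = resp.read(64 * 1024)
                      if not chunk:
                          done = True            # finished
                          break
                      out.write(chunk)
                      offset += len(chunk)
                      seen += len(chunk)
                      elapsed = time.monotonic() - start
                      if elapsed >= probe_secs:
                          if seen / elapsed < min_rate:
                              break              # throttled: reconnect and resume
                          start, seen = time.monotonic(), 0

  fetch("http://example.com/big.iso", "big.iso")  # hypothetical URL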
5
BrandiATMuhkuh 8 hours ago 0 replies      
A while ago I wrote a shell script utilizing curl to do something similar. But many sites stop the high-speed download after about 100 MB, not just a couple of KB, so using this approach is actually quite nice. https://github.com/BrandiATMuhkuh/downloadAccelerator
6
leggomylibro 9 hours ago 5 replies      
Is it really faster to do a whole FIN / ACK / FIN / ACK / SYN / SYNACK / ACK every N bytes? How much does this sort of traffic shaping typically throttle, and how long does it typically give a connection?

Can this be easily parallelized? I'll bet it could.

7
rosstex 9 hours ago 3 replies      
Could anyone with a custom OS make their own TCP stack that violates all rate limiting and always sends data as fast as possible / reports a humongous window size?
15
Applied Mathematical Programming (1977) mit.edu
107 points by luu  11 hours ago   15 comments top 7
1
kxyvr 7 hours ago 2 replies      
I'm glad there's more free material online for optimization modeling and algorithms. Personally, here's my favorite list for algorithms and modeling for optimization:

For learning the simplex method, Linear Programming by Vasek Chvatal (first 10 chapters): https://books.google.com/books?id=DN20_tW_BV0C&printsec=fron...

For learning branch and bound, Integer Programming by Laurence A. Wolsey: https://books.google.com/books?id=x7RvQgAACAAJ

For odd modeling tricks for integer programming, Logic and Integer Programming by H. Paul Williams: https://www.springer.com/us/book/9780387922799

For modeling tricks for convex programming, Convex Optimization by Stephen Boyd and Lieven Vandenberghe: https://stanford.edu/~boyd/cvxbook/

For basic nonlinear optimization algorithms, Numerical Optimization by Jorge Nocedal and Stephen J. Wright: https://www.springer.com/us/book/9780387303031

I've never really found the ultimate optimization book, but the references above tend to have the right modeling techniques, algorithms, and references. I've found they're a good place to get the right idea and the right words to search for more material.
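
As a concrete taste of what the modeling side looks like once you pick up any of these, here is a tiny linear program solved with SciPy's linprog (a toy example of my own, not taken from the books above):

  from scipy.optimize import linprog

  # maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x >= 0, y >= 0.
  # linprog minimizes, so we negate the objective.
  c = [-3, -2]
  A_ub = [[1, 1],
          [1, 3]]
  b_ub = [4, 6]

  res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
  print(res.x, -res.fun)   # optimal point (4, 0) with objective value 12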

2
vitaut 7 hours ago 0 replies      
Mathematical programming (optimization) has so many useful applications and I am very pleased to see this classical book on the topic available online.

Unfortunately, because it was written a while ago, it doesn't cover algebraic modeling languages, and the examples are in Excel, which is not an adequate optimization or teaching tool (even xkcd has something to say about it: https://xkcd.com/1667/).

3
psykotic 8 hours ago 2 replies      
'Programming' as in linear programming, dynamic programming, etc, not 'programming' as in computer programming.
4
eggy 7 hours ago 0 replies      
Old, but after skimming it for an hour, it seems very applicable. I had been researching genetic algorithms to do scheduling optimization in the past based on a 1996 thesis 'A Genetic Algorithm for Resource-Constrained Scheduling' [1] (PDF download alert).

I am not an operations manager, but I always seem to need to perform similar analyses or be in a position to comment on others'. Operations management texts tend to be very broad, or geared to a non-mathematical manager, so I find texts like this one more enjoyable and informative for my purposes.

The last optimization problem WRT OM I worked on was to review an OEM's recommended spares list for validity, given the lead times on certain items and the FMEA (Failure Modes and Effects Analysis), to see how many of each spare we needed and when we should order replenishments to achieve a 95% or greater confidence level that we would have stock of a particular item and not have any downtime.

There are huge software packages that do this automatically for large companies, but at small to mid-level companies it is great to be able to dig into the algorithms to gain a better understanding - and heck, just for the fun and learning experience, especially when said company doesn't see the benefit of the larger software.

Thanks for the link!

[1] http://lancet.mit.edu/~mbwall/phd/thesis/thesis.pdf

5
princeb 8 hours ago 1 reply      
management science is a great topic and is one of the important applications of various optimization algorithms in mathematics. it's roughly the same field as industrial engineering.

most people get by just figuring out how to put problems in excel and hit "SOLVER" (or Matlab and fmincon), but there is some value in figuring out how the algorithm works. then it's a hop, skip, and a jump away from the other classes of optimization that you see in machine learning.

6
graycat 1 hour ago 1 reply      
This book is comparatively old, but the authors are experts, and likely the material is still surprisingly current.

But I will insert another point, call it the "Dirty Little Secrets of Optimization Your Books and Professors Don't Tell You."

Here is a bottom-line, bold, summary fact of life about such optimization, mathematical programming, operations research: It's super tough to find real-world problems where you can apply such material, do well on the problems, please the customers, and get paid well enough to have a career. Trying such work either as a consultant or as an employee is tough.

There are exceptions, but they are rare. If you can find an exception, terrific, but, did I mention, they are rare. Nearly anything else you can do with a BS, MS, or Ph.D. in a STEM field will be an easier way to make a living.

In practice, the field has a bad reputation: Early on there was a lot of hype and a lot of projects that failed. Leading reasons for project failure were (1) the basic data was too difficult to collect, (2) some custom software was needed, and it was too difficult to write, (3) the available optimization software was not powerful enough, (4) the mathematical techniques were not powerful enough, at least not without a lot of special work unique to the particular practical problem, (5) the computer run times were too long, too often days or weeks.

In addition there were various social problems: So, the optimization people were seen as wizards with tall, conical black hats decorated with stars and moons and were not really welcome, were not understood, were threatening, etc.

Here is a larger problem: The optimization people did not have a recognized profession. Lawyers, physicians, insurance actuaries, and more do have a recognized profession, maybe with government licensing, a code of ethics, professional peer review, legal liability, graduate education which is not just academic but also professional, likely with practical apprenticeship, a serious professional society, professional certification, etc. That way, a generalist line manager can hire a respected professional, as an employee or consultant, with relatively low risk to his career. Without professional status, such a line manager is reluctant and for good reason.

Then there's another secret: Early on, when there was a lot of optimism, there were two ideas, maybe just implicit but, still, powerful:

The first idea was that such optimization would, to borrow a more recent phrase, "eat the world". That is, essentially every economic activity in the world would be directed by such optimization.

The second idea was that the optimal solution was the absolute goal. Accepting a solution that was 10 cents away from optimal was seen as a moral lapse. The word optimal was a stick used to beat up on everything and everyone else.

Then reality set in: The applied math available was nowhere nearly powerful enough to solve just any practical problems that could be found in routine operations. And that was just the clouds before the hurricane:

The hurricane happened when the question P versus NP was formulated and, then, it was seen that large, practical problems in NP-complete were as common as pebbles in a stream bed. Darned near every problem in scheduling, routing, logistics, resource allocation, and more could want at least something as difficult as integer linear programming, but that problem is in NP-complete. Then considering non-linear aspects gave a storm that made a hurricane look like a gentle rain. Then it got worse: A lot of real problems had a lot of randomness. Okay, a lot of those problems can be formulated as stochastic optimal control problems, maybe continuous time, maybe discrete time, but then you get bad news right away: First, you don't have enough data on either the system dynamics or the probability distributions of the random effects. Second, on realistic problems, stochastic optimal control can eat supercomputer time by the month per problem.

The worst of the chuckholes in the road was the struggle with P versus NP. And the moral attitude that the optimum had to be obtained made the P versus NP challenge much worse. Indeed, the US economy went for decades declining to implement solutions that could save 15% of operating costs because saving 16% for an optimal solution was too difficult for the math and the computing. That is, commonly, saving that last 10 cents is by far the hardest. Or there might be millions of dollars to be saved, but the last 10 cents still could not be saved, and people gave up due to the difficulty of that last 10 cents.

And there was a huge cultural problem: Various universities set up professorships, sometimes departments, concentrating on optimization. Well, there the professors concentrated on publishing papers and largely ignored real problems. And anything like professional education was pushed out as inferior to the research agenda. Then, sure, the students got relatively little benefit in non-academic careers -- e.g., they were not in a profession.

For a while, there was an eager customer who did put up with a lot of the hype and poor or failed projects -- the US DoD.

So, bottom line: A lot of very good applied math has been done for optimization. And there is a lot of high quality software. But real problems that can be attacked successfully with these tools and where the benefits are really worth the botheration, time, money, and effort are rare.

If you can find suitable real problems and make a living, you will be very rare, but good for you.

What will likely happen is that occasionally some optimization will become part of some software that is sold as, say, SaaS. And, in some fields where optimization is known to be effective, say, feed mixing, we can expect that optimization is an accepted tool in that field.

So, what was the biggest result of the academic research? Sure, they identified the problem of P versus NP. Gads, I should have started a pizza shop!

7
pabb 8 hours ago 2 replies      
I won't assume that no one on HN could potentially benefit from this, but it honestly seems like OP submitted this without reading the first sentence of the first page. That, or this was blatant trolling of some benign sort.
16
Show HN: Map of immigration to the U.S. since 1820 metrocosm.com
270 points by mgalka  14 hours ago   105 comments top 28
1
notahacker 11 hours ago 2 replies      
Coincidentally, I saw these state-by-state migrant origin maps for each decade in animated form on Twitter this morning (showing numbers resident rather than numbers flowing in): http://www.pewhispanic.org/2015/09/28/from-ireland-to-german...

Some predictable phenomena (Cubans being the most common migrant group in Florida from 1970 onwards, Mexicans being the most common migrants in Texas and the West Coast initially and then most places) and some weird ones (why were Laotians so common in Minnesota in 1990, and why is the Ethiopian diaspora the largest in South Dakota as of 2013?)

2
josho 13 hours ago 4 replies      
I love new ways to express data. But as I watch animations like this, I become a little anxious: I'd like to absorb the information, but if I look too closely at one area then I risk missing a bigger pattern or trend.

After watching, I'm convinced that a good old line chart would have been a better representation of the data. Perhaps with some added dynamic behaviour.

3
namenotrequired 12 hours ago 1 reply      
This seems to exclude forced immigration from Africa, which made up a large majority until halfway through the 19th century.

They are taken into account in the same site's visualisation of New Yorkers: http://metrocosm.com/where-new-yorkers-come-from/

4
c3534l 13 hours ago 1 reply      
I'd like to see this as a percentage of the world population since the increase over time makes it hard to see exactly what's happening.
5
bhandziuk 14 hours ago 1 reply      
This is a cool map and there's tons of information in it. It'd be nice to see more details, like the actual numbers instead of just a top-3 countries list. Or, if I pause the timeline and hover over a country, it'd be neat if it listed that year's exodus.

Why do some countries brighten in some years but then dim later on?

Edit: ah I see the associated blog post has a little more info, especially about the brightness question. http://metrocosm.com/animated-immigration-map/

6
mxfh 11 hours ago 0 replies      
7
curiousgal 12 hours ago 0 replies      
That one spot in Oklahoma must be popping.
8
mmanfrin 13 hours ago 7 replies      
Surprised me how much came from Canada -- has that been a thing, or is Canada a proxy for other countries?
9
baran1 5 hours ago 0 replies      
Hi, nitpick comment, but the boundaries of the countries on this map are not changing over the course of almost a century.
10
PoorBloke123 12 hours ago 2 replies      
Since this page states "Most illegal immigration is not included", I'd like to see one of these representing the flow of illegal immigrants (especially from and across Mexico) since Obama won his first election (and second).
11
k__ 13 hours ago 8 replies      
Didn't know so many Germans came to the US back in the day.
12
bunkydoo 12 hours ago 0 replies      
We're gonna build a dome around the continental US and make the United Nations pay for it!!
13
shimon 13 hours ago 1 reply      
Awesome viz! Would like some perspective on the overall volume, too. Maybe add a bar chart above the timeline/slider showing total immigration in each year? Maybe even alongside a graph of total US population growth so it's clear how much of the growth is due to immigration.
14
r-w 13 hours ago 1 reply      
This is an amazing chart! I wish you'd used great circles (https://en.wikipedia.org/wiki/Great_circle) though.
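
For anyone wondering what that would take: the core is just the haversine formula for great-circle distance, and drawing the arcs means interpolating intermediate points along the same circle. A minimal sketch of the distance part (standard formula, spherical-Earth approximation; the coordinates are only an example):

  import math

  def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
      """Great-circle (haversine) distance between two lat/lon points given in degrees."""
      p1, p2 = math.radians(lat1), math.radians(lat2)
      dp = math.radians(lat2 - lat1)
      dl = math.radians(lon2 - lon1)
      a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
      return 2 * radius_km * math.asin(math.sqrt(a))

  # Roughly a Dublin -> New York leg of the map
  print(round(great_circle_km(53.35, -6.26, 40.71, -74.01)), "km")
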
15
ktRolster 12 hours ago 2 replies      
In the last 10 years, immigration has really picked up from all over the world, whereas earlier it was just a few countries.
17
lifeformed 10 hours ago 0 replies      
They should've horizontally centered the US on the map so that immigrants from Asia/Russia don't have to jump across the map.
18
fiatjaf 5 hours ago 1 reply      
I was waiting for the US to explode at the end of the animation.
19
matthewbauer 12 hours ago 2 replies      
Does "United Kingdom" in the early 20th century mean just the British Isles? Or is India, Australia, Middle East also included under "UK"?
20
slezyr 13 hours ago 1 reply      
Is something wrong with Ukraine? At the end there are a lot of dots coming from it, but it doesn't change its color. And it has the same color as Russia.
21
senthil_rajasek 6 hours ago 0 replies      
"The quality of mercy is not strained"
22
23
stevofolife 8 hours ago 0 replies      
Anyone know what type of format these data are in?
24
leighmcculloch 12 hours ago 2 replies      
Australia has very little immigration to the US it seems.
25
marcoperaza 11 hours ago 1 reply      
1965 Immigration Act. That's when we stopped caring about making immigration work for our country, and started caring about doing whatever it takes to not be called racist.
26
jnmandal 12 hours ago 0 replies      
why are you using an iframe?
27
chm 13 hours ago 0 replies      
According to this, Canada was in the top 3 from 1910 to 1969 (being #1 from 1920 to 1949). Mexico has been #1 since 1960 up to present. It would be interesting to see the number of immigrants per country (at least in the top 3) as well as the proportion of immigrants wrt the total population during each decade.
28
namelezz 8 hours ago 0 replies      
Why are you wasting your time making this? A map of US hate crimes against immigrants would be more helpful for newcomers than this.
17
New phenomenon breaks inbound TCP policing whirlpool.net.au
88 points by RachelF  9 hours ago   25 comments top 9
1
chadnickbok 2 hours ago 1 reply      
Hey cool, someone forgetting _yet again_ why we use TCP.

We don't use TCP because it's fast. We don't use it because it's reliable (although that's really useful). We use it because _we kept breaking the internet_. Once you get above a certain threshold, the network can't keep up with you and packets start getting dropped. The problem is that backing off just a little doesn't allow the network to recover.

Instead, we need to use exponential backoff in the face of packet loss to ensure that the network as a whole can recover.

But if you're pretty much the only connection misbehaving, and everything else backs off, then you can kinda get away with not using exponential backoff. The problem is that the applications it was "kinda okay" to do this for were VoIP and friends, where realtime delivery is really important and exponential backoff causes noticeable drops in quality.

For a great read about these kinds of issues, check out the TCP-Friendly rate control RFC: https://tools.ietf.org/html/rfc5348
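
To make the backoff point concrete, here is a toy simulation of the textbook sender (an illustration only, nothing to do with hellcat's actual code): on each loss the window is halved and the retry timer doubles, instead of hammering away at the same pace.

  import random

  def simulate(rounds=20, capacity=40, seed=1):
      random.seed(seed)
      cwnd, rto = 1.0, 1.0   # congestion window (segments), retransmit timeout (s)
      for i in range(rounds):
          lost = cwnd > capacity or random.random() < 0.05
          if lost:
              cwnd = max(1.0, cwnd / 2)   # multiplicative decrease
              rto = min(rto * 2, 60.0)    # exponential backoff of the retransmit timer
          else:
              cwnd += 1.0                 # additive increase
              rto = 1.0                   # fresh ACKs reset the timer
          print(f"round {i:2d}  cwnd={cwnd:5.1f}  rto={rto:4.1f}s  {'LOSS' if lost else 'ok'}")

  simulate()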

2
colanderman 7 hours ago 1 reply      
Title is wrong; updates come over TCP that has been modified not to perform link sharing.

I have seen this with my wife's computer. Though I haven't seen the large-window phenomenon (haven't looked), what I do see is dozens of active connections opened to the same destination, which of course defeats both TCP's link sharing and standard QoS algorithms.

I work around the issue by bucketing any Akamai IP ranges I find into a very-low-priority queue, and let those TCP connections fight it out. Seems to have worked well.

For those interested, here are the Akamai IP ranges I use:

 23.0.0.0/12 23.32.0.0/11 23.64.0.0/14 23.72.0.0/13 104.64.0.0/10 2001:428:4403::/48 2001:428:4404::/48 2001:428:4405::/48 2001:428:4406::/48 2600:1400::/24
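
If you'd rather script the classification than hard-code it, Python's ipaddress module can match a peer against the ranges above (a sketch of the matching step only - the actual prioritisation still has to happen in tc or on the router):

  import ipaddress

  AKAMAI_RANGES = [ipaddress.ip_network(n) for n in (
      "23.0.0.0/12", "23.32.0.0/11", "23.64.0.0/14", "23.72.0.0/13",
      "104.64.0.0/10", "2001:428:4403::/48", "2001:428:4404::/48",
      "2001:428:4405::/48", "2001:428:4406::/48", "2600:1400::/24",
  )]

  def is_akamai(addr):
      ip = ipaddress.ip_address(addr)
      # Mixed v4/v6 comparisons simply return False, so one list is fine.
      return any(ip in net for net in AKAMAI_RANGES)

  print(is_akamai("23.72.1.1"), is_akamai("8.8.8.8"))   # True False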

3
jsnell 7 hours ago 1 reply      
Where did the HN submission title get UDP from? I don't see anyone in that thread suggesting that the updates were done over UDP, and the traffic in the trace file is all TCP.

The trace is indeed a total mess, but I'm not convinced it's anything to do with TCP acceleration. There's absolutely massive levels of reordering and packet duplication happening in ways which are not consistent with TCP acceleration at all. It's much more likely that it's some kind of configuration problem elsewhere in the network.

From eyeballing the trace, almost half the payload segments there are duplicates, while a much smaller proportion are retransmits. (You can tell the difference e.g. using IP ids or by TCP timestamp TSvals / TSecrs).

4
voltagex_ 6 hours ago 0 replies      
PCAP is at https://forums.whirlpool.net.au/forum-replies.cfm?t=2530363&.... It's not UDP.

I'll have to see whether this is what causes my 100 megabit downlink to behave as if it's capped at 30 megabits sometimes. The router can barely keep up as it is.

5
NeutronBoy 8 hours ago 1 reply      
I've actually seen the same thing recently - updates will soak up all available bandwidth, to the point where web browsing is basically impossible.
6
thomas-b 2 hours ago 0 replies      
I've seen this on multiple Win10 laptops, where it just grabs all the download bandwidth (no noticeable upload) to the point where websites won't open and Skype loses its connection. That's on a 5 Mb line. I do see many connections opened by a single process to the Microsoft IPs mentioned.

I believe it was really only update related, but I saw it happen when no updates were actually available. I just ended up limiting the corresponding process's bandwidth whenever it got annoying.

All in all, I'm mainly just very surprised this kind of thing can happen. That said, I'm fairly happy with Win10, as opposed to the usual MS bashing we hear.

7
wmf 6 hours ago 1 reply      
Windows has a feature to perform low priority downloads of updates called Background Intelligent Transfer Service: https://technet.microsoft.com/en-us/library/cc776905(v=ws.10...

There's also the Windows Update Delivery Optimization P2P feature: http://windows.microsoft.com/en-us/windows-10/windows-update...

8
0XAFFE 4 hours ago 0 replies      
It's interesting how on the forum this is totally not about Windows 10 but more about Akamai, while here everyone is bashing Windows.
9
0x0 7 hours ago 0 replies      
Really, pushing Windows 10 has now become so urgent we can't let TCP slow us down?!
18
Chess legend Viktor Korchnoi has died bbc.co.uk
54 points by jacquesm  8 hours ago   14 comments top 5
1
WildUtah 6 hours ago 3 replies      
Korch came within one game of winning the world championship in a long match, two different times in his forties. The champ was a young twentysomething Karpov.

Then he went on to be the best fiftysomething chess player ever to play the game in his fifties. And the best sixtysomething in his sixties. And the best seventy-something in his seventies.

He had a stroke right around age eighty, but still managed to slip in at least one win against American grandmaster and world championship candidate Fabiano Caruana in 2011 after turning eighty, so he was the best ever of that decade as well.

Chess players usually peak a bit before thirty. Korchnoi defected after forty. As a dissident under the Soviet system, Korchnoi didn't have the chance to prove himself in international competition in his twenties or thirties, but he may have been the best in the world for a long time before Karpov or Fischer reached the championship.

2
grizzles 4 hours ago 0 replies      
I read a study the other day that concluded that neuron regeneration can occur well past middle age. I think that's the only explanation for what happened here. He was easily the toughest senior citizen player that has ever lived.
3
komaromy 6 hours ago 0 replies      
He came so tantalizingly close to being world champion. Besides the two World Championship matches with Karpov, one of which he lost by a single game, the 1974 Candidates final also decided the championship following Fischer's abdication.
4
deweerdt 7 hours ago 1 reply      
5
p4wnc6 6 hours ago 3 replies      
He is the reason I took up playing Caro-Kann again, and the primary inspiration for my feeling that it's much more fun to play as Black -- a feeling that got me back into chess after years away.
19
Lambda crabs, part 1: A mathematical introduction to lifetimes and regions ticki.github.io
38 points by adamnemecek  7 hours ago   8 comments top 3
1
ekidd 3 minutes ago 0 replies      
These articles explain the mathematical underpinnings of Rust's borrow checker. The explanation is actually fairly simple, but it assumes prior familiarity with abstract algebra.

For people who are unfamiliar with posets, lattices, etc., this introduction might help a bit: http://old.iseclab.org/people/enji/infosys/lattice_tutorial.... (PDF)
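
For anyone who doesn't want to open the tutorial right away, the bare bones fit in a couple of lines (my compressed summary, not the article's wording): a partially ordered set is a set with a relation - think of the "outlives or equals" relation on lifetimes - satisfying

  a \leq a \quad \text{(reflexive)}, \qquad
  (a \leq b \wedge b \leq c) \Rightarrow a \leq c \quad \text{(transitive)}, \qquad
  (a \leq b \wedge b \leq a) \Rightarrow a = b \quad \text{(antisymmetric)},

and a lattice is a poset in which any two elements also have a greatest lower bound (meet) and a least upper bound (join).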

For introduction to how Rust's ownership/borrowing/lifetimes system works, see the three chapters starting here: https://doc.rust-lang.org/book/ownership.html

And a "standard disclaimer" for math: To understand this material, you may need to read slowly, look things up, and work through each example. Math is often very simple, but that doesn't mean that it's easy. The trick is usually to stare at it from enough different directions that the underlying simplicity finally "clicks," and you wonder how you could have ever been confused. If you're unfamiliar with the topics in question, a page per day can be a perfectly acceptable reading speed.

3
transfire 38 minutes ago 0 replies      
How did this make it to the front page? It is so esoteric to begin with, and so poorly explained beyond that... it's not even useful to most of the people it is intended for.

All that aside, can we stop making up new words for things that already have words? -- correction, can we stop re-purposing old words for old things and acting like they're new? I mean really, outlives or equals to?

20
Resignations at Cisco hint at internal power struggle recode.net
117 points by petethomas  10 hours ago   25 comments top 8
1
vanessa98 7 hours ago 2 replies      
A legendary racket from days past! Leave Cisco with technology and engineers, get generously funded by Cisco, get generously bought out by Cisco, inside Cisco enjoy sandbagged targets and guaranteed payouts, lather, rinse, repeat. Self-dealing masterpieces!
2
iaw 7 hours ago 1 reply      
As problematic as Cisco's messaging was on this fiasco, Tony Fadell should take note of how professionals handle disagreements in the press.
3
mdip 6 hours ago 2 replies      
I'm a bit biased here since I work doing development mostly aimed at a competitor of Cisco in the Unified Communications space, but after seeing a few presentations by Rowan Trollope at Enterprise Connect, I have the feeling all is not well at Cisco and this article seems to echo that.

At EC, I really felt like they were on the defensive, trying to market a product that's trying to be "cool like Slack" while chiding enterprise customers for being uncool and wanting things like control over upgrade roll-outs and being interested in "fake clouds" [1]. Their presentation of this new product had the feeling of an angry old man trying to sell mood rings to hipsters.

The attitude of the presenters bordered on insulting, and I was reminded of a meeting with Cisco guys with a very similar attitude almost a decade ago, when my previous employer was trying to reinstate maintenance on our Cisco phone infrastructure. At the time, we were looking to either get maintenance or replace it with something else (Office Communications Server 2008 -- which hadn't been released yet, but which Microsoft was actively courting us to become a tester of, with very good financial terms attached). The rep sarcastically said "What are you going to do, switch to Asterisk or LCS?" [2].

My recollection of parts of this is hazy, but IIRC they weren't willing to budge on price and even found places where we had miscalculated the cost we already couldn't pay, resulting in a higher cost. In 15 years of dealing with vendor reps, I've never had a call that even came close to that one. I fielded two different calls within an hour of that meeting's end, with both people saying the Cisco guys were "arrogant DICKS".

Within two years my previous employer ripped out all of our Cisco IP-PBX-related devices and moved to OCS 2007, and the company has stayed with the Microsoft Solution of Various Names since. If the vendor reps were any indication, Cisco didn't believe there would ever be competition for their product, and had a very dim view of Microsoft (they still seem to today). Their new product (whose name escapes me) seems to be the direction they want to go, but they're late, and it isn't as good as the competition.

[1] This was a phrase the Cisco folks seemed really attached to and I kept thinking that the one feature you want in a collaboration/"phone" system is stability. And the "fake clouds" were things like on-prem/cloud deployment options available from Microsoft and policies that embraced limited backward compatibility and controlled update roll-out. For me, the phrase became "fake clouds don't rain" (or at least when they do, you have some control over it).

[2] This is paraphrased, but not much. Microsoft had courted us at the time and we were involved in pre-release for OCS. We didn't actually migrate to LCS, we migrated to OCS while participating in the OCS R2 TAP. They provided us with people on-site that basically designed and helped us roll the solution out at no charge (they were supposed to be resources for the TAP program but they assisted with everything).

4
irq 2 hours ago 0 replies      
For those unfamiliar, the name MPLS here is also a play on https://en.wikipedia.org/wiki/Multiprotocol_Label_Switching
5
twblalock 7 hours ago 1 reply      
I think Cisco is going to end up like HP.
6
ccvannorman 2 hours ago 0 replies      
It is my understanding that Cisco was founded on router technology developed at Stanford, which was then patented and exploited by the company (despite Stanford's intention that it be public domain) [1]. I wonder if the cultural tone that that set is what, decades later, has landed them in this situation.

[1] http://pdp10.nocrew.org/docs/cisco.html

7
thrwaway_711 8 hours ago 1 reply      
MPLS' impact at Cisco is similar to that of Jeff Dean and others at Google. They were responsible for creating Cisco's most successful products, so it's sad to see them leave. But the power struggle is definitely undeniable, as a lot of senior figures have departed Cisco.
8
0FCEE9602718 8 hours ago 0 replies      
It had become known that Mario, Prem, and Luca were moving on, but the expectation was that Soni was sticking around and gaining more responsibility. Guess not. Unfortunate for Cisco.
21
An 18th Century North African Travelling Physician's Handbook britishlibrary.typepad.co.uk
32 points by Petiver  7 hours ago   discuss
22
Sild is a Lisp dialect jfo.click
37 points by luu  6 hours ago   9 comments top 4
1
kaosjester 4 hours ago 2 replies      
I'm pretty familiar with various approaches to implementing Lisp / Scheme, and seeing projects like this is almost heartbreaking. What's been done here (which I might call the "mechanical" approach to implementing Scheme) seems to attempt to emulate an underlying LISP machine instead of emulating the language itself, and that's fundamentally problematic because, for the most part, that's the wrong abstract machine. If the author took the time to work through the first four or five chapters of SICP, they'd have the knowledge to construct a much cleaner machine (based on CEK) that would, ultimately, end up far more performant and faithful to the end-goal semantics. All of this mucking about with allocation and free-cells for expressions and the like is orthogonal to building a LISP interpreter in C. (And, if the author is really interested in that sort of thing, the LISP machine is well-documented.)

If anyone else is interested in starting a project like this, getting down and dirty, I'd urge you to start with the first part of SICP and then move into Dybvig's dissertation (at http://agl.cs.unm.edu/~williams/cs491/three-imp.pdf), which directly and plainly describes multiple approaches to producing efficient, reliable Scheme implementations.

(Also, why doesn't this link to the Github page instead of the blog?)
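
For readers who haven't met CEK before, the whole machine fits on a page. Here is a toy version for the pure lambda calculus in Python (my own illustration of the general technique, not anything from Sild or from Dybvig's dissertation): a state is a Control term, an Environment, and a Kontinuation, and evaluation is just repeated application of a step function.

  # Terms: ("var", name), ("lam", param, body), ("app", fn, arg).
  # Values are closures: a ("lam", ...) term paired with its environment.

  def step(state):
      control, env, kont = state
      tag = control[0]
      if tag == "var":
          term, closure_env = env[control[1]]       # look the variable up
          return (term, closure_env, kont)
      if tag == "app":
          _, fn, arg = control                      # evaluate the operator first,
          return (fn, env, ("arg", arg, env, kont)) # remembering the operand
      if tag == "lam":                              # the control is a value
          if kont == "halt":
              return None                           # final state
          if kont[0] == "arg":                      # now evaluate the operand
              _, arg, arg_env, outer = kont
              return (arg, arg_env, ("fun", control, env, outer))
          if kont[0] == "fun":                      # apply: bind the argument
              _, lam, lam_env, outer = kont
              _, param, body = lam
              new_env = dict(lam_env)
              new_env[param] = (control, env)
              return (body, new_env, outer)
      raise ValueError("stuck state")

  def run(term):
      state = (term, {}, "halt")
      while True:
          nxt = step(state)
          if nxt is None:
              return state[0]
          state = nxt

  # (lambda x. x) (lambda y. y)  ==>  lambda y. y
  identity = ("lam", "y", ("var", "y"))
  print(run(("app", ("lam", "x", ("var", "x")), identity)))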

2
DigitalJack 4 hours ago 1 reply      
I love projects like this. I'm a chip designer, and I feel very comfortable at that level, and with low-level programming like assembly. I used to be comfortable in C back in the day.

These days I feel like I don't have time for the nitty gritty details of programming anymore and only work in Clojure (with occasional forays into other languages to see "how goes the world").

I have a large intellectual gap between logic gates, assembly and a high level language like a lisp. I'd really like to attempt a project like this some day, but I mostly feel like those days are behind me.

3
sklogic 1 hour ago 0 replies      
For handling function call frames you may consider compiling your Lisp code a bit before interpretation - do an explicit lambda lifting with an explicit closure allocation. This way you'd be free to kill each frame after a call.
4
tyfon 3 hours ago 1 reply      
I was about to write that sild is herring in Norwegian, but then I saw you had it in the github repo (but Danish) :)

Cool project, I'll have a look at it!

23
A Gluster developer's thoughts on Torus atyp.us
135 points by wmf  13 hours ago   19 comments top 5
1
spotman 7 hours ago 0 replies      
This is a tough one. It's easy to be the expert and point out what will not work.

The crux of it is, as he states, the false advertising, and how many people will see this and start to use it without knowing what they are doing. However, it's hard to separate false advertising from overconfidence. Maybe they don't even have to be different things in this context.

The author is correct, this is incredibly hard to get right. Without some experts heavily contributing to and steering a project like this (and by experts, I mean distributed filesystem experts, not just people with general distributed application knowledge), it is going to be a long uphill battle, fraught with terror.

But, who's to say that they won't find that type of collaboration. They may just do that, and they may just pull off making a decent project in some years.

But is it hard? Hell yes. Is it going to happen as fast as they claim? Probably not. Is it a silly idea to build this and not contribute more to prior art? Probably. Is someone a little pissed off about this and taking to the internet to moan about it a bit? Probably.

At the end of the day, file systems are hard, and if your job is to deploy, manage, scale, or recover them, you won't just be blindly throwing your petabytes at a brand new project anyways, and if you are, you should be removed from your current position.

The type of folks that will be early adopters and contributors won't be putting their banking transactions on it.

2
dice 11 hours ago 7 replies      
>Single-threaded sequential 1KB writes for a total of 4GB, without even oflag=sync? Bonnie++? Sorry, but these are not "good benchmarks to run" at all. They're garbage. People who know storage would never suggest these.

As a non-storage person: what should I be using instead of dd and bonnie++?

3
merb 1 hour ago 0 replies      

 It's not true for Gluster. It's not true for Ceph. It's not true for Lustre, OrangeFS, and so on. It's not even true for Sheepdog, which Torus very strongly resembles. None of these systems were designed for small clusters.
That's true. There is no system that is easy to administer, starts with 1 node, and can then scale to 3, 5, 7, etc.

No system addresses this (they don't want to, or it's too hard, whatever).

4
sshykes 11 hours ago 1 reply      
It started off so nice and friendly, and then reads so hostile at the end. Sort of like a shit sandwich, except with the bottom slice of bread missing.

I am curious to understand how Torus is similar to Sheepdog [0].

From the Sheepdog website:

 Sheepdog is a distributed object storage system for volume and container services and manages the disks and nodes intelligently. Sheepdog features ease of use, simplicity of code and can scale out to thousands of nodes. The block level volume abstraction can be attached to QEMU virtual machines and Linux SCSI Target and supports advanced volume management features such as snapshot, cloning, and thin provisioning. The object level container abstraction is designed to be Openstack Swift and Amazon S3 API compatible and can be used to store and retrieve any amount of data with a simple web services interface.
[0] https://sheepdog.github.io/sheepdog/

5
the_common_man 11 hours ago 2 replies      
> Anybody who would suggest these is not a storage professional, and should not be making any claims about how long it might take to implement filesystem semantics on top of what Torus already has.

The entire post reeks of condescension and arrogance.

I also don't like the potshots at marketing. I think Torus is off to a very good start. It's a project, after all, and they are making claims to get across their vision. They didn't "lie" about things being here already; they said it's going to happen in the near future. What's wrong with that? Because "storage experts" think it takes years to build? Sorry, visionaries don't listen to "experts"; they set out and do things.

24
Walking and Talking Behaviors May Help Predict Epidemics and Trends psu.edu
12 points by brahmwg  4 hours ago   1 comment top
25
Early State-Sanctioned LSD Experiments in Communist Bulgaria atlasobscura.com
40 points by Hooke  8 hours ago   7 comments top 3
1
DigitalJack 4 hours ago 4 replies      
Reminds me of a story I heard on the radio with a DJ reminiscing about his pot days. He and his friends decided that the stuff they came up with while high was comedy gold and decided to write it down so they could remember it.

He looks at it the next day to see what they were rolling on the floor about, and there was one word: "Fart."

Some of the paintings here made me think of that. Perceived creativity while in an altered state of mind, vs actual creativity.

I remember someone describing a shroom experience where they felt like they understood life, the universe, everything, and were at peace with it. At least I think it was shrooms... maybe MDMA or something, I don't remember.

I was intrigued by this because it was a change in the way they felt that was the life altering experience. Something very intangible. Having dealt with depression for a long time, I see how this can be, and yet it really struck me as interesting. Perspective matters.

2
BashiBazouk 4 hours ago 0 replies      
They leave out that Ken Kesey was participating in the Menlo Park CIA sponsored LSD tests in 1959.
3
boulos 5 hours ago 0 replies      
I really enjoyed the "write your name" test. It's too bad they stopped at 3.5 hours...
26
Some Lost Superstitions of the Early-20th-Century United States slate.com
53 points by samclemens  8 hours ago   10 comments top 3
1
gfaure 5 hours ago 1 reply      
I think these reflected a kind of magical thinking when infant mortality was significantly more common -- what happened before the child died must have been the cause.
2
ekianjo 6 hours ago 4 replies      
Good link. Are there any studies on modern-day superstitions, and how they differ by country? In Japan there are tons of superstitions still very much alive for just about everything in life.
3
rusabd 3 hours ago 0 replies      
Fascinating. The very first superstition is very close to a Kazakh superstition that stepping over a child prevents his or her growth.
27
Integrating Elm and Phoenix Channels via Elm-Phoenix-socket dailydrip.com
205 points by onlydole  14 hours ago   68 comments top 10
1
ModernMech 11 hours ago 14 replies      
How come no one (seemingly) uses Elm, despite all the love? I've noticed a lot of quick, drive-by posts in Elm-related threads about how great the language is. This thread ("This is a great little webstack!") and today's other front-page Elm thread [0] both do this.

With such high praise, I would expect the language to be more popular. What's with the disconnect? My hypotheses:

1) There are no unhappy users because unhappy users just don't use it.
2) HN goes easy on technological curiosities, saving harsh criticism for large projects that deserve it.
3) While there may be a widespread lovefest for Elm, there is too much momentum behind current technologies.

Any other thoughts as to what the disconnect is? Moreover, how do we get to a world where the webstack we all use is "a great little webstack"?

[0] https://news.ycombinator.com/item?id=11846707

edit: clarity

2
knewter 11 hours ago 1 reply      
Josh from DailyDrip here. My first realization that this was here was that we started getting a ton of traffic. :) We just ran the first Remote Elm Meetup today so my attention was elsewhere: https://www.bigmarker.com/remote-meetup/1-Elm-Remote-Meetup-... (yes, I know our company name is misspelled in the url :-\ )

I'd love to chat with anyone that wants to know what we're doing and why we do it :)

3
MrBlue 12 hours ago 1 reply      
Elixir, Phoenix and Elm have absolutely rekindled my love of web application development.
4
fbonetti 9 hours ago 0 replies      
Author of elm-phoenix-socket[1] here. I'm happy to answer any questions about the library, Elm, or whatever.

[1] https://github.com/fbonetti/elm-phoenix-socket

5
perfq 11 hours ago 0 replies      
We love this stack, and dailydrip and elixirsips have helped us get a lot of the knowledge we have. Josh does an awesome job at explaining the technology in a step by step fashion that is both concise and easy to understand.
6
mgalka 11 hours ago 1 reply      
I'm impressed by the Hacker News crowd. Pretty cool to see a topic as esoteric (and awesome) as Phoenix / Elm hit the top spot. Very good post!
7
brightball 13 hours ago 1 reply      
Elm and Phoenix really seem like a perfect match for each other
8
onlydole 12 hours ago 0 replies      
I absolutely love dailydrip.com too on top of Elixir and Elm...those languages paired with such a great learning source is simply phenomenal.
9
bfrog 12 hours ago 0 replies      
This is a great little webstack!
10
jergason 7 hours ago 1 reply      
What makes Elm + Phoenix so good together? I use Elm professionally and love it, but a back end is a back end. Is the love just "I really like Phoenix?" The way people talk about it makes it sound like it is uniquely suited to Elm, which I don't understand.
28
The Story of Tetris denofgeek.com
21 points by cpncrunch  5 hours ago   2 comments top 2
1
ganeshkrishnan 2 hours ago 0 replies      
Henk Rogers now lives in Hawaii and runs an incubator, Blue Startups.

I spoke to him at one of his launch parties in Honolulu and he is an amazingly warm guy to speak to. It was a welcome change compared to the VCs here in Australia who sit high on their iron throne and sneer down at startups.

2
ensiferum 1 hour ago 0 replies      
Funny story and a great game.
29
Keith Rabois on the Role of a COO, How to Hire and Why Transparency Matters firstround.com
28 points by jackgavigan  6 hours ago   3 comments top 2
1
blastrat 3 hours ago 0 replies      
it's a buncha hogwash.

Companies need a board with a chairman, a president and treasurer, and vice presidents of things like engineering and marketing. If you have 20 people, you don't need more executives than that. If you have an operating business with customers, you'll need a controller and a director of operations. If the company doesn't do R&D, it's simply an operations company (like, say, a restaurant or a moving company), so you could conceivably also call the president the COO, because operations is critically important to running the business.

Chief this and chief that make sense in a large company with need for extreme specialization, like a chief technical officer for an airline: their core business is not computer systems, but computer systems are critically important, so the guy in charge of all that technology needs authority to make executive decisions and to report directly to the top. But that's not a 20 person situation.

The point is, you don't sprinkle all these grandiose titles around to use them all up like at an Aviato or a Pied Piper. You use them to describe meaningful roles that reflect the business and managerial structure of the company, but only in a way that is important. 20 person companies and even 100 people companies don't have that much going on, in general.

If your business is developing software, you don't need a CTO (compare above with airline) especially not one who is just a smart founder engineer with limited managerial skills.

2
Roritharr 2 hours ago 1 reply      
I'm very interested in the transparency side of things, because that structure seems very risky and possibly unstable. If you can chain good news after good news, then it's surely nice. But if your newly hired senior marketing strategist just moved halfway across the planet to join your company, only to be brought into the fold and find out that the company has about 3 months of runway left but "we're very close to signing...", then that could have very demotivating effects, like whole groups of people leaving.

Maybe it's a German thing, but in my experience, the very second one doubts whether their company can keep the financial promises it has made, allegiance to that company drops like a rock. So having that kind of information floating around among people who most likely don't have the experience to judge it fairly would most likely do more harm than good.

Maybe there is a way to explain bad news without risking killing the mood and the allegiance; I just haven't seen it yet.

The other thing is the one bad apple that abuses the transparency before being found out. How does the recruiting and onboarding process deal with that?

30
Ancient Phoenician DNA may change the way we see human migration csmonitor.com
26 points by tokenadult  5 hours ago   5 comments top 2
1
bedros 4 hours ago 0 replies      
Actually, Lebanon back then did not exist. The whole region, which is now Syria, Lebanon, Palestine, and Jordan, was known as the Land of Syria.

So the Phoenicians were ancient Syrians who lived on the coastal area of Syria (part of which is now called Lebanon).

The alphabet system was created in the city of Ugarit, which is located inside modern Syria.

2
danieltillett 4 hours ago 2 replies      
Amazing - people who loved boats, sailing and setting up new colonies moved around a lot - who would have guessed?