hacker news with inline top comments — 12 Jun 2016
1
How to spy on a Ruby program jvns.ca
62 points by bartbes  2 hours ago   1 comment top
1
nxzero 1 hour ago 0 replies      
One issue I've run across is spying on JRuby's native Java classes. Anyone have a solution?
2
Walmart Canada stops accepting Visa cards walmartcanada.ca
81 points by jackgavigan  2 hours ago   83 comments top 11
1
ced 57 minutes ago 5 replies      
Dear HN, I'd like to fact check my outrage...

If I understand correctly, credit card terms forbid stores from offering two different prices (one for paying with a card and one for paying without), because if they didn't, some stores would offer a 2-3% discount for cash, and Visa would lose out. Meanwhile, most people I've spoken with are happy with the system because they feel like they're getting money back through cashback/reward programs, so credit card companies are incentivized to keep increasing their merchant fees so that they can give more money back to their customers. Restaurants/shops that don't accept credit get fewer customers and are forced into the system as well. While the government could do something about it, it doesn't, because it can track every credit card transaction (-> tax revenue) but can't track cash. And cash-paying customers get to pay the 2-3% "credit card tax" on every transaction, which is so ridiculously backward.

I hope Walmart's decision snowballs.

2
mabbo 1 hour ago 3 replies      
Just so folks understand: in Canada (actually, pretty much everywhere but America) every bank issues its own non-credit-card-affiliated debit card that can be used to pay for things at stores (or get money at the ATM). So this is annoying, but not a huge deal.

I'll get to the cash, get told "sorry, no visa", and take out my debit card to pay that way instead. Really not a big deal.

It's just Walmart annoying their customers to try to prove some point to Visa.

3
BlackJack 1 hour ago 4 replies      
"To keep prices low we continuously assess opportunities to lower our operating expenses...

Customers will continue to be able to use...American Express."

Thought this was funny given that Amex must have the highest interchange & fees. Their statement still makes sense because Walmart is saying Visa fees in Canada are too high compared to their rates in other markets.

Maybe Target will win some market share, but I doubt it. I think customers will switch cards or go cash. Walmart is too good at what they do.

4
downandout 1 hour ago 6 replies      
It says:

...Visa and Walmart have been unable to agree on an acceptable fee for Visa transactions. As a result we will no longer accept Visa in our stores across Canada, starting with our stores in Thunder Bay, on July 18, 2016. ... We sincerely regret any impact this will have on our customers who use Visa and remain optimistic that we will reach an agreement with Visa.

This is nothing more than a negotiating ploy to get Visa to cave to Walmart's demands before July 18. No major retailer can afford to stop taking Visa. They may take the hit and carry through with it for a while just to show Visa they're serious, but it won't last very long.

5
GreaterFool 48 minutes ago 2 replies      
I lived in Singapore and I still have a bank account there. A year ago my bank bumped foreign transaction fees on credit cards to 3.5%. That's robbery. I immediately cancelled the card. I recall the website said that most of the fee is what Visa (or Mastercard) imposes on them. Either way, that's not acceptable.
6
peeters 48 minutes ago 2 replies      
In Canada, most CC agreements with merchants do not allow the merchant to pass the fee on to the consumer. You can't charge $102.04 for a $100 product if the customer is paying with Visa. Thus the total cost to consumers is completely hidden.
7
ourmandave 43 minutes ago 0 replies      
A few years ago the Illinois Secretary of State wouldn't accept VISA at DMV facilities because they weren't allowed to pass the fee to the consumer. (Mastercard and Discover let them though.)

https://www.wbez.org/shows/wbez-blogs/why-dont-illinois-secr...

The link mentions that Indiana had the same problem but they paid the fee because Customer Service was more important.

8
whack 29 minutes ago 1 reply      
Isn't Amex the most expensive of all the credit cards? At least that's what I would expect, given that Amex has the best rewards, and most customer-friendly service. I'm surprised Walmart would continue to accept Amex but not Visa.
9
cmrx64 1 hour ago 1 reply      
The last sentence made me feel like this is Walmart trying to force Visa's hand. This seems quite surprising to me in any case. Most (maybe all?) Walmarts I've been to have ATMs in the front, so it's not like people won't be able to get cash at the store. They'll probably just have to pay the $3 (or whatever) fee.
10
nxzero 53 minutes ago 1 reply      
Idea of credit cards still puzzles me.
11
gambiting 1 hour ago 7 replies      
So as a customer, if my bank only issues Visa cards, and I am not willing/can't get a credit card of a different type, what do I do?
3
Machine learning for financial prediction robotwealth.com
53 points by Matetricks  5 hours ago   12 comments top 5
1
joegreen 3 hours ago 2 replies      
If anyone else is getting errors when loading the page, here's the google cached version http://webcache.googleusercontent.com/search?q=cache:-ciyXfS...
2
aj7 18 minutes ago 2 replies      
Really successful traders spend their time and resources obtaining insider information, not massaging public data. It stands to reason that an ensemble of technical trading methods would regress towards the mean.
3
lordnacho 43 minutes ago 0 replies      
Interesting article. I do something related, and here's my take:

Data mining is useful because it gives you things that are predictive that you might not have considered at first, but make sense after. This is mainly due to combinatorial explosion in the potential number of formulas.

You generally have a vague idea of what might be predictive, eg cheapness vs earnings and cash flow, but there's a huge number of ways that might show up in the data, and there's a huge number of ways it might hide in the data.

So for instance an old school analyst might do a ranking of price/earnings as well as cash flow, or whatever bespoke formula desired.

A data mining approach could take all the fundamentals and generate formulas mixing the variables, yielding numbers that seem to be effective. Out of those, you'd look at them and decide whether they capture some thesis (low P/E, upward trend in earnings). Then you'd look at whether the formula is sensitive to small tweaks. For instance, if regressing the last 6 earnings had phenomenal performance, but with 5 or 7 it didn't, you'd probably conclude it's some sort of random result.

There are funds that take the mass approach to an extreme. They have huge databases, a genetic algorithm that generates expression trees, and a battery of stats (incl. backtests) to decide what works. They end up with many thousands of strategies that are a great deal more effective than your standard one-trick-pony fund.
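The sensitivity check lordnacho describes can be sketched in a few lines. This is a toy with synthetic (pure-noise) returns and an invented scoring rule, not anyone's actual pipeline; on noise, a "phenomenal" score at one lookback but not its neighbors is exactly the fluke the check is meant to catch.

```python
# Toy robustness check: score a lookback-k momentum signal, then re-score
# with k nudged by one. If performance collapses for k-1 or k+1, the k
# result is probably noise. All names and the scoring rule are illustrative.
import random
import statistics

random.seed(42)

# Synthetic daily returns for one instrument (pure noise by construction).
returns = [random.gauss(0.0, 0.01) for _ in range(2000)]

def signal(rets, t, k):
    """Mean of the last k returns at time t (a crude momentum signal)."""
    return statistics.mean(rets[t - k:t])

def score(rets, k):
    """Sharpe-like score: does the signal's sign predict the next return?"""
    hits = [signal(rets, t, k) * rets[t] for t in range(k, len(rets))]
    return statistics.mean(hits) / (statistics.stdev(hits) or 1.0)

for k in (5, 6, 7):
    print(k, round(score(returns, k), 4))
```

On this noise series all three scores should hover near zero; a real candidate formula would need to hold up across all the neighboring parameter values, not just one.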

4
aj7 17 minutes ago 0 replies      
...spend their time and resources...
5
nxzero 47 minutes ago 2 replies      
Never understood why anyone would spend time creating any trading method, given that even if it did work (possible, but unlikely) the SEC would audit you and then leak how you were making the outperforming returns.

Welcome any thoughts, in part because legally beating the market is possible, just don't get the SEC & OPSEC aspect.

4
Replacing Celery with Elixir in a Python Project klibert.pl
58 points by klibertp  4 hours ago   19 comments top 10
1
raphinou 24 minutes ago 2 replies      
Am I the only one who dislikes the slides having both vertical and horizontal navigation? Going through the whole presentation requires going in the right direction (down, unless it's the last slide in a section, in which case it is right...)
2
plainOldText 31 minutes ago 0 replies      
I personally like to bridge Elixir and Python via nanomsg with MessagePack serialization.

Here are some useful libraries:

http://nanomsg.org/

https://hex.pm/packages/exns

https://github.com/walkr/nanoservice

3
simon_acca 3 hours ago 1 reply      
Interesting article!

I would just like to point out that concurrent downloads can be handled much more efficiently in Python > 3.4 thanks to the asyncio library. For an example, look at Guido van Rossum's crawler [0].

[0]: https://github.com/aosabook/500lines/tree/master/crawler
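The asyncio pattern simon_acca mentions can be sketched without any network access at all. This is a minimal illustration (Python 3.7+ style, with the HTTP request simulated by asyncio.sleep); a real crawler would use something like aiohttp for the actual fetch.

```python
# Many "downloads" in flight on one thread: gather() runs all the coroutines
# concurrently, so 20 simulated 0.1s fetches complete in about 0.1s total.
import asyncio
import time

async def fetch(url):
    await asyncio.sleep(0.1)          # stands in for the actual HTTP request
    return f"<html for {url}>"

async def crawl(urls):
    # gather() preserves input order in its result list.
    return await asyncio.gather(*(fetch(u) for u in urls))

urls = [f"http://example.com/{i}" for i in range(20)]
start = time.perf_counter()
pages = asyncio.run(crawl(urls))
elapsed = time.perf_counter() - start
print(len(pages), f"{elapsed:.2f}s")  # ~0.1s total, not 20 * 0.1s
```

The win over multiprocessing-style workers is that the concurrency lives in one interpreter, so per-task RAM cost is just a coroutine, not a whole process.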

4
rtpg 2 hours ago 2 replies      
The multiprocessing slide mentions the RAM usage issue with things like Celery (because you start many instances of Python and load in dependencies). Does this solve them?

If so, how does it get around the whole GIL thing and whatnot? Or maybe I'm misunderstanding at what level things are happening?

Is it that you still have one python process but the bottleneck/URL fetching is happening inside your elixir stuff?

Super interested in this, we have this problem with Celery workers and would love to not be bound by RAM for worker count

5
lbn 2 hours ago 1 reply      
Can we avoid the overhead of starting and shutting down processes by running a single Python process and communicating using something like grpc [0] (or even JSON-RPC for maximum simplicity)?

How do web frameworks like Flask handle multiple concurrent requests? Would performance increase if we started multiple instances of this Python web server on the same machine and load balanced them? The code would be much simpler if there was no need to handle process management.

[0]: http://www.grpc.io/
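The single-long-lived-process idea in the comment above can be sketched with a line-delimited JSON protocol over pipes. The worker script and the "square" method are invented for illustration; a real system might use grpc or JSON-RPC proper, but the shape — pay interpreter startup once, then stream requests — is the same.

```python
# One long-lived Python worker, spoken to over stdin/stdout with one JSON
# request/response per line, instead of a new process per task.
import itertools
import json
import subprocess
import sys
import textwrap

# The worker: read one JSON request per line, write one JSON response per line.
WORKER = textwrap.dedent("""
    import json, sys
    for line in iter(sys.stdin.readline, ""):
        req = json.loads(line)
        result = req["params"][0] ** 2 if req["method"] == "square" else None
        print(json.dumps({"id": req["id"], "result": result}), flush=True)
""")

proc = subprocess.Popen([sys.executable, "-c", WORKER],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
_ids = itertools.count(1)

def call(method, *params):
    """Send one request to the worker and block for its reply."""
    proc.stdin.write(json.dumps({"id": next(_ids), "method": method,
                                 "params": list(params)}) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())["result"]

answer = call("square", 7)
print(answer)  # 49
proc.stdin.close()
proc.wait()
```

Process management reduces to "keep this one child alive", which is much simpler than a pool that starts and stops interpreters per job.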

6
elktea 2 hours ago 1 reply      
While BEAM is indeed great, I'm wondering why you didn't use Scrapy? It handles concurrency well and is a battle-tested production scraper.
7
lrem 3 hours ago 0 replies      
I think I misunderstood your presentation at first glance. Elixir "processes" are actually green threads. Thus, you actually have Python interpreters in separate OS processes, right?
8
emson 2 hours ago 1 reply      
Fantastic. I recently used Elixir for scraping courses off Udemy. I've put the results of this into a site, http://www.coursenut.com

Also I've added an Elixir course promotion to:

http://www.coursenut.com/courses/3692

9
assaflavie 3 hours ago 0 replies      
Does this solve the state-sharing difficulties of the python solution?
10
spraak 3 hours ago 0 replies      
This is a cool way to use both tools together.
5
Dave Cheney: Go stack traces and the errors package cheney.net
73 points by bootload  6 hours ago   31 comments top 6
1
nickcw 2 hours ago 0 replies      
Having been writing Go for 4 years now, I can say that Dave Cheney's package is exactly what I've been looking for.

The thing that really sells it to me is being able to wrap the error when necessary with errors.Wrap and find the underlying cause with errors.Cause. I've yet to experiment with the new formatting `%+v` but I can see that coming in useful too.

There have been earlier attempts at something similar (eg https://github.com/juju/errors) but none with the same clarity of thought.

So thanks for a great package Dave and for taking the time to whittle it down into the simplest, most elegant thing.

2
TheDong 4 hours ago 3 replies      
The fact that you need a third-party, non-standard package just to reliably get error stacks is ridiculous.

What's worse is that this solution doesn't do much for the random third-party packages you depend on, since they'll still use the utterly useless std-library errors package or their own variation, which won't necessarily be compatible.

3
buro9 3 hours ago 1 reply      
I've been using my own errors package for a while to address a different issue with errors when building RESTful APIs or web applications... when the error occurs, the code that touches it first knows best the HTTP status code to ultimately return.

My package in its entirety is:

  package errors

  const ErrUnknown = 0

  type RESTError struct {
      HTTPStatus int    `json:"-"`
      Code       int    `json:"code,omitempty"`
      Message    string `json:"error"`
      Err        error  `json:"-"`
  }

  func (e *RESTError) Error() string {
      return e.Message
  }

  func New(status int, code int, message string) *RESTError {
      return &RESTError{HTTPStatus: status, Code: code, Message: message}
  }
Doc comments removed just to show the code and nothing else.

When those errors finally get back to my web handlers, a standard renderError wraps all errors:

  func renderError(w http.ResponseWriter, err error) int {
      if e, ok := err.(*errors.RESTError); ok {
          return render.JSON(w, e.HTTPStatus, e)
      }
      e := errors.New(http.StatusInternalServerError, errors.ErrUnknown, err.Error())
      return render.JSON(w, e.HTTPStatus, e)
  }
I handle all errors as soon as possible and then immediately assign the HTTP status code that should be returned, the error code to enable an external system to look up the error (without string parsing), a sanitised error message (to allow devs to get an idea without looking up the code) and I do put the original error in the Err so that my log files can contain the raw underlying error.

It effectively acts as a map between internal errors, and external communication of errors, and allows the place where the error occurred to say "this HTTP status". And because the Err is never stringified I am comforted that internal sensitive data (this is a billing system and unsanitised data should never be leaked even via error messages) does not get leaked.

That's my goal, to handle the error in the way that allows it to best communicate to:

1. HTTP clients via the status

2. Developers via an error code and sanitised message (which they can use to communicate to users)

3. SREs via detail in the secure logs

I found the native errors nice, but not the best for dealing with my 3 audiences.

4
astrobe_ 3 hours ago 1 reply      
There are two types of errors: the ones you log, and the ones you report to the user.

The difference is that the latter has to be translated to be useful (even when your user understands english).

5
justme24 2 hours ago 0 replies      
Link below from google web cache since website is unavailable
6
Image Dithering: Eleven Algorithms and Source Code tannerhelland.com
128 points by nkron  10 hours ago   30 comments top 15
1
dividuum 14 minutes ago 0 replies      
There is another interesting application for dithering that I've read about in the recent Uncharted 4 Brain Dump (Ctrl-F for "Dithering") here https://redd.it/4itbxq: Use dithering instead of alpha blending to fade out close objects. Alpha blending can be quite expensive while dithering just omits pixels. The result looks like this: http://allenchou.net/wp-content/uploads/2016/05/dithering-1-... (best visible at the top left corner).
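The "fade via dithering" trick dividuum links can be sketched with the standard 4x4 Bayer matrix: instead of alpha-blending a pixel at opacity a, you keep or drop it outright by comparing a against a per-pixel threshold. The matrix values are the classic ordered-dithering ones; the keep_pixel helper and the rendering context are invented for illustration.

```python
# Screen-door fade: a pixel at (x, y) with opacity alpha is either fully
# drawn or fully skipped, decided by a repeating 4x4 threshold pattern.
BAYER_4x4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def keep_pixel(x, y, alpha):
    """True if the pixel at (x, y) survives a dithered fade at this alpha."""
    threshold = (BAYER_4x4[y % 4][x % 4] + 0.5) / 16.0
    return alpha > threshold

# At alpha 0.5, exactly half the pixels in any 4x4 tile are kept.
kept = sum(keep_pixel(x, y, 0.5) for y in range(4) for x in range(4))
print(kept)  # 8
```

Because each pixel is opaque or absent, the renderer skips blending entirely, which is why this is cheaper than true alpha compositing.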
2
mynegation 7 hours ago 1 reply      
Back when I was at university, we had a computer graphics course. Each week we got a new assignment that we needed to program, write to a floppy disk (this was before the Internet became widely available) and give to the prof the next week.

Dithering was one of the assignments. We were required to implement black-white quantization and then Atkinson and Floyd-Steinberg. We were given the freedom to choose our own images.

During development at the dorm, my favourite picture to debug on was pretty racy (think along the lines of the full version of "Lena"). I totally did not intend to put it on the floppy disk...

Not only did I get the 10 - the highest number of points for this assignment - I got +2 on top of that, with a comment from the prof: "for the choice of test images in the best tradition of the field".

3
harryf 4 hours ago 0 replies      
A while back I got curious about the approach used by apps like Manga Camera ( https://play.google.com/store/apps/details?id=jp.co.supersof... ) to turn photos into Manga-style "drawings".

Turns out there's a paper on it "MANGAWALL: GENERATING MANGA PAGES FOR REAL-TIME APPLICATIONS" ( https://www.semanticscholar.org/paper/MangaWall-Generating-m... ) and an implementation - https://github.com/zippon/MangaWall - that implementation uses ordered dithering ( https://github.com/zippon/MangaWall/blob/master/src/MangaEng... ) among other things to help produce a pencil drawn like effect.

Anyway just saying ;) To me at least, pretty fascinating ...

4
seanwilson 21 minutes ago 1 reply      
Great and easy to follow article!

> For simplicity of computation, all standard dithering formulas push the error forward, never backward. If you loop through an image one pixel at a time, starting at the top-left and moving right, you never want to push errors backward (e.g. left and/or up).

Would the image look a lot different if you dithered it backwards from the bottom right pixel?

Are there dithering algorithms that consider the error in all directions instead of pushing the errors forward only?
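The forward-only error pushing quoted above is easy to see in code. Here is a minimal Floyd-Steinberg sketch (pure Python, invented helper name, 1-bit output) using the usual 7/16, 3/16, 5/16, 1/16 weights; because the scan runs top-left to bottom-right, error only ever lands on pixels that haven't been quantized yet.

```python
# Quantize each pixel to black/white and diffuse the rounding error to the
# right and downward neighbours with the Floyd-Steinberg weights.
def floyd_steinberg(pixels):
    """Dither a 2D list of floats in [0, 1] to 0.0/1.0 in place; return it."""
    h, w = len(pixels), len(pixels[0])
    for y in range(h):
        for x in range(w):
            old = pixels[y][x]
            new = 1.0 if old >= 0.5 else 0.0
            pixels[y][x] = new
            err = old - new
            if x + 1 < w:
                pixels[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    pixels[y + 1][x - 1] += err * 3 / 16
                pixels[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    pixels[y + 1][x + 1] += err * 1 / 16
    return pixels

# A flat 50% gray patch dithers to roughly half the pixels on.
gray = [[0.5] * 8 for _ in range(8)]
out = floyd_steinberg(gray)
on = sum(map(sum, out))
print(int(on))
```

As for seanwilson's question: scanning from the bottom-right mirrors the error flow, so the overall tone is preserved but the exact pixel pattern differs; serpentine (boustrophedon) scanning, which alternates direction per row, is a common compromise that reduces directional artifacts.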

5
bajsejohannes 6 hours ago 0 replies      
> It also has uses when reducing 48 or 64bpp RAW-format digital photos to 24bpp RGB for editing.

The author touches upon it here, but I think it's worth generalizing further: If you have high or maybe infinite precision in your color values, dithering will look much nicer than simply rounding to the nearest value. A concrete example is a color gradient. If done naively with rounding, color bands will be clearly visible. With dithering, they will be almost impossible to see.

See for example: http://johanneshoff.com/dithering/

6
1wd 5 hours ago 1 reply      
A fun (if not terribly effective) algorithm that's missing here is dithering along a Hilbert curve.

http://caca.zoy.org/study/part3.html

7
richard_todd 7 hours ago 0 replies      
I don't know if it's some kind of nostalgia factor (since a lot of old 16-color EGA software used it, I think), but I find ordered dithering output strangely appealing. I have also used it sometimes where size mattered more than style on the assumption that GIF/PNG compression rates would be better on ordered dithers. ImageMagick can do a lot of dithering styles and it's fun to play with.
8
nullc 9 hours ago 1 reply      
This thesis on image dithering and noise shaping is one of the best works I've read on the subject: http://uwspace.uwaterloo.ca/bitstream/10012/3867/1/thesis.pd...
9
RiscyAcorn 3 hours ago 0 replies      
Last year I tried painting some Floyd-Steinberg dithered pixels... https://flashasm.wordpress.com/2015/11/04/more-incredibly-sl... (16 colours) and https://flashasm.wordpress.com/2016/05/18/81920-pixels-of-64... (64 colours)
10
1wd 1 hour ago 0 replies      
Another fun fact I always think of when dithering comes up is how Tim Sweeney's software renderer for the original Unreal game used dithering instead of bilinear interpolation for texture mapping. This was very impressive at the time.

http://www.flipcode.com/archives/Texturing_As_In_Unreal.shtm...

11
IvanK_net 2 hours ago 0 replies      
A few years ago, I created a real-time dithering of video in JavaScript: http://blog.ivank.net/floyd-steinberg-dithering-in-javascrip...
12
semi-extrinsic 4 hours ago 0 replies      
This article is really nice. I used it as my starting point when I once won a code golf competition on image quality in dithering. Using Fortran.

The algorithm is based on Sierra Lite, but I added a random element to the direction in which the error is propagated. This removes essentially all dithering artifacts.

http://codegolf.stackexchange.com/questions/26554/dither-a-g...

13
huuu 6 hours ago 0 replies      
It's kind of funny that those images look nice on high-res screens.

Some time ago I experimented with 16-32 color images on websites. Because of high-res screens they look great while saving a lot of data.

14
chhabrakadabra 6 hours ago 0 replies      
This was a great read. Very approachable.
15
ja27 8 hours ago 0 replies      
I sense a disturbance in the force. It's as if 1,000 BitCam clones were just born.
7
Berners-Lee: WWW is spy net theregister.co.uk
80 points by gustavson  3 hours ago   28 comments top 4
1
_Understated_ 53 minutes ago 7 replies      
I don't have a problem with Google, Facebook and Twitter dominating the web as such because I don't have any accounts with them: They are commercial entities filling a need. Capitalism at work I suppose.

We all have a choice not to use them, and I think this is the point he's missing (or the article is omitting). The opening paragraph should be: "Inventor of the World Wide Web, Sir Tim Berners-Lee, has warned that the internet has become the 'world's largest surveillance network' for those that use Twitter, Facebook and Google."

I have had lengthy conversations with my family and friends about Facebook and Google and what they do with your information and the majority shrug and say either "I'm not bothered" or "They would get bored looking at my stuff". I don't bring it up with anyone anymore.

I feel that something monumentally bad will happen in the future related to the information these companies have amassed, something epically bad and the public will (perhaps) say "Enough is enough" but it will be too late... just my $0.02

2
infodroid 2 hours ago 4 replies      
Berners-Lee said:

> "The web is already decentralized... We don't have a technology problem; we have a social problem."

I am not sure I follow. Surely censorship and mass surveillance are facilitated by the hierarchical topology of the internet and the reliance on central authorities like IANA for key resources. This does not look like a social problem to me. Maybe a political problem, yes.

3
rdxm 33 minutes ago 0 replies      
This seems like a re-hash of Schneier's piece from 2013...

http://us.cnn.com/2013/03/16/opinion/schneier-internet-surve...

4
admax88q 1 hour ago 1 reply      
This from the guy that helped push DRM for the Web.
8
All the lonely people ucobserver.org
259 points by miraj  12 hours ago   157 comments top 23
1
ChuckMcM 10 hours ago 8 replies      
Again, something my grandfather taught me was that it was important to walk around your neighborhood and say "hi" to the people who lived there. He called it "Being neighborly."

We also, periodically but not on a particular schedule, put together a block BBQ where people can bring food, or not, and share a moment together to talk about what is going on or what they are worried about, etc. Generally people respond well to the outreach, and as a consequence I know all of my neighbors on sight, and have shared experiences with many of them, creating perhaps not a deep bond, but one which certainly gives anyone permission to approach and talk without an invitation.

It pays benefits in surprising ways: when our dog, terrified of a pile of beeping smoke detectors, ran off while we weren't looking, one of our neighbors both recognized him and knew us, so they called to tell us they had brought him in and we could pick him up whenever we wanted.

One of the families on our street has their grandmother living with them now. She is suffering from some dementia, like the woman in the article, but everyone on the street knows her and, I think, keeps an eye out for her.

All from being neighborly and just walking around and talking to people.

2
kstenerud 9 hours ago 8 replies      
The more I read about this, the more weird I think I must be. I've always preferred being alone. My time in San Francisco was utter hell. I couldn't get a moment away from people and noise. Now I live in complete isolation. My nearest neighbor is two miles down a dirt road. I only go into town to stock up on foods, which I vacuum seal and stuff in the deep freeze (enough to last me 3 months at a time). I've never been happier than I am now.
3
protomyth 10 hours ago 4 replies      
One institution not mentioned (and given the publication, I find that a bit odd) is the local church. It is one of the large connection points that have started to fade. The amount of social welfare the churches supported has not been replaced.
4
some_hero-_- 8 hours ago 4 replies      
I was a semi-introvert/extrovert guy, but after working for one startup for more than 3 years in an isolated environment, I feel like I am fully introverted. There is no one with whom I can express my feelings openly. I don't have friends to hang out with, and the most depressing thing is that people my age have all of this. What's more awkward is that I can't even approach people anymore; it seems like I've forgotten how to interact.

The other reason might be that I was afraid of looking like a loser. Most people consider me quite successful, and showing such weakness in front of society is not acceptable.

I have a lot of ideas in my mind but I don't have the energy to execute. My bank balance is good but it doesn't bring happiness. I don't have time on my hands. Sometimes I feel suicidal, but I am a strong man.

I don't recommend anyone start a startup unless he is socially fit and not in his early 20s.

5
firebender6 2 hours ago 0 replies      
Where I currently live (Bangalore, India), the community includes a large number of young people who've migrated from rural to urban areas for work or studies. A lot of us are also first-generation graduates with financial dependents.

In the tech industry (where I work), the current work culture has made sustaining a circle of friends (a support system) outside of work really hard; even more so when they live in different cities.

I personally see co-workers & friends who struggle to cope with stress at work, staying away from home/family, and trying to sustain a relationship. The sad part is that it rarely gets spoken about. In our case, time not invested in relationships during our 20s (when one is also building a career) is a major reason for loneliness as people get older. Nothing we have done so far has equipped us to handle it. An over-competitive job market only makes things worse. Prioritizing family/friends often gets interpreted as not being motivated/hardworking enough and carries huge penalties (direct and indirect).

I really hope corporates begin to see the importance of healthy & happy workers not just on paper, but in practice as well.

6
dominostars 6 hours ago 0 replies      
Having community, closeness, and realness is so important for our growth but so lacking in our environments. The only way most people know to find community is through partnership (which has its obvious flaws, e.g. divorce rates) or in the workplace, which often lacks meaningful connection. Family structures are weak, and the communities formerly facilitated by religions are not being supplemented in our new, increasingly secular world.

The internet can provide some amount of valuable community and support, but it's limited. Words in text can only convey so much. You can't be held. You can't shout and scream and be irrational. It's easier to hide than confront difficult emotions.

Therapy is so valuable, and its stigmatization hurts us all, but it's also limited. It's just one person, with one set of beliefs. They aren't there with you in the heat of a difficult situation. And honestly, there's something so gross about having to pay hundreds of dollars just to find someone to be real with.

7
sgt101 5 hours ago 0 replies      
I wonder if the story about H1B visas and older americans being sacked provides a hint of a back story here. It seems to me that social solidarity impedes immediate profits; things like healthcare and pensions are very awkward for folks who want to smash open the value locked up in corporations for redeployment into yachts and sports cars. The consequences have been out of the public eye till now, but as the boomer bulge starts being the focus of this, and as boomers discover that they are working till they are 80 and retiring poor maybe things will change. How to change things - well that's going to be one of the challenges of the future.
8
sbardle 6 hours ago 0 replies      
I've been involved in a number of "community building" initiatives. What I've learned is that it is easy to start the community, but soon enough people start getting political and the infighting begins - it often only takes one or two people to ruin it for everybody else.
9
tluyben2 3 hours ago 0 replies      
When I lived in different cities, people would find it extremely weird when I talked to them or said hi on the street; guys would think I wanted something from them and girls would think I wanted them. With many people around me constantly, it was a very lonely experience; it was hard to get to know people at all, let alone make friends. (I have a weird/rare first name, and one of the bar guys at our local (popular) cafe, which we went to every Friday, had the same first name; after 2 years he didn't know what I always ordered or who I was.) I started to believe the statement people always put forward: you don't make good new friends after 30. I had colleagues at the office, including some of my long-term high school / uni friends, but there was no point in living in an (expensive) city for that.

We packed up and left (it was a decision made on a drunk night when, once more, we found ourselves sitting in a Kirchner painting of a place where no one looks at or interacts with each other) for a tiny place deep in the mountains of another country (we moved from the Netherlands to Spain). Everyone talks to everyone here; a few years in, we have not only made new friends but even people we would call new best friends. Our closest neighbour is a 5-minute walk away and the closest bar 10 minutes, so if we don't want to see anyone, we can avoid it, and if we want a party with a mix of friends and strangers (at least 2 times a week, and in the summer 4 times a week), we can have that too.

Because this is a place tourists usually avoid (it is high up in the mountains and most people really don't like that; they want a beach and some blue drink with a parasol), the people who do want to live here (and weren't born here) have the same kind of attitude. Great place to work with a laptop in the sun.

10
emmelaich 1 hour ago 0 replies      
Definitely Nordic. I think I'm Nordic though I was born and live in Australia :-)

http://virtualwayfarer.com/nordic-conversations-are-differen...

(oops - a finger spasm made me move this from a reply to top level and lost me 3 karma points!)

11
morgante 1 hour ago 0 replies      
I'm not sure the root causes identified in this article are accurate. I think loneliness transcends poverty and in many cases can be aggravated by wealth: many of the poorer people I know have to depend on social connections for things which I can buy.

Also, this comment seems really off:

> The homeless are, almost by definition, alone.

In my understanding, many homeless people frequently communicate with each other.

12
jokoon 3 hours ago 0 replies      
I'm so angry about that issue. Long post ahead.

I remember an experience of mine. At 19, I wanted to flee my dad's home, in a suburb near central Paris, because my stepmother was emotionally harassing me; she was very intolerant of my presence at home and it was generating a great deal of anxiety in me.

Before taking off, I called my dad, and he convinced me to rent a small apartment 20 min from home, which he paid for. I went to school, still the same isolated and nerdy teen. In the 3rd year, as I was failing to get a degree and diving into depression, I did not look for a school program (getting registered in higher education in France is very weird), so I stayed in this small apartment, ALL EXPENSES PAID! I just let myself be abandoned. I stayed on my computer all day long, going to bed 2h later than the day before, so at some point I was getting into bed at 9am and waking up at 5pm.

Only my grandparents came in to help me do some shopping (I never had a driving license).

During that 3rd year, the 2007 crash came, and my father sent me to my mother (who has been chronically unemployed in a small remote town) because he could not pay the rent anymore.

My mother really helped me, because I had social contact with her and my sister, and it helped a lot. I found a girlfriend, but we're fighting, and for the life of me I just can't contemplate the thought of being alone again; it terrifies me. So essentially I'm in that "friend zone", but I don't complain and we are still good friends.

My thoughts on loneliness? I think modern society, with its individualism, has compartmentalized personal space to such a degree that people don't have ANY opportunity to talk to people. We all have our own personal:

* washing machine
* fridge
* oven
* kitchen
* toilet
* bathroom
* tableware
* terrace

Individualism means that compartmentalization of ownership allows for better management of resources. The REALITY is that we don't share ANYTHING. We actually live in our own prison cell. This is fine if you have a job that implies social interaction, but for those who don't have a job, or have a job that doesn't involve social interaction, you will lose your social skills pretty quickly. And that's not a myth, that's what psychiatry says.

My cynicism tells me that despite the fact that modern democracies have a higher standard of living for all sorts of reasons, I can really envision a disintegration of that modern society if nothing gets done. I can really envision why much poorer countries can thrive and surpass wealthier countries for that reason only: because ultimately, if social atrophy increases, it doesn't predict good things...

13
ojbyrne 9 hours ago 1 reply      
I grew up in Canada and now live in the US, and the first thing I thought of after reading this is: there are vastly fewer neighborhood bars in Canada. Strict drunk driving laws (obviously more important) and cold winters seem to have led to the only viable bars being large clubs or in downtown cores where there are tourists.
14
roansh 7 hours ago 1 reply      
As far as I understand, according to the article, loneliness in itself is not a bad thing for health, but rather what it brings with it: the possibility of depression, possible risk of suicide, and inactivity, hence the possibility of obesity and diabetes. I wonder how it affects the people who genuinely prefer being alone, and like it!
15
aaron695 7 hours ago 5 replies      
> When she couldn't find her way back home, confused and scared, she screamed for help. She banged on people's doors and tried to claw her way into vehicles, setting off car alarms.

> Neither offered to help, nor called 911; they pulled the curtains shut and went back to bed.

This is an insult to my intelligence. This sort of emotive bullshit is both a lie (this is not what happened) and totally unconstructive.

There is a serious topic around this issue of care for people above food and shelter, it's just a shame this article is so piss poorly written.

16
advertising 6 hours ago 1 reply      
In the States I found people will look you in the eye when passing by, and some will say hello. Most will respond if you say hello or ask how they are.

When I'm in Mexico it's almost rude not to greet someone with a customary "buenos días/tardes/noches".

In Japan, people will rarely look you in the eye, period.

17
phantom_oracle 9 hours ago 1 reply      
And unlike solving the problem of horizontal scaling, a mathematical equation, or something with a clear solution, there is no clear, Silicon Valley-style cure to this problem.

You can't exactly Facebookify people who are lonely.

Also, AFAIR(r=recall), Japan has a similar issue of loneliness.

Makes you wonder though. Even if you cure the economic (poverty) and health issues (living into your old age), something else will probably get you (depression, loneliness, boredom, etc.)

18
branchless 10 hours ago 4 replies      
I don't know Canada well enough yet, but I can tell you what life used to be like in the UK in my mum's time as a kid, around 55 years back in the North East of England. All families lived near each other and you lived near your school. You'd walk home past your nan's house and stop in there, or at your uncle's. Maybe when you got home your mum was out at the shops - OK, go to your nan's a few yards away. Or just go to your neighbour; she knows you and, should your mum be delayed long, will feed you.

Then people decided that housing was an asset, an investment. The people that decided this were in finance, they made the credit available which sets the price. Access to local housing was limited. You had to go and get the best paying job you could just to compete in the "market". This meant moving away.

There were other reasons too like the destruction of local industry (forced out in part as land prices rose making them uncompetitive vs countries without efficient rent extraction). Plus brain drain into rentier activity in the capital. But for me land prices are at the heart of destroying communities.

I don't see community returning until we expel the usurers from the temple. Why is living on a piece of land so damn expensive? We don't have time for each other any more as we work insane hours. Both husband and wife have to work, leaving no time for other work in the community like stopping in to see a neighbour. All these tasks that were unpaid are now "on the clock" and the banks get a cut via renting money via debt creation for land.

My mum didn't have loads of libraries or "urban design" somehow coaxing people to interact. What that street did have is that they were not living in a time where every single person was giving a big wedge of their wages to the banks, thereby forcing them all to work all the time.

19
sidcool 8 hours ago 0 replies      
Loneliness is a curse of some cultures.
20
markdown 3 hours ago 0 replies      
OT: No part of this article is visible above the fold.

http://i.imgur.com/h12ea4h.jpg

21
ashitlerferad 7 hours ago 1 reply      
People are assholes :(
22
petecox 9 hours ago 1 reply      
Where do they all come from?
9
Building a SuperH-compatible CPU from scratch [video] youtube.com
79 points by jsnell  12 hours ago   15 comments top 6
1
wyldfire 10 hours ago 1 reply      
Cliffs notes:

- http://j-core.org/ ("Webpage was written by developer who knows <p> tags ... If anyone would like to donate a stylesheet, that would be welcome ..."), git repo coming soon

- Corporation is https://twitter.com/seinstruments

- goals were: not to license any IP, in order to have truly open source SoC that you can trust, minimum impl that can run linux, open source toolchain

- minimal impl supports 32-bit addressing, no MMU, DRAM controller, UART

- named "jcore" to avoid infringing on Hitachi's brand

- target market includes IoT, (AC) power-quality monitors

- in order to avoid the well-encumbered semiconductor space, the strategy is to duplicate an arch whose patent has just expired (Hitachi SH2, SH4)

- VHDL is BSD-licensed

- The cheapest dev board (3rd party, minimal FPGA dev board with little/no I/O) is only $50 + shipping from India

- discussion about breakeven for fab for ICs

- How is this distinct from OpenRISC? Automated transformation from high-level design to low level design outputs

- Chuckles from audience when simulator is revealed to have been written in Ada

- jcore design doesn't fit in the biggest iCE40 FPGA (Lattice)

- "We've got linux booting to a shell prompt on real hardware so you can start with a known working environment and then break it."

- SH-FDPIC is working w/musl

2
PinguTS 6 hours ago 2 replies      
They built an open source CPU to run an open source OS, namely Linux, to make IoT applications possible. Okay.

But then they mention in the beginning that with Linux you need a minimum of 8 MiB of RAM, because "less is awkward".

To be honest, 8 MiB is a lot for embedded, for IoT. Normally we deal with way less than 1 MiB. Especially in IoT I need low power and fewer components. A state-of-the-art Cortex-M0 has somewhere around 256 KiB of RAM. With that they basically prove that Linux is not suited for IoT. Is that the point they want to make? Really?

3
sspiff 4 hours ago 0 replies      
Ah, I have fond memories of running NetBSD and Linux (JLime) on SH3 handhelds. SH3/4 represents a meaningful step forward, hope they are able to implement it soon-ish:

> The sh4 processor (dreamcast) has an mmu, but the last sh4 patents don't expire until 2016.

4
ashitlerferad 9 hours ago 0 replies      
Looking forward to the day when they reach SH4 level so the Debian SH4 port can be run on it:

https://wiki.debian.org/SH4

5
alain94040 10 hours ago 2 replies      
How old is this video? What is described would have barely been state of the art in the late 90s: a 5-stage pipeline written in VHDL. This is now on every student's resume, as a project for their computer architecture class.

Their proposed "two-process" coding methodology for VHDL is weird. They pretend their entire CPU core is written in a total of 3 VHDL processes.

I guess their only argument is that they target a 180 nm process (from 1991), which costs only $25K.

6
justin66 7 hours ago 0 replies      
"Can I walk them through the numbers we ran?"

"No."

These guys are a well oiled presentation machine.

11
Show HN: Virtualenv-mv Move (rename) Python virtualenvs github.com
30 points by brbsix  9 hours ago   23 comments top 4
1
has2k1 8 hours ago 3 replies      
There is https://github.com/berdario/pew which provides a better wrapper around virtual environments compared to virtualenvwrapper.

It also supports renaming.
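For background on why a rename tool is needed at all: the console scripts a virtualenv generates hard-code the environment's absolute interpreter path in their shebang lines, so a plain `mv` leaves them pointing at the old location. A minimal sketch of the kind of rewriting such a tool must do (a hypothetical helper, not pew's or virtualenv-mv's actual code):

```python
def retarget_shebang(script_text, old_env, new_env):
    """Rewrite a script's shebang if it points into the old env path."""
    lines = script_text.splitlines(keepends=True)
    if lines and lines[0].startswith("#!" + old_env):
        lines[0] = "#!" + new_env + lines[0][len("#!" + old_env):]
    return "".join(lines)

script = "#!/home/me/envs/foo/bin/python\nimport sys\nprint(sys.argv)\n"
moved = retarget_shebang(script, "/home/me/envs/foo", "/home/me/envs/bar")
# moved now begins with "#!/home/me/envs/bar/bin/python"
```

Real tools also have to fix `bin/activate` (which exports `VIRTUAL_ENV`) and any files containing absolute paths, which is why a naive rename breaks things.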

2
andy_ppp 4 hours ago 1 reply      
This should be called virtualenv-rename, right? Otherwise, what does virtualenv --relocatable do?

Python/Java using the file system to map hierarchies of classes is such a mistake. When you start using a language like Elixir, you can move any .ex files anywhere you like within your project and it just works.

3
bonobo3000 8 hours ago 0 replies      
This is cool. I've been burned by trying to move a virtualenv before and discovering everything broke. Is it possible to merge this into virtualenv? I think it would be much easier to use that way.
4
RodericDay 8 hours ago 1 reply      
Why is this necessary? Shouldn't you just delete it and recreate it?

Installation via pip ought to take advantage of wheels and such, no?

12
The Squirrel Programming Language squirrel-lang.org
46 points by cia48621793  10 hours ago   33 comments top 9
1
iumtuip2001 7 hours ago 12 replies      
I appear to be a dying breed. A young (< 30 years old) programmer, self-taught, who loves classic, "mainstream" languages like C, C++, Java...

Why?

Types.

I cannot stand the road that new languages are on: abandoning the type system, or making it optional, or not explicit. Given that I appear to be a minority, I thank you for reading my rant... Have a good day :(

2
dalbin 1 hour ago 0 replies      
This language is used in Electric Imp, an "IoT" oriented ARM Controller with WiFi, used in Wink products and Lockitron : https://electricimp.com/docs/squirrel/squirrelcrib/
3
mingodad 4 hours ago 2 replies      
I did a fork of Squirrel -> SquiLu https://github.com/mingodad/squilu because I found the syntax more familiar, and it's also made in C++, which makes creating extensions easier than in plain C. Then I started adding more and more C/C++-like syntax options, with the idea of having a way to develop fast prototypes that could be migrated to C/C++ when performance was not good enough. The idea is to parse and accept C/C++ syntax without actually enforcing it at the scripting level (accept/discard/warn), and to be able to use a real C++ compiler to do the heavy work.

Example:

  class Klass //struct Klass
  {
      int_t i;
      //constructor()
      Klass() { print("Klass constructor", i); }
      ~Klass() { print("Klass destructor"); }
  }

  Klass k = new Klass();

  typedef int_t int;

  int test10(int x)
  {
      dente:
      if(x > 10) {
          goto dente;
          goto done;
          return true;
      }
      done:
      return false;
  }
  print(test10(23));

  output:
  test-class-constructor.nut:22:6 warning labels are only parsed right now
  test-class-constructor.nut:25:7 warning goto is only parsed right now
  test-class-constructor.nut:26:7 warning goto is only parsed right now
  test-class-constructor.nut:29:5 warning labels are only parsed right now
  Klass constructor(null : 0x(nil))
  true

4
dang 8 hours ago 0 replies      
5
neukoelln 35 minutes ago 0 replies      
I think Ron Gilbert uses Squirrel in his upcoming game Thimbleweed Park.
6
ciroduran 3 hours ago 0 replies      
OpenTTD (http://www.openttd.org/en/) is a Transport Tycoon clone that allows you to write an AI player for the game. The AI player is written in Squirrel.
7
panic 9 hours ago 2 replies      
How does this compare to Lua? It seems to be going after a very similar market.
9
pesha 5 hours ago 1 reply      
I had to do one project in Squirrel when I was in university and I have to say the experience was terrible.
13
The Origins of SageMath; I am leaving academia to build a company [pdf] wstein.org
374 points by acidflask  20 hours ago   169 comments top 41
1
ddumas 17 hours ago 3 replies      
I'm glad William included slide 10 calling attention to the hostile and insulting attitude Wolfram Research has toward mathematicians and reproducible science in general. (I think some of Sage Math Inc's other closed-source competitors likely have similar attitudes, but Wolfram Research seems to be the worst.)

"You should realize at the outset that while knowing about the internals of Mathematica may be of intellectual interest, it is usually much less important in practice than you might at first suppose. Indeed, in almost all practical uses of Mathematica, issues about how Mathematica works inside turn out to be largely irrelevant. Particularly in more advanced applications of Mathematica, it may sometimes seem worthwhile to try to analyze internal algorithms in order to predict which way of doing a given computation will be the most efficient. But most often the analyses will not be worthwhile. For the internals of Mathematica are quite complicated."

Reference: http://reference.wolfram.com/language/tutorial/WhyYouDoNotUs...

For comparison, if you want to audit the Sage Math algorithms that your research depends upon, all you need to do is fire up a text editor (or browse their github). And you won't find any statement in the Sage Math docs telling you not to bother because you're too dumb to understand what you're reading anyway.

2
dharmon 19 hours ago 3 replies      
This story reminds me of Prof. Tim Davis of Texas A&M, formerly Florida, who I heard had a hard time getting tenure, after making the software and mathematical world a much better place.

He (and his group) developed CHOLMOD and UMFPACK and other sparse solvers used everywhere. Basically, when you type A/b in Matlab, it calls his code.

It was an incredibly challenging task going from sort-of/kind-of being able to solve linear systems to where we are today. Hardly anybody thinks about it. Again, you just type A/b, even when A is poorly conditioned. You can write a crappy solver in less than 100 lines of code, but if you read his papers, building a rock-solid solver was a very difficult task.

Unfortunately this kind of work is important, but pretty thankless.

3
cs702 17 hours ago 3 replies      
It's SHAMEFUL that academics like Stein who dedicate their lives to developing amazing open-source software do not get funding and frequently fail to get tenure. These individuals truly are making the world better!
4
hypeibole 20 hours ago 2 replies      
For other people confused about the meaning of BP in this context, it means Benjamin Peirce Fellow.

It appears this talk was given in the context of the Benjamin Peirce Centennial Conference:

http://www.math.harvard.edu/conferences/bp/

5
bluenose69 19 hours ago 1 reply      
Setting up a company can help because it will let other researchers support the project by buying the software on research grants. A grant can also pay for consultant-style improvements to software. It doesn't really matter if the software is also being given away for free. The important thing is to have an invoice to give to the university financial services staff.

I'd prefer it if the granting agencies supported this sort of software infrastructure directly, but, lacking that, a company is a way to hire people to tackle some of the weaknesses of Sage, whether they be in its core functionality or its UI.

6
maweki 19 hours ago 2 replies      
The problem with Sage, while it's an amazing piece of software, is that it is abysmally documented. As is Maxima. Sure, there's an example of how to do integration, and anything at a high-school level, but every time I wanted to do something a little more complex, I was completely lost. I found the Mathematica documentation to be miles ahead.
7
doug1001 3 hours ago 0 replies      
Mr Stein released the first version of SageMath some time ago--not sure how long but maybe 2007.

Among other things, SageMath reconciled the confusing namespace soup that is scientific python (numpy, scipy, and matplotlib--three brilliant libraries with partial overlap in functionality and in package names), gathering them (along with other libraries like SymPy) under a single rubric, 'SageMath'--one (large) install and you have all of scientific python.

SageMath also included a notebook.

seems not such a big deal now with Anaconda and Jupyter notebook, but in 2007 it certainly was.

8
noobermin 16 hours ago 0 replies      
I hate to be that guy, but this needs to be said. The guy on page 15 was right. Sage should have at least put a little more effort into their applications, because if it did, it would be much more popular and thus more developed.

Back in 2011 or so, as a young undergrad, I latched onto Sage and used it for an undergrad research project. My BS was from a tiny university (one year, I was the only physics major in the school), and I tried to turn all my friends and profs onto Sage. Being small meant there was no dept. standard, so I tried to impress it on the dept. (3 people really), but they stuck with Mathematica because Sage didn't even have an easy-to-use ODE solver! For Pete's sake... I understand that Sage is a niche project for the math community, but if that's the case, that's the only place you'll find funding and devs from.

This is often said here amongst the startup nerds: make sure you have an audience willing to pay. Hey, many of us in the "more applied community" would love to have a FOSS tool that rivals mathematica, we exist! But it needs to do things well, or at least well enough that in linear combination with the fact that it is open source, the overall goodness vector for the project's value has a timelike norm. Then, we'd clamor for it, you get downloads, and one day, the funders will go, "hey, that's good shit right there, I better be a part of it!"

They don't need to do things for others, or for others' interest. But then, no one should be surprised when such efforts don't get funding. I mean, doing something niche implies that less people will be interested which implies that less people will fund it, right? It's almost a direct consequence of choosing to serve a niche.

9
gtycomb 16 hours ago 0 replies      
Prof. Stein's earlier post that had appeared here on HN (http://wstein.org/mathsoftbio/history.pdf) is enormously absorbing reading also. It has much more to tell about the significant challenges that stand in the way of developing high quality open source math software.
10
williamstein 9 hours ago 0 replies      
https://youtu.be/6eIoYMB_0Xc is a screencast from the actual talk, including many questions at the end.
11
phamilton 19 hours ago 1 reply      
William and I both presented at a RethinkDB meetup this last fall (SageMathCloud leverages RethinkDB changefeeds in awesome ways) and I got to talk to him a bit about some of these frustrations. It really is a rough spot to be in and I wish him all the best.
12
FLengyel 5 hours ago 0 replies      
All too familiar. I am reminded why I stopped writing software for academic research. It's considered low-academic-value work, on a par with system administration or I.T. helpdesk work.
13
fsloth 19 hours ago 2 replies      
Does make sense - academia is about theory; businesses are part of implementing end-user solutions. Most of academia runs on a tech stack delivered by commercial entities anyway. Building products is not as much about creativity as delivering a fixed product with a service plan and support chain in addition to product development. Companies have various operations to create full-fledged products - academia can supply only the R&D part. And this is a good division of labour, IMO.
14
shiven 13 hours ago 0 replies      
Good luck and Godspeed!

Academia totally sucks in this deeply ingrained ivory-tower mindset that metes out rewards/tenure/grants based on outdated performance metrics.

(Excuse the vitriol, former academic here.)

15
ssivark 16 hours ago 1 reply      
If you're reading this @WilliamStein, firstly, kudos for the whole effort. I think providing Sage as cloud math software is a powerful strategy, especially with the trend of end users preferring low-power devices.

I have some thoughts regarding the comment on Slide 15 (making Sage good for some applications): I see that Python and Jupyter are very popular for machine learning and allied computations. Can Sage leverage this to provide a service that a large audience would happily pay for -- and then use that to bootstrap a full-fledged mathematical software suite? (Also, on that note, is there any coherence between the leaders of Sage and Jupyter?)

16
TJSomething 17 hours ago 0 replies      
I like the idea of SageMathCloud. I took a numerical methods class where we used a web-based Python math environment that the professor was having his grad students build. It was pretty buggy and would go down sometimes.

Last I heard, they switched back to MATLAB. Having taught MATLAB, I wouldn't wish that on anyone. But if SageMathCloud had been around, it would have been a good option.

17
Ericson2314 6 hours ago 0 replies      
This is a prime example of the tragedy of the commons in FOSS. People wonder what academic or corporate incentives need to change, but I'd argue that in many cases this work lies between research and business, and that's OK. I think the closest physical-world analogue is the civil engineering of public works, and we'd be wise to make something of this (in the US both are perhaps underfunded, hmmm).
18
mungoid 17 hours ago 1 reply      
This is why I love SageMath, and open source in general. I cloned the repo a few weeks ago after hearing him mention how he is mainly the sole developer. It's a huge undertaking and I know how draining something like that can be for motivation.

I may not have time to do a lot, but I am gonna join in and help as much as possible. Documentation, bug fixes, whatever. This project deserves it, IMO.

19
rodionos 2 hours ago 0 replies      
Not being familiar with the software itself: how would starting a company, which is just a legal interface around the effort, advance the effort? Several comments here indicated that documentation was an inhibitor. Would the company structure fix that?
20
qrv3w 17 hours ago 1 reply      
SageMathCloud is amazing: https://cloud.sagemath.com/, for those that want to try it out. Lots of neat bonus features like Python notebooks, LaTeX support, and even terminal access. The free tier works great for most applications.
21
jordigh 16 hours ago 1 reply      
Will, I have been looking for ways to build a company around GNU Octave too. If you find a way to make money, please let me know. Matlab is one of the Ma's that needs a direct replacement. There's so much free Matlab code out there that needs a free environment to run on.
22
emmelaich 11 hours ago 1 reply      
Good luck to William.

I have another perspective though; I knew Allan Steel (Magma guy) as an undergrad at Sydney University. He is an extraordinarily smart person and humble and genial as well.

Everyone should be thankful to him and the University of Sydney for having the wisdom to fund the development of Magma.

23
chestervonwinch 19 hours ago 2 replies      
What is the difference between running Sage Math vs IPython/Jupyter and importing all the relevant mathematical packages yourself?
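One concrete difference, as I understand it (beyond the curated bundle of compatible package versions), is that Sage runs a preparser over your input before Python evaluates it: `^` means exponentiation rather than XOR, and integer literals become Sage's arbitrary-precision Integers. A toy illustration of that rewriting step (simplified, not Sage's actual preparser):

```python
import re

def toy_preparse(line):
    # '^' is power in Sage, not bitwise XOR as in plain Python
    line = line.replace("^", "**")
    # wrap bare integer literals so arithmetic uses Sage's Integer type
    return re.sub(r"\b(\d+)\b", r"Integer(\1)", line)

print(toy_preparse("2^3 + 1"))  # Integer(2)**Integer(3) + Integer(1)
```

In a plain Jupyter kernel, `2^3` evaluates to 1 (XOR); after Sage-style preparsing the same input means 8.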
24
infinity0 14 hours ago 2 replies      
I hope they improve their software development practises. Packaging SageMath for Debian has been impossible; they use about 100 dependencies and have re-invented their own internal package manager to build all of these with sage-specific patches.
25
macawfish 18 hours ago 0 replies      
I love Sage! I used it to find this fractal: https://vimeo.com/155587929
26
keithpeter 18 hours ago 1 reply      
So is the business model likely to be similar to that of RStudio where organisations can buy an annual 'commercial licence' to support the project (in addition to the GPLed version being freely available as mentioned below)?
27
et2o 19 hours ago 2 replies      
Why did he have to leave academia to start a company if he has tenure?
28
rfurmani 5 hours ago 0 replies      
29
crb002 19 hours ago 0 replies      
Congrats. I have mixed feelings about Sage. I regularly shell out to Python for many of the underlying libraries, but I rarely use the Sage GUI.
30
aerioux 19 hours ago 3 replies      
Question: why would anyone pick Sage over Magma (even though it's now a company), i.e., since there already is a clear forerunner that's ahead?
31
bradlys 15 hours ago 0 replies      
I hope it goes well, William. It was nice to take a couple classes from you at UW in 2014.
32
davesque 17 hours ago 2 replies      
Slightly off-topic: Does anyone know if Sage allows for lisp-style meta-programming like Mathematica?
33
sgt101 18 hours ago 2 replies      
How does the trajectory of SageMath compare to Julia?
34
amluto 8 hours ago 0 replies      
I wonder if a simplified variant of SageMath / SageMathCloud would make sense as a Sandstorm app. IPython on Sandstorm is a generally excellent experience, and the sage notebook is a similar concept.
35
ianai 20 hours ago 1 reply      
Just make sure to add in some DoD specific modules and I think you've got a winner.
36
arghbleargh 19 hours ago 1 reply      
Wait, so will Sage still be open source?
37
penglish1 10 hours ago 0 replies      
Best of luck! Sage is fantastic, and the world needs it.
38
boulos 16 hours ago 1 reply      
Glad to see you take the leap! (I launched preemptible VMs at Google and have admired your work from afar)
39
arca_vorago 12 hours ago 1 reply      
I'm not a math guy, but I am a GNU guy who wants to learn more, and I'm curious what all the math geeks think about GNU Octave. How does it compare to Matlab and Mathematica? Any quirks or catches that make it unusable for certain situations? Stuff like that...
40
master_yoda_1 17 hours ago 0 replies      
Thanks for doing this.
41
RhysU 17 hours ago 0 replies      
Jack left?
14
Return of the Obra Dinn Development Log tigsource.com
41 points by mercer  15 hours ago   4 comments top 3
1
deepnet 6 minutes ago 0 replies      
When a lurking mathematician, Brent Werness, invents a sweet new type of detail-preserving dithering for Obra Dinn as a devlog comment.

https://forums.tigsource.com/index.php?topic=40832.msg121280...

2
sp332 11 hours ago 1 reply      
I know it's in giant letters across the top, but because of "banner blindness" I missed the link to the preview build: https://forums.tigsource.com/index.php?topic=40832.msg123810... It's short but it gives you a good idea of the main mechanic and the aesthetic. Check out the settings menu to switch between multiple ways of rendering the 1-bit graphics.
3
sunkencity 6 hours ago 0 replies      
big fan. just love this style / mac user since 1986.
15
The Computational Engine of Economic Development medium.com
32 points by avyfain  14 hours ago   11 comments top 5
1
JackFr 6 hours ago 1 reply      
I realize it's not a technical publication, but this piece is really hand-wavy.

Yes, a modern economy consumes energy and reduces the local entropy of atoms through agriculture, transportation and manufacturing. This can be considered computation.

But then: "Late in the last decade, I developed a mathematical technique that can be used to characterize an economy's ability to produce products. This measure of economic complexity, which makes use of information about the diversity of countries and the ubiquity of products, explains a substantial fraction of a country's level of income, but it also explains future economic growth."

There is no substantial explanation of this measure and how it is constructed. If it can't be stated simply, one suspects it is subject to manipulation and that its predictive power might just be an artifact of its construction.
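For readers wanting the missing detail: the measure the author alludes to is, as far as I can tell, the economic complexity index of Hidalgo and Hausmann, whose core is the "method of reflections". Start from each country's diversity and each product's ubiquity, then iterate, replacing each country's score with the mean score of its products and vice versa. A rough sketch with toy data and simplified normalization (not the authors' actual code):

```python
import numpy as np

# M[c, p] = 1 if country c exports product p competitively (toy data)
M = np.array([
    [1, 1, 1, 0],   # a diversified country
    [1, 1, 0, 0],
    [1, 0, 0, 1],
], dtype=float)

def method_of_reflections(M, iterations=4):
    kc = M.sum(axis=1)   # k_c,0: diversity of each country
    kp = M.sum(axis=0)   # k_p,0: ubiquity of each product
    for _ in range(iterations):
        # each country averages its products' scores, and vice versa
        kc, kp = (M @ kp) / M.sum(axis=1), (M.T @ kc) / M.sum(axis=0)
    return kc, kp

kc, kp = method_of_reflections(M)
```

After a few iterations, countries exporting rare, widely-spread products separate from countries exporting only ubiquitous ones, which is the intuition behind the complexity ranking.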

2
jamespitts 10 hours ago 1 reply      
I appreciate this perspective on understanding national economies. After the big meltdown in 2008-9 I started thinking that major economic events could also be seen as information processing events.

Some Tofflerian quotes from the article:

"under the hood, products are made of orderor information"

"the actions we use to make products are acts of computation"

"just as objects are a more fundamental form of economic value than currency, the ability to create objects is a more fundamental form of economic value than the objects themselves"

3
2sk21 3 hours ago 1 reply      
Related to this: are there any good approaches to economic simulations of economies at the individual and organizational level? In other words, literally simulate groups of people and the economic organizations that they belong to.
4
amelius 2 hours ago 0 replies      
This rephrases why education is important.

Also, they fail to explain that Europe has lots of computational power in IT, while the real monetization seems to be happening mostly in the US.

5
darawk 11 hours ago 3 replies      
This is very interesting indeed. But I wonder...why is Chile importing so many centrifuges from South Korea? 1.9% of their exports?
16
Cryogenically frozen RAM bypasses disk encryption methods (2008) zdnet.com
44 points by andreyvit  15 hours ago   24 comments top 13
1
moyix 9 hours ago 1 reply      
There has been some more recent work on this lately:

https://www.dfrws.org/2016eu/proceedings/DFRWS-EU-2016-7.pdf

Essentially, with newer RAM (DDR3), the location things end up on the physical chip is scrambled to improve reliability:

> Storage of bit streams which are strongly biased towards zero or one can lead to a multitude of practical problems: Modification of data within such a biased bit stream can lead to comparatively high peak currents when bits are toggled. These current spikes cause problems in electronic systems such as stronger electromagnetic emission and decreased reliability. In contrast, when streams without DC-bias are used, the current when working with those storage semiconductors is, on average, half of the expected maximum.

So once you image the RAM you have to figure out the scrambling and undo it.

Related: https://github.com/IAIK/DRAMA
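A toy model of what "undoing the scrambling" involves, assuming (as described for some controllers) that data is XORed with a pseudorandom stream seeded by the address. The LFSR taps and seeding scheme below are arbitrary illustrations, not any real memory controller's scrambler:

```python
def keystream(seed, nbytes):
    """16-bit Fibonacci LFSR; taps are the textbook maximal-length set."""
    state = (seed & 0xFFFF) or 1   # avoid the all-zero lockup state
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = ((state >> 1) | (bit << 15)) & 0xFFFF
            byte = (byte << 1) | (state & 1)
        out.append(byte)
    return bytes(out)

def scramble(data, row_addr):
    # controller XORs the data burst with an address-seeded stream
    ks = keystream(row_addr, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

descramble = scramble   # XOR with the same stream inverts it
```

The forensic problem is that the analyst has to recover the stream generator and its seeding from the chip before `descramble` is even possible; once known, recovery is a pure XOR.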

2
Canada 5 hours ago 1 reply      
I heard that when power is interrupted ACPI still has time to inform the system, and not only that, the CPU will continue to execute many, many instructions before it's finally deprived of power. The computer seems to turn off instantly to us, but at the time scale the CPU operates at it's actually quite a while. I heard this was enough time for an operating system to detect power failure and zero out megabytes of memory.

Anyone know if this is true or not?
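A back-of-envelope check on the "zero megabytes of memory" claim. Both numbers below are assumptions (the ATX spec's minimum hold-up time is commonly cited as 16 ms, and sustained write bandwidth is taken as roughly 10 GB/s), so treat this as a sketch, not a measurement:

```python
psu_holdup_s = 0.016        # assumed: ~16 ms of residual PSU hold-up
memset_bw_bytes_s = 10e9    # assumed: ~10 GB/s sustained write bandwidth

to_wipe = 8 * 1024 * 1024   # say, 8 MiB of key schedules and buffers
wipe_time_s = to_wipe / memset_bw_bytes_s

print(f"{wipe_time_s * 1e3:.2f} ms")  # prints 0.84 ms, well inside the window
```

So in principle there is time to wipe key material, but only if the OS actually receives a power-fail signal; a battery pull or a cut trace gives no such warning, which is the scenario the cold boot paper targets.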

3
lunixbochs 12 hours ago 1 reply      
https://en.wikipedia.org/wiki/TRESOR

OS X has a setting called "destroy FileVault key on standby" in `pmset` which mitigates cold boot attacks.

I kinda want the CPU/MMU to support loading encryption keys to transparently encrypt some or all of RAM (could also toss in error checking while we're at it). SGX has this in the trusted containers, but I think it makes sense for general use too.

4
amelius 26 minutes ago 0 replies      
I have the feeling this could be trivially solved by adding reset lines to the RAM design, and triggering them on shutdown (perhaps powered by some capacitor).
6
aaron695 9 hours ago 1 reply      
That's nice first.... first time I saw it.

Any evidence of it in the wild in the past 8 years, like, you know, actually used once?

7
imjustsaying 3 hours ago 0 replies      
So why don't OS's just zero out the RAM as part of the normal poweroff cycle now?
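Whatever the OS does at poweroff, applications can shrink the exposure window themselves by keeping key material in a mutable buffer and wiping it the moment it is no longer needed. A minimal Python sketch of that discipline (with the Python-specific caveat that immutable `bytes`/`str` copies can never be wiped):

```python
import ctypes

def wipe(buf: bytearray) -> None:
    """Overwrite a mutable buffer in place so key material doesn't linger in RAM.
    ctypes.memset writes through to the bytearray's actual storage; a plain
    rebinding (buf = bytearray(...)) would leave the old bytes behind."""
    c_buf = (ctypes.c_char * len(buf)).from_buffer(buf)
    ctypes.memset(ctypes.addressof(c_buf), 0, len(buf))

key = bytearray(b"super secret disk encryption key")
# ... use the key ...
wipe(key)
assert all(b == 0 for b in key)
```

This only narrows the window; it does nothing against power being cut mid-use, which is the scenario the cold-boot attack exploits.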
8
mirimir 7 hours ago 1 reply      
Use Arctic Alumina[0] to fill all USB and Firewire connectors, and embed RAM.

[0] http://www.arcticsilver.com/arctic_alumina_thermal_adhesive....

9
sandworm101 5 hours ago 0 replies      
Kickstarter idea: Memory modules with an inbuilt temp sensor. Below 0°C, they just stop. Put that tiny circuit into the silicon and the problem goes away.
10
nickpsecurity 8 hours ago 0 replies      
The problem here was already known before the paper was published, though the paper was still a clever attack. Most security research, including high-assurance software, was largely ignoring attacks on hardware. There was a growing subfield that didn't trust the RAM, disk, peripherals, etc. These designs drew a boundary at the ASIC or SOC level, where anything outside the chip was protected with crypto, PUFs, etc. The first I saw was Aegis:

https://people.csail.mit.edu/devadas/pubs/aegis-istr-august6...

Joshua Edmison's dissertation lists a number of others along with his own, interesting scheme:

https://theses.lib.vt.edu/theses/available/etd-10112006-2048...

Nobody has learned anything different since then, as far as the fundamentals go. The fundamentals are still to use authenticated crypto of some sort on RAM to detect attacks there and, at worst, fail safe. Also, use special IO/MMUs, SOC mechanisms, and software protected by them to handle stuff on disks. Stopping cold-boot attacks is straightforward on such architectures, which don't trust RAM in the first place.

From there, we move into cat and mouse game of SOC attack and defense. Most of those require physical possession for more than a few minutes, though, with often destruction of the chip as a result. So, this is a significant step forward in security vs just snatching the RAM out of the system.

11
mschuster91 6 hours ago 2 replies      
There's only one solution to prevent this, if you're operating a server that might be of federal interest (which might even be running an open proxy or TOR relay):

1) Rent an entire rack with a 19" rackmount UPS, as well as locks connected to the server to signal if the rack has been opened, and motion sensors, as well as a compass

2) If either the power from outside goes down, or the lock/cage alarm triggers, or the motion sensor/compass detects motion, wipe the RAM section that contains the HDD encryption keys and power down the machine.

Why a compass? Because in case the cops try to move the entire rack carefully (to not trigger a motion sensor with false-alarm filtering), and they rotate the rack, the compass will detect it.
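The tripwire logic described above reduces to a small poll-and-wipe loop. This is only an illustrative sketch with invented sensor names, not code from any real tamper-detection product:

```python
def check_sensors():
    """Stub: return a list of triggered alarms (mains power, cage lock,
    motion, compass heading). A real setup would read GPIO/IPMI/UPS state."""
    return []

def tamper_watchdog(key_buffer: bytearray, poll) -> str:
    """On the first alarm, zero the key material in place and signal shutdown."""
    alarms = poll()
    if alarms:
        key_buffer[:] = bytes(len(key_buffer))  # wipe the key region
        return "halt: " + ", ".join(alarms)
    return "ok"

key = bytearray(b"disk key")
assert tamper_watchdog(key, check_sensors) == "ok"
assert tamper_watchdog(key, lambda: ["compass"]) == "halt: compass"
assert key == bytearray(len(b"disk key"))  # key is now all zeros
```

The hard part in practice is making the poll loop fast and tamper-proof enough that the wipe wins the race against whoever is opening the rack.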

12
arca_vorago 10 hours ago 0 replies      
This has been a known attack vector for quite some time (hence 2008...) One of the best training courses I ever did was a forensics course and this was one of the first techniques taught for a "black bag", along with faraday cage bags for all the things.

I have never gotten to use it irl though.

13
dec0dedab0de 13 hours ago 0 replies      
this research is neat, but was also neat in 2008 when it was released
17
Why not string theory? Because enough is enough backreaction.blogspot.com
147 points by Santosh83  17 hours ago   82 comments top 9
1
colordrops 8 hours ago 9 replies      
The author says that she "became more convinced [string theorists] are merely building a mathematical toy universe." The explanation for her conclusion was that they kept revising string theory in order to fit observations. I know little of string theory, so I am probably misreading her meaning, but isn't that how all science works? Create a model, then revise it when you get more data.
2
tim333 12 hours ago 6 replies      
I've got a theory that string theory is more of a sociological phenomenon than real science. It started as a genuine attempt to model how the nucleus is held together and then continued because it's a good area to do maths and write papers, rather than because it models reality.

One thing I don't get, which may be down to my own stupidity - take maybe the simplest interaction in physics - you have two electrons in space a few cm apart and they accelerate away from each other due to the charges repelling. I'm not sure how that is supposed to happen if everything is strings. Do they ping tiny strings at each other, and how do they know which way to aim? Maybe some string person can enlighten me.

3
dnautics 9 hours ago 1 reply      
"[A]cademia is currently organized so that it invites communal reinforcement, prevents researchers from leaving fields whose promise is dwindling, and supports a rich-get-richer trend."

A really awesome quote there. (Perhaps a bit self-serving for me).

4
visarga 6 hours ago 11 replies      
This is probably a naive question: can we input a bunch of particle interactions into a deep learning system and train it to predict the probability of future interactions? It would be like a "black box version" of physics. If we can predict, then we can find a more elegant mathematical notation and a more intuitive physical interpretation.

Machine learning can observe and learn patterns that are more complex than humans can grasp. What if the perfect Physics theory is more complex than humans can understand and possibly quite unintuitive in meaning? Then we won't like it and steer away from it, and that would be a bad thing in the end.

5
Steuard 10 hours ago 2 replies      
Speaking as a professor specializing in string theory, I'd say, "More power to her." I have no idea whether string theory gets too much attention relative to its actual value in modeling the real world, but I think one essential part of finding the right amount of attention for any physical theory is for theorists to make their best judgement about what's worth their time to study.

In fact, it's quite comforting to me to see people deciding to focus in other directions: to my eye, that means the system is working (though perhaps the author would argue it's not working efficiently), and I'd hate to see other worthwhile angles of attack wither away purely due to lack of attention. For myself, I got excited about string theory in grad school and decided to go that route, and I'm still finding it fascinating today. (And it still feels worthwhile to keep studying it, though perhaps I'm not quite as optimistic about its ultimate success as I was 15 years ago.)

6
kordless 12 hours ago 0 replies      
> Even with that problem fixed, however, it was quickly noticed that moving the superpartners out of direct reach would still induce flavor changing neutral currents that, among other things, would lead to proton decay and so be in conflict with observation.

Dissonance strikes again!

7
cygnus_a 7 hours ago 0 replies      
i vote we change the name. "generalized quantum mechanical phenomena maybe" ... and we can rename quantum gravity to "generalized macroscopic versions of quantum phenomena maybe"

then we can add the "maybe" suffix to all the rest of our theories

8
stesch 11 hours ago 0 replies      
Article isn't about Tcl.
9
spectrum1234 7 hours ago 4 replies      
This ties in with my thinking that there is just too much funding to do research for the sake of research. Let the private sector work on moonshots if they want but more realistically no one should be researching super far out problems. Instead the agile "just in time" approach needs to be used in academia as well as private companies.

A good analogy is no one was trying to build electric cars fifty years ago but now they are. But they only are because progress has been made indirectly in other fields to make it worth it now.

18
Overview of Running an Online Game for 3 Years hookrace.net
161 points by def-  17 hours ago   23 comments top 7
1
adynatos 1 hour ago 1 reply      
The author writes: "Reduce the number of syscalls by caching the value of gettimeofday() until a new tick happens or a network packet comes in." But I'm pretty sure glibc on recent Linux handles gettimeofday in user space, without a context switch (the kernel maps the timekeeping data into userspace via the vDSO). I guess caching the value locally and updating it once per second or so would still help if there are thousands of calls/sec, but not as much as if it were really a syscall.
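The caching scheme the author quotes can be sketched as a clock that is sampled once per tick and read from a cache the rest of the time (class and method names here are illustrative, not from the DDNet source):

```python
import time

class TickClock:
    """Sample the wall clock once per tick; everything inside the tick
    reads the cached value instead of hitting the clock again."""
    def __init__(self) -> None:
        self._now = time.time()

    def tick(self) -> None:
        # Refresh once per game tick (or when a network packet arrives).
        self._now = time.time()

    def now(self) -> float:
        return self._now

clock = TickClock()
t1 = clock.now()
t2 = clock.now()   # same cached value, no underlying clock read
assert t1 == t2
clock.tick()       # the next tick refreshes the cache
```

On a vDSO-enabled Linux the saving is small, as the comment notes, but the same pattern pays off for genuinely expensive per-call work.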
2
themartorana 12 hours ago 4 replies      
$10/month per location is damn impressive. I've run servers for a turn-based asynchronous and real-time casual game for the past three years. With ~700k unique monthly players (about 200k/day) we do about 1500 requests/s at peak and pay thousands a month for our AWS stack. I'm not mad at it, I think we get great utility for what we pay, but this is lean and mean for realz.

Kudos!

3
eggy 6 hours ago 0 replies      
Awesome and inspiring to me, great work!

I am now looking at LFE (Lisp Flavored Erlang) and ELM to create a very small online game. It makes me want to maintain C/C++ chops.

It's sad Apple is so walled in that you need a VM to build for OS X, and iOS doesn't even make the list. I have an iPad, but I use an Android phone for that reason, and I only program mobile for Android. Apple is getting better at supporting iOS devs of late though...

4
lccarrasco 12 hours ago 0 replies      
This was really great to read, in-depth and interesting, thanks a lot for taking the time to write it. :)
5
ashitlerferad 3 hours ago 1 reply      
What was the reason for the fork from Teeworlds?
6
mentos 11 hours ago 1 reply      
Hey great work!

Curious to hear what the client stack was? Did you use LibGDX by chance?

7
ObeyTheGuts 2 hours ago 0 replies      
Maidsafe will eliminate all these server problems!
19
A Simple Content-Based Recommendation Engine in Python untrod.com
115 points by numlocked  17 hours ago   7 comments top 4
1
bartkappenburg 6 hours ago 1 reply      
I liked the way the article compared the two approaches (CF and text analysis) and the code was interesting.

At Conversify[0] we literally tested these methods last week(!) We help e-commerce companies with machine-learning optimizations of which a recommendation system is one. The average shop uses the standard tools for recommendations which are terrible, with some intelligence it could be a very important way to increase sales and conversion. [/end-plug]

This is what we learned. We've tested three approaches (all on sites with enough traffic):

(1) CF based on buys in combination with/without profiling

(2) CF based on purely clicks in combination with/without profiling

(3) Text analysis (sklearn, TF-IDF)

We had the highest hopes for buys, then clicks, with text analysis last. It turned out to be exactly the other way around ;-).

(1) Buys has the drawback that the relationship of profiles to products is many-to-few most of the time (relatively speaking).

(2) Clicks has the same drawback but we can also see clicks within one session. This makes clicks somewhat better because shoppers often compare multiple products that are alike. This is valuable info.

(3) We didn't think that TF-IDF had evolved so much over the last couple of years that we could now do an analysis in a few lines of code with these results. They were stellar. We tested the outcomes with the business owners as well as customers (we didn't say which method generated which results) and they all went massively for text analysis.

Here's our outcome for a random product from one of our customers (with a combined buys/clicks recommendation): http://i.imgur.com/lEYYIh9.png

That being said: we are working on a combination of all three methods. We think that text-analysis is for the first filtering of related products and that the CF methods are for ordering of this set. Also using weights for the different methods are useful.

Besides that, we are developing an automated way to distinguish between substitute products (Samsung vs. iPhone) and complementary products (a charger with a phone). This could be accomplished by using more CF than text (and specifically products bought within the same session). Our first results are promising.

[0] https://www.conversify.com

2
ChicagoBoy11 16 hours ago 1 reply      
The blog post was a bit confusing in the way it was structured, and I'm still not quite sure I understood you correctly: you start off by saying that content-based recommendations are what most people usually have in mind when they think of recommender systems. Then, you suggest that in many real-life situations those systems are actually kinda terrible at meaningfully moving the ball forward on offering recommendations that are non-obvious to users. But then you show a nice implementation of just one such system, which you just said may not be entirely useful for a lot of things, and which, when paired with the other algorithm, often contributes very little... is that right?

As a reader, with all of that intro, I couldn't help but be really disappointed that the implementation discussed wasn't the one relying on buying behavior... seems like a good chunk of the article is devoted to explaining why that one is a superior recommendation system, so naturally I wanted to see how you'd go about putting THAT together...

3
numlocked 17 hours ago 0 replies      
Author here! Love to hear any feedback and hope folks find it useful. As I mention in the article, we use a very, very similar implementation in production at grove.co. It's ridiculously simple, which also makes it very robust and reliable in production.

Example (framed as "customers also bought" but in reality a set of similar products -- we swapped in the content-based engine for a CF approach and haven't yet updated the copy): https://www.grove.co/catalog/product/cellulose-sponge/?v=802

4
data_hans 5 hours ago 0 replies      
Awesome article. Using cosine similarity to calculate product similarity is something I have been wanting to try. Have you thought about using running TFIDF on customers' entire purchase history and running cosine similarity between each customer's purchase history vectors and product vectors? I think an interesting study can be done on comparing the recommendation results between CF, Cosine similarity on products, and cosine similarity between customer history and products.
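For readers wondering what the TF-IDF plus cosine-similarity approach discussed in this thread looks like mechanically, here is a dependency-free toy sketch (the article itself uses scikit-learn; the product descriptions below are made up):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Term frequency weighted by inverse document frequency, per document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for doc in tokenized for term in set(doc))
    n = len(docs)
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm = math.sqrt(sum(w * w for w in a.values())) * math.sqrt(sum(w * w for w in b.values()))
    return dot / norm if norm else 0.0

products = [
    "lavender hand soap",
    "lavender dish soap",
    "bamboo paper towels",
]
vecs = tfidf_vectors(products)
sims = [cosine(vecs[0], v) for v in vecs]
# The two soaps share terms, so they should look more alike than soap vs. towels.
assert sims[1] > sims[2]
```

The customer-history variant asked about above would just concatenate each customer's purchased product descriptions into one document and compare that vector against the product vectors the same way.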
20
We Hired a Blind Coder medium.com
194 points by su_yuen  15 hours ago   86 comments top 24
1
SwellJoe 14 hours ago 5 replies      
One of my friends is a blind coder (and he sometimes comments here on HN), and he's had a very hard time getting regular work as a coder, even for things that seem obvious (like testing and fixing apps and sites for accessibility). He's bright and built his own screen reader for Android, among other things. But, I think that's a common experience for folks with disabilities; they begin to feel like they have to apologize for their disability, even in cases where it does not inhibit them from doing the work.

Even worse, there begins to be a notion that hiring someone with a disability is a charitable act, rather than just making a hiring decision based on their competence for the task. It seems like that would be kinda soul-crushing to always wonder if you were chosen for reasons other than your abilities.

I've seen so many interesting/disappointing behaviors from people when it comes to people with disabilities. Once I was helping with an event that was organized and led by a woman with cerebral palsy and in a wheelchair. Media showed up to cover the event...and kept trying to talk to me about it (the able-bodied white guy who was mostly there to handle technical stuff and had very little interesting to say about what the event was about), even when the organizer was right next to me and clearly bossing people, including me, around. She also has had difficulty finding regular work in the past, despite being really impressive in a lot of areas.

I'm kinda just ranting here, as I don't have good solutions, but I do think it'd be cool for folks to stop assuming that just because someone can't work the way most folks work they can't do the same kind of work. There have been blind developers (that I've been aware of) for about as long as I've been using computers (and that's a long time). We should stop being shocked by it; though it's cool and impressive, it tends to lead to thinking they might not be able to do the job as well, just because they're doing it in such a different way.

2
busterarm 14 hours ago 2 replies      
We contract a blind accessibility consultant to help us evaluate all of our websites (and we have a lot...) and educate us on best practices for making our sites easier to navigate.

One of the best choices that we ever made. We spend a lot of time on getting it right in our sites and have spent many hours in NVDA & VoiceOver with our eyes closed.

It's unfortunate how poor the experience still is on most websites and I wish our industry would spend a little more time trying to get this right. If you aren't using Semantic markup and ARIA, please start now.

You can't imagine how much worse the user experience is on mobile vs a computer.

3
backlava 12 hours ago 7 replies      
I think most programmers don't realize how much vision holds them back. I've been coding blindfolded for over a month and I'm never going back. If you are a touch typist what do you need to see? Not looking at things helps me to focus.

I do take the blindfold off while checking my work and during some parts of debugging.

4
filleokus 12 hours ago 6 replies      
Nice story! One thing I have been thinking about: for completely blind users (English isn't my native language, so sorry if I'm not using the correct/polite terminology) who have 0% vision in both eyes, it would be possible to use a laptop without a screen, I guess.

Essentially simulating the screen for the screen reader to work with, but not rendering the output anywhere (or even having the screen hardware to render it on). Just imagine how mind boggling it would be to see someone working at a coffeshop with just headphones on and the bottom part of a laptop, or at a desk with no screen on it!

I get that it probably doesn't make sense to manufacture a product like this, but still, it would be really cool I think! :)

5
ethomson 13 hours ago 0 replies      
When I was in high school, I had a part time job/internship doing QA and writing some software. I was recruited there by my friend, who was blind, a programmer, and also in high school.

He was a great programmer, and very adept at using his screen reader. As far as I know, it only ever gave him one serious complication: since he was a minor, it was set to safe-for-work mode, and it read his coworker's name, a Mr Livshits, as Livsugars.

6
Brajeshwar 5 hours ago 0 replies      
Recently, I tele-interviewed a developer and got him as a consultant, judged based on his skills and his work. I didn't realize he was blind until it came to light that the blind developer in the team was the one I interviewed! Good Developers are good, no matter what.

Prior to that, I also took on 2 disabled interns - they are hard of hearing and of speech. They worked with me for 6 months, after which they got full-time jobs elsewhere. They were usually present in the 30+ person team workshops and meet-ups that I run. I had to slow down, express myself more with my hands, etc., and things went pretty well. They would always come back to me at the end, picking up on the few missing parts that I needed to elaborate on.

I also had a selfish motive - I wanted to learn sign language. I was able to understand them better but I think I need professional training to be proficient doing the sign language myself.

btw, this is all in India, a place where accessibility and disability-friendly provisions are an afterthought, sometimes totally ignored.

7
insteadof 13 hours ago 0 replies      
Speaking of, The Changelog #206 https://changelog.com/206/ just put out one with a blind programmer as well, "This week on the show we talk with Parham Doustdar, a blind programmer. We talked about the advantages of being a blind programmer, the tools he uses, quitting school, carving your own path, and more."
8
pkrefta 14 hours ago 2 replies      
Great and very inspiring story. Made my Saturday 200% better - seriously :)
9
kendallpark 1 hour ago 0 replies      
At a previous company I worked at, we had a blind dude supporting our web apps (i.e. on the phone with customers). I was pretty shocked (and a little ashamed) to learn this, because our apps didn't have any ARIA features built in.
10
rusabd 14 hours ago 0 replies      
I was hired by blind developer once. He didn't discriminate me by my sight.
11
carlob 2 hours ago 0 replies      
I work remotely as well, but at our company we do a lot of screen sharing meetings. I wonder if someone should build some sort of VNC (or zoom, or adobe connect) that plays well with screen readers and doesn't just send a video feed. Or maybe it already exists.

I myself have macular degeneration, which while not disabling right now, has started a bit too early in my life. One day I might be almost blind myself and I will definitely need this.

12
saqibs 5 hours ago 0 replies      
I thought I'd give my $0.02 worth on some of the questions raised.

> I wonder if any blind coders could comment on how they see programmatic flow in their minds eye?

It's all about the structure. I think of it like reading a novel with a complex plot - you meet some of the characters, learn facts about them, and then you get introduced to new characters, and later in the book you find out how they relate to each other, and much later on, you may find out through some twist of the plot that the relationship isn't quite what you thought it was, and you have to recreate your mental model.

As a result, I find that sometimes I'm slower at ramping up on complex codebases (especially if it's poorly written with no structure), but once I have my mental model, I'm potentially faster than someone who has to have things in front of them to refer to. This is purely based on my personal experience.

> How do big companies that love doing white boarding interview blind people?

I've always turned up with my laptop, and proposed that I use it, plugged into a monitor the interviewer can see. The interviewer reads me out the question, and I take notes. I always do the coding in Notepad, so I don't have access to code completion or syntax highlighting.

> I think it is totally reasonable that a blind person could code, but how does a blind person learn how to code initially?

The same way a sighted person would - since most information in this field is text-based. Read the tutorials, do the exercises. In fields where mathematical notation or diagrams are prevalent (e.g. machine learning), some sighted help or adapted material is probably required.

> Why the hell would you hire a blind coder to ship your feature by next week when you can find an equal or better (white/Asian male) engineer to do the same thing? By definition, the blind coder is limited compared to non-disabled people. It's a purely business decision. No hard feelings, but I'd rather not have a blind person on my team or organization. Unless it is for PR.

I respond to this comment purely because, whether I like it or not, such thinking is not uncommon. Let's rephrase the question as "why would you hire a {foo} engineer to ship your feature by next week when you can find an equal or better ({bar}) engineer to do the same thing?". Put this way, I accept that smaller companies who need someone next week will always choose the {bar} engineer, because they're better, not because they're {bar}. But here there is an assumption that every single {bar} engineer is better than every single {foo} engineer, which simply can't be true. Make sure your biases don't prevent you from hiring the awesome {foo} engineer, who out-performs every {bar} engineer. In a larger company, I'd go further and say that you need to have an inclusive work culture, not only because it's the right thing to do, but also because if your customer-base is diverse (which on the internet, it probably is), then a really good way to make awesome products is to have a diverse workforce.

13
danielmorozoff 11 hours ago 0 replies      
I wonder if any blind coders could comment on how they see programmatic flow in their mind's eye? I am curious how this shapes the way you write code as well. As a demonstrative example, I imagine very long / complex functional methods would be difficult to work with, as they would require more back-and-forth movement of the screen reader and may reduce your efficiency.

Maybe people who work with blind devs could also chime in.

Many blind musicians and artists develop heightened perceptions in their respective art and generate some unique, original works. I wonder if this is true for coders/ mathematicians?

14
samfisher83 13 hours ago 1 reply      
How do big companies that love doing white boarding interview blind people? I am guessing that due to the ADA they have to give them some other test?
15
whitenoice 11 hours ago 0 replies      
Nice post, this reminded me of an article/video I had seen recently -Srikanth Bolla - http://spectrum.mit.edu/spring-2011/living-his-dream

- http://www.dogonews.com/2016/6/9/visually-impaired-srikanth-...

16
kewball 12 hours ago 0 replies      
I started a new job and the sysops guy works remotely. I had been communicating with him for over a month via email and chat before a colleague told me that the sysops guy was blind. He is quite brilliant.
17
mcweaksauce 14 hours ago 7 replies      
I think it is totally reasonable that a blind person could code, but how does a blind person learn how to code initially? Did he become blind later on after learning how to code?
18
spoiledtechie 10 hours ago 0 replies      
What's the screen reader this coder uses? Is there a best screen reader for us software developers?
19
Avalaxy 13 hours ago 0 replies      
Must be a good employer to publicly praise their employees like this! ;) Interesting read, glad he's doing well!
20
andrewjl 13 hours ago 0 replies      
Thanks for sharing this. It's very inspiring and interesting to read about solutions to the very unique problems visually impaired coders face.

Does make me wonder about non-text based input / code display methods that can help here.

21
dudul 13 hours ago 2 replies      
Nice post, but I was kind of shocked by this part: "A question going through your head is probably: Did we underpay him?"

Seriously? What kind of human beings do you think your readers are? Did anyone here really think about that while reading the story?

22
fit2rule 3 hours ago 0 replies      
One of the most brilliant coders I've ever worked with was 90% blind. He had a gigantic monitor, and his term setup was such that the gigantic screen showed big chars, 3 at a time. He'd scroll through code like lightning, reading it 1 or 2 chars at a time, and despite this limitation was one of the best kernel hackers of my professional experience. (We bonded as the only vi users in the dev group..)

To this day, I doubt I'd have the temerity and courage to work in this field as he did, and I have to say that I have immense respect for the disabled who overcome these disabilities and nevertheless remain extremely productive. It says something of a persons resolve that they are willing and able to write and debug kernel code, 1 char at a time .. I really don't think I could do it.

23
9 hours ago 2 replies      
One of the best things about HN is that by the time I see comments as horrid as this, they've almost always already been flagged by community members. It would be better if they didn't exist in the first place, but that's probably too much to ask.

To all the users who have flagged toxic comments like this one: thank you. HN has improved a lot because of you.

(We banned this account for trolling and detached this subthread from https://news.ycombinator.com/item?id=11886308 and marked it off-topic.)

24
Cuuugi 11 hours ago 0 replies      
I wonder if he saw it coming.
21
AI, Deep Learning, and Machine Learning: A Primer [video] a16z.com
253 points by jonbaer  22 hours ago   75 comments top 9
1
aficionado 15 hours ago 3 replies      
Basically this video ignores the history of machine learning in general. Jumping from Expert Systems to Neural Networks and Deep Learning is actually ignoring 36 years (and billions of dollars) of research http://machinelearning.org/icml.html (Breiman, Quinlan, Mitchell, Dietterich, Domingos, etc). Calling 2012 the seminal moment of Deep Learning is quite hard to digest. Maybe it means that 2012 is the point in time when the VC community discovered machine learning? Even harder to digest is calling Deep Learning the most productive and accurate machine learning system. What about more business oriented domains (without unstructured inputs), the extreme difficulties and expertise required to fine tune a network for a specific problem, or some drawbacks like the ones explained by http://arxiv.org/pdf/1412.1897v2.pdf or http://cs.nyu.edu/~zaremba/docs/understanding.pdf.

Those who ignore history are doomed to repeat it. As Roger Schank pointed out recently http://www.rogerschank.com/fraudulent-claims-made-by-IBM-abo..., another AI winter is coming soon! Funny that the video details the three first AI winters but the author doesn't realize that this excessive enthusiasm in one particular technique is contributing to a new one!

2
cs702 18 hours ago 3 replies      
This is an opinionated video that tries to rewrite history. For example, according to it, the "big breakthrough" with deep learning occurred in 2012 when Andrew Ng et al got an autoencoder to learn to categorize objects in unlabeled images. WHAT? Many other researchers were doing similar work years earlier. According to whom was this the "big breakthrough?"

The video at least mentions Yann LeCun's early work with convnets, but there's no mention of Hinton et al's work with RBMs and DBNs in 2006, or Bengio et al's work with autoencoders in 2006/2007, or Schmidhuber et al's invention of LSTM cells in 1997... I could keep going. The list of people whose work is insulted by omission is HUGE.

I stopped watching the video at that point.

3
KasianFranks 21 hours ago 3 replies      
AI is very fragmented. Biomimicry has always been the way forward in every industry, and Steven Pinker made good headway from my vantage point.

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&e...

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&e...

Saira Mian, Michael I. Jordan (Andrew Ng was a pupil of his) and David Blei were not mentioned in this video, so they are off the mark a bit. Vector space is the place.

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&e...

AI has become the most competitive academic and industry sector I've seen. Firms like Andreessen are trying to understand the impact during this AI summer and they should be applauded for this.

One of the keys to AI is found here: https://www.google.com/webhp?sourceid=chrome-instant&ion=1&e...

Deep learning has very little to do with how the brain and mind work together. In the video, the highlighted ensemble (combinatorial) techniques are a big part of the solution.

4
gavanwoolery 21 hours ago 2 replies      
Interesting to see the amount of "winters" AI has gone through (analogous, to a lesser extent, to VR).

I see increasing compute power, an increased learning set (the internet, etc), and increasingly refined algorithms all pouring into making the stuff we had decades ago more accurate and faster. But we still have nothing at all like human intelligence. We can solve little sub-problems pretty well though.

I theorize that we are solving problems slightly the wrong way. For example, we often focus on totally abstract input like a set of pixels, but in reality our brains have a more gestalt / semantic approach that handles higher-level concepts rather than series of very small inputs (although we do preprocess those inputs, i.e. rays of light, to produce higher level concepts). In other words, we try to map input to output at too granular of a level.

I wonder though if there will be a radical rethinking of AI algorithms at some point? I tend to always be of the view that "X is a solved problem / no room for improvement in X" is BS, no matter how many people have refined a field over any period of time. That might be "naive" with regards to AI, but history has often shown that impossible is not a fact, just a challenge. :)

5
justsaysmthng 16 hours ago 3 replies      
From the presentation:"There's a lot of talk about how AI is going to totally replace humans... (But) I like to think that AI is going to actually make humans better at what they do ..."

Then immediately he continues:

"So it turns out that using deep learning techniques we've already gotten to better than human performance [....] at these highly complex tasks that used to take highly, highly trained individuals... These are perfect examples of how deep learning can get to better than human performance... 'cause they're just taking data and they're making categories.."

I think that brushing off the dramatic social changes that this technology will catalyze is irresponsible.

One application developed by one startup in California (or wherever) could make tens of millions of people redundant all over the world overnight.

How will deep learning apps affect the healthcare systems all over the world? What about IT, design, music, financial, transportation, postal services... nearly every field will be affected by it.

Who should the affected people turn to? Their respective states? The politicians? Take up arms and make a revolution?

My point is that technologists should be ready to answer these questions.

We can't just outsource these problems to other layers of society - after all, they're one step behind the innovation and the consequence of technology is only visible after it's already deeply rooted in our daily habits.

We should become more involved in the political process all over the world (!) - at least with some practical advice to how the lawmakers should adapt the laws of their countries to avoid economic or social disturbances due to the introduction of a certain global AI system.

6
autokad 21 hours ago 2 replies      
We will be closer to cracking neural nets, and closer to the singularity, when we can train a net on two completely different tasks and each task makes the other's predictions better, i.e., train/test it on spam vs. non-spam emails, then train the same net with Twitter data on male vs. female.
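The setup this comment describes, one network trained on two unrelated labelings with shared weights, is essentially hard parameter sharing from multi-task learning. Below is a minimal, hypothetical sketch of that idea (the data, dimensions, and hyperparameters are made up for illustration; this is not from the video or any particular framework):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk reused by both tasks ("hard parameter sharing").
W_shared = rng.normal(0.0, 0.1, (4, 8))
# Task-specific heads: spam/not-spam and male/female, as in the comment.
W_spam = rng.normal(0.0, 0.1, (8, 1))
W_gender = rng.normal(0.0, 0.1, (8, 1))

def forward(x, W_head):
    h = np.tanh(x @ W_shared)                    # shared representation
    return 1.0 / (1.0 + np.exp(-(h @ W_head)))   # task-specific sigmoid

def train_step(x, y, W_head, lr=0.5):
    """One gradient step; note that BOTH tasks update the shared trunk."""
    global W_shared
    n = len(x)
    h = np.tanh(x @ W_shared)
    p = 1.0 / (1.0 + np.exp(-(h @ W_head)))
    g_out = (p - y) / n                          # dBCE/dlogit, batch-averaged
    g_head = h.T @ g_out
    g_h = (g_out @ W_head.T) * (1.0 - h * h)     # back through tanh
    W_shared -= lr * (x.T @ g_h)
    W_head -= lr * g_head                        # in-place: mutates caller's array

# Toy data: two unrelated labelings of the same inputs.
X = rng.normal(0.0, 1.0, (32, 4))
y_spam = (X[:, :1] > 0).astype(float)
y_gender = (X[:, 1:2] > 0).astype(float)

for _ in range(2000):
    train_step(X, y_spam, W_spam)                # alternate tasks each step
    train_step(X, y_gender, W_gender)

acc_spam = float(((forward(X, W_spam) > 0.5) == (y_spam > 0.5)).mean())
acc_gender = float(((forward(X, W_gender) > 0.5) == (y_gender > 0.5)).mean())
print(acc_spam, acc_gender)
```

Whether the second task actually helps the first (positive transfer, the commenter's criterion) depends on the tasks being related; with unrelated labels like these, the trunk merely learns to serve both without interference.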
7
31reasons 19 hours ago 7 replies      
One of the main challenges in Deep Learning is that it requires massive amounts of data, orders of magnitude more data than a human toddler to detect a cat. It could be a great area of research on how to reduce the amount of data it takes to train the network.

One main thing it lacks is imagination. Humans can learn things and can imagine different combinations of those things. For example, if I ask you to imagine a guitar-playing dolphin, you could imagine it and even recognize it in a cartoon even though you have never seen it in your life before. Not so for deep learning, unless you provide massive amounts of images of dolphins playing guitars.

8
epberry 20 hours ago 0 replies      
Pretty basic stuff - the history portion was more interesting than any of the content that followed. Anyone who's been paying the slightest attention in the last few years will be familiar with all of the examples used in the podcast.

On a side note, I always admire the polish of the content that comes out of a16z; it's typically very well put together.

9
vj_2016 18 hours ago 0 replies      
http://www.videojots.com/davos/state_of_ai.html#2181

Apparently, robots still struggle to pick up things?

22
Canadian doctors reverse severe MS using stem cells vox.com
346 points by yurisagalov  1 day ago   64 comments top 11
1
wildmXranat 23 hours ago 1 reply      
I live in Canada. I just listened to one of the participants talk about the experience on the radio and I found it just incredible.

In her own words, "She could not feel her body from the neck down. After the long and gruel ordeal procedure, she began to get sensation back. Things like hot and cold water began to be discernible. She no longer needed to hold both railings when walking down stairs in her home, needing a cane to get the mail, etc ..."

She said that it gave her her life back. She said that in a short time, she began to get bored with the regular, tired routine and actually got a part-time job.

I mean, all that sounds phenomenal.

2
tempestn 1 day ago 2 replies      
Sounds like another good reason to consider banking stem cells, as described here: https://news.ycombinator.com/item?id=11830407

In this case the article describes using chemotherapy to stimulate generation of stem cells, then scrubbing them of the disease before reintroducing them. I have no expertise in this field, but I would think having a bank of healthy stem cells would have simplified the procedure and perhaps improved the likelihood of positive outcomes.

@markkat if you're reading this, do you have any comment?

3
Hondor 1 day ago 1 reply      
That all happened 15 years ago. The article mentions that it's available at some hospitals, which suggests it was ultimately successful and is now part of regular medicine. I suppose it's still only for 5% of cases and only as a last resort, and still most patients aren't having any reversal of their disease. There doesn't seem to be much hope of it expanding to cure MS in general, given how long it's been with apparently no further progress.
4
BurningFrog 20 hours ago 1 reply      
This is the tech support approach to the immune system:

Try turning it off and on again.

5
xor_null 13 hours ago 0 replies      
Quite interesting. But something I don't understand: MS causes immune cells to attack the myelin cells. Depending on the severity of the attack, the myelin cells are completely destroyed, or even the nerve cells themselves are destroyed. Recreating the immune cells would prevent them from attacking the myelin cells, but what happens to the tissue that is already destroyed? As far as I know, the body is not able to repair all kinds of destroyed nerve cells / myelin cells on its own. So how can this treatment help repair/recover already destroyed tissue?
6
jawns 1 day ago 2 replies      
Just in case anyone is wondering -- though at this point, isn't it your first guess? -- the stem cells used in this therapy were adult stem cells, meaning they were collected from her own body, as opposed to embryonic stem cells.

I mention it here because the article doesn't give that info until about half-way down.

7
djaychela 1 day ago 0 replies      
There was a comment on this on BBC Breakfast yesterday, with an MS specialist saying that it was promising, but not up to the hype that people were making over it. He said it was an avenue to explore further, and looked to be a good technique, but there was a long way to go. He quoted the stats from the study, and certainly seemed to be familiar with it and the methodology used. Can't find a link to it (it was an interview rather than a feature), alas.
8
reasonattlm 1 day ago 0 replies      
Publicity materials: http://www.eurekalert.org/pub_releases/2016-06/tl-tln060816....

Paper: http://www.thelancet.com/journals/lancet/article/PIIS0140-67...

The latest update for ongoing efforts to test destruction and recreation of the immune system in patients suffering from the autoimmune disease multiple sclerosis demonstrates that this approach is effectively a cure if the initial destruction of immune cells is comprehensive enough. Researchers have been able to suppress or kill much of the immune system and then repopulate it with new cells for about as long as the modern stem cell therapy industry has been underway, something like fifteen years or so. Methodologies have improved, but the destructive side of this process remains unpleasant and risky, something you wouldn't want to try if there was any good alternative. Yet if not for the scientific and commercial success of immunosuppressant biologics such as adalimumab, clearance and recreation of immune cell populations may well have become the major thrust of research for other prevalent autoimmune conditions such as rheumatoid arthritis. Destroying these immune cell populations requires chemotherapy, however, and with avoiding chemotherapy as an incentive for patients, and the ability to sell people drugs for life as an incentive for the medical industry, biologics won. For conditions like rheumatoid arthritis, the aim became control and minimization of symptoms rather than the search for a cure. Only in much more damaging, harmful autoimmune conditions like multiple sclerosis has this research into wiping and rebuilding the immune system continued in any significant way.

It is worthy of note that while these trials were only enrolling a small minority of patients, the approach could be used on every patient. That tends to be the way trials work, picking a small subset. The driving factor for keeping the numbers small is the onerous and risky chemotherapy process.

Beyond being able to pinpoint which tissues are suffering damage due to inappropriately targeted immune cells, the underlying mechanisms of most autoimmune conditions are very poorly understood. Multiple sclerosis, for example, results from immune cells attacking the myelin sheathing essential for proper nerve function. Collectively, the cells of the immune system maintain a memory of what they intend to target, that much is evident, but the structure and nature of that memory is both very complex and yet to be fully mapped to the level of detail that would allow the many types of autoimmunity to be clearly understood. That these autoimmune conditions are all very different is evidenced from the unpredictable effectiveness of today's immunosuppressant treatments - they work for some people, not so well for others. Many autoimmune diseases may well turn out to be categories of several similar conditions with different roots in different portions of the immune system.

Destruction of the immune system offers a way around present ignorance: it is an engineering approach to medicine. If immune cell populations can be removed sufficiently comprehensively, then it doesn't really matter how they are storing the bad data that produces autoimmunity. That data is gone, and won't return when immune cells are restored through cell therapies. The cost of that process today is chemotherapy, which is not to be taken lightly, as the results presented here make clear. A mortality rate of one in twenty is enough to give pause, even if you have multiple sclerosis. In the future, however, much more selective cell destruction mechanisms will be developed, such as some of those emerging from the cancer research community, approaches that will make an immune reboot something that could be undertaken in a clinic with no side-effects rather than in a hospital with all the associated damage of chemotherapy. Autoimmune diseases are far from the only reason we'd want to reboot our immune systems: as we age, the accumulated impact of infections weighs heavily upon the immune system, and its limited capacity fills with uselessly specialized cells rather than those capable of destroying new threats. Failure of the immune response is a large part of age-related frailty, leading to both chronic inflammation and vulnerability to infection, and it is something that could be addressed in large part by an evolution of this approach to autoimmune disease.

9
thatha7777 19 hours ago 0 replies      
Does this teach us anything about the causes of MS, and potentially inform on preventative measures?
10
lvs 1 day ago 2 replies      
A nice long-term study. However, the fact that they need to specify in the title of this lay article that "this isn't hype" really says it all about science "journalism."
11
purpleidea 1 day ago 1 reply      
I believe this might have been the study where 25%(?) of the patients were killed by the treatment. As a result, this is only indicated for the very severe RRMS (relapsing-remitting, not related to Stallman) cases.

I'm sure HN can correct me if I'm wrong, but the point to make is that this isn't a cure.

23
OpenSSL DSA key recovery attack iacr.org
172 points by bqe  21 hours ago   15 comments top 2
1
zaroth 20 hours ago 3 replies      
Make Sure DSA Signing Exponentiations Really are Constant-Time

"...the OpenSSL team committed two code changes relevant to this work. The first adds a constant-time implementation of modular exponentiation..."

"The execution time of the constant-time implementation still depends on the bit length of the exponent, which in the case of DSA should be kept secret [12, 15, 27]. The second commit aims to make sure DSA signing exponentiations really are constant-time by ensuring that the bit length of the exponent is fixed."

"...While the procedure in this commit ensures that the bit length of the sum kq is fixed, unfortunately it introduces a software defect. The function BN_copy is not designed to propagate flags from the source to the destination. In fact, OpenSSL exposes a distinct API BN_with_flags for that functionality..."

"In contrast, with BN_copy the BN_FLG_CONSTTIME flag does not propagate to kq. Consequently, the sum is not treated as secret, reverting the change made in the first commit..."

Exploitation requires a local 'spy' process recording timing signals while the handshakes are running. I assume this is an unprivileged process, otherwise wouldn't the key be directly accessible?
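The countermeasure those commits aim for, making the exponent's bit length independent of the secret nonce by adding the group order q (or 2q) before exponentiating, can be sketched in a few lines. This is an illustrative toy, not OpenSSL's actual code, and the modulus below is a made-up stand-in for a real 160-bit DSA group order:

```python
import secrets

# Hypothetical subgroup order for the sketch; real DSA uses a vetted
# 160- or 256-bit prime q.
q = (1 << 160) - 47

def naive_nonce():
    # k uniform in [1, q): its bit length varies from signature to
    # signature, and a timing side channel on the exponentiation loop
    # can reveal how many leading bits are zero.
    return secrets.randbelow(q - 1) + 1

def padded_nonce(k):
    # Add q (or 2q) so the exponent actually used always has the same
    # bit length. Since g has order q, g^(k+q) = g^k, so the resulting
    # signature value is unchanged.
    k_hat = k + q
    if k_hat.bit_length() <= q.bit_length():
        k_hat += q
    return k_hat

# Every padded nonce has the same, public bit length: len(q) + 1.
fixed = q.bit_length() + 1
assert all(padded_nonce(naive_nonce()).bit_length() == fixed
           for _ in range(1000))
```

The defect described above was that the padded value lost its BN_FLG_CONSTTIME flag during a BN_copy, so OpenSSL fell back to the non-constant-time path despite the padding; the padding itself, as sketched here, is sound.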

2
vessenes 18 hours ago 0 replies      
Needs to listen to only a few hundred handshakes. Ugh. Something tells me this could be deployed on AWS large instances with some success.
24
Laid-Off Americans, Required to Zip Lips on Way Out, Grow Bolder nytimes.com
228 points by joshwa  16 hours ago   179 comments top 33
1
nwhybrid 13 hours ago 3 replies      
As a former H1-B worker myself (please put down the pitch-forks :)), I can tell you that regular companies aren't hiring H1-Bs to replace you directly. Instead they contract swarms of us through companies like WiPro and other body shops. This, along with the inability to seek a more fitting job on your own regardless of skill level, is the main problem I see. You're essentially shackled to these body shops. You can't go home because of their BS employee agreements that force you to pay thousands if you leave before a certain time, you can't ask for a higher salary or healthcare benefits, and you don't get to choose where you work, so it may be Texas today, WA tomorrow, Alabama next week, so forget having a family life. Your wife can't work if you bring her and the kids with you, so you have to just follow along with their whims regardless. It is pretty much human trafficking once you get here. I can tell most folks here are American; they have no idea how messed up the H1-B system is.
2
warcher 15 hours ago 3 replies      
Listen, there's no point claiming the H1 situation isn't a rampant clusterfuck. Actual skilled laborers lie in limbo while busloads of cheap, disposable foreign workers are brought over to get exploited and depress domestic wages. It's a full on disaster.

If you have to have temporary worker visas, fine. But don't tie them to a company. If there's a legitimate need, and they're legitimately skilled, they'll find work, and they'll more than likely find work that pays comparably to a native worker (or they'll get poached, because good help is always hard to find).

3
lazaroclapp 15 hours ago 4 replies      
This causes some pretty strong mixed feelings for me. On the one hand, foreign workers are already disadvantaged compared with domestic workers and H1-B caps are absurdly low. I keep failing to see why we are all supposed to open world markets to the flow of every sort of capital and goods except labor, or why someone deserves a job for being born American, rather than Indian (Note that I feel the same way when Americans are disadvantaged elsewhere for similar reasons[1]). On the other hand, a clause requiring that you never say something bad about your employer nor discuss the situation under which you were laid off seems not only abusive, but repugnantly so. In a country that constitutionally enshrines free speech, the ability to sign away for money your right to complain publicly about a person or organization seems particularly dangerous, and the act of asking someone to do so seems truly vile.

[1] https://medium.com/@rachelnabors/wtfuk-73009d5623b4#.466usni...

4
kazinator 15 hours ago 1 reply      
> Now some of the workers who were displaced are starting to speak out, despite severance agreements prohibiting them from criticizing their former employers.

You can easily reveal the facts it in a totally non-critical way.

At ABC Data, workers were encouraged to collaborate with management to come up with creative solutions to streamline processes and save costs. When I realized that ABC could save money by replacing me with an equally capable, yet much cheaper worker visiting the country on a temporary work permit, I immediately pitched this idea at the next big cross-departmental meeting. There was much resistance among management. They objected on the basis of the unique knowledge and skills that I bring to the team, and how we all "go way back" to the startup days. In the end they saw it my way and agreed to relocate my posterior to that outdoor concrete fixture which separates the road from the sidewalk or lawn. I was unfortunately not able to talk them out of the egregiously generous severance package, though even with that expenditure, ABC ends up ahead. ABC is a great company to invest in with terrific fundamentals and future prospects, and is led by a highly ethical team whose decisions are beyond reproach.

5
muhfuhkuh 8 hours ago 0 replies      
I find it interesting, the amount of sudden vitriol from some of the formerly "free market" supporters who thought the terrible job market for non-STEM majors would never, ever hit the technology industry: those who decried artistry and creative-industry jobs as beneath them and their money, who felt working-class people were just too lazy to find "good" jobs, and who generally felt apathy if not schadenfreude toward those suffering through the corporate recovery of 2010 to the present.

Those same free marketers also agreed with the destruction of unions and protections like the minimum wage. Now that the shoe is on the other foot, look at the comments: "H1-Bs are depressing wages" has replaced "why should I pay for music and movies" and "any industry that relies on ads should die".

Well, now with the tables turned, why should working class Americans and artists and content creators give one shit that the technology industry is suffering from "depressed salaries". Suffer with the rest of America.

6
rrecuero 11 hours ago 2 replies      
Former H1-B worker as well. I was sponsored by Zynga Inc and Moz, and finally got my green card recently. As other people have pointed out above, I believe the problem lies in who is sponsoring these candidates. The following list speaks for itself:

1. Infosys: 23,816
2. Tata Consultancy Services: 14,096
3. Wipro: 8,365
4. IBM: 7,944
5. Deloitte Consulting: 7,016

Google, Facebook, Amazon and even Tesla or Palantir should be in the top spots. Setting a minimum wage of $100,000 would filter most of the sweatshop applications out.

7
yuhong 15 hours ago 3 replies      
I am not surprised that companies that are broken enough to do the outsourcing in the first place are also often broken enough to have this term in severance agreements. But it really should not be standard.
8
abpavel 1 hour ago 0 replies      
A lot of uproar, but have you ever tried to teach a 60-year-old veteran working his paper trail to switch to Python? The laid-off personnel did not have the required skill set, and would have been fired regardless of the source of the replacement, H1-B or local.
9
giis 3 hours ago 0 replies      
As an Indian who worked in Indian body shops (but never traveled abroad), here are my views on Indian body-shop immigrants:

In 2005 I completed my university master's degree. Our class had around 60 students; I'm pretty sure maybe 5 or 6 (10%) of them were talented and good with computers. Most of us took IT jobs. We had chances to emigrate as early as 2007 through Indian body shops like Wipro, Infy, Tata, CTS, etc.

In 2016, most of them (60%) are now working abroad, maybe 40% in the USA and 20% in the EU/rest of world. How did almost 60% of these not-so-talented friends land in other countries? Did they gain sound knowledge in those 48 months or later? I really _really_ doubt that. It's because we are low-cost workers and got close to Indian IT management. Typical Indian IT managers are ass*. They want boot-lickers, not skilled developers. So it's easy for unskilled people to land in other countries in the name of being a skilled person. I know a few guys who worked in non-IT companies (and later created fake IT experience), joined these body shops, and are now living in the US.

I assume almost 80-90% of IT Indians in the US are helping American companies simply because they are low-cost workers. It's not like US companies can't find local talent to fill these roles; they want a low-cost solution.

If you got laid off due to low-cost workers, please remember: it's the US companies, the IT body-shop bosses, and the stockholders who benefited financially from these layoffs, not necessarily the worker who replaced you or the poor outsourced IT slaves.

One simple solution: tighten the visa-interview process, make it more like a real IT company interview (ask about data structures and algorithms, etc.). This will ensure immigration rates from body shops drop from 60% (from my numbers above) to 5% max.

---

My personal experience with Indian and US/foreign freelancers differs greatly. Even though US/foreign freelancers are costly (1 USD : 66 INR), I find them worth it. They go out and put in extra effort to finish the task. Indian freelancers just want to finish the task quickly and get paid. I find it amusing that US companies hire us for a supposed lack of talent there :)

10
johndubchak 15 hours ago 3 replies      
At this point, given both the corporate abuses by American companies and the H1-B holding companies, and the frequency with which we've seen the more expensive American worker replaced with a much cheaper "guest" worker, I believe we need to ask ourselves: do we have enough surplus unemployed American workers that the H1-B program might not be necessary for a year or two, until we've managed to get them back to work?
11
triplesec 15 hours ago 1 reply      
Worth noting the headlines of the 'related coverage':

Lawsuits Claim Disney Colluded to Replace U.S. Workers With Immigrants JAN 25, 2016;

Large Companies Game H-1B Visa Program, Costing the U.S. Jobs NOV 10, 2015;

Toys R Us Brings Temporary Foreign Workers to U.S. to Move Jobs Overseas SEP 29, 2015;

Pink Slips at Disney. But First, Training Foreign Replacements. JUN 3, 2015

12
MaggieL 13 hours ago 0 replies      
It's always amazing that the H-1Bs who have these rare skills that are unavailable in US citizens nonetheless always seem to require training by the native workers they are replacing.
13
dghughes 10 hours ago 1 reply      
In Canada, the Temporary Foreign Worker (TFW) program was changed. It had gotten to the point where entire industries changed overnight: out with long-time employees and in with near-slave-wage workers (many people just refer to the people as TFWs).

It got so bad it became a huge political hot potato and the law was changed resulting in many TFWs disappearing as fast as they arrived.

The law here was similar to the TFW laws in the US and it was abused the same way. The law was meant as a way businesses could get help by hiring cheaper labour if they couldn't find local workers. But of course hiring someone at $5/hour versus $11/hour is strong motivation for a business to cheat.

This was every industry too, each with its own preferred ethnicity: mining (Chinese), fast food (Filipino), IT (Indian), agriculture (Central American). It's only now changing back to local people born in Canada.

I don't have any ill will against the TFW workers it's the businesses who are the ones who ruined the purpose of the TFW law and now have to suffer for it.

14
johngray0 14 hours ago 0 replies      
A couple of things about the H1-B:

1. While H1-B workers are cheaper, they are also pretty much stuck with their employer. Not 100%, but much less mobility than a citizen. Sorry, this reeks just a wee bit of indentured servitude.

2. If you take corporate execs at their word that raising H1-B caps is good for the country, then it would be good for a journalist to probe: what about a 10M worker cap? Or a 1.5 billion worker cap? That would make America even stronger, no?
15
xeropho 8 hours ago 0 replies      
Immigration for "skilled" workers is flawed in USPlease look at other countries ( point based system ) and for the love of sanity just adopt best practices ...flaws: 1) Skilled worker immigration should be based on "skills" (country of origin is incorrect: Indian and Chinese immigrants suffer, US workforce also suffers) 2) Immigration should be tied to individuals, influence from employer has conflict of interest. (employer wants to make money even if it comes at the cost of employees ability to immigrate)3) Immigration as whole should be based on what new individuals bring to the table (skills, age when joining workforce, true intent and ability to adopt new country)

Point 3 is complex and involves true value for the US or the country of adoption. Immigration overall is not as complex or difficult as most politicians publicize; it has become far more political than it needs to be. Thank you, President Obama.

16
35bge57dtjku 11 hours ago 0 replies      
I've never even had the option to sign a non-disparagement clause for a settlement like that. I've had to support my family with what I already had, whether I liked it or not. So what they did is shitty, but I have a hard time feeling bad for the 1% who complain that they have the option of getting more money by signing away their rights if they so choose.
17
pm90 13 hours ago 0 replies      
Another commenter alluded to it, but I find both the article and the discussion miss what is really going on. It seems like a lot of firms are basically outsourcing their IT workforce; something that has been going on for a long time now.

For most businesses, IT is a cost center, and if that cost can be minimized, they will do so, full stop. What the article doesn't go into detail on is whether those workers that are being trained will continue to work in the US, or whether they are in the US temporarily to understand the IT infrastructure and processes. It seems to me that the latter is the case here. Again, outsourcing has been going on for the longest time, and it really confounds me how many times the same issue will be brought up.

18
FLengyel 4 hours ago 0 replies      
These companies seem not to believe in the free market if they suppress the free exchange of information about their labor practices.
19
johngray0 14 hours ago 0 replies      
"Marco Pea was among about 150 technology workers who were laid off....." Going on a limb but guessing Mr.Pea might be of hispanic origin. Most talk about Trump has been tone, but little of substance. Sizzle vs steak, and all. My hunch is that most of Pundits&Pols class focus too much on tone and too little on what Trump would actually likely do. And that there are many Mr.Pea's for whom, while the would prefer nicer words coming out of TV set, are more concerned with what their wallets are saying. And that while Trump might take us to uncertain unknown, Clinton is doubling down on a crappy known.
20
fiatmoney 14 hours ago 0 replies      
There is a simple solution, particularly in the context of congressional or other government investigations: subpoena them. Nondisparagement clauses don't & can't cover compelled testimony.
21
thegayngler 12 hours ago 1 reply      
It's just businesses abusing the system. We need more legislation in place to keep businesses from harming Americans and shooting themselves in the foot long-term. American workers who support unfettered capitalism always end up wondering why they are starving when they supported tax cuts for the wealthy and big business while leaving no money to pay for the fallout of their disastrous decisions. Sometimes you get what you asked for.
22
franciscop 13 hours ago 0 replies      
Totally independent of other things, naming it "Required" sounds better than "took $10,000 for zipping their lips".
23
sjclemmy 15 hours ago 2 replies      
Free market capitalism.

You can't have it both ways.

24
beatpanda 7 hours ago 0 replies      
So, is it time for a union yet?
25
the_ancient 11 hours ago 0 replies      
I have long been an open borders supporter, anyone that wants to come to the US (or any other nation) and make a life for themselves should be free to do so.

I am also 100% opposed to even the existence of the H1-B visa program. This program is a modernized version of indentured servitude that allows companies to take advantage of employees by conditioning their immigration status on their employment. These people are often coming from circumstances they do not want to return to: poverty, oppression, persecution, etc. So the threat of being fired and deported is a coercive force that employers use to exploit these workers.

I am fine with immigration. I am even fine if immigrants are willing to work for lower wages, provided that agreement is free from the threat of deportation...

26
lifeisstillgood 15 hours ago 2 replies      
I'm not sure I get this - are the workers coming in to replace them actually in the USA physically? Are they staying in the USA physically?

I mean is this offshoring / outsourcing or is it replacing with cheaper workers?

Why can't the existing employees compete for the jobs? I mean, WIPRO must have gone through some pretty big pre-sales workups, so who else was competing?

I think what I mean is that doing this in secret implies that no one doing the actual job was ever consulted about the viability of outsourcing or replacing them, so you never get a real understanding of the costs or opportunities for improvement (essentially automating a bad process).

In summary - any company that does this cheap shot is usually one that is going to get its ass handed to it by a better more automated competitor.

27
known 9 hours ago 0 replies      
To promote Entrepreneurs/Local Jobs

1. Impose tax on corporate revenues, not profits

2. Regulate market capitalization of corporations

28
golergka 7 hours ago 0 replies      
Honest question: if a guy in India can do the same job as a guy in the US but, coming from a country with a worse economy, agrees to do it cheaper, why would I sympathize with the American guy in this situation?

Honestly, to me it seems like the American working class was really privileged compared to the world's population in the 20th century, and globalization finally brings some equality, which might not be a good thing for the American middle class but is a great one for workers from China, India, and all other third-world countries.

29
known 9 hours ago 0 replies      
30
VladKovac 14 hours ago 1 reply      
Sorry, I don't feel bad for any of these people. Even the poorest Americans have better material conditions than a lot of Indian workers. Why do you think the Indian workers are willing to work for less?
31
pmarreck 9 hours ago 0 replies      
> Two years later, his work with a local tech contracting company pays $45,000 a year less than his Eversource salary. Many of his former co-workers are also struggling, Mr. Diangelo said, but stay quiet to avoid provoking the company.

So basically, the company wasn't making enough money to support the affluent pay of its workforce so it had to take drastic measures to stay afloat.

If you are suddenly making $45,000 a year less than you were... Perhaps you were getting paid above your true market value

32
ccrush 11 hours ago 1 reply      
Why not ask the recruiter or company you are applying to whether they are willing to sponsor or work with an H1B visa (pretend you are in that position), and move on if they say yes. It will be interesting to see how well these companies do when all they can hire are H1B workers. After all, they can only get so many of those visas, and there is only a 30% chance that the application will actually be approved. They do need a certain number of US workers to have a stable workforce. With that gone, all their projects will be at risk of failure. How much of this risk do you think they can keep up before they are forced to give up these H1B abuses?

I say abuses because there are clearly qualified local candidates willing to work at the market rate, but they lie about the market rate and lie about not finding local candidates so they can hire their cheap labor.

Ideally, they should have a Job ID assigned to all jobs where they are considering H1Bs, and post the name, wage, and qualifications of the hired candidate if they hire an H1B candidate. That way, potential employees who were passed up can see who they were passed up for, and complain to the right people if they were unfairly skipped over for an employee with poorer qualifications just because they were cheaper, or would be cheaper in the long run because they would never get a raise or benefits or unemployment, or pay into Social Security and Medicaid.
33
Hondor 7 hours ago 1 reply      
This is simple "they stole our jobs" rhetoric. It's just like taxi drivers with their "Uber stole our fares". If you can't compete, go to another market. If you're already overcharging, charge less. "Depressing local wages" is a good thing - it reduces the cost to society of getting productive things done.

The other argument about foreign workers being exploited is a clear sign of people pretending to care but really not caring. Sure there are some who were tricked into debt traps, but for most, they know what they're getting into and they know it's better than what they have back home. So they're making a step up in life. You want to kick them back down because you care about their welfare? What it means is you only care about people in America and once they leave, they lose their status as worthy human beings who deserve good working conditions.

I used to be a foreign laborer. I was paid minimum wage to do grunt work that locals didn't want. It was wonderful. The currency was worth more in my home country than where I was working. It helped me pay off my student loan. I would have hated to be forced out by someone trying to protect me from myself. My coworkers loved it too, they'd send money home to help their parents run their farms and pay off their own loans. They'd laugh at the lazy local workers who were mostly overweight and doing the same job more slowly.

25
Coursera shuts access to old platform courses reachtarunhere.github.io
538 points by reachtarunhere  1 day ago   236 comments top 44
1
latenightcoding 1 day ago 6 replies      
It is truly sad to see Coursera getting greedier by the day. I can honestly say this website changed my life. I was living in a third world country and still in high school when I enrolled in Andrew Ng's machine learning class, and thanks to that MOOC I was able to get a machine learning job building recommender systems for a Canadian company straight out of high school. There are plenty of amazing MOOCs that Coursera has completely removed from the website or that are only available for people who want to pay upfront. Please don't be like Udacity, Coursera.

BTW you have until June 30 to download your courses.

2
znpy 1 day ago 9 replies      
In case you want to download as many courses as possible before they fade away, here are some notes.

Please forgive any mistakes; I wrote this a bit in a hurry.

===========================

1) Spawn a virtual server on DigitalOcean.

I am using the $40/month plan in order to have 40GB of space, but my plan is to shut it down in a day or two.

The advantage is to have storage space AND super-fast connection.

If you don't want to spend money, here is my referral code:

https://m.do.co/c/867be540644c

This will give you 10$ credit for free.

2) Install screen, python-virtualenv, python3, python3-pip

3) edit ~/.bash_aliases:

 #!/usr/bin/env bash
 alias download="./coursera-dl -u <<username>> -p <<password>>"
 alias download_preview="./coursera-dl -b -u <<username>> -p <<password>>"
4) Install coursera-dl: see https://github.com/coursera-dl/coursera-dl#alternative-insta...

name the virtualenv "coursera", and place it in the root home directory

5) patch to use python3:

* pip3 install -r requirements.txt

* patch coursera/coursera.dl:

 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
Becomes:

 #!/usr/bin/env python3
 # -*- coding: utf-8 -*-
6) Edit ~/.bashrc

Add these lines at the end of the file:

 cd coursera
 source bin/activate
 cd coursera
=======================

The setup process is done. Here is how to use it:

1) Start a screen session: `screen -S coursera`

2) In that session, run: download_preview compilers-004

3) You can download more courses in parallel by creating another window (C-a c) and typing download_preview $coursename.
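The per-course invocations above can also be generated in one go. Here is a hypothetical helper (a sketch, not part of coursera-dl itself); the `<<username>>`/`<<password>>` placeholders and course slugs follow the aliases defined earlier, and nothing is executed automatically - each printed line is meant to be pasted into its own screen window:

```python
# Hypothetical helper: build one coursera-dl command line per course.
# The -b flag requests preview/unauthenticated download, matching the
# download_preview alias above.
def batch_commands(user, password, courses):
    template = "./coursera-dl -b -u {u} -p {p} {c}"
    return [template.format(u=user, p=password, c=c) for c in courses]

# Print one command per course; paste each into a separate screen window.
for cmd in batch_commands("<<username>>", "<<password>>",
                          ["compilers-004", "ml-005", "algo2-003"]):
    print(cmd)
```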

3
haches 1 day ago 2 replies      
Worthwhile to point out that edX is a non-profit [1] unlike Coursera [2] and Udacity [3].

[1] https://www.edx.org/about-us

[2] https://www.crunchbase.com/organization/coursera

[3] https://www.crunchbase.com/organization/udacity

4
lovelearning 1 day ago 2 replies      
I feel Coursera's pricing strategy sits at two opposite extremes and misses an entire range of options in the middle. They either make an entire course completely free with no option to pay even if the student wants to, or they put it behind a paywall where one can't even start without paying.

They, along with their institutional partners, are missing out on revenue from people like me who'd like to pay some amount, but not the amount they fix. I'm happy to pay some amount without a certificate. They should consider giving a pay-what-you-like option for all their courses.

5
wibr 1 day ago 6 replies      
https://class.coursera.org/ml-005/lecture Machine Learning, Andrew Ng

https://class.coursera.org/algo-003/lecture Algorithms 1, Tim Roughgarden

https://class.coursera.org/algo2-003/lecture Algorithms 2, Tim Roughgarden

Edit:https://class.coursera.org/compilers/lecture/preview Compilers, Alex Aiken

What else?

6
tgokh 1 day ago 2 replies      
Coursera actually converted a course I was enrolled in to paid-quizzes-only, while I was actively enrolled in the course and in the second-to-last week of eight. They finally converted it back after 2 days of many of us contacting support, but never gave me a straight answer as to whether it was accidental or intentional :-/ Definitely lost my faith in Coursera as a platform over these recent changes.
7
mohsinr 1 day ago 2 replies      
Disappointed by Coursera and Udacity (they positioned themselves around free MOOCs and now they are taking back everything they offered).

More power to KhanAcademy and MIT Open Courseware! For staying true to their mission of providing Free Courses...

8
LouisSayers 1 day ago 4 replies      
If anyone else is wondering, MOOC stands for Massive Open Online Course.

It really gets on my nerves when people don't expand their acronyms when introducing a topic. Of course there are exceptions, but is MOOC really that common an acronym?! I just find it a bit inconsiderate.</rant>

9
avodonosov 1 day ago 3 replies      
Script to save course materials: https://github.com/coursera-dl/coursera-dl

I haven't tried it yet. Just was asking around how to save course materials (videos, slides, notes, etc) of an old platform course I want to return to sometimes. Got this advice:

 > app which can help you download all the
 > materials at one go.
 > https://github.com/coursera-dl/coursera-dl
 > Doesn't work all the time, but for old
 > courses should work.

10
raldu 1 day ago 0 replies      
I have been feeling more and more disappointed with their step-by-step paywalling of learners, and a general decline in community engagement. This decision to cut access to the old material is very short-sighted, and it will do more harm than good to their "business".

The old content would have been perceived as having historical value, as being among the first courses published on the first actual MOOC platform, not to mention the tremendous value those courses contributed by successfully reaching wide audiences around the world, changing many lives. Now they are giving themselves a bad image.

Coursera has been getting progressively worse. There is no community engagement. I cannot be surprised or engaged by the non-discussion going on, which is also the case with edX, by the way. I have done mentorship in one of the paid courses at Coursera, and all I could do was mechanically answer technical questions. Nobody cared about the critical aspects, nobody cared about generating interesting and thought-provoking discussions, even when some mentors encouraged it. As mentors, what we were doing was just free technical support for the course providers.

Further, the recent content at best feels like "best seller" stuff for whatever industry is trending anyway. Even the UI has been getting slower.

This example shows the value of backing up (and further sharing) data stored in the cloud. Mostly we do not think it is necessary to back up, since the data is going to stay there "forever", right?

As a final note, I was surprised that nobody mentioned FutureLearn (https://www.futurelearn.com). It is a new MOOC platform with a somewhat "European" feel to it. I have surprisingly had the best community experience, with quality discussions, in one of the courses provided there. The overall content is very diverse and interesting. And yes, the UI is faster!

11
jsturner 1 day ago 0 replies      
A good friend of mine who works at Coursera attributes their descent to the brain drain they've had over the past year.

Apparently, management is sweeping the problem under the rug, and forcing a false rhetoric that the departures were good. Even their Glassdoor page[1] seems doctored now. Sad times.

[1] https://www.glassdoor.com/Reviews/Coursera-Reviews-E654749.h...

12
osivertsson 1 day ago 1 reply      
I agree that it is lousy of Coursera and Udacity to remove/limit access to courses that used to be free and contain valuable fundamentals.

I don't agree that the golden age is necessarily over though. The MOOC space is getting crowded, just look at all the offerings at https://www.class-central.com

MOOCs by government-backed traditional universities from Europe / Asia are taking over a large chunk of the "market", meaning that Coursera, Udacity, etc. are finding it difficult to get any returns.

13
brhsiao 1 day ago 11 replies      
I'll probably get shot down for being that typical negative HN comment, but do MOOCs like Coursera actually do much in the way of making education more accessible or society fairer? All the content offered on Coursera already exists on the internet. Really motivated people will aggressively look for study materials, and they generally don't have a problem finding them.

It seems to me that it's actually the internet that improves accessibility and fairness, through which curated collections of study materials are then delivered as MOOCs. Which is terrific, but then it's hardly shocking that they'd eventually have to monetize themselves. We've seen worse attempts to crack down on the internet.

14
dimdimdim 6 hours ago 0 replies      
Dear Coursera,

Thanks for all the free courses for the last couple of years. I understand the need to be profitable and make this a real business so you don't have to fire all your good employees who have helped provide free education for so long.

I for one welcome what you are doing - as I understand that it's impossible to sustain a free model forever.

All the best!

15
WalterBright 1 day ago 1 reply      
If I was a prof, I'd have every one of my lectures recorded and put online for free. I wish I had recorded the lectures I attended in college. Not recalling the lectures means the notes I took in class don't make much sense.

Heck, I record all of the presentations I do, and they get posted for free on the intarnets. I put a fair amount of work into them - why hide them?

16
bradleyjg 1 day ago 2 replies      
I just got a somewhat confusing email canceling my enrollment in the ever-elusive Cryptography II course. I guess this is what that's about. If so, it's too bad; I had a great experience in Crypto I, but it wouldn't have been nearly as good without the quizzes and assignments.
17
linux_devil 1 day ago 1 reply      
There is no point in calling them MOOCs if they are not 'O'pen anymore.
18
bitL 17 hours ago 0 replies      
They should implement per-region pricing depending on GDP PPP or similar. Paying for a course means commitment; while it's no big deal for me to shell out $50-100 for a course, it's a big deal for Eastern Europe, Africa, the majority of Asia, etc. If they adjusted prices to locally reasonable levels, they could increase both profit and completion rate.
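A back-of-the-envelope sketch of what such per-region pricing could look like. The PPP factors and the $5 floor below are invented for illustration; a real system would pull published PPP conversion data rather than hard-code numbers:

```python
# Illustrative per-region pricing table. These factors are made up for
# the example, not World Bank figures.
PPP_FACTOR = {
    "US": 1.00,  # baseline
    "RO": 0.45,  # hypothetical Eastern Europe figure
    "IN": 0.25,  # hypothetical South Asia figure
}

def regional_price(base_usd, region):
    """Scale a US base price by the region's PPP factor, with a $5 floor."""
    factor = PPP_FACTOR.get(region, 1.0)  # unknown regions pay full price
    return round(max(base_usd * factor, 5.0), 2)

print(regional_price(79.0, "IN"))  # 19.75
```

The floor keeps payment-processing fees from eating the whole price in very low-PPP regions; whether that tradeoff makes sense is exactly the kind of commitment-vs-access question the comment raises.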
19
Myrmornis 19 hours ago 0 replies      
This seems very sad. I was just considering taking the neuralnets-2012-001 class from 2012, and so although I can download the materials, I won't have access to any of the discussion forums, etc. Is there an official position statement from Coursera on this decision?

I'd be happy to pay a bit for it if that's what they want. Or is their view that Geoffrey Hinton teaching neural networks in 2012 is just kind of cruft cluttering up the internet?

20
veddox 22 hours ago 0 replies      
Shame! I've done courses on both Udacity and Coursera in the past. Not very many, but they did shape me and taught me stuff I either couldn't have learned any other way or could not have learned as well.

I understand that they are both businesses that need to take care of their finances if they want to survive. Nonetheless, I am still disappointed - when they started out some years back they were all full of vigour and idealism about free education, and somewhere along the line they have been quietly dropping that idealism. They didn't even try to explain why they were doing what they were doing and why they were changing. In essence, they betrayed who they were at the beginning, and that's what makes me sad.

So thank you, Udacity and Coursera, for who you were and what you gave me, but I fear our roads shall part here...

21
z3r0c00l 1 day ago 0 replies      
It would be great if you guys made a torrent out of the downloaded courses
22
master_yoda_1 1 day ago 0 replies      
Please teach a highly complex technical subject for free for one month. Then write these kinds of blog posts.

Otherwise keep calm and mind your own business ;)

23
plinkplonk 1 day ago 3 replies      
I always wanted to work through the algorithms MOOCs from Princeton, but I kept putting it off. Profs Sedgewick and Wayne are phenomenal teachers. Anyone know if these courses will be available in the future?

(probably not, since they had no certificates etc, and I don't see them going along with paid-for-quizzes courses, but it doesn't hurt to ask)

24
sreeramvenkat 1 day ago 2 replies      
I hope edX does not follow Coursera's way.
25
ajmurmann 22 hours ago 0 replies      
I understand that they need to make money. However, something like this provides so much value to society that we as a society have a large interest in keeping it as available to everyone as possible. Therefore I think that we need to have a publicly funded platform like this or one run by a non-profit like Wikimedia Foundation. The lower the barrier for everyone to take classes the better.
26
greenmoon55 1 day ago 1 reply      
Would anyone be kind enough to provide a script for downloading assignments and quizzes?
27
etiam 1 day ago 0 replies      
Does anyone here know of an automated solution to get a faithful save of a whole course? As I recall, coursera-dl doesn't capture quizzes and forums, for instance.
28
mattfrommars 20 hours ago 0 replies      
I have to blame my procrastination and never having the habit of completing or even starting any of the courses I had 'enrolled' in. Now that Coursera is shutting these down, I'm leaving all my hopes to the knights of the internet to archive them. Will download and hopefully get back to them.
29
znpy 1 day ago 0 replies      
I wonder what would have happened with a flat subscription model... Like "Pay 19.99 monthly and take whatever class you want".
30
wtf_is_frp 1 day ago 0 replies      
The only thing I hate about the new platform is that you can't access the info until a week after you enroll in self-paced courses. It is fucking stupid. Beyond retarded.
31
agumonkey 1 day ago 0 replies      
I liked their first offerings a lot, very very well done too, and a very capable platform compared to some others. It's sad that the model couldn't be sustained.

ps: about downloading the course pdfs and videos... it's really the low-hanging part, in the sense that lots of universities have open pages with lectures and sometimes videos too. What MOOCs brought were exercises + auto-graders (+ student groups).

32
znpy 1 day ago 0 replies      
33
andretadeu 18 hours ago 0 replies      
I confess I didn't get the point about shutting down the old platform. The new platform still allows you to enroll in a course for free, and the content of several of them was updated. Some courses haven't been offered a second time since 2012 or 2013 due to massive dropouts and very few students finishing them. Nowadays I'm taking some courses for free at Coursera, such as 'More Chinese for beginners'. I chose to pay for several, and others I attend for free.
34
Dowwie 1 day ago 1 reply      
The author of this blog post has taken to a soapbox to shout, "The Golden Age of MOOCs is over" and that he hates Coursera.

Wow. Really? Do I want to even read what this is about? Fine.

"Of late we have seen MOOC providers caring less about the students and more about the $$$".

Oh boy, here comes the assault on reason.

"they should stop the game of telling people that they care for students and are here to provides universal access to the worlds best education."

Yeah, that's enough for me. Tarun Vangani needs a reality check.

Coursera has brought much good to the world. It has to provide good to the world in an economically sustainable way.

It would be great if the MacArthur Foundation gave its $100 million grant to Coursera so that it could continue to focus on its mission. Hopefully, Coursera qualifies and applies for it.

35
simunaga 23 hours ago 0 replies      
Why are you disappointed? If everyone had certificates, how valuable would they be to employers? It's odd. Just think about it for a minute.
36
hyperpallium 1 day ago 0 replies      
This is an opportunity for free courses.

Freemium doesn't just entice customers; it also denies oxygen to competitors.

The specific difficulty for courses is reputation - but Wikipedia has managed it, so it's possible.

37
kercker 1 day ago 0 replies      
I cannot see how the Arab Spring made society fairer. The internet helped the Arab Spring develop, but the Arab Spring was not such a good thing; look at what it left the Middle East.
38
znpy 1 day ago 0 replies      
So I spawned a virtual server and I am downloading some courses I wanted to take.

What a shame. Farewell, free learning.

39
the_wheel 1 day ago 0 replies      
You can't deliver on your mission of democratizing education, or operate as a VC-backed business (which enables these attempts in the first place), if you're not making moves toward profitability. These companies are pioneering a space and searching for a viable business model in the process. They're surviving.
40
Rifu 1 day ago 0 replies      
To save people like me a trip to google, MOOC stands for Massive Open Online Course. Today I learned!
41
fiatjaf 1 day ago 0 replies      
What is a MOOC? These people should use the <abbr> tag.
42
nickpsecurity 21 hours ago 0 replies      
This sort of thing would obviously happen, as Coursera is a high-value, VC-backed firm. This sort of thing is best done open and nonprofit. One I know like that is edX: a non-profit with open-source software and courses from MIT, Harvard, etc. Check it out, people.

https://www.edx.org/about-us

43
ZenoArrow 1 day ago 0 replies      
Thanks to the author for the heads up; it would be a real shame if these courses were removed without someone downloading all the material first (I don't care if it's against the ToS, I still think a torrent is the way to go; free education has a greater value than copyright protection).

I'm a bit confused about which courses will be removed and which ones will stay. Is there a list of courses that are present on the old platform but not on the new platform? Also, I don't know where I'd access the old platform and where I'd access the new platform. Am I right in thinking this is a course on the new platform?

https://www.coursera.org/learn/build-a-computer

If so, where do I go to see the old platform?

44
wonkaWonka 1 day ago 3 replies      
Learning is lovely, but without the advantage of being able to directly apply what you've learned toward actually improving your life, it's all just so much education porn.
26
Why do they love electric cars in Norway? bbc.co.uk
60 points by m-i-l  15 hours ago   43 comments top 11
1
reitanqild 3 hours ago 1 reply      
Live here. My brother-in-law just bought the Leaf. Among the people I talk to, it is always the same reasons:

* price (leaf, vw etc)

and/or

* love Tesla: compared to any other new 400+ hp luxury car, the Tesla is a steal for the moment because of the purchase price (around 600' NOK for the entry-level Model S is cheap compared to any other new sports car around here).

and/or

* being allowed to drive in the bus lane

On top of this you save on fuel and toll roads, park for free in a lot of places, get free ferry tickets, etc. etc. My former neighbours, who drive 40 minutes to and from work, said not buying the Leaf would be failing basic math.

Edit: and let's not forget being allowed to drive in the bus lane.

2
sandworm101 9 hours ago 4 replies      
One very small part of the Arctic Circle. That title will be appropriate once electric cars are popular in the northern regions of Canada and Russia, which are by far the largest occupiers of the Arctic.

Some data: it looks like the Leaf's range is basically halved in cold weather. That, and the greater distances between everything in the true north, say to me that they are a long way from adoption.

http://www.greencarreports.com/news/1087587_what-does-it-tak...

FYI, running the heater in a gas-powered car does nothing to the range. That heat comes from engine coolant. You are doing the engine a favor - something to remember if your engine is overheating.

3
LoSboccacc 13 hours ago 4 replies      
So basically it's incentives, incentives, and incentives on top of incentives.

Wouldn't call that love, but hey, whatever - it's still a good result, right?

4
pipio21 1 hour ago 1 reply      
If you travel to Norway, you see they have sea, mountains, and water, which means very cheap (hydro)electric energy.

Probably the only place in the world with similarly cheap electric energy per person is Iceland.

When I was in Iceland I could not stop thinking about the possibilities for electric cars there. I have not lived in Iceland in winter, though, which certainly brings more problems.

5
orik 2 hours ago 0 replies      
I can answer this question: because of tax benefits.
6
Arnt 4 hours ago 1 reply      
The oil age is ending in Norway. It's considered a fact of life.

You all have heard of the petroleum fund. That money is going towards pensions. Of course it's easier to buy an electric car if you're used to thinking of the oil age as something that ends within your own working life.

7
shermozle 12 hours ago 0 replies      
That tablet app for the Nissan Leaf perhaps isn't the best demonstration of great technology...

https://www.troyhunt.com/controlling-vehicle-features-of-nis...

8
beloch 12 hours ago 3 replies      
I wonder how well EVs cope with Arctic conditions. In areas near the ocean, temperatures generally don't get too extreme (e.g. Anchorage), but inland they can easily get into ranges that must be challenging for batteries to cope with.
9
davnn 4 hours ago 0 replies      
Combine incentives with enough purchasing power = profit.
10
dba7dba 5 hours ago 1 reply      
Look up 'Bjørn Nyland tesla'.

It's the most-watched vlog about the Tesla Model S by someone who lives in Norway. He used to live in Oslo but has also lived in the very far northern part of Norway. He offers very detailed reviews of the Tesla. The beautiful winter scenery of Norway is a bonus.

11
1024core 8 hours ago 4 replies      
I was feeling all good for Norway, till I read this:

> It helps that Norway is also the biggest oil producer in Western Europe and the world's third largest exporter of natural gas.

Hmmm.... You know what would really help the environment, Norway? If you stopped drilling all that oil and gas. Your measly consumption of gas isn't the problem; your mega export of oil is.

27
Thoughts on Algolia vs. Solr and Elasticsearch opensourceconnections.com
85 points by softwaredoug  16 hours ago   30 comments top 8
1
latenightcoding 10 hours ago 0 replies      
I remember a previous employer asked me to talk to one of the authors because I have NLP and search engine design experience (I forget which author). He kept saying they don't do big data and that most NLP techniques other search engines use are irrelevant because their product works with the type of search they do. I asked a couple of complex questions, which they disregarded as not important for their product.

This was probably 3 years ago or less so the product might have changed, but what I got out of that chat was that Algolia is for websites that want to add a search functionality with a nice UI without much hassle.

But if you are doing something complex, it does not compare to Solr or Elasticsearch.

Again this was a while ago.

2
softwaredoug 12 hours ago 1 reply      
(Author here) A case in point where Elasticsearch shines is one of yesterday's Hacker News articles:

http://sujitpal.blogspot.com/2016/05/elasticsearch-based-ima...

Here Elasticsearch is used as a framework where "search" really means some kind of distributed feature similarity. But even text search where you're incorporating a lot of data science, external signals, or semantic awareness can fall more into the category of distributed feature similarity.

Algolia's strength is in a simpler path to straightforward and easily understood search. I'm not sure it's the path to amazing and deeply customized search (or search-driven features). Algolia gets far more right out of the box at the configuration level, which many need when they can't put a team behind building an amazing Solr/ES experience.

My hope is Solr/ES can learn a thing or two in the ease-of-use department with relevance!

3
dansingerman 14 hours ago 0 replies      
I've had great experiences with algolia. I've used it on numerous client projects, and my own product https://appapp.io .

It is great as an out of the box search solution that will probably fit 90% of cases. It is blindingly fast and my clients love it.

I expect there are some cases where elastic or solr will be a better fit; but these are most definitely edge cases for very specific search requirements. Algolia is a killer app for generic 'good' search.

4
bladecatcher 6 hours ago 1 reply      
I recently had to build product search functionality. I was able to pick up nearly all the important features of Elasticsearch in about 2 days and had live indexing and search up and running within 2 more days. Of course, this wasn't production-level scalable code. But it was fairly easy to hook a logical decoding output plugin onto Postgres, which would stream database mutations (in the form of JSON) to a Kafka cluster, from where the Elasticsearch layer would ingest the data and index/update it appropriately.
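The glue step in a pipeline like that can be sketched roughly as below. The change-record field names ("kind", "columnnames", "columnvalues", "oldkeys") are assumptions loosely modeled on wal2json-style output, not an exact format, and the index name is a placeholder:

```python
# Sketch: convert one JSON change record streamed from Postgres logical
# decoding into an Elasticsearch bulk action dict. Field names here are
# illustrative assumptions; Kafka consumption and error handling omitted.
def change_to_action(change, index="products"):
    # Rebuild the row as a dict from parallel name/value lists.
    row = dict(zip(change.get("columnnames", []),
                   change.get("columnvalues", [])))
    if change["kind"] == "delete":
        # Deletes carry only the old key columns, not the full row.
        old = change.get("oldkeys", {})
        doc_id = dict(zip(old.get("keynames", []),
                          old.get("keyvalues", []))).get("id")
        return {"_op_type": "delete", "_index": index, "_id": doc_id}
    # Inserts and updates both become full-document index actions.
    return {"_op_type": "index", "_index": index,
            "_id": row.get("id"), "_source": row}
```

Actions in this shape could then be handed to the Elasticsearch client's bulk helper; treating updates as full re-indexes keeps the consumer idempotent, which matters when Kafka redelivers messages.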
5
ronack 13 hours ago 3 replies      
In my experience, Algolia is OK for toy sites but not for anything even moderately complex. Even on HN it's often a challenge to find the right results. They seem to have gone with the philosophy that speed is more important than relevance. Keyword matching is only one small aspect of a good search experience.
6
sciurus 9 hours ago 0 replies      
Of course, if you like what Doug has to say and are building your own search functionality on top of Solr or Elasticsearch, you'll want to buy his book:

https://www.manning.com/books/relevant-search

7
krokoo 13 hours ago 1 reply      
One of the least talked about search engines is NodeChef Cloud Search. Anybody using any language can use it insofar as they have a MongoDB driver. Could anyone who has used it comment on it? https://nodechef.com/nodechef-search-and-sql-analytics
8
mozumder 9 hours ago 2 replies      
Why would one use Algolia over Postgres's built in text search features? Ex: http://rachbelaid.com/postgres-full-text-search-is-good-enou...

I already have Postgres search queries running on the order of 1-3ms for large queries, basically faster than Algolia, with seemingly the same feature sets.

I don't understand the value proposition over something like Postgres, for sites that already have Postgres databases. (And I'm sure other databases might offer similar built-in search capabilities, I just don't know those other RDBMSs)

28
What If PTSD Is More Physical Than Psychological? Evidence from new study nytimes.com
154 points by jcfrei  23 hours ago   90 comments top 17
1
Madmallard 18 hours ago 5 replies      
Or, get this, and this is a super radical idea, I know:

There is not much of a separation between physical and psychological in the first place. Every part of the body is interconnected--as people develop disease, mental symptoms often develop secondarily. Depression and anxiety are super common in all sorts of illness states. There are numerous studies showing cellular dysfunction and rampant oxidative stress connected to the more serious mental illnesses.

2
heisenbit 5 hours ago 1 reply      
A more accurate title for the NYT would have been

What if SOME PTSD is more physical than psychological.

There are a number of mental conditions that have more than one cause. The brain is a complex system: disturb it in one place or another, and at the surface it all looks the same. Computers are similar - wherever it goes wrong, on the screen it looks like a BSOD.

There is real value in finding one clear physical cause->effect link, opening targeted opportunities for treatment and prevention.

But then there are plenty of people with PTSD who have not been anywhere near a blast. It may be one piece of a huge puzzle.

3
parasubvert 18 hours ago 3 replies      
I found this article interesting but frustrating. Conflating PTSD (a mental illness that is caused by traumatic experience) with concussive traumatic brain injury isn't helpful.

I kept thinking it would be more interesting (and more aligned with the headline) to look at PTSD sufferers' brains when they didn't have a known physical injury to their brains.

4
sandworm101 5 hours ago 0 replies      
PTSD is common amongst soldiers, but it is far from a soldier's disease. It hits lots of people far from any hint of a battlefield. Train drivers suffer it (they may witness many suicides). Doctors suffer it. Some professions may even suffer it at a higher rate than soldiers (pediatric oncology).

If there is a physical injury that results in similar symptoms, then that syndrome should probably be given a different name in order that it be separated from the wider disease that seems to have no physical trigger.

5
k-mcgrady 19 hours ago 1 reply      
Shouldn't a distinction then be made between 'shell shock' and PTSD as experienced by people who haven't been in war zones? In other words, what soldiers experience is not PTSD; it's a different physical ailment (probably in addition to PTSD, actually).
6
slr555 17 hours ago 1 reply      
The title of this article is extremely poorly chosen. The DSM-5 criteria for PTSD define this condition in terms such that a great number of patients who have never been exposed to blasts of any kind meet the criteria. Yes, there may be significant numbers of patients like those the article describes, who suffer from a previously uncategorized blast-injury phenomenon that has sometimes been attributed to PTSD, but the author is drawing the wrong Venn diagram for the reader. The new entity does not begin to subsume the entire PTSD population. This matters because too many patients, whose symptoms have been stigmatized anyway, will now be left with the impression that "real" PTSD is only associated with a history of blast exposure.
7
ChuckMcM 20 hours ago 3 replies      
I can certainly believe that there is more going on than just stress, and the argument that compression waves travelling through the brain can damage it is pretty compelling.

The question then is how would you block it? How would you protect yourself from a compression (blast) wave such that it went "around" your head rather than through it. Is there some material that could conduct that energy preferentially?

8
_delirium 20 hours ago 1 reply      
A historical quibble with the article,

"It was first known as shell shock, then combat fatigue and finally PTSD, and in each case, it was almost universally understood as a psychic rather than a physical affliction."

This underplays a pretty significant history of doctors considering it a physical condition, as the old name "shell shock" itself suggests. The article does mention one such doctor, Frederick Mott, but there were a number of them, and the debate recurred during World War II, when a number of doctors used the term "postconcussion syndrome", again implying a disorder caused by physical brain trauma. Here's a review article: http://www.simonwessely.com/Downloads/Publications/Military/...

The article is right though in that in the past 50 years or so it's been mainly understood as a psychological disorder.

9
ruffrey 19 hours ago 6 replies      
With a background in philosophy and neuroscience, I cringe at articles like this that draw a distinction between the psychological and the physical. They are simply the same thing.
10
jcoffland 16 hours ago 1 reply      
The desire for such a distinction is strong. We prefer to attribute illness to physical causes because in our society we blame the patient for mental but not for physical illness. This disparity drives the push to find physical causes to which we can more comfortably redirect blame. Such as DNA, leptin, bacteria, enzymes, etc. It is wise to be skeptical of any new findings which fit too well with human desires and thus make "good" articles.
11
birdDadCawww 16 hours ago 0 replies      
It is a two-way street: physical damage can cause mental damage, and vice versa. Pure psychological damage takes some time to really cause physical damage, I would imagine. PTSD is a thing that we all suffer from as we get older, get bumps, and experience trauma. This is a great find. +1.
12
InclinedPlane 17 hours ago 0 replies      
Sigh. What they mean is "what if two different conditions with different causes have been previously considered to be the same thing?"
13
supgg 18 hours ago 0 replies      
And what about a weaponized myco? https://www.youtube.com/watch?v=sT25HhAVhhU
15
ajarmst 18 hours ago 0 replies      
What if we had decades of good research showing that the physical/psychological distinction is not a helpful one, and therapies that don't take both into account are generally less effective for most pathologies?
16
nickpsecurity 19 hours ago 0 replies      
I have it. I'm pretty introspective, too. I'd say it's almost certainly physical in many if not most cases. I can actually feel that. Thoughts and mental energy normally ran smoothly through my mind. After injury to back & center, especially the stress of it, it's as if a shockwave went through my mind blowing fuses or something. That's the best way I can describe the feeling and effect. Certain paths and things just aren't there or light up when they shouldn't. The wiring from input(s) to output(s) is broken where it's no longer performing the function it should.

As far as the stress aspect, it similarly ignites electrical and biochemical activity that flows through sensitive parts of the brain. We know those parts organize themselves expecting certain flows or weights. I'd default to the position that they could be damaged by overflows slamming them. They'd get damage resulting in impaired function, with rerouting of sorts attempted. The result would be replacing that function, or maybe new + old happening side-by-side with the old still broken. That would explain intermittent failures that seem to relapse to an imprint of whatever caused the stressor.

So, just a few thoughts combining what I learned studying those parts of the brain with my own experience of how a broken one works and doesn't.

17
lwwlww99 20 hours ago 4 replies      
What if not only PTSD but all psychological phenomena were, by their very nature, rooted in the physical chemistry of the brain?

How bizarre would that be?

29
Unraveling Möbius strips of edge-case data oreilly.com
8 points by wallflower  10 hours ago   discuss
30
Turn your handwriting into a font myscriptfont.com
129 points by dclaysmith  21 hours ago   54 comments top 14
1
WalterBright 17 hours ago 10 replies      
One of the charms of a printed book is the imperfection of the fonts and the impressions of the fonts. Each 'a' impression is slightly different - maybe a little higher, a little lower, a little blotchier, etc. But if I read an ebook, the letters are always identical.

I've often thought that if I wrote an ebook reader, I'd use a font that mimics the imperfections in printed works. I'd have maybe 20-30 different 'a' images, and select one randomly and then 'jitter' its positioning a bit.

I'd also use a background that looks like paper, rather than the perfect white or sepia ones current readers do. Heck, it would be easy enough to scan a few dozen blank sheets of paperback paper, and then pick one randomly for each page.
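The jittered-glyph idea above could be sketched roughly like this (the variant filenames, variant count, and jitter range are made up for illustration):

```python
import random

def pick_glyph(variants, jitter_px=1.0):
    """Pick a random scanned variant of a letter plus a small positional offset.

    `variants` is a list of glyph images (represented here by filenames);
    the offset is a uniform jitter in pixels, as the comment suggests.
    """
    glyph = random.choice(variants)
    dx = random.uniform(-jitter_px, jitter_px)
    dy = random.uniform(-jitter_px, jitter_px)
    return glyph, (dx, dy)

# e.g. 25 slightly different scans of the letter 'a'
a_variants = [f"a_{i:02d}.png" for i in range(25)]
glyph, (dx, dy) = pick_glyph(a_variants)
```

A renderer would call this once per character, so each 'a' on the page comes out slightly different, like a printed impression.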

2
corysama 20 hours ago 2 replies      
Has anyone managed to assemble a workable font based on Dijkstra's handwriting? Last I checked, there were a few attempts, but they were all missing characters.

http://joshldavis.com/2013/05/20/the-path-to-dijkstras-handw...

3
kazinator 9 hours ago 0 replies      
Better do it five or six times, and then switch among all those captured fonts in the document, to have some variation in the letter forms. Otherwise your document will be the typographical equivalent of a drum track from 80's synth pop. Ooh, hit me with that bit-for-bit identical snare drum sample again: snap, snap, ...
4
gravypod 13 hours ago 1 reply      
This might be a great crypto tool! I can turn my handwriting into a font and then keep printed hard copies of all of my data! No one will be able to read it!
5
alfanick 3 hours ago 0 replies      
While the idea is neat, there are artists who design fonts based on your handwriting with better quality. While I would sometimes like to use my handwriting digitally, you cannot create a "real" effect without having every ligature (two, three letters) written in the template.

Just a thought: instead of having every ligature, you could let a camera/tablet observe how you write certain "test sentences" and feed this data into an autoencoder neural network that would turn any text into "your handwriting".
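To make the autoencoder part of that idea concrete, here is a toy sketch only: a linear autoencoder in numpy that learns to compress vectors into a small code and reconstruct them. The "stroke" data is random, and the dimensions and training loop are invented purely for illustration; real captured pen data and a far richer model would be needed for actual handwriting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for captured pen data: 200 "stroke" vectors of 16 numbers.
# In the comment's idea these would be pen positions recorded while
# writing test sentences; here they are random noise.
X = rng.normal(size=(200, 16))

# Linear autoencoder: compress 16 dims down to a 4-dim code and back.
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))

def loss(X, W_enc, W_dec):
    """Mean squared reconstruction error."""
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

loss_before = loss(X, W_enc, W_dec)

lr = 0.01
for _ in range(500):
    Z = X @ W_enc            # encode
    err = Z @ W_dec - X      # reconstruction error
    # gradient descent on the squared error (constant factors folded into lr)
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss_after = loss(X, W_enc, W_dec)
```

After training, reconstruction error drops as the encoder learns the dominant structure of the data; a generative handwriting model would then sample from the learned code space.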

6
phaed 18 hours ago 1 reply      
Which are the inner auxiliary lines? The black grid lines? The light square box lines? The inner light horizontal lines?
7
treystout 8 hours ago 0 replies      
In case you want something a bit more real than what font engines offer, check out https://handwriting.io (disclosure: I work there)
8
soylentcola 19 hours ago 0 replies      
Huh. I remember having a program that did this on my old Toshiba convertible laptop back in 2005. I got it on eBay for fairly cheap and it was neat to have a pretty lightweight (for the time) laptop with a screen you could flip around and use as a tablet with active digitizer.

Wonder if I still have the old font file floating around anywhere.

9
conchy 18 hours ago 1 reply      
There used to be a company that offered something like this in MacWorld and PCMagazine in the early 90's.
10
khedoros 14 hours ago 0 replies      
I've made a font of my handwriting in the past. I used a similar template, scanned it, and used a TrueType font creation program. It auto-converted the image into vectorized curves; I selected the parts of each curve and put each glyph into a box. In each box, you could change the letter spacing, the kerning on all sides, and the alignment of the pieces (the dot over the 'i', for instance).

I went through probably a dozen iterations, tweaking the spacing and alignment until it looked fairly natural. Now, I wonder if there was a way to automate that, or if the fonts created with this site take a little manual tweaking as a final polishing step.

11
Crito 12 hours ago 0 replies      
Neat idea, but what's the point of a font that nobody can read?
12
ecesena 18 hours ago 0 replies      
I can see an application for creating a company/product logo. I'd use it for my side projects.
13
parennoob 20 hours ago 2 replies      
Are there potential security concerns here?

I know not many people use handwritten letters any more, but this potentially gives this company a complete sample of your handwriting, which they can then use for whatever purpose they want.

14
Aelinsaar 19 hours ago 0 replies      
You know, I could see this being a fun tool for people who are into calligraphy to make their own fonts. I'm not sure that just "Turn my natural handwriting into a font" is what this does though. I tried it, and it worked much more cleanly with calligraphy.
       cached 12 June 2016 13:02:01 GMT