A year later, a new project comes along. It requires shaders. I don't remember how to use them and have to look it up again...
Repeat.
"Vertex program" is a better term.
That brings us to "pixel shader." That's actually a good name for helping beginners learn the concept, but it's imprecise. OpenGL insists on calling it a "fragment program" because with certain forms of antialiasing, there are multiple "fragments" per pixel. "Program" is also a better name than "shader" because there are things you can do per-pixel other than change the color. For example, you could change the depth written to the Z-buffer, or you could cause the pixel to be skipped based on some criterion, like whether the texture color is pink.
Anyway, it's just a tiny program that executes either per-vertex or per-pixel. For example you could write a vertex program which moves each vertex in a sinewave pattern based on time. Or you could write a fragment program to change the color of each pixel from red to green and back based on time.
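To make that concrete, here's what a fragment program covering both of those examples might look like in GLSL (WebGL flavor). The uniform/varying names (u_time, u_tex, v_uv) are just assumptions; they're whatever your application binds:

    precision mediump float;
    uniform float u_time;     // seconds, supplied by the application (assumed name)
    uniform sampler2D u_tex;  // texture to sample (assumed name)
    varying vec2 v_uv;        // texture coordinate from the vertex shader (assumed name)

    void main() {
        vec4 texel = texture2D(u_tex, v_uv);
        // Skip this pixel entirely if the texture color is roughly pink.
        if (distance(texel.rgb, vec3(1.0, 0.0, 1.0)) < 0.1) discard;
        // Fade every pixel from red to green and back over time.
        float t = 0.5 + 0.5 * sin(u_time);
        gl_FragColor = vec4(1.0 - t, t, 0.0, texel.a);
    }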
Then there are more advanced/recent concepts like a "geometry program," which lets you generate triangles based on vertices or edges.
Sometimes I wonder if it's overly complicated, or if the problem domain is just complicated. It took me years as a kid to finally grok this, but once I learned it, it turned out to be very simple. Honestly it wasn't until I got up enough courage to sit down with the OpenGL specs and read through them that everything clicked. They're dry reading but not difficult.
http://greggman.github.io/webgl-fundamentals/
In particular
http://greggman.github.io/webgl-fundamentals/webgl/lessons/w...
Because of that and similar incidents, I've learned to import argparse up front but nothing else unless necessary. Once argument parsing is done, then importing other modules begins.
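The pattern, sketched in Python (the module names are just examples):

    import argparse  # cheap to import; needed before anything else

    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument("--plot", action="store_true")
        args = parser.parse_args()

        # Pay for heavy imports only after parsing succeeds,
        # so --help and bad arguments stay fast.
        if args.plot:
            import matplotlib.pyplot as plt  # example of a slow import
            plt.plot([1, 2, 3])
            plt.show()

    if __name__ == "__main__":
        main()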
That said, it's very much a cool hack, and worth doing on that basis alone.
`#{context}.lineTo(#{x}, #{height})`
...you're doing it wrong. Please do figure out a way to "automagically" make things work without `...`, or just don't bother. `...` should be a last-resort option for when you need to do something that can't be done any other way, not something peppered over the entire code.
If you want to use Windows 8 Metro version you must "link" to a Microsoft account and change your login to use that method. It's the first thing that it does, but the UI is very subtle about it. IIRC, it even wants to change your desktop login settings. Phone software should not be changing system settings. Also the latest version removes the option to hide the fact that you have a webcam. With Skype you absolutely must read all the fine print and dialogues.
Thankfully, I never shared my genuine data with Facebook or opened an account on Instagram or WhatsApp, so I'm good at a certain level when it comes to Facebook.
In my opinion Twitter is the only option that is sane at the moment.
As a side note, amazon affiliate accounts are equally bad.
Check the online test https://panopticlick.eff.org/ and the paper https://panopticlick.eff.org/browser-uniqueness.pdf for a good read.
Automotive systems communicate over a CAN [1] bus, not ethernet. In fact, this bus is usually physically separated into a drive-critical bus (which controls things like ABS) and a "comfort" bus (electric window controls, central door locks, wheel-mounted audio controls). Ethernet has none of the industrial-strength qualities that make CAN a valid automotive control bus, such as signal hardening and real-time guarantees.
As far as these users have found, this ethernet port is connected to the infotainment system: the 17" display.
I would be deeply disappointed in Tesla if the infotainment system can modify drive-control devices with anything less than signed binaries and commands. As an aside, I wonder what the legal requirements of such safeties are.
The sketchy things: Jailbreaking a car seems pretty dangerous, especially since as far as I'm aware, the electronic systems control things including the brake. I know this only because Tesla recently released a software update that added "hill assist" which will hold the brake in place for 1 second when at a certain incline to avoid rolling back. Imagine a malicious software update that disabled the brake! Personally, I would jailbreak a phone, but not a car. :) HOPEFULLY the system the ethernet port provides access to is firewalled out of being able to update any software (i.e. the software update mechanism is some other device), but who knows.
The phone home can also be considered sketchy, but any Tesla owner is well aware the car pings home and relays diagnostic data to Tesla. At the very least, Tesla owners know it must ping home to check for updates periodically.
If anything, I thought it was kind of cool that Tesla engineers detected it and reached out so quickly. Imagine if you weren't tampering with your car and it WAS a high-tech attacker. It is good to know that they can detect the basics.
http://www.teslamotorsclub.com/showthread.php/28185-Successf...
Interesting in particular is one poster's claim that Tesla gave him a seemingly-dismayed call...
http://www.teslamotorsclub.com/showthread.php/28185-Successf...
So long as you don't cause any damage, they can't void your warranty in the US, thanks to the Magnuson-Moss Warranty Act.
Whoops.
Also, it looks like Tesla has international deals with mobile carriers for flat-rate data. I'm looking forward to seeing the first guy stream youtube or youp*rn on the dashboard :D
I would like an option to contact home base to verify that all files and configurations in my car are exactly like they're supposed to be, and otherwise either disable the car or download the correct software.
Maybe a way to enable a developer mode which can only be used on a non-public road.
I just can't imagine modifying an electric vehicle's computers and settings for anything useful. Please offer some suggestions if you can.
The title is 'All the things she said', which originally was a #1 Top 40 song by the Russian pop group 'Tatu'. However the picture is definitely not the Russian duo. Is this a German cover version of some sort?
As a driver who will have to occupy space around people playing with this while driving...F#&*!
'stty' is not recognized as an internal or external command,operable program or batch file.
I would love to read comments instead of downvotes. Where am I wrong?
I've written a few simple implementations of basic (and not so basic) type systems[1]. Currently, only 3 type systems are finished (Hindley-Milner's Algorithm W, and Daan Leijen's extensible rows and first-class polymorphism), while gradual typing is almost done (based on [2], in branch 'dev').
[1] https://github.com/tomprimozic/type-systems
[2] Jeremy G. Siek, Manish Vachharajani - Gradual Typing with Unification-based Inference - http://ecee.colorado.edu/~siek/dls08igtlc.pdf
Often, the code is more complicated than I find reasonable while omitting things that make a lot of sense in "real" code, and as an outside reader it's very hard to know what the author's exact motivation for each decision was.
A few such things that caused me to WTF:
The initial few examples use a pointlessly static and global line buffer, instead of declaring the line buffer where it's being used.
There is hardly any const in the code, even in cases where it obviously (to me) should be used, i.e. for variables that are never written after being given their initial value.
A magic number (the buffer size 2048) is repeated in the code, and even encoded into a comment, instead of just using "sizeof buffer".
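For illustration, roughly what I mean on all three points (a sketch, not the book's actual code):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Line buffer declared where it's used, not static/global. */
        char buffer[2048];

        /* Never written after initialization, so it can be const. */
        const char *prompt = "lispy> ";

        fputs(prompt, stdout);
        /* "sizeof buffer" instead of repeating the magic number 2048. */
        if (fgets(buffer, sizeof buffer, stdin) != NULL) {
            buffer[strcspn(buffer, "\n")] = '\0';  /* strip the newline */
            printf("You said: %s\n", buffer);
        }
        return 0;
    }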
I do think I found an actual bug (on http://www.buildyourownlisp.com/chapter4_interactive_prompt): the readline() implementation ignores its prompt argument and uses a hardcoded string instead. The same function also does a strcpy() followed by using strlen() to truncate the string it just copied; that really doesn't sit well with me.
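Something along these lines (a sketch of the fix, not the book's code) would use the prompt argument and avoid the copy-then-truncate dance:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Fallback readline() sketch for platforms without editline. */
    char *readline(const char *prompt) {
        static char buffer[2048];
        fputs(prompt, stdout);               /* use the argument, don't hardcode it */
        if (fgets(buffer, sizeof buffer, stdin) == NULL)
            return NULL;
        size_t len = strcspn(buffer, "\n");  /* length up to the newline */
        char *copy = malloc(len + 1);
        if (copy == NULL)
            return NULL;
        memcpy(copy, buffer, len);           /* one copy, already truncated */
        copy[len] = '\0';
        return copy;
    }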
Also, I liked the cat pictures and hope you'll add more of those in the next edition, perhaps.
Props for mentioning conditional compilation early. It's underrepresented in books but essential for real life.
Cheers for your hard work. :)
Currently this claims to be Lisp, but it is some strange version of it.
Using a Scheme or Lisp has a lot of advantages:
* one can compare it with other actually working implementations
* there is already a wealth of material to learn from or where code could be reused
* many books exist
* the language has actually already had some thought put into it
A good example is the book on Scheme 9:
https://en.wikibooks.org/wiki/Write_Yourself_a_Scheme_in_48_...
Over time it's accumulated other features: http://akkartik.name/post/wart. But hopefully it's still easy to understand, because it has thorough unit tests. If you have a question about some line of code you can try changing it. Seeing what tests break is a great way to get feedback as you learn, in my experience.
If you have an HP printer with a Postscript renderer and you can get an image of it, you can find a digitised photograph of him too :)
First off, a small note: our project used C to implement Scheme (a lisp dialect), so it's similar but not exactly the same as this.
I'd recommend starting off with reading about the principles and coming up with a solution to each of the problems on your own instead of following a specific pattern as outlined in the book. For example, in our project, we decided to learn about the C basics, then just figure out how to make it 'act' like Scheme if given an input. Eventually, we thought of doing everything in a linked-list style to make organization and recursion easier and more natural, but coming to this conclusion on our own was very helpful.
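For example, the heart of that linked-list representation might look something like this (a simplified sketch, not our actual code):

    #include <stdlib.h>

    /* A tagged cell: every Scheme value is one of these. */
    typedef struct Value {
        enum { NUMBER, SYMBOL, CONS, NIL } type;
        union {
            double number;
            char *symbol;
            struct { struct Value *car, *cdr; } cons;
        } as;
    } Value;

    /* Lists are chains of cons cells, so recursion falls out naturally. */
    Value *cons(Value *car, Value *cdr) {
        Value *v = malloc(sizeof *v);
        v->type = CONS;
        v->as.cons.car = car;
        v->as.cons.cdr = cdr;
        return v;
    }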
Another thing is valgrind. As far as I could find, the text only mentions valgrind in one paragraph, but it's an excellent tool to check for memory leaks and errors.
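A typical invocation looks like this (the binary name is just an example):

    valgrind --leak-check=full ./my_lisp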
Also, as mentioned in the book, a bonus is adding in GC. This turns out to be pretty easy and a fun exploration of the different techniques available, if you try a couple out for performance.
Our code in case you're interested: https://bitbucket.org/adamcanady/cs251_interpreter/src
[1] http://www.meetup.com/London-SICP-Study-Group/messages/board...
f_ = proc_self(lambda(v0)); t_ = proc(lambda(v1), f()); id_ = proc(v0, f()); pair_ = lambda3(op_if(v0, v1, v2));
For the moment I gave up on it, but maybe it might serve as inspiration ;) EDIT: Fixed now.
Are there any tree-building capabilities in C?
For the projects, it was awesome. This was a long time ago, so I don't remember all the excruciating details, but it made coordination and collaboration on big documents pretty trivial. We also had some group messaging and file-storage accounts that went virtually unused because of Wave.
Our use-case was in writing large-ish documents (a few hundred pages each) as a committee. And it was pretty trivial to just create a wave for each section of each document, then use top-level comments in the Wave for each subsection, and capture everybody's brainstorming for each section. It was like a living collaborative outline that eventually filled itself in and turned into a section. We used links off of the discussions into Google Docs for collaborative editing of the documents, and when we felt everything was good, somebody would simply go in and copy-paste all the text into a master Google doc for final cleanup.
Having worked on similar projects in the past, coordinating this kind of activity with email and word docs (or even google docs) is a huge PIA. When we decided to move it to Wave for a small trial (to figure out the workflow) it was pretty trivial and sort of worked naturally. There was a minimum of document syncing issues, or confusion about who said what in which meeting or email. The entire past history of discussion, with threading and everything was open for review. It was amazing despite many of the obvious issues with the Wave client.
The big discussion "groups", on the other hand, were a mess. It was impossible to find where new comments in old threads were posted, and once the conversations got big enough, the UI slowed to an unusable crawl. Wave didn't last long enough for anybody to figure out how to deal with this.
Outside of those two use-cases I really didn't use Wave for much else. I suspect I would have found other uses as time went on if it had survived (and especially if it had flowered and federated).
I've thought long and hard about why Wave failed and it really does come down to 2 things:
- lack of focus
- poor user experience that never seemed to get any better
Wave tried really hard to be all things to everybody, with some really neat tech demos to show use cases (arranging a group meeting by embedding a poll and a map etc.). I think it was kind of like the C++ of communication mediums. It's sort of everything, but you can only realistically use some subset of the functionality in practice and the parts you don't use just end up seeming useless and weird.
On the user side, carving out just the functionality for your use-case was also hard. And the slow as syrup client really was a huge turn-off. Weird, non-standard scroll bars everywhere (which never got fixed and never worked like anybody expected), nobody liked real-time global echo as they typed (brought about by a confusion of how IM actually worked in practice), and way too many half-baked widgets and bots and things.
I think Wave should have simply focused on a few simple use-cases, nailed and refined those, then grown all the other awesome ideas organically so the user-community could start to slot those into their workflows.
Wave might have worked better if it was launched simply as a threaded messageboard with real-time replies showing up in a post. Users would have also needed 1 more layer of organizational abstraction, a "Wave container" to carve out different groups of Waves. In my use-case above we really needed to have a container for each document, with each Wave for each major section. But in the most general case, a "pg" type person could have created a "Hacker News" container, and each submission and comment history would have been the individual Waves.
When Wave launched, everything was a wave and there was no way to organize them, so people ended up using top-level comments in the waves as the "topic submissions" and the Waves went on for thousands of comments across dozens of topics before they started to break. It just wasn't a good organizational metaphor, but the system and the client didn't offer a good alternative.
Then the client was clunky and slow, nothing else on the web felt as slow even with such little graphical sparkle. It was basically a side-by-side email client by look, yet acted like it was folding proteins or mining bitcoin in some worker thread.
To be in any way seriously useful, this should be reimplemented from scratch with a strict separation between the UI and the wave server back end, and with a massively simplified deployment process. (Go would be a good choice imho.)
The ideas behind wave are interesting, but the technical debt that Google dumped out when they abandoned wave is so massive, I consider the current wave code base a completely lost cause.
Seriously; interested developers drop into the mailing list from time to time, look at the code base, then run screaming. The reports barely even get done.
I can see why Google gave up on it but it's disappointing that they haven't incorporated these ideas into other products. And it doesn't seem like Apache Wave ever gained enough momentum to move forward.
What other projects are looking at similar chat/email/collaborative editing hybrids?
What they should've done was simply expose their real-time technology stack, then let people create documents backed by whatever (sandboxed) Javascript they want. When you open a wave, the Wave client would download the relevant Javascript, then use that to generate the user interface for the document, while managing the complexities of operational transforms and federation itself.
Part of me wished it stayed because I had a single letter user id.
Anyways, meeting minutes were something that it did well in my observation. Liveblogging was also interesting with it especially if you had maybe 2-3 editors and everyone else was view-only. Live tweeting events is rather feeble compared to what could have been done with Wave.
What idiot greenlighted that feature? :P
"Yeah let's make a distributed social network but don't let them connect to the one EVERYONE IS ALREADY ON"
I also used it for other things, but organising groups of people was the main use. Once it was discontinued I tried to run the open source version, but it was never really that stable and in the end we swapped back to emails.
It's a great shame to see this dead
Serious question: Am I a bad person for closing the tab as soon as I noticed that the first sentence is missing a verb?
Programming is all about details, and I guess I see it as a strong signal if a project can't get details right on their landing page.
On the other hand, maybe this unfairly biases me against projects maintained by non-native English speakers. And even among native speakers, perhaps I shouldn't be biased against people who choose to spend their time on pursuits other than writing perfect English.
I have run Apache Wave a few times; it's easy to set up and the simplified UI is very nice.
What I am missing from Apache Wave is a platform for writing software robots. Does anyone know of any useful options for this?
Wave had amazing technology and perhaps a "before it's time" communication model, but it needed a better narrative or training step.
Still, it moves on.
I think we all just wanted to be part of the "Google Wave crowd", and the hype was more of a focus than the actual product. Thinking about it, I don't even remember what Wave actually was or why Google dismissed it.
Unless you have time & money to burn, you really should never never never trade your car into the dealership. It's the worst price you're going to get along the used car supply chain. And even if you think you're getting a good price, it's because you're probably pairing it with buying a car (and paying too much for the car you're buying).
The worst-case used car dealer supply chain works like this (going backwards):
* You buy a used car for $20,000.
* The used car dealer bought it from a wholesaler for $19,000
* The wholesaler bought it from a wholesale auction for $18,000
* The dealership who sold it at auction paid $16,000 as a trade-in.
(*I say worst-case because not all of these steps happen each time)
Note the chain begins with trade-in -- because dealerships know that people who are trading in are pressed for time and want a no-hassle deal. But in doing that, they're giving up $thousands to each step along the chain.
So when selling a car - going direct (or using a service like InstaMotor) is always going to net you more $$ than trade-in.
And now ya know :)
Nicholas here. I started Carlypso.com last summer, after Chris (my co-founder) and I had helped a number of our classmates sell their used cars after graduation.
About 9 months ago, I had the pleasure of dealing with Instamotor founder Sy when he bought his Audi A4 with our help. His feedback over email was really encouraging:
"About your idea:
1. I was looking a bit at ways to remotely unlock a car over a network connection. It seems to me that the technology story around that is a bit weak, but not completely infeasible. First, there's automatic (http://www.automatic.com/) though i don't know if they'll ever ship a product, but in theory it's really what you guys want. Second, there is a myriad of small stuff like this http://www.text2car.com/
2. I really liked the concept of being able to take the car for a drive, free of any awkward conversation with the owner. It's really pleasant actually, and it gave me the time to think about the purchase and inspect the car. The user psychology is really nailed on that one. That's the way I want to buy every used car from now on."
Long story short, Sy and Val liked our service so much that they decided to just copy it. I'm all for competition and welcome Val and Sy into the used car market. It is sorely in need of a better solution, one which will make buyers and sellers happier and eliminate the inefficiencies and scamming of the dealer model.
I'm not a big fan of plagiarism but different people are different I guess.
If you'd like to stop by our offices, meet our team and see how we're doing, just PM me and we'll hook it up. Chris might even let you drive his 500 hp formula car, which he's been building over the last couple of years! :)
Cheers,
Nicky and the Carlypso team (Nicholas@carlypso.com)
There's often a ton of communication between me and sellers to determine things like:
- mileage
- automatic / manual transmission
- whether car has been in any serious accidents
- whether there's any major upcoming maintenance
- whether the driver smoked
- whether the title is Clean
Then I need to get the VIN and run a CarProof (Canadian equiv. of CarFax).
Then, assuming it checks out, schedule a test drive.
Then, assuming the drive is good, get a mechanic's inspection done.
Then, pay and sign papers.
It's a royal pain, compounded by the fact that CL and Autotrader individual sellers seem to be a generally sketchy or uncommunicative lot: no email replies, missed appointments, lying about the title, etc. are all common.
Car buying won't be done like it currently is in 10 years from now; glad to see you're doing something to improve the experience.
As an engineer, that really bothers me.
Apparently you need to add these tags? https://developers.facebook.com/docs/web/tutorials/scrumptio...
We'd love to get some additional feedback on the service. Any feedback provided is super beneficial. We're specifically looking for some feedback on:
1. When you get to the site, within the first 30 seconds, do you understand what we do? Why or why not?
2. If you do understand what we do, would you ever use a service like this? How much would you be willing to pay for this service?
3. What's the biggest benefit to the service that you see?
4. Any general feedback on the site, the content, or the service as a whole would be great.
Thanks for the help guys, we always appreciate it!
That is a lot different than individual sales...which is how this service sells your car. So yeah, you get a better price for your car, but the service still eats a good chunk of the sale price difference between dealer-trade and individual sale.
And "all the paperwork" amounts to downloading a bill of sale form and title transfer online with the DMV. It's not rocket science, nor time consuming. Certainly not worth hundreds of dollars.
Questions that immediately come to mind: how do you handle negotiations, and how do buyers pay? How do you pay the seller afterward (cash, verifying a cashier's check; other comments point out the common scams, etc.)? Or is Instamotor just a transaction facilitator? Where is the vehicle listed? Do you handle smog checks as well (for California)? Those are the most common things I go through in the used vehicle process.
Also, how do you filter out non-serious buyers, especially for performance vehicles? A lot of sellers ask for some sort of proof of payment or cash ready before a test drive. It seems like you're mainly handling high-end cars where the 5% commission will pay off too, how does the model change for say a $5K vehicle?
Finally, as a buyer, I would want to do my own inspection unless the inspection is at a mechanic I already trust; there is way too much conflict of interest in having the selling side do the inspection.
One way to make it more buyer-friendly would be to offer a CARFAX as well.
Some dealer called me and left a message saying that they would sell my car for me for $200. Are they lying? Are there other hidden fees? Why should I pay InstaMotor $1000 instead of $200 to this other guy?
One advantage I see is that I get to keep and drive my car. Except my car is parked in a lot, and I only have 1 keyfob to get it out.
Wish it was more prominent that this is Bay Area only for now. I doubted that it would be in my area, since it sounded like something that required local personnel and nobody seems to do startups in Houston, but you have to scroll to the bottom to see it stated.
Some of the text looks terrible on my Windows 7 - Chrome environment. Especially everything in How It Works.
The website is all about the sell side, and you say the buy side is ebay, craigslist, autotrader, etc. Why not post the cars in a store on your site too? Last time I checked those sites, they had a terrible UI.
In fact, that idea expands the business from a seller's assistant kind of thing to more of an online-focused CarMax. I think that could be really useful to everyone. I'm thinking kind of a cross between AutoTrader and CarMax. On the buyer's side, it could be like the existing online sites, except that the photos and details are collected by your people, so they're consistent and high-quality. Maybe you take the car for a few hours, have a mechanic check it out, wash it, and take high-quality high-res photos in a controlled environment. The buyers can know and trust that you've done all this, and that they're getting a good representation of what they're thinking of buying.
The sellers get pretty much what you're already doing, all of the legwork done for them, a better price, and hopefully buyers who know what they're getting, and are vetted as serious buyers before a test-drive is set up.
I suppose the main difference between you and Carmax, then, is that you are all online, thus have a much better online presence, and don't need to maintain a car lot. Does that translate into enough of a cost savings to offer a better deal to the customers than the current incumbents, though?
I would suggest putting the sell price in the big red letters instead of the "you'll earn" part. That whole section is a little confusing; I thought it was telling me I would only get 2k for my 20k car. I only really care about how much I would be getting for my car. The comparison to the dealership is important but secondary, and oh by the way, I will probably have already looked that up on kbb.com.
1. Redundant content: I felt like you repeated the same value proposition in nearly every section of your website. However, each section had one small tidbit that wasn't addressed anywhere else (e.g. only available in bay area).
2. Excessive calls to action: Although it's great that you have placed many calls to action (various buttons, sliders, etc.), it really confuses the user. Having one consistent message and/or placement might improve conversion. Just a hypothesis and one you should definitely A/B test.
3. Poor navigation: The "home" button in the footer navigates the user to an awkward anchor tag on the home page. The About page has a large image that makes it seem like a homepage rather than getting in to the actual content of the page. The Contact page is an overlay modal and may not even be worth creating as a separate page. Perhaps placing it in the footer is sufficient.
4. Commas: In your slider showing the different amounts you can save, be sure to add a comma as a thousands separator, or a decimal point + "k" or "thousand". This will increase impact and hopefully conversion as well.
5. Reposition: Seeing this on "Show HN" and seeing lots of copy/images at the beginning made your site seem like it was just placeholders. It wasn't until I scrolled to the bottom that I saw there were testimonials and cars that had actually sold. This content should be placed further up, since it demonstrates some form of validation/social traction. It would be great to have a ticker or some other indicator of cars sold at some point when it gets bigger.
Best of luck and feel free to reach out for any clarifications.
I've tried another service but due to a small dent in the car I couldn't sell it - the car's actually in the shop right now getting that fixed, because using a service to sell is so much better than the alternatives. My current plan is to go back to them when it's repaired.
The way it says "you earn", I can't help but think that that is the total amount you will get. I would prefer it to say "you earn $95,000, which is $10,000 more than you would have gotten with a dealership", or something similar.
What stops someone from taking a 1-hour test-drive (and wasting your gas on their errands)?
If someone gets in an accident during a test-drive who is liable? Do all insurance plans cover this?
What does remote-key access mean? "We verify all buyers and schedule flexible test-drives with remote key access."
How quickly do you plan on expanding to other states?
Your site has a 1px left margin which is making a horizontal scrollbar which is annoying me.
    @media screen and (-webkit-min-device-pixel-ratio: 0) {
        html { margin-left: 1px; }
    }
That's much too steep for me. If there were a cap of, say, 1K, I'd be a customer in a heartbeat. But saving 5K gross is certainly worth the inconvenience of selling it myself.
I like the idea of the service but I'm wondering what the difference is between dealer trade in pricing and market pricing. Even if I have to sacrifice a couple grand to not have to do the whole process, I'm ok with that but I'd rather know pricing from that point of view.
I'm also curious how you plan on doing the legal paperwork, as I've done a lot of this too and it's not easy getting set up to handle all the paperwork, especially if it's between two parties.
Of course, you'll be awesome, so then the Seattle Government will issue legislation banning your awesome idea because Used Car Dealerships won't be able to compete, and they'll use their Good Ol' Boy network to shut it down.
I would write blog posts on interesting success stories with different types of cars and seller stories -- often cars are hard to sell for various reasons.
Offering to take a professional picture is your ultimate in.
Instead I am getting this error: "Sorry, we're not able to save your info at this time." And the div with that error is added each time I submit the form, so you can get them to start stacking on each other.
    <div class="alert ng-isolate-scope alert-danger"
         ng-class="'alert-' + (type || 'warning')"
         ng-repeat="alert in alerts" type="alert.type" close="closeAlert($index)">
      <button ng-show="closeable" type="button" class="close" ng-click="close()"></button>
      <div ng-transclude="">
        <span class="ng-scope ng-binding">Sorry, we're not able to save your info at this time.</span>
      </div>
    </div>
Who's too stupid to figure this out and needs a service?
Okay, so they pay you more than the dealership. That's a selling point. But I could find someone on Craigslist and sell it to them for more.
A "voting ring" is when people get friends to upvote their stuff. This is against the rules. We want stories to be on HN because they're good, not because they were promoted.
It's sadly common for a great Show HN post to get demoted because its creators, eager to get it on the front page, tried to game it. I've noticed a pattern, too: usually their gaming technique is pathetic. Perhaps that's because they're creators, not promoters. Unfortunately, it has the side-effect of making it certain that the ring detector will nail their otherwise good post, while we carry on the real cat-and-mouse game with people pushing crap.
I've got what I believe will be a sweet solution to this problem, but it awaits time for implementation.
Please everybody, don't ring-vote your posts; just take your chances with HN's randomness. If a post is solid and hasn't gotten any attention yet, a couple of reposts is ok. Be careful not to abuse that, though, since we penalize accounts for reposting too much.
I'm going to demote this comment as off-topic so it won't get in the way of the real discussion. Send any moderation questions to hn@ycombinator.com.
Our example (a country-specific mobile app for doctors): we spent 100 on AdWords, and the end result was literally 0 app installs, 0 sign-ups, 0 everything. Medical keywords are expensive, there's no chance of sending people directly to the App Store/Play Store (that we saw, at least), and no other useful targeting.
Enter Facebook mobile install ads: 40 spent so far, 500+ app installs, 200+ sign-ups, great retention. We can roughly target medical professionals, take them directly to the app stores, and the clicks are cheap as hell.
I have no doubt that AdWords works much better in other cases, and that FB can be useless, but it's not black and white; you need to know which tool fits the purpose.
I created two FB mobile advertisements to direct traffic to the website, though the website is more eCommerce/service than any type of sign-up. The budget was $50 over 3 days; reach was ~20,000+; the click-through rates were .5% and .4% for the 2 ads; just under 100 clicks to the website, with none resulting in conversion.
More disturbing was the fan page promotion through FB (paid "Likes", in my own words): $10 budget per day over 3 days; reach = 3,000+; total likes 34. What disturbed me was that when I went to the profile pages of the users who "liked" the fan page as a result of the promotion, many of the profiles did not appear to be legit. Moreover, the majority of these users had a single post in their entire Facebook timeline. As unlikely as it is that of ~30 paid likes nearly all were inactive Facebook users who were otherwise compelled to interact with my paid promotion, it is equally unlikely that Facebook would be so brazen as to commit fraud on advertisers by creating and managing fake accounts to click paid promotions/ads, which could easily be proven. Nevertheless it begs the question: what are these accounts (fake, bots, etc.), who controls them, and why?
Think of it like a computer program. If 99% of the program is right but one thing is broken, the entire thing won't work. Marketing is, in a lot of respects, the same way. You can be missing one single variable and your entire campaign falls apart.
Look at all of the variables in this campaign - title, image, targeting options, whether you do sidebar ads, newsfeed ads, or mobile newsfeed, and most importantly the product/service offered on the other side (not to mention the conversion rate of the specific landing pages). Apparently this campaign wasn't profitable, but I run a half dozen profitable campaigns on Facebook at any given time (most of them CPC), and I know people who spend $10,000/day on Facebook ads.
Facebook ads do work under the right circumstances. Concluding that they don't after one try is a little absurd.
There are unlimited examples of failed advertising campaigns on every single medium where failure can be measured. Most campaigns fail. They are a cost of doing business. Generalizing based on those would be very mistaken. Facebook is a new but giant ad program. The tools are still rough and "best practices" are even rougher. The consultants...
That doesn't mean that good campaigns can't be run on facebook. Facebook allows campaigns to be run that would be impossible to run anywhere else. In some cases the ROI is ridiculous. In others it's one of few things that works.
The number one reason for all these "Facebook sux" rants seems to be "it's not AdWords." People want their AdWords campaign to work on Facebook. If Coca-Cola wanted to tell you that they're "the real thing" on AdWords, it would be an uphill battle. A budget app might likewise be hard going on FB. Maybe not impossible, but it's a squeeze.
If you want to advertise a local children's art exhibition taking place this weekend, Facebook ads will work like magic. 'Friends of friends of the gallery who live close by and have kids.' There is no other platform that gives you anywhere near the reach, relevance and context that FB gives you for a campaign like that. I would expect the "ROI" to be under a dollar per physical ass-through-door.
Here is a good presentation from the quantcast guys about the "natural born clicker" problem. The people clicking on your display ad are probably anything but actual potential customers.
Click counts are just an easy holdover metric from the paid-search side of digital advertising. They don't make sense in the context of early-funnel ads. You need to measure the effect your display ads are having on your purchasing endpoints, which is what the whole cross-channel attribution industry is about.
It's quite possible you are getting good value from Facebook ads and you've just inadvertently focused in on the worst subpopulation: the clickers.
[1]http://www.slideshare.net/hardnoyz/display-ad-clickers-are-n...
I've spent mid six figures on Facebook CPC ads over the last several years and can definitively say that they work very, very well - depending on your use case. Mine is not the OP's use case (though I've sold a metric ton of SaaS on FB).
I advise everyone here thinking about FB ads to do the following:
- If you try it, dedicate a serious amount of money. Nothing less than $500 will suffice as you need to get statistically significant data across all your targeting sets.
- Focus very narrowly on your target market. Trying women age 22-29? Do that in your metro area only. Keep your targeting sets small so you have fewer variables to contend with.
- Don't lose your nerve. If you give up too quickly you'll know nothing.
Finally, I do understand the OP's frustration with click numbers from FB vs. GA. Don't let it get you down, as this is common on every platform. Optimize for your actual logged data and you'll profit.
They're in a tough spot. But they should at least start to turn the ship in the right direction before their total ad business collapses as "ineffective".
While you can self-serve advertising, it is not necessarily a good idea, in the same way that representing yourself in court is not necessarily a good idea.
Facebook cares much less about fraud than Google does, because FB has been under much less external pressure from shareholders to do it. That is not to say that various Google properties do not have fraud issues still. This is reflected in the price differentials.
After all, entire IPOs built around Adsense fraud occurred in the mid-2000s. There have been countless small businesses built around link fraud. It is quite likely that some of the major media names built on social traffic are also based in part upon defrauding social advertisers, because as of yet, few have cared about it, and many investors will just reward companies based on trivially faked traffic metrics.
But guess what? Circulation fraud is a problem that has been with us for over a century in media. Some combination of the price system, auditing, direct response ad testing, corporate incompetence, the good ol' boy network, and other methods have kept it from making advertising either totally useless or totally risk free.
Despite this, here are some issues that could help you advertise better in the future:
1. This is not a good ad. The copy is bad. The illustration is bad. The call to action is unwieldy. The logo placement is haphazard. The headline is wrong. The human figure is in the wrong position. The button placement is haphazard. You would be better off plagiarizing ads from Mint and swapping out the logos and colors. If you want to keep the lady accountant mascot, put her to the left of whatever copy you want the visitor to read, and make her look at it.
2. The demographics you selected might as well have been at random. Market research is not throwing a dart at the entire planet and targeting whatever the dart landed on.
3. FB != Adwords in the same way that a newspaper != the yellow pages != a niche interest magazine != radio != flyers != e-mail spam != direct mail != a catalog and so on and so on and so on.
4. Your budget is so small that it barely qualifies as a test campaign. You ran a test campaign and discovered a hazard to avoid. That is the point of the early tests. If you run out of budget before you can discover a profitable marketing strategy, your tests will uncover that you are out of business.
In this case, you are dazzling yourself with your measurements because it is easier for you to do so than it is to think at a higher level about your objectives and the methods that you want to use to achieve them given your resources. You could call this Silicon Valley Degenerative Metrics Dementia. Sadly, there is no known cure for SVDMD.
I personally couldn't care less if Facebook goes out of business, but as long as real people with wallets continue to use it, it will have some utility to advertisers, so long as they put forth at least some good-faith effort to control their bot/fraud/misclick problems.
Considering some of the things that I have seen with Facebook, I am not confident that they really care, because many investors will reward them when they count bot users (or human users living in third world conditions) as if they were humans with first world bankrolls. There is no comparable Matt Cutts figure for Facebook. I think the real money on the platform, like was the case with Google for a long time, is on the criminal side.
Hopefully some short sellers are paying attention to these stories, because terror is the only thing that will induce Facebook to stop its absurd gyrations on the product side and actually police their platform. Short sellers can orchestrate a PR campaign and either pressure Facebook to start caring or can just make a lot of money by torpedoing the firm through aggressively publicizing its failures.
All that being said, I hope that this is helpful to you, and I am glad that more businesses are learning that online advertising is difficult, complex, and risky (like advertising everywhere and always in all mediums over all time periods using all sorts of technologies).
FB do let you set a Desktop Only audience for ads. You need to use Power Editor (Google Chrome only) and select Desktop under Placements.
I'd like to see a re-run with Desktop targeting only.
Edit: https://developers.facebook.com/docs/reference/ads-api/targe...
I don't know what Facebook's long term business model is. IMO, this isn't it.
I did really well running dating ads in every English speaking market, and a lot of Spanish speaking markets as well.
FB ads were the second step in my post-college process of bootstrapping myself as a viable economic entity amidst the fallout and financial devastation of the sub-prime mortgage crisis.
So thank you Mark Zuckerberg, if it wasn't for your creation I might have had to get a real job.
If FB's traffic is almost or even largely from mobile devices, paying to show ads for a non-mobile site to that traffic seems just silly. The site is downright hostile to mobile users; the text loads last, it starts with a video and a worthless image, and the actual text ping-pongs across the page to accommodate the clip art and screenshots.
Given this exact same data, the OP could spend a week making at least his landing page mobile, run another FB ad, and make a blog post about A/B testing your landing page for mobile users. But no, it's all Facebook's fault, because bashing Facebook will always, 100% get you upvotes on this site...
I'm not going to argue about whether the copy was good or whether the numbers Facebook tracks are correct. I find the copy of the ad used pretty good overall. However, I have some considerations about it:
- I totally agree that Facebook must improve its tracking and must do more to prevent click fraud ... a problem which is still very relevant
- Lots of Facebook Ads traffic comes from mobile nowadays. This can be good or bad. If you're promoting a website and aiming at conversions on a non-mobile-friendly website, you MUST disable mobile targeting.
- Overall, a $50 budget is not enough to reach any relevant conclusion.
- On a product like this (budgeting, finance, etc.) it's critical to find a very good audience to target. I'd suggest using a lot of custom audiences.
- Facebook Ads bounce rate & overall quality are very often worse than Google, Yahoo & Bing; this is implicit in the nature of the platform. On Google you're getting traffic from people who are actively searching for a keyword strictly related to your product. On Facebook you're targeting people based on a demographic profile and a vague interest. However, Facebook is very often much cheaper than Google.
- CPC & CTR are meaningless metrics. You should always have conversion tracking and measure the overall CPA to acquire a customer. Click fraud, wrong reporting, etc. exist. You cannot do anything about it. You should not give a crap about it. Just check your cost to acquire a customer and see if it makes sense.
- Sometimes, for some markets, Facebook Ads for direct conversions simply don't work. Create valuable content like eBooks, webinars, etc. to get cheaper leads, and then close the sales funnel with targeted emails.
My 2 cents, hope it's useful for someone :)
FB Ads is a very stubborn creature. There's a lot to learn in order to make it work, and their editorial team is trigger-happy with account bans... but the volume is massive and the targeting options are amazing.
Running $60 is nothing on FB; you need to run volume and optimize.
I"m doing a lot of mobile right now and you can go anywhere from .10 to .50 per install and basically scale to infinity if you like.
Compare the author's "Easy to use, free online budget" to "Scared of being in debt? Get your FREE budget report instantly. Click here to request info."
I'm not saying that's the ideal copy, but you have to get people's interest and explain more. Make it specific to a location like "Virginia" or "Sydney" or "Melbourne" or "Kentucky" and target those specific places you'll get a higher CTR and conversion. The mobile vs. desktop part is a whole other discussion.
Two comments:
1. This media is sold in an auction. If the quality of the traffic vs what you pay for it is bad value then the bids are set too high. If I pay over the odds for something on ebay it isn't just ebay that is at fault.
2. Doing online advertising well is harder than Facebook and Google are incentivised to make clear. In some cases this stuff is very hard which is why there are people whose full time job it is to get it right.
As someone with some expertise in biddable media reading posts like this must be like a coder reading about how a programming language is flawed because the Todo app scaffolding doesn't quite do what the author expects.
Google in the same timeframe has had a measurable ROI and is converting at ~10% for us; even mobile clicks.
This is just data; it's worth experimenting for yourself, but I definitely feel that something sketchy is going on. Make sure to use utm_ codes and something like Mixpanel so you can track the originating source for your paying customers.
(Sorry, this is a little off topic.)
Can the OP or someone else fill me in on how he was able to target people who like other pages (that he doesn't own)? Is it through lookalike audience or is there a more direct way to do it? I've been trying to do the same (target similar pages) but I'm clueless as to how to do it.
Simply exclude the countries that are known to be click farms from seeing your page at all.
On your page settings, you'll see a "Country Restrictions" section. http://i.imgur.com/snkv77Q.png
When your page is not visible to a certain area, Facebook will not serve ads to people in that country.
Bam?
Call me a hater, I am one, and completely revel in the privilege :)
I have dev'd for an advertising company, have worked with several campaigning networks like HasOffers, and have found similar results. This is more than common.
Still very interesting that the bulk seems to be Android (mobile traffic). A must-know if you are not targeting mobile.
...and therefore every single derived stat is complete nonsense. A percentage, you say, on a sample size < 100?
What's your confidence level on that?
(I also think that Facebook ads are a waste, and the conclusion is plausible; but the stats in the post are meaningless and probably deceptive)
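For reference, the usual normal-approximation confidence interval for a proportion shows how wide things get at that sample size (the numbers below are illustrative, not the post's actual data):

$$\hat{p} \pm z\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}, \qquad 0.05 \pm 1.96\sqrt{\frac{0.05 \cdot 0.95}{80}} \approx 0.05 \pm 0.048$$

So a 5% conversion rate measured from 80 clicks could plausibly be anywhere from about 0.2% to 9.8%.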
And try different ad text. Acknowledge that this is a different platform than search and you need to advertise differently. Don't be so quick to dismiss it.
Edit: and I was comparing apples to oranges anyway. If I use your Google Analytics data for both measures, we get a range of 0.39%-7.7%. This upper bound actually exceeds your Google CPC result. You don't have enough data.
If you stick with those, you're pretty much guaranteed to be targeting real people, and not bots or fraudsters.
Also, learn to use the Facebook Power Editor, as you get a lot more control over how your posts appear, how your ads work, etc.
However, I have had little luck with adsense for the same company. Honestly I think picking an ad network for your market is a much bigger decision than "tuning" a network you are set on using!
What is Facebook CPC??
Even if you optimize for mobile web, I'm sure they won't experience the true power and wow-ness of your app unless they visit it thru a desktop.
It is a bit more complex but you will get more possibilities with this editor: https://www.facebook.com/ads/manage/powereditor/
The post's conclusion is that there's a strong indication of Facebook charging for mis-clicks and double charging for non-unique clicks.
Simply run your campaign for $X and measure your resulting sales, $Y, and now you know if you are wasting money or not.
If Facebook 'fix the problem' then the CPC rate will simply increase.
For this type of website, he should be bidding desktop - will pay maybe 40% more per click, but much better site engagement.
FB Ads still have a long way before the tools are as robust as Adwords, but learn the platform and run more tests before you trash it. Unless you're going for something ultra-targeted it's rare to nail a CPC platform on the first go.
First, Facebook ads would drive online ad pricing toward the bottom; then it would make obvious something almost all of us know: online advertising is mostly an overpriced scam that doesn't work and that most netizens despise.
Then the usual business model for covering the costs of running a website would crumble and disappear.
Sadly I can't find this article now (thanks to Google tweaking its search engine, it's now hardly possible to find old results or anything relevant past the first half of the first results page), but I remember it pointed out that Facebook users are much less receptive to ads than Google search users. People using a search engine are actively looking for something, and ads can actually be useful to them; but for people looking for social interactions with people they know, ads are quite useless and an annoyance.
Right now Facebook's lack of transparency and accuracy in their ad business means more profit and less trouble for them while hiding the elephant in the room, so don't expect the situation to change soon unless they're given an incentive to do so.
How can Facebook fix this? They need to work like Google.
but they simply cannot, or are unwilling to, do this because they are NOT Google; otherwise they'd already have done it. I think come earnings report they will have a lot to answer for, possibly a lawsuit or investigation. - Why fix something that's not broken? It's working exactly as Facebook intended it.
Don't do business with scumbags.
Facebook is professional scumbaggery.
Just a taste
https://en.wikipedia.org/wiki/Criticism_of_Facebook#Privacy_...
https://en.wikipedia.org/wiki/Criticism_of_Facebook#Data_min...
https://en.wikipedia.org/wiki/Criticism_of_Facebook#Inabilit...
http://www.dailyfinance.com/2010/06/03/facebook-ceo-mark-zuc...
http://www.socialmedianews.com.au/zuckerberg-in-trouble-over...
http://www.techrepublic.com/blog/it-security/why-you-should-...
There are so many ways this post is wrong. First, a .4% CTR for a newsfeed ad sucks. That means either your demo targeting sucks, or your ad sucks, or both. Second, if android visits don't convert, change targeting to desktop visitors only. Third, traffic sources behave differently. You can't jump to the conclusion that they're scamming you just because one traffic source worked and another one didn't. Another possibility is that you haven't tried hard enough.
Folks, this is a very big deal for Microsoft. Who would have imagined this 10 years ago?
Here is an image that shows what they are putting into the community: https://pbs.twimg.com/media/BkT9oBcCQAAHIAV.jpg:large
C# is a great language, and I hope to see it flourish outside of the MS walled garden. Miguel de Icaza does what he can with Mono, but it can be so much more.
Competition is great for everybody and Microsoft is making all the right moves!
There are a number of things people are doing based on Linux which basically use Linux as an OS and then layer some custom drivers or such onto a product. Whether it's a web 2.0 company using it as the server OS or an embedded signage company. All of these were "impossible" when you had to have your own OS team to support them, and Microsoft benefited from that. Now the OS "maintains itself" (such as it is), and so businesses get everything they got from employing Microsoft tools but at a much lower effective cost. They don't need to pay big license fees, they don't need to hire programmers to maintain a lot of code that isn't central to their product, and they don't have to spend a lot of money/time training people on their own infrastructure. That is a pretty big change.
It's nice to see folks realize it isn't the software that is valuable, it's the expertise to use it that has value. By open-sourcing the C# compiler, Microsoft greatly increases the number of people who will develop expertise in using it, and that will most likely result in an increase in use.
    Cloning into 'roslyn'...
    remote: Counting objects: 10525, done.
    remote: Compressing objects: 100% (4382/4382), done.
    remote: Total 10525 (delta 6180), reused 10391 (delta 6091)
    Receiving objects: 100% (10525/10525), 16.94 MiB | 1.69 MiB/s, done.
    error: RPC failed; result=56, HTTP code = 200
Edit: This looks like an incompatibility between GnuTLS and whatever Microsoft is using for TLS. Using git+libcurl linked against OpenSSL works fine. Edit: Ah nice, the Apache 2 license explicitly calls out that a patent license is granted for use. I wonder how much cajoling it took to get the lawyers to agree to that!
Certainly Microsoft wouldn't mind just throwing some millions at them and buying them outright, so are we to deduce that any such offer was rejected?
See Locke1689's comments here, especially:
https://news.ycombinator.com/item?id=7524722
"the native C# compiler (that's what we call the old C# compiler that everyone's using in VS right now)"
Is this the return of the original MS?
http://roslyn.codeplex.com/SourceControl/latest#Src/Compiler...
Something I've just learned from looking at the code is that you can jump between cases in a switch statement:

    switch (a) {
        case '1':
            // ...
            break;
        case '2':
            goto case '1';
    }
Never realised you could do that. [1] http://www.microsoft.com/en-us/download/details.aspx?id=1412...
[0] - http://roslyn.codeplex.com/SourceControl/list/changesets?pag...
Also, I read a lot of comments saying this is good for Mono ... how so? Wouldn't an open source CLR be more useful?
It has actively been pushed by some of Microsoft's evangelists (Phil Haack (ex-employee, works at GitHub now I think) and Scott Hanselman, to name the more popular ones).
I believe they got some room to play and do things, and now the community has more and more impact (e.g. NuGet, and software like MyGet, which is based on NuGet (NuGet for the enterprise)).
Also, the CEO isn't Ballmer anymore; that probably helps too.
With Valve pushing their Debian fork and more gaming support for Linux lately, Microsoft wants to appeal to the open source community to reduce the "bashing", which could actually lose some force behind it. Not that it couldn't actually benefit Linux, with better Mono support etc.
But, I dunno. I'm extremely skeptical of Microsoft's ability to put long-term momentum into any of their non-core strategies. All these things are one re-org away from becoming basket cases.
Case in point: XNA
Microsoft, please add unix terminal instead of start button in Windows 8.
It's not an advertising company like Google. Google makes money when you use the Internet; Microsoft makes money when you pay for its software.
"But I do, they are on this website here and there is a link to them at the very top of the README!"
Didn't matter, I got told off for what was really "you don't have docs in the usual place on GitHub."
Very frustrating.
@patio11 I think there was a comment in a blog about not putting open source on GitHub because you really build up GitHub's name, not your own, which is an interesting point to discuss. EDIT: found it https://training.kalzumeus.com/newsletters/archive/do-not-en...
"This is one reason why, while I love OSS, I would suggest people not immediately throw their OSS on Github. That makes it very easy for developers to consume your code, but it does not make it easy for you to show the impact of that code to other people, particularly to non-technical stakeholders. To the extent that people's lives are meaningfully improved by your code, the credit (and observable citations) often goes to Github rather than going to you. If you're going to spend weeks or months of time writing meaningful OSS libraries, make a stand-alone web presence for them."
(For my project I'm using GitHub for Git and GitHub Issues, but everything else is on a website on a domain I control.)
This can be useful when people are commenting on your pull request and you are not sure whether they have made a final decision on the merge.
next up: a well-written & concise guide on writing proper commit messages. could be based on http://tbaggery.com/2008/04/19/a-note-about-git-commit-messa... and http://robots.thoughtbot.com/5-useful-tips-for-a-better-comm...
In contrast to those approaches, Lamport's ideas seem quite reasonable and have the benefit of being language agnostic. He definitely has a lot of interesting ideas here!
Jeff Atwood describes the app-installation-headaches nicely here: http://blog.codinghorror.com/app-pocalypse-now/
It's painful to have a bunch of permissions in the manifest that aren't used by 100% of users, but 100% of them have to allow them if they want to install the app.
Android needs this desperately. One of my apps has ~15% of users never updating because I added an additional permission and when you do that you can't auto-update. I wish I could just ask for it at runtime, since it's a Camera permission and I added picture-taking to my app. I'm sure 100% of users would say yes at that time.
I'm an Android user, but I prefer the unix philosophy: I just want an app to do one thing and do it well. I find it's hard to find apps that do that.
Examples that have turned me away from apps before: a filesystem viewer doesn't really need the ability to control my wifi. An ebook reader doesn't really need access to my contact list.
Our experience at Theneeds is kind of strange in this regard. We have the "classical" initial join page with social buttons, and we ask for permissions when the user taps one of them. Surprisingly, we realized that many users click on the Facebook icon, but then "Don't allow" permissions. This forced us to implement a web fallback to still be able to authenticate those users (without forcing them to go to the iPhone settings).
"SuperApp would like to send you push-notifications"
If you can make your users feel more comfortable about the legitimacy of your app and help them to feel more at ease with giving away those permissions, then you're doing a good job.
We just got done giving a talk on NTVS & PTVS at Build in SF. The reception was great (given this is a primarily .NET conference). I did an informal poll of the audience (180 or so), asking whether they were planning on deploying Node/Python in their enterprise. The response was around 75-85% yes to both, which was somewhat higher than I had expected.
The cool new features in this beta are TypeScript integration, remote debugging (incl. Linux), Edit & Continue (no server restart), a free edition (NTVS + VS Express), etc., plus numerous bug fixes.
To address a few comments regarding strategy - most are correct, though some are overthinking it a bit :). The project was proposed & started by the PTVS (Python) folks, and mgmt was rather lukewarm about it. It was definitely not part of some uber P1 strategy. I wish it was. However, since then it's gained some momentum thanks to the community, and it's become important enough that Scott Guthrie mentioned it in his keynote, and Soma (SVP for developer division) just blogged about it.
A few new videos (please excuse the production, we do our own videos...):
new npm UI (community contributed) https://www.youtube.com/watch?v=AwSzxFY5CMI
twitter sentiment app -- https://youtu.be/9tf6HmG9VAA
remote debugging https://www.youtube.com/watch?v=ZAroJmb6XY4
So what's next for MS? I think they are getting the direction right by opening up for external MS product users, and now it's time to recruit top talent again. There are just too many great hackers who think MS is old (just look at some of the replies in this story), which to a large degree is true. It will take time to fix that, but it can be done by: 1) creating openness [culture, taking on more open-source projects like OpenDaylight, keep opening tech inside MS to others, etc.]; 2) buying early-stage companies through acqui-hires. It will be an uphill battle and I am not an expert on this, and I am very interested in what other people here on HN think.
http://nodejstools.codeplex.com/releases/view/104141
https://www.youtube.com/watch?v=ZAroJmb6XY4
It was in the original title, but has been edited out.
For reference, I deal with Microsoft a lot and wrote a ton of C# over the last decade (more than anything else, probably), so I'm not necessarily biased against them, but all-encompassing announcements like the ones over the last couple of weeks make me suspicious.
Edit: to extend my thoughts on this some more:
I don't think we're seeing embrace and extend. I think we're seeing "go on - use our tooling". Once you're in a tool ecosystem it's hard to get out of. I mean really hard. Same goes with cloud ecosystems which neatly integrate with their tooling. Their offering is to host all of your stuff (Azure) and mediate between you and what you're working on (Office/VS/Xamarin potentially).
A fully heterogeneous system with a sole vendor mediating your access becomes an interesting situation when, for political, financial or legal reasons, you want or need to leave.
Now, this Node.js IDE actually makes me want to use Node.js because it's in Visual Studio; however, I'm also open to alternative IDEs.
My favorites are JetBrains' IDE products; I use WebStorm, PhpStorm, and PyCharm. I love them all. It would be nice if they had one for Node.js, as I'm not sure whether WebStorm has extensive support for it.
Plus, the interviews he's given on 60 Minutes and Fresh Air never once mention the terms "market order" or "limit order". If you don't explain those two basic terms at the heart of the HFT controversy, you're not giving people information, you're only giving them disinformation.
That said, there's obviously lots of shady ass shit happening on Wall Street every day, but Michael Lewis is not helping the situation one iota, from the looks of it.
From acquaintances who knew Katsuyama personally, he was described as a genius marketer, not a technologist. Even before Lewis came along, he had crafted a large part of this narrative: the Thor matching technology succeeded on a compelling story. Lewis got sucked in.
The personal reactions you may have seen (William O'Brien on CNBC) are authentic: HFT participants (and those who deal with them) have been vilified in an industry already viewed in a negative light. There are some bad apples, but there are also many who genuinely believe that they are doing a service for the market.
I don't blame Lewis for this. I just hope that there is an author who can create a compelling story that doesn't fall for the tired trope of the evil HFT trader. The story exists; it is just very technical and nuanced at times. Unfortunately, many HFT participants have been shamed away from standing up for what they believe in, so there are very few left to tell the story.
If you want to read a rebuttal and learn more about the markets at the same time, check out this analysis by Larry Tabb - a market research consultant prominent in the US execution technology market: http://www.scribd.com/doc/215693938/No-Michael-Lewis
You may not like homeowners and think some of them are or should be convicted, but it's ridiculous to say someone is 'shilling for homeowners' when they point out that real estate brokers are overpaid and skimming.
Some more balanced discussion - http://streetwiseprofessor.com/?p=8333
http://blogmaverick.com/2014/04/03/the-idiots-guide-to-high-...
http://blogs.hbr.org/2014/04/high-frequency-trading-threat-o...
In a nutshell, people who run big portfolios don't want to give away information about what they're doing, and they don't want HFT types to be able to pay exchanges to get first crack at front-running them.
On the other hand there is a legitimate market-making function, and there's a tradeoff between transparent markets and forcing people to share info that lets other people trade against them.
The only critique that matters is whether or not front-running of buy orders by HFT traders is real or bull. Anything more is just an attempt to muddy the waters.
Perhaps his public statements in interviews haven't been so nuanced.
Also, these rebuttals to the book don't really address the front-running issue. Is it simply an unavoidable consequence of the physical reality of separate markets? Should anything be done about it? Is it even still occurring or has competition among HFTs and savvier buy-side order routing eliminated it? I would like to read a rebuttal that discusses this.
That is the kind of unintentionally ridiculous anecdote which undermines the moral center of this book. SAC Capital, after all, is the same fund which ran into one of the largest insider trading cases in history, which is also about taking advantage of information in a, let's say, special kind of way.
I know a lot about this subject, probably too much to let the judgment fall cleanly in one camp or another on HFT, but with all the hubbub right now, I find it might be useful to get biblical for a second: let the person who is without sin cast the first stone.
http://www.zerohedge.com/news/2014-04-03/bats-admits-ceo-lie...
This is actually very simple. Natural buyers and sellers do not need intermediaries, but intermediaries do need the natural traders. So if the natural traders can coordinate, they should be able to set rules that favor themselves and disfavor intermediaries. I won't say that what HFT does is "unfair" (capitalism does not contemplate fairness), but I think it's highly ironic that HFT and their supporters are complaining how "unfair" it is that natural traders are working together, and yes, marketing their new exchange.
http://www.amazon.com/review/R3PJO6KJGRMWUE/ref=cm_cr_pr_vie...
Yet they are more than happy to sell their order flow to market makers who use HFT. (To allow them to trade against it before going to the exchange - as is customary - nothing wrong with that.) http://www.schwab.com/public/schwab/nn/legal_compliance/impo...
Seems like they are jumping on the populist bandwagon by claiming "HFT bad!" but I think what they mean is "HFT bad - unless it's from one of the firms we sold our order flow to!"
The Blind Side -- the story of how one of Michael Lewis's classmates, as an Ole Miss booster, gave impermissible benefits to a high school recruit and got away with it.
Moneyball -- the story of a GM with 0 World Series appearances and a .530 WP.
(Some people mentioned "Dark Pools" by Scott Patterson. Although also interesting, that book was often quite painful to read because it was quite clear that the author did not understand basic financial and programming concepts. "Flash Boys" is much better, in my opinion. Although if you are really interested in the subject, you should read both.)
In addition to long term capital gains and short term capital gains, there would be "intra-day" capital gains and "sub-second" capital gains.
I was thinking the intra-day gains Federal tax rate would be 50% and sub-second capital gains Federal rate would be 90%, but these values are arbitrary.
For both new types of capital gains, LIFO trade accounting would be used.
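To make the mechanics concrete, here's a rough sketch of how such tiered rates plus LIFO lot matching could be computed. The 90%/50% rates are the arbitrary examples above, and the 20% long-term fallback is a placeholder; none of this is real tax law:

    # Hypothetical tiered capital gains with LIFO lot matching (a sketch).
    def tax_rate(holding_seconds):
        if holding_seconds < 1:
            return 0.90          # "sub-second" gains
        if holding_seconds < 24 * 3600:
            return 0.50          # "intra-day" gains
        return 0.20              # placeholder for ordinary treatment

    def lifo_tax(lots, sell_qty, sell_price, sell_time):
        """lots: list of (buy_time, qty, buy_price), oldest first."""
        tax = 0.0
        remaining = sell_qty
        while remaining > 0 and lots:
            t, qty, price = lots.pop()        # LIFO: match the newest lot first
            matched = min(qty, remaining)
            gain = matched * (sell_price - price)
            tax += max(gain, 0.0) * tax_rate(sell_time - t)
            if qty > matched:                 # return the unmatched remainder
                lots.append((t, qty - matched, price))
            remaining -= matched
        return tax

    # Buy 100 shares at $10.00, flip them 0.5 s later at $10.01:
    print(lifo_tax([(0.0, 100, 10.00)], 100, 10.01, 0.5))  # ~0.9 (90% of the ~$1 gain)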
If anyone thinks that these side businesses do not victimize participants, you should look back to examples like Knight Capital's glitch, which most certainly caused retail investors to lose trust and pull their money from the market, taking a loss. The introduction of non-relevant code is an unnecessary risk that does corrupt the system and does victimize the average investor.
1. http://www.telegraph.co.uk/finance/newsbysector/banksandfina...
I guess it would be technically possible already, but Google would have to snap their images at much smaller distances and make them load much faster... the space requirements would be huge, lol.
Viewing this "Night Walk" demonstration, I felt an inkling of regret about returning Leap Motion. As others have noted, the experience is very immersive and exciting. The only thing holding it back, in my opinion, is the medium of mouse and keyboard. I wanted to move fluidly through Marseille instead of incrementally, through clicking and jerky motions of the mouse. If this kind of 3D / WebGL / geospatial content becomes more prevalent on the web, I can see a stronger practical use case for everyone owning Leap, or something like it.
Same guy disappearing into the wall in two different places, I wonder if there is more of him?
I didn't care much for the narration, but perhaps that's because when I travel I prefer to explore things on my own. The captions and videos were helpful though.
I was pleasantly surprised to discover that you didn't have to follow the green track (though you miss out on the pictures and videos). I discovered it by accident, actually. At some point in time I found out that I had become disoriented and had been going backwards for some time. Rather than go through the entire track again, I wanted to see if I could take a shortcut. It works.
You know what would be cool? Incorporating some sort of "Choose Your Own Adventure" elements into something like this. Kind of like Myst, but in real-world settings.
How it works: http://dev.opera.com/articles/view/w3c-device-orientation-us...
Live demo: http://richtr.github.io/threeVR/examples/vr_basic.html
I was thinking the next step up from this would be to set up 360-degree cameras every 10 or so feet along this path and have them all record for, say, an hour. Then you could 'walk' from point to point and see/hear/track the city.
But I actually think this curated approach is much better as it helps you cut out the noise and tell a better story.
Neal Stephenson's "Command Line" comes to mind, where he talks about how experiences are distilled and summarized for an end user. I have a vague negative feeling toward this, but I can present no argument.
Also, videos are not playing right for me on firefox (audio only).
Atypon has [a relatively small client list](http://www.atypon.com/our-clients/featured-clients.php). Compare it to [Highwire](http://highwire.stanford.edu/lists/allsites.dtl). I'd be willing to bet that all journals hosted with Atypon share this spider trap, even journals that are supposed to be open access, where spidering should be OK.
Scientific publishing is weird. Source: I work in scientific publishing.
I mean, it's marked by comment tags that say "spider trap" right on them! It's the worst type of disambiguation system: likely to generate false positives, unlikely to catch real violators.
Tl;dr: a researcher is browsing the source code of a research paper's web page and finds a strange link (but same domain). She clicks and is informed that her IP is banned for automated spidering.
Apparently, this research site is meant to be open-access...
-------
Pandora is a researcher (won't say where, won't say when). I don't know her field; she may be a scientist or a librarian. She has been scanning the spreadsheet of the Open Access publications paid for by the Wellcome Trust. It's got 2200 papers that Wellcome has paid 3 million GBP for, for the sole reason of making them available to everyone in the world. She found a paper in the journal Biochemistry (that's an American Chemical Society publication) and looked at http://pubs.acs.org/doi/abs/10.1021/bi300674e . She got that OK and looked to see if she could get the PDF - http://pubs.acs.org/doi/pdf/10.1021/bi300674e - yes, that worked OK.
What else can we download? After all, this is Open Access, isn't it? And Wellcome have paid 666 GBP for this hybrid version (i.e. the publisher gets subscription income as well). So we aren't going to break any laws.
The text contains various other links and our researcher follows some of them. Remember, she's a scientist, and scientists are curious. It's their job. She finds:

    <span id="hide"><a href="/doi/pdf/10.1046/9999-9999.99999"><!-- Spider trap link --></a></span>

Since it's a bioscience paper she assumes it's about spiders and how to trap them.
She clicks it. Pandora opens the box... Wham!
The whole university got cut off immediately from the whole of ACS publications. "Thank you", ACS
The ACS is stopping people spidering their site, EVEN FOR OPEN ACCESS. It wasn't a biological spider. It was a web trap based on the assumption that readers are, in some way, basically evil. Now, I have seen this message before. About 7 years ago one of my graduate students was browsing 20 publications from ACS to create a vocabulary. Suddenly we were cut off with this awful message. Dead. The whole of Cambridge University. I felt really awful.
I had committed a crime. And we hadn't done anything wrong. Nor has my correspondent. If you create Open Access publications you expect - even hope - that people will dig into them. So, ACS, remove your spider traps. We really are in Orwellian territory where the point of publishers is to stop people reading science.
I think we are close to the tipping point where publishers have no value except to their shareholders and a sick, broken, vision of what academia is about.
UPDATE: See the comment from Ross Mounce: The society (closed access) journal Copeia also has these spider trap links in its HTML, e.g. on this contents page: http://www.asihcopeiaonline.org/toc/cope/2013/4
you can find
    <span id="hide"><a href="/doi/pdf/10.1046/9999-9999.99999"><!-- Spider trap link --></a></span>
I may have accidentally cut off access for everyone at the Natural History Museum, London once, when I innocently tried this link out of curiosity. Why do publishers booby-trap their websites? Don't they know us researchers are an inquisitive bunch? I'd be very interested to read a PDF that has a 9999-9999.99999 DOI string, if only to see what it contained. They can't rationally justify cutting off access for everyone just because ONE person clicked an interesting link? PMR: Note - it's the SAME link as the ACS uses. So I surmise that both societies outsource their web pages to some third-party hackshop. Maybe 10.1046 is a universal anti-publisher.
PMR: It's incredibly irresponsible to leave spider traps in HTML. It's a human reaction to explore.
Not sure it checks for styling before prefetching them.
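For what it's worth, a crawler (or prefetcher) could screen out links like these before following them. A minimal sketch, assuming BeautifulSoup and assuming the traps look like the ACS/Copeia markup quoted above (a hidden container plus a "spider trap" comment):

    from bs4 import BeautifulSoup, Comment

    def safe_links(html):
        """Collect hrefs, skipping links inside hidden containers or
        links whose only content is a 'spider trap' comment."""
        soup = BeautifulSoup(html, "html.parser")
        links = []
        for a in soup.find_all("a", href=True):
            hidden = any(p.name and p.get("id") == "hide" for p in a.parents)
            trap = any(isinstance(c, Comment) and "spider trap" in c.lower()
                       for c in a.children)
            if not (hidden or trap):
                links.append(a["href"])
        return links

    trap_html = ('<span id="hide"><a href="/doi/pdf/10.1046/9999-9999.99999">'
                 '<!-- Spider trap link --></a></span>')
    print(safe_links(trap_html))  # [] -- the trap link is filtered out

Of course, this only catches traps that announce themselves this plainly; a styling-aware check (computed display:none, zero-size elements) would be needed to be thorough.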
2. subscribe
3. click link
4. sue them for breach of contract and damages. (they didn't deliver the content you paid for, it damaged your main source of income: providing knowledge to paying students)
5. repeat.
This goes against the nature of the internet and information; it is bound to be free.
There are bad actors out there; they exploit services, and one of the ways the services detect them is to create situations that a script would follow but that a human would not. When they do something bad you've got a couple of choices: cut them off, or lie to them (some of the Bing Markov-generated search pages for robots are pretty fun).
So she sends an email to the address provided, they talk to her, she gets educated, and they re-enable access. If it happens again, the issue gets escalated. It's the circle of fraud.
http://tools.ietf.org/html/draft-grothoff-iesg-special-use-p...
https://github.com/jminardi/syncnet
It was fairly easy to integrate
The technical fact that it is blockchain-based really doesn't make much difference as it's incredibly unlikely to be adopted worldwide, due to the network effect of the already-established domain name system.
Whitepaper and FAQ are not quite up to date but you get the idea. From: https://github.com/nmushegian/dns/blob/master/whitepaper.md#...
- Namecoin issues new coins to miners as a reward for performing merged mining with the Bitcoin network. The namecoin supply is being inflated at nearly 30% per year for several more months, then over 10% for the next several years. Domainshares only ever shrink in supply, when fees are destroyed as implicit dividends.
- Namecoin attempts to service multiple namespaces at once. .p2p is highly specialized for servicing the .p2p TLD namespace. The use case is the same as Namecoin's "d/" namespace, which is used for the .bit TLD.
- Namecoin's name registration price is fixed at any given time and is independent of the name itself. Domainshares utilizes an auction-like mechanic to incentivize price discovery for names, making sure the final owner pays what it is actually worth. The majority of the final cost will have gone to the network as dividends by the time the auction is over, with a small fraction having gone to bidders as a reward for price discovery.
- As a result of the fact that domains are expensive and there are dividends on shares but not domains, there is a high opportunity cost to squatting: holding a domain without making good use of it.
The system really maps the real world onto computers very well, in that it reduces what used to be technical issues to "political" issues. These systems work on consensus, and as such require a significant number of interested parties to work in the expected way. They are very much subject to network effects that only occur after critical mass is reached.
Namecoin, in particular, was subject to a major security issue last year: http://www.reddit.com/r/Bitcoin/comments/1ohyom/fatal_flaw_i...
Blockchains are being used to implement solutions to different problems, and they could really solve some significant problems such as decentralized identity and reputation management (!). The difficulty lies in creating a significant enough "currency" so that miners will become involved and make the blockchain stable and reliable.
Bitcoin is a currency in a much stronger sense than Namecoin or any of the other "not-really-money-coins" around. I wonder if piggybacking on Bitcoin might actually be the solution for this situation (i.e. introduce other information into the Bitcoin blockchain instead of using a brand new one).
Sadly, adding external information to the blockchain could be construed as "spamming the blockchain" and therefore not deemed worthy for inclusion in the bitcoin blockchain by miners. So there is a big challenge there.
If you are interested in this topic and want to work on related projects feel free to reach out (google my username).
$ dig @dns.dnschain.net okturtles.bit
$ curl http://dns.dnschain.net/d/okturtles
Both of them will resolve to whatever info is stored in the d/okturtles domain.
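And since the HTTP interface is just a GET, hitting it programmatically is trivial. A minimal sketch against the same endpoint as the curl example above (I'm assuming the response body is the raw record, typically JSON for Namecoin d/ names):

    import urllib.request

    # Fetch the d/okturtles record over DNSChain's HTTP interface
    # (same endpoint as the curl example above).
    with urllib.request.urlopen("http://dns.dnschain.net/d/okturtles") as resp:
        print(resp.read().decode("utf-8"))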
With the (soon-to-be) DANE support, I don't see what's missing technically to have our own internet. (For those who forgot: DANE is about distributing TLS keys through a channel you trust (it comes from the domain you're visiting) but that is not the same as the final application (it's DNS, not HTTP/SMTP/IMAP/XMPP/etc.), so you can prevent MITM.)
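In practice, DANE boils down to a TLSA lookup plus a comparison against the certificate the server actually presents. A rough sketch with dnspython; the host is hypothetical, and I'm assuming the record uses selector 0 / matching type 1 (SHA-256 over the full certificate):

    import hashlib
    import socket
    import ssl

    import dns.resolver  # dnspython

    host = "example.com"  # hypothetical DANE-enabled host

    # 1. Fetch the TLSA record for HTTPS on this host via DNS.
    tlsa = dns.resolver.resolve(f"_443._tcp.{host}", "TLSA")[0]

    # 2. Grab the certificate the server actually presents.
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, 443)),
                         server_hostname=host) as sock:
        cert_der = sock.getpeercert(binary_form=True)

    # 3. Matching type 1 = SHA-256 over the data picked by selector 0
    #    (the full certificate).
    assert tlsa.selector == 0 and tlsa.mtype == 1
    print(hashlib.sha256(cert_der).digest() == tlsa.cert)

The whole point, of course, is that step 1 comes over a channel (DNS) independent of the TLS connection being checked.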
There is a reason the root nameservers only delegate the act of name lookup at the top level. It's just not practical for them to have a complete list, and it's not even particularly desirable for users of it to have their list of names completely public (think internal servers).
It is possible to name a delegate nameserver through Namecoin, I believe, but last I looked it was a bit iffy, and it doesn't require any kind of authentication of results from the delegated nameserver à la DNSCurve.
Think about it...
Yes. Very true. For anyone interested in the above statement I'd highly recommend checking out http://twister.net.co/ - a decentralized micro-blogging spin-off of Bitcoin. Unbelievable innovation is happening!
So the monkey research needs to be meshed in with human research that shows that mildly "overweight" (if fit) human beings have better mortality outcomes than human beings of normal weight[5] to tease out what the causation is for health outcomes of different patterns of nutrition. The example of Rimonabant[6] shows that sometimes an animal model doesn't adequately predict treatment effects in human subjects.
[1] http://www.prb.org/Journalists/Webcasts/2010/humanlongevity....
[2] http://www.demographic-challenge.com/files/downloads/2eb51e2...
[3] http://www.slate.com/articles/health_and_science/science_of_...
[4] http://www.nature.com/scientificamerican/journal/v307/n3/box...
Mouse models had suggested years ago that calorie restriction could lead to a ~50% increase in lifespan. However, the problem with mouse studies is that mice are pretty different from us, and the mice they use are really inbred and perhaps non-ideal examples. The conclusion from the primate studies is really stacking up to be common sense: "eat in moderation, eat healthy, and you'll live at least a little longer, maybe a lot". Not really groundbreaking stuff, to be honest. And still not conclusive when you consider the resources that went into these studies. This also teaches us nothing about mechanisms, which would be really useful. Just my cursory assessment so far.
[1] http://www.nytimes.com/2012/08/30/science/low-calorie-diet-d...
Caloric restriction has a whole bunch of knock-on effects, any one of which could have a huge impact on health and aging. For example, restricting calories means that you're restricting protein. Most people think of protein as a good thing, but that's what stimulates the hormone IGF-1 to be secreted, which is necessary for growth of all kinds--muscle growth (which is why bodybuilders eat as much protein as possible), but also including cancer.
I've seen research that suggests that cells don't go into "repair mode" in the presence of IGF-1. This is just one example of a possible mechanism by which caloric restriction could have a hugely beneficial effect on aging and illness in general.
I have a half-written blog post about this I should push out. I'd love to get some more conversation going around this.
... were fed a semi-purified, nutritionally fortified, low-fat diet containing 15% protein and 10% fat.
The monkeys without CR ended up getting diabetes and they were giving them insulin. This study made a big splash, but as others point out it probably only helps prove that eating less crap is good for you. This paper appears to include some of the same authors as the Wisconsin study and tries to explain why the NIH performed a study that did not replicate their results. This paper claims that the control group in the NIH study actually underwent CR, by comparing them to a database of captive primates. If that is true, then the title still seems strange, because it doesn't mean the NIH study provides meaningful supporting evidence; it means it was an invalid test of the CR hypothesis that instead provides some extremely weak supporting evidence for it.
As a side note, there is evidence that the CR benefit comes from protein restriction, and possibly just from avoiding protein imbalances: https://chriskresser.com/do-high-protein-diets-cause-kidney-... (scroll to "Is protein to blame, or is methionine?")
However one thought that I find interesting, is that for our generation, living just two or three extra years could potentially make a huge difference.
If you subscribe to the idea of a coming technological singularity, or even to the idea that we're a few decades away from SENS escape velocity, you'd hate to miss it by just a couple of years.
http://www.sciencedirect.com/science/article/pii/S0092867412...
It will certainly FEEL longer.
Open Access fees:[^1]
    CC BY        CC BY-NC-SA    Region
    -------------------------------------------------
    $5,200       $4,800         The Americas
    €3,700       €3,425         Europe
    ¥661,500     ¥612,150       Japan
    RMB 33,100   RMB 30,600     China
    £3,150       £2,915         UK and Rest of World
[^1]: http://www.nature.com/ncomms/open_access/index.html

1) They seemed frail and weak - the man seemed to have a constant runny nose. I felt that if he fell down he would break his hip. The risk of injury and death from physical weakness seemed like it would counter any benefits from CR for lifespan.
2) They put so much effort into measuring every ingredient, and running computer programs with recipes to get the optimal nutrients with as little calories as possible. It seemed to take so much time in preparation, and you could mostly only eat at home.
Am I the only one who finds this crazy? It takes 6 months from submission date till publication!
The peer review process should move faster and become modernized. I understand you want to be published in prestigious journals, but Nature and others can modernize to publish more. You could argue we lost half a year of progress because of the pace of the antiquated publication process.
http://www.sciencedaily.com/releases/2014/03/140331194030.ht...
So the only real finding here seems to be that we are now able to extend the lifespan of some monkeys.
Monkeys do best on perfectly ripe and fresh tropical fruits. Is that what they were eating? I really doubt it.
Are there any benefits besides the obvious 'write-once, run-anywhere' one?
EDIT: I just want to be clear that I don't have any negative opinion of F#, I am genuinely curious why someone would want to use this on a platform that is not Windows.
Is it just that people don't know that half the things that were "open sourced" have already been open sourced (like the ASP.NET stuff), or do we just copy and paste Microsoft press releases here?
Or am I missing something crucial that someone can elaborate on, please?
https://github.com/fsharp/fsharp/
EDIT2: thanks to the responses, it's about accepting contributions
EDIT: for those that don't know, the ASP.NET developments had a lot of influence from the ALT.NET movement. It was MS's attempt to keep the C# web developers from moving to other frameworks that let you do similar things much more easily.
[edit]
I suppose if I'm complaining I ought to make some constructive suggestions:
* Central page listing my subscribed/voted issues/discussions.
* The only history link on a project's homepage is for the wiki. It should have a prominent link to the latest changeset with a date or age.
* Project-wide search: issues, code, wiki, discussions.
* In fact, remove discussions completely; everything should be an issue.
A few of the top contributing F# devs are there, it's an amazing company. Investors include Joe Lonsdale (founder of Palantir).
They are helping oil companies optimize oil and gas production - not your average startup problem.
> At that point there is simply no other option for that, because persistent storage is not available
This was about overloading an option already used by another team building a core system component -- the kernel. A debug option on the kernel's command line is for the kernel.
> It's the option an admin can specify which tells him why the system doesn't boot,
Ok, so he does, and now his system still doesn't boot, but now it is either because of the original problem or because it gets flooded by systemd logs.
And then he goes and posts to the kernel mailing lists saying the kernel is a piece of shit.
> That turns this into some kind of power game, which I am totally not interested in.
also
> We are putting together an OS here after all, not just a kernel, and a kernel is just one component of the OS among many, and ultimately an implementation detail.
I think that, due to their attitude towards both testing and the kernel community, they shouldn't be building core system components. And did he just write that the kernel is just "an implementation detail"?
Maybe systemd was a mistake: integrating and dumping socket acceptors, logging, and the whole kitchen sink into one component. So when it breaks, it really breaks.
Retweeting was a genius move by Twitter: a post by someone you don't follow appears in your timeline looking like any other, and attribution is perfectly captured.
>What I mind is people closing bugs and not admitting mistakes. If Kay had even said "sorry, the excessive output was a bug in systemd, it's already fixed in current -git", that would have been a valid reason to close the bug.
So Torvalds is saying he doesn't like dictatorial project maintainers who reply in an abrupt and abrasive manner to contributors? He is so concerned about the lack of politeness and professional discourse that he just had to raise this issue? Hilarious.
The physical integration is pretty simple. I'm using libimobiledevice to get a screenshot over the cable: http://www.libimobiledevice.org/ . Recognizing the board isn't very hard.
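Something like this, presumably (a sketch, not the author's actual code; idevicescreenshot is the CLI tool that ships with libimobiledevice, and the grid offsets are made-up values that would need calibrating against the real screen layout):

    import subprocess
    from PIL import Image

    # Pull a screenshot from the attached iOS device over the cable.
    subprocess.run(["idevicescreenshot", "board.png"], check=True)

    img = Image.open("board.png")
    LEFT, TOP, TILE = 40, 400, 150   # hypothetical board origin / tile size (px)

    # Slice out the tiles, assuming a 4x4 grid.
    tiles = []
    for row in range(4):
        for col in range(4):
            box = (LEFT + col * TILE, TOP + row * TILE,
                   LEFT + (col + 1) * TILE, TOP + (row + 1) * TILE)
            tiles.append(img.crop(box))
    # Each crop can now be matched against reference tile images to read the board.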
An Arduino and an Adafruit motor shield are controlling two stepper motors with cardboard/tinfoil arms. Stepper motors are important because they won't drift over time.
The AI gets 3k tiles ~30% of its games, and 6k tiles ~3% of the time. These are simulated games--currently the robot is streaming its 5th complete game.
I was an everyday Twitch consumer, but I practically stopped watching it overnight. Not because I don't enjoy the content anymore, but because my Chromium crashes every time it tries to load Flash.
Now, while this shows how lazy I can be in this regard, this is also a great insight. Sure, I could just fire up Firefox or even try to fix the Flash issue, but this is one step too much for my taste.
From watching Twitch everyday to watching practically never, only because of this issue.
So if you spend months developing an application and say to yourself "The user just has to do the little step X to fully enjoy my work...", I'd beg you to reconsider. Sometimes, the smallest friction can stop a user.
As my last keystrokes about depression here on Hacker News pointed out, there isn't just one disease known as depression. Depression is a symptom pattern (prolonged low mood contrary to the patient's current life experience) found often in the broad category of illnesses known as mood disorders. Behavior genetic studies of whole family lineages, genome-wide association studies, and drug intervention studies have all shown that there are a variety of biological or psychological causes for mood disorders, and not all mood disorders are the same as all other mood disorders. I know a LOT of people of various ages who have these problems, so I have been prompted for more than two decades to dig into the serious medical literature[1] on this topic. (I am not a doctor, but I've discussed mood disorders with plenty of doctors and patients.) I've seen people who tried to self-medicate with street drugs end up with psychotic symptoms and prolonged unemployment, and I've seen people with standard medical treatment supervised by physicians thrive and enjoy well off family life. The best current treatment for depression is medically supervised medication combined with professionally administered talk therapy.[2]
The human mood system can go awry both by mood being too elevated (hypomania or mania) and by it being too low (depression), with depression being the more common symptom pattern. But plenty of people have bipolar mood disorders, with various mood patterns over time, and bipolar mood disorders are tricky to treat, because some treatments that lift mood simply move patients from depression into mania. And depression doesn't always look like being inactive, down, and blue, but sometimes looks like being very irritable (this is the classic sign of depression in teenage boys--extreme irritability--and often in adults too). Physicians use patient mood-self-rating scales (which have been carefully validated over the years for monitoring treatment)[3] as a reality check on their clinical impression of how patients are doing.
As the blog post kindly submitted here points out, a patient's mood disorder influences the patient's whole family. The more other family members know about depression, the better. Encouraging words (NO, not just "cheer up") are important to help the patient reframe thought patterns and aid professional cognitive talk therapy. Care in sleep schedules and eating and exercise patterns is also important. People can become much more healthy than they ever imagined possible even after years of untreated mood disorders, but it is often a whole-family effort that brings about the best results.
[1] http://www.amazon.com/Manic-Depressive-Illness-Disorders-Rec...
[2] Craighead WE, Dunlop BW. Combination psychotherapy and antidepressant medication treatment for depression: for whom, when, and how. Annu Rev Psychol. 2014;65:267-300. doi: 10.1146/annurev.psych.121208.131653. Epub 2013 Sep 13.
The only problem with this is that it's essentially a PR line that both doctors and the general public have mistaken for science. We don't know all that many facts about how brain chemicals work with regard to mood disorders. We have empirical results from clinical trials and broad use of antidepressant and antipsychotic medications, but there is no basis to believe that medications "balancing out those chemicals" serve to repair mood disorders.
If the medications helped, then great; I know they have helped a great number of people, but they also fail to help a great number more, and these success stories have an unfortunate tendency to marginalize people who do not get good results from medication. It often results in victim-blaming: making sufferers of depression wrong for stopping their medications for legitimate reasons (let's face it, all of these results are highly subjective), and overstating the ability of our current medications to cure all mental ills.
I self medicate with marijuana. And more than just using it to make me happy, smart, excited and hungry. I grow pot medicinally as well and that makes me feel really happy. There is loads of scientific evidence that points to having a garden and lessening depression. From my experience, I can say that growing marijuana really lends itself to a lot of the benefits of having a garden. Because you can harvest 5 to 6 times/year it makes it something you need to work on every day. Progress is relatively fast, and if you do a good job, you can take it to a shop and get enough spending cash for that new macbook apple just announced. And if you are a champion, you can find your nugs in magazines. (my ghost og kush is featured in culture this month...) But ultimately nothing feels better than smoking my own herbs on Friday night after a long week of gardening and programming.
Just throwing it out there as another alternative for someone that is struggling. Been there, you just gotta find the light.
That said, we're still working out how the brain works. One recent study showed that depression often has an associated underlying, undiagnosed sleep disorder [1]. Treat the depression without treating the sleep disorder and the depression comes back. FYI: this work has not been published yet.
Given that scientists have just figured out that sleep clears the brain of toxins [2], similar to the lymphatic system clearing the rest of the body of waste, these results shouldn't be surprising. We don't know the exact reasons why people get depressed, but the evidence is clear: depression has a root physical cause just like any other illness.
[1] http://www.nytimes.com/2013/11/19/health/treating-insomnia-t...
[2] http://news.sciencemag.org/brain-behavior/2013/10/sleep-ulti...
However, like Wil, I seem to be getting angry at the most trivial things. I am considering starting again, but I am about to graduate and take my last finals in a week or two.
I have been scared of taking medication because of what ADHD meds did to me in my youth. Though now, knowing everybody on my mother's side and my sisters needed help for depression at some time or another, I highly support getting help in this domain.
I tried to write about mental illnesses and the startup community, which I think is something that needs to be talked about. But my submissions get deleted and censored.
Sometimes your best talent has a mental illness, how do you manage them? Most just fire that talent when they discover they are mentally ill. It is something that has to stop!
I know that the "chemical imbalance" explanation is a poor excuse for "we don't know exactly how it works". But so many improvements in our quality of living happened because someone had a hunch and some practical, reproducible results showing that it worked. Think about the practice of washing hands when going from one patient to another in a hospital: when it was suggested, people couldn't see a connection between dirty hands and the spread of disease.
I lost a son who suffered from a mood disorder to suicide. It is heartbreaking, and it happened when he was apparently getting over the hump of his darkest moments... I have two other children who also struggled with depression, and what I found works best for us so far is communication: being open about our struggles, and talk therapy in conjunction with medication.
Reading Whitaker's "The Anatomy of an Epidemic" (https://en.wikipedia.org/wiki/Anatomy_of_an_Epidemic) should be required for anyone considering long-term use of neuroleptics, benzodiazepines, or anti-depressants. And for those who care for them.
We're rarely the target customer and rarely behave like "average Joe". We're naturally resistant to superfluous redundancy ("My phone can already snap a barcode, I don't need a separate device") when consumers don't even see the duplication, let alone the issue. They don't see devices (or even apps) as having layers of similarity; they just see things for their end functionality.
My mother would see a phone and apps as completely separate functionality to a physical device like this. She probably would have the Amazon Fresh scanner, the (theoretical) Google Shopping Express scanner and the (also theoretical) Whole Foods scanner and wouldn't even consider the duplication, let alone be frustrated by it. She doesn't care about the potential for an "open standard"/"common standard".
She also has an AppleTV and a ChromeCast connected to the same smart-TV that also has native apps within it (she mostly uses the native apps). Again, she sees no issue with that and might even buy an Amazon FireTV if she felt it was more compelling for one use.
Ultimately, we shouldn't assume consumers value convergence, especially when it creates ever-increasing complexity in user experience (e.g. opening an app to snap a barcode vs pressing a single button on an Amazon Fresh scanner).
ADDED: If you don't have parents that also work in tech, go visit them and just watch them use technology without prompting. Ask them about their experiences, their frustrations, their decisions behind purchasing specific equipment and downloading particular apps. It's very insightful.
Break out your phone, load up your barcode scanning app (there's 20 seconds right there even if the phone is in your pocket). Now try to actually scan something with it. You'll spend another 30 seconds lining up the little on-screen window with the code, rotating things, waiting for the camera to focus, and even having to move to another location if you're not in bright lighting. It's a terrible experience and that's why you don't see stores checking people out using the camera of an iPad.
A barcode scanner, on the other hand, just works. You point it in the general vicinity of the barcode, press the button, and it's scanned. You don't have to perfectly align anything, be in specific lighting, or wait for a camera and an app. I'm sure you've seen cashiers run multiple things over a scanner in under a second.
Amazon Dash isn't just a subset of your phone's functionality. It's a dedicated barcode scanner, which is hardware you don't have on your phone.
A lot of people already kind of do this. They go to a shop, find the items they like and look up on the web if they can get a cheaper price by ordering online.
This version of the product might not be so practical for this use case though since it requires a WiFi connection and can probably only scan AmazonFresh barcodes.
Or do I just have a cold, black heart?
Seriously though, it worries me that there are more and more 'listening devices' in my home.
We've seen what has happened recently with the NSA listening to calls. What is to stop the authorities getting a back door into all these devices and just recording everything?
Which means it is only available in three locations (SoCal/SF/Seattle).
Can easily see this evolving into an Amazon price comparison tool for mobile use. Maybe I get a flash discount if the GPS has me standing in a Best Buy already.
Number of steps to scan grocery by phone:
1. Find your phone
2. Unlock
3. Swipe left to home page three or maybe four
4. Visually scan for the AmazonFresh icon and tap
5. Wait for loading
6. Start scanning action
7. Confirm and pay
Number of steps to scan grocery by Dash:
1. Get device from drawer or pantry
2. Press one button and scan
3. Confirm and pay
For the target demo (30+, married, households with children), option 2 wins hands down, because with option 1 you will easily be distracted, stop partway, and never complete checkout.
Amazon knows CPG and commerce better than you do.
...
next day what shows up, exactly?
6 granny smith apples?
a 15 pound bag of golden delicious?
3 MacBook Pros?
ummm
I am curious what the upgrade cycles of these products will end up being. Can Amazon charge a subscription and keep giving me a new one?
(1) Order frequency - Right now, a typical customer likely picks up groceries when they're out and it's convenient. This very well could be on the commute home from work, later at night, etc. With Dash sitting around the kitchen, Amazon has now created a very tangible reminder in the form of the Dash device to order your groceries, rather than waiting until it pops into your mind (and possibly not buying on Amazon).
(2) Average order size - As someone posted above, it takes one or two button clicks to reorder an item using Dash. Compared to the current way of online grocery shopping, Dash eliminates a lot of the ways you can forget to reorder something you intended to, because it is so simple. On the PC you may forget to browse the snacks category, for example, and so forget to order chips and cookies. That's way less likely to happen with Dash.
This doesn't address price concerns, but in terms of convenience for Amazon Fresh customers & increasing Fresh orders/order size, this seems like a massive win-win for Amazon and their customers.
http://hiku.us/what-is-hiku/
(Half joking: Or is it a Speak Friend And Enter kind of thing, where you have to speak the WiFi credentials.)
Amazon acts like a startup still. Good for them!