"Checking Card Adjustment POS Pin (Credit) $1.00"
So I sent him $1 back (to: my friend, cc: firstname.lastname@example.org, subject: $1). And it instantly sent it to him. I didn't have to verify my details or anything.
I'd feel a lot more comfortable if there was a security blog explaining how they are validating that I indeed sent the email and it wasn't simply spoofed.
Edit - I did this from Gmail, which I presume authenticates all of the emails via DKIM? I'm guessing this won't work automatically for other providers?
Edit2 - Just attempted with another friend and had to verify manually. The automatic-authorization appears to only apply when it's between two previously validated parties.
- It takes an existing known medium (in this case email) and makes it way more useful.
- They didn't try to build a bunch of new UI for connecting your Facebook so you can find and invite and pay your friends, paying out to your card, etc.
- It magically hides the messiness of an enormously complex problem (fraud, different types of debit cards & banks all over the world) behind a very simple interface.
- Unlike every other P2P payment system, I can actually sign up and receive money (or convince my friend to) using only what's in my pocket (debit card)... not hunting down ACH/wire details.
The Durbin amendment regulates the cost of debit transactions over the Visa/Mastercard network. It's $0.22 + 0.05%.
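To put the quoted figures in perspective, here's the per-transaction cost using the comment's numbers ($0.22 flat + 0.05% of the amount) - these are the figures above, not an authoritative fee schedule:

```python
# Rough interchange cost per debit transaction, using the comment's
# quoted figures ($0.22 flat + 0.05% of the amount).
def debit_cost(amount):
    return 0.22 + 0.0005 * amount

# On a $1.00 P2P payment, the fixed fee dwarfs the transfer itself:
print(round(debit_cost(1.00), 4))    # 0.2205
print(round(debit_cost(100.00), 2))  # 0.27
```

Which is why small free transfers are so expensive for whoever eats the fee.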
Mossberg reports that Square is planning to monetize via "premium options" like international transfers. But still, $0.22+ is a lot to lose every time someone uses your mass-market service.
Good thing they raised $341M of VC money.
Who said the dot com days aren't back??
Planet Money recently did a great episode all about the US's ACH system and why it works the way it does.
Twitter login and author info, standalone posts (submitted links/stories), and Disqus comments.
It feels broken already, using a strange mix of identity from one communication tool and the interface from another.
What I like is that by connecting to Twitter and using the USV team's accounts as the source you get a great idea of the character and interests of the VC team and fund.
What I dislike is that by opening that up to anyone, the front-page just becomes a mini-HN and the insight into the character of the team is immediately diluted.
I also dislike that they use Twitter identity as the author for a conversation/debate, but then use Disqus as the medium for the debate. This has two effects:
1) It breaks the feel of the audience; people present themselves slightly differently to different groups. For example, how many HN profile pages carry the same info as the individual's Twitter page?
2) It splits the debate across Twitter (where some will reply directly to the author) and Disqus.
I also find the blog post placement weird. All of the design hints on the blog posts (the grey squares to the right) make me think they are stories, just "hot" stories being featured. That's not the case, though: the grey squares are blog posts masquerading as submitted stories (thanks to the design consistency of the block).
It's a weird experience overall. I liked the effect that was achieved early-on of gaining insight into what the team are following and debating, but it feels confusing. Ultimately I think the best thing to do is just to follow interesting people on Twitter to gain this insight, follow trends and interests.
Should probably be pointed out that Twitter and Disqus are both portfolio companies of USV, and perhaps that's why they chose to do this weird mashup. Makes me wonder about the comedy gold or real opportunities that might be achieved from mashups of other portfolio company offerings. Codecademy lessons that start where you left off, every time you get a cab using Hailo?
Voting rigging is the norm, companies flag posts about their competitors, there's zero transparency about moderation or flagging. The community has become a lot more negative, less supportive and less startupy.
There have been a number of attempts to build HN clones/rivals, but generally the people creating them have focused on the technology rather than the community, which is the important thing. USV is a team that could potentially build a great community, and we should applaud them for trying.
Honestly, the page looks pretty decent. It seems less a clone of HN than just a page where you can comment and vote on links, which, frankly, wasn't a concept created by Hacker News.
Who edited the title? You just took a discussion of how this was a clone of the HN website and halfway through turned it into... into what exactly?
The submission was about USV and HN designs being similar, not that USV got new design. You just nuked the context. Seriously, mods, get your shit together.
Disclaimer: I "cloned" HN when making http://lifestyle.io. I didn't have a preexisting community, but a small handful of people find it useful. Is there a lot of overlap in content? Sure. Do I discover stuff I might have missed on HN? Yep.
I just wish I were a better community organizer.
Unlike HN, you must submit text in the body of a submission, which is somewhat redundant.
"Not your fault, we're experiencing a server error. Try again in a moment!" -Fail-Fred
Face palm. That's all I get when I click on a comment.
Other than connecting people to unique content, I don't really get what's going on with the whole redesign and HackerNews-esque feel.
If growing them feeds people efficiently, we'll see it here soon enough.
Potatoes didn't exist outside the Americas until after 1492. Then many cultures viewed them as lowly and not worth eating. But they can feed more people per area than any other food, and they grow in more types of land than many other edible plants. Cultures would reject them until a famine struck. Then the ruler would eat them out of necessity. Then everyone would eat them. Now potatoes are in more cuisines of the world than any other food.
If cockroaches are efficient, I would expect a few shocks in some commodity markets to put them on a few cultures' dinner plates, then to spread. Like roaches, if you'll pardon the pun.
There's a Brooklyn company called Exo (Exo.co) which is doing just that after a kickstarter campaign + media blitz. Cricket flour is super high in protein, and also very sustainable. Could see it becoming the next acai, chia seed, quinoa, etc.
As I see it, 150 is seven and a half times as much as 20. I didn't use any exchange rate of any sort to compute that.
Seven and a half times $3.25 is $24.375 . Using the most charitable interpretation I can think of, that's a return of $21.12 on an investment of $3.25, but put in the same terms as the original quote, I'd call it investing $3.25 and getting back $24.38. What happened there?
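Spelling the arithmetic out, to show where the numbers above come from:

```python
# Verifying the ratio and the return quoted above. No exchange
# rate involved, just multiplication.
ratio = 150 / 20          # 150 is seven and a half times 20
gross = ratio * 3.25      # 7.5 x $3.25
net = gross - 3.25        # what you gained over the investment
print(ratio, gross, net)  # 7.5 24.375 21.125
```

So the gross payout is $24.375, and the gain over the $3.25 put in is $21.125 - which is presumably where the original quote's figure went sideways.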
They're generally more expensive than the competition, but you get what you pay for, you know? I'm sitting here trying to think of a time when Rackspace has ever let me down, and I can't. Being able to have that kind of confidence in your hosting environment is nice.
Marco is correct that shared hosting is a disaster area, so much so that Rackspace doesn't really compete there, so I'm always hesitant when people ask me to recommend a shared host. I generally end up recommending Dreamhost too; it's not great, but it's better than what you'd get for the same money anywhere else.
(My personal site has been on Site5 for over a decade; they have mostly been pretty good)
I don't agree at all that for many website hosting customers the process is "easy".
A typical web hosting customer is not tech savvy. They either have it handled by their "tech guy", or they can't even remember how the files for their static site got onto the server in the first place, and sometimes they don't even know who is hosting their site.
Source: We're a registrar and we get the calls and emails of confused customers who have no clue where they are hosted. They don't even know enough to look at the WHOIS and check the DNS for a hint. Actually, you'd be surprised how many times someone will access our WHOIS and think we are their registrar.
This line right here is absolutely sage wisdom. Here are some of the companies I've bought services from, as well as what I remember happening to them:
- ClubUptime: closed in a disastrous shutdown after basically being conned.
- DirectSpace: still around, haven't changed much.
- VolumeDrive: very sketchy; I don't really know how they're still in business.
- Fazewire: local Seattle hosting/colocation company. Originally founded by a guy when he was 15; he sold the company when he went to college.
- URPad.net: still around; only used them for a short period of time.
- OVH
- Amazon
- Digital Ocean
GoDaddy does a lot to support their customers. Friendly people over the phone. They've walked my dad through some hosting issues he had when he was trying to set a site up. They call me every couple of months to make sure I'm satisfied with everything (and probably to try to sell me on that bundled registration). Making them out to be The Devil is too dramatic. And the author's bias is transparent, too: he could have linked to the #Philanthropy heading on their Wikipedia page but chose to focus on #Controversies to support his position.
That being said, the companies I've had good experiences with, have heard good things about, and will continue to use/pay are: AWS, Linode, DigitalOcean, and Webfaction (Webfaction is amazing for a small, cheap shared hosting environment). Others that cross my mind are OVH and Hetzner.
If I need to host a more complex or demanding web application it goes onto a dedicated linode (or may share one).
Reliable dedicated servers are very, very expensive (Hetzner, in my direct experience, is nowhere near reliable), whereas with Linode, across 3-8 linodes at various times, I've had no downtime in coming up on 5 years.
Fantastic support, they don't oversell their machines.
Sure, if I shop around I can get a similar spec (whether it delivers, who knows) for half the price, but is it really worth saving 20 bucks if I then don't sleep at night worrying about my VPS provider going down?
I also like DO, I still won't host anything important with them but for a quick dev/test box they are pretty good.
I've never really gotten why the VPS market is quite so price-conscious; the difference between $5 a month and $20 a month is so meaningless in the grand scheme of things (I suspect I spend more than $15 a month on coffee on the way to work).
I would argue that Media Temple kind of killed themselves in many ways; I can't see how GoDaddy will do much worse, to be honest. People put them on such a high pedestal as they got bigger that they just couldn't live up to their glowing reputation, which is a problem not many companies can say they have. Support stayed timely until the end, but Media Temple lost out to Digital Ocean and Linode big time and just couldn't keep up.
I wish GoDaddy all the best, but for the moment I am very happy with my Linode 1024 virtual server plan which never buckles under anything I've thrown at it thus far. Even hitting the front-page of HN once upon a time didn't cause it to break a sweat.
So, we are overpaying for our current hardware, but haven't had the stomach for another migration. Contrary to what the article states, small companies with already limited resources don't want to spend time moving a moderately complex infrastructure around, on top of the considerable work already on the table.
But, yeah, GoDaddy engages in questionable practices. Automatically adding stuff to your cart (and/or making it confusingly easy for you to do so), bumping renewals to 5 years by default, and otherwise making their UI "consistently inconsistent" in ways that miraculously always seem to benefit them are part of the equation. To be pushy with upsells is one thing, but they take it a step further.
These are kind of ingrained business practices and part of the same ethos that says selling IT services with sex is OK. It is hard to imagine them acquiring a company without that company getting at least a little of that stink on them.
Our company did reseller hosting for about 5 years and went through all of the acquisition stuff Marco mentions. We had to exit SoftLayer because they were horrible, only to be brought right back.
Hosting is a horrible business. To be good at it and have marketplace success you need to deliver over-the-top support, which is just unsustainable at scale.
One of the big pains in the webhosting world is maintaining legacy systems...we had about 15,000 clients on ancient servers running RHEL4, under a proprietary VPS platform. (And as far as I know, a big chunk of them are still there.) Needless to say, this resulted in a really crappy service for the clients on those servers, and there never seemed to be a big push to get everybody migrated off of them and onto our newer servers running cPanel. We were working towards it, but it was a big endeavor that would leave a lot of clients extremely upset when things invariably went awry. So rather than putting some good development time towards automating the process as much as possible and hiring more support for those accounts that didn't migrate properly, the problem just sat there for years.
I think people will always pay for service, quality, and experience. Whoever can deliver that consistently will make money in hosting.
The support person refused to do so, and instead asked me to subscribe to a dedicated server. I explained that I didn't need a dedicated server, as it was clear from my statistics that all faults were on their I/O throughput side. He just wouldn't listen and kept insisting on up-selling me a dedicated server.
What a horrible experience! Has anyone encountered the same thing I did? Is I/O throughput a PITA in your hosting experience too?
Many people spend time comparing the different services, but in truth they're all the same!
Also, you get much better specs with the free tier of OpenShift, but I guess that will change once enough people switch to it (just like AppFog changed their free tier).
I can find links to a website by typing its name into a search box now. I can see a list of my own links now. Those are nice improvements (or dis-catastrophic-mistakes) over the last version.
I still can't sort search results by anything other than, well, randomness it seems. Certainly not by number of points. Maybe by date of the last save?
So yeah, halfway there. Even the font is halfway between the terribly oversized font from the redesign and where it belongs. I once got 20 links to a page. Then I got 4. Now I'm back to 9, which at least looks like a list.
I'll check back in a couple years to see where they're at. But I'm not going to start using them again. Fool me once, and all that.
Over the last 3 or 4 years they made their UI progressively worse: replacing spaces with commas as keyword separators, slower autocomplete, and now barely visible input boxes. Delicious has been extremely unkind to its longtime users.
On first blush, it's underwhelming. A lot of monotone flatness, which I'm sure hit some kind of trend that got it greenlit, but doesn't make for good UX.
The interface is a solid wall of text that makes it very hard to distinguish one link from the next. There are no signposts to make it easy to tell where you are, or what you're able to do.
I ran through the bookmarking process, and it's clunky. Still asks you to tag things, only now things like the suggested tags are gone, meaning you have to think even more about what you're doing as you save bookmarks.
There's nothing really new that I can see, just a coat of paint and a lot of gratuitous flatness. Flat for flat's sake is definitely this year's skeuomorphism.
Full disclosure: I work with a clipping service that might be considered a Delicious competitor. (But I'd have the same critiques even if I didn't).
But since the redesign it's all been working incredibly smoothly for me. I'm happier with it than I've ever been.
(I didn't actually realize Yahoo sold Delicious. Interesting..)
Think of the last time you saved a page on Twitter/Facebook, only for the OP to delete or hide the post a few hours later.
And perhaps Facebook-like graph search based on metadata extracted from the page. E.g., search for an archived page about presents, published before Christmas 2013, by an author whose name vaguely starts with the letter J, from website XYZ.
So who needs unlimited scroll?
Some front-end libraries I found:
My advice: Stop worrying about perfecting your application or trying to game it. Stop worrying about trying to get into YC. Submit it, forget it. Go build a company. You don't need their validation, and you certainly don't need their permission.
Just go freaking make something people want.
> From this, based on the volume of repairs in San Francisco...we could be operating at about $70k per day in just SF.
A bottom-up story can just be a top-down one in disguise. Capturing 10% of a market doesn't sound too hard until the word "billion" appears. Instead, we can capture 114% of one region. And $20 million per year sounds too big, so let's state it per day.
Just saying "the average dishwasher needs a $200 repair once every two years" tells me more than that entire paragraph.
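The framing trick is easy to see once you restate the article's per-day figure per year:

```python
# The article's SF figure, restated: "per day" hides how large the
# annual claim really is.
per_day = 70_000
per_year = per_day * 365
print(per_year)  # 25550000, i.e. ~$25.5M/yr from SF alone
```

$25.5M per year from a single city is the kind of number that invites scrutiny; $70k per day doesn't.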
YC is investing $18k. If you build a $10 or $25 million business off that springboard, even with an angel round, that's awesome for all.
As software fills out the niches of the world, that's where the real wins are going to be.
So in short, saying your company will be worth a billion dollars isn't cool. It's just douchey.
Why would they care about how quickly you can build something? If you took 2 years to build an iPhone app in your spare time, but that iPhone app ended up being Candy Crush, wouldn't that make that metric pretty meaningless?
Until that time, we will, and we should, doubt it.
What do they mean by this? Do we literally not know who created TrueCrypt?
Somewhat James Bond-y idea, but you get the point.
In Mayor Nutter's keynote at PennApps last month, he talked a lot about how Philly is really pushing for open data/open government. It's great to see those initiatives coming to fruition.
I see at least NASA on there (who just got in trouble with their IG for improper use of un-accredited cloud services).
Or does this include GitHub Enterprise users?
$ curl -s http://4.bp.blogspot.com/-MPv_CwvvwKQ/Ulrw3TfdgyI/AAAAAAAAAEw/YsRPmU6C5xM/s1600/trefoil_rotate_white.gif |strings|grep -i created UCreated by Wolfram Mathematica 9.0 for Students - Personal Use Only : www.wolfram.com
That's why math is fun. You can always participate in the analysis.
Second, your users can now benefit from accelerated content uploads. After you enable the additional HTTP methods for your application's distribution, PUT and POST operations will be sent to the origin (e.g. Amazon S3) via the CloudFront edge location, improving efficiency, reducing latency, and allowing the application to benefit from the monitored, persistent connections that CloudFront maintains from the edge locations to the origin servers.
So you save on having to configure and serve multiple domain names (which isn't all that hard), but you have to add logic to your server-side code and user-facing upload UX to check, and show the user, whether content has finished being uploaded to CloudFront and from CloudFront to the origin/backing store before using/processing/displaying it.
I'm curious whether the edge receives the full POST/PUT first and then does a complete PUT/POST to the origin, or forwards as it receives.
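If I'm reading the announcement right, enabling this comes down to adding the new verbs to the distribution's cache behavior. A rough sketch of the relevant fragment, shaped the way I remember the CloudFront DistributionConfig schema (double-check the field names against the AWS docs before relying on this):

```python
# Sketch of the cache-behavior fragment that enables the new HTTP
# methods on a CloudFront distribution. Field names are from memory
# of the DistributionConfig schema; verify against the AWS docs.
allowed_methods = {
    "Quantity": 7,
    "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
}

# CloudFront requires Quantity to match the number of Items.
assert allowed_methods["Quantity"] == len(allowed_methods["Items"])
print(allowed_methods["Items"])
```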
Combine that with a custom SSL certificate at all the edge nodes ($600/month) and this is a pretty compelling dynamic CDN offering.
Can't wait to try it out for our API.
Nope. Customer lost.
Look, I hate Quickbooks and Quickbooks Online. I really, really do, but I will not use a financial product that doesn't connect to my financial institutions. Period.
Do you want to know what MY startup dream is? I want someone to give me money, and then I want to go create a service that kicks the crap out of Yodlee and Intuit's own bank connection system. I want it to use REST APIs when it can, OFX when it should, and intelligent screen scraping when it must.
I want to build a startup based on an open core of specifications for how to connect to every financial system in the world. I want that spec to be executable and available as a simple library with bindings to every language you can think of. If you have a new institution or your bank changes and you can fix it, I want you to be able to fork the library and send us a pull request.
I want end users to be able to go through a "guided login process". "OK, log in now", "OK, click on the accounts list", "OK click on a transaction". "You're done! We've autogenerated a basic scraper for your bank. Thanks for helping us out."
I want to make money off this library by providing a simple, unified REST API behind all this mess that provides the computational resources to handle millions of customers connecting with thousands of institutions.
I want this company to provide push notifications so your app can do clever things when people spend money.
I don't want you to have to sign an NDA and pay thousands of dollars just to get permission to play with it.
I want it to be the Twilio of Banks.
But if you want to take the code and go your own way, you can.
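For illustration, the fallback behavior described above (REST when it can, OFX when it should, scraping when it must) might look something like this; every name here is invented, not a real library:

```python
# Hypothetical sketch of the connector fallback: try each strategy
# in order of preference, fall through on failure.
class BankConnector:
    def __init__(self, rest=None, ofx=None, scraper=None):
        # Strategies in order of preference; any may be absent.
        self.strategies = [s for s in (rest, ofx, scraper) if s]

    def fetch_transactions(self, credentials):
        errors = []
        for strategy in self.strategies:
            try:
                return strategy(credentials)
            except Exception as exc:  # fall through to the next method
                errors.append(exc)
        raise RuntimeError(f"all strategies failed: {errors}")

# Usage: a bank with no REST API but a working OFX endpoint.
connector = BankConnector(
    rest=None,
    ofx=lambda creds: [{"amount": -4.50, "memo": "coffee"}],
    scraper=lambda creds: [],
)
print(connector.fetch_transactions({"user": "x"}))  # [{'amount': -4.5, 'memo': 'coffee'}]
```

The point of the open spec would be that when a bank breaks, anyone can swap in a fixed strategy and send a pull request.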
I really don't know why we've let just a few companies keep our collective financial data locked up for so long. Is it because it's so expensive to get it working? Well why not spend it on people who will create an open, scalable system that can still make money?
Instead, we have Mint.com and mvelopes. That's it, really. Have an idea for a personal finance tool that lets you create "virtual subaccounts" for your checking and savings accounts so you can leverage double-entry bookkeeping in your personal finances through a clear metaphor? Great! Now have fun spending 10 minutes every two days copying and pasting stuff from 10 websites into 1.
It's just madness.
You know that "one weird thing" you're passionate about that's not really related to anything else you're passionate about? This is it for me.
P.S.: lubos - this isn't really about you or manager.io. I commend you for making something and getting it out there. This is about the thing that makes every one of these attempts inevitably fail, and it's sad that we're all being held hostage to crappy software because of it. I wish you success, I hope that I'm completely wrong.
But I'm curious about a few architectural decisions. What made you decide to build each HTML page by hand?
Code like this makes my eyes bleed... it reminds me of the faux-OOP HTML builder classes that used to be a fad among PHP programmers (or ISAPI & Delphi web developers of old) a while ago. No offence, but much of your Manager.HttpHandlers.* codebase feels like messy, ugly PHP4 code ported to C#...
What made you decide against template-based output rendering (Razor, NVelocity, NHaml, .liquid to name a few)? With template-generated output, the business logic layer could be decoupled from the UI. I had only a cursory glance at your code (and thus could be wrong), but it seems manager.io's DAL/BLL layer is intermingled within the GUI parts.
The protobuf DLL was named protobufnet.dll in the MSI, but the proper filename should be protobuf-net.dll.
I think user input validation and error handling could be made more robust.
Additionally, spawning 5 HTTP worker threads to serve a single user seems a little overkill.
These are a few of the issues I noticed during five minutes of tinkering with your assemblies. But don't let this critique discourage you. The app looks good; I guess end users won't care how it's built so long as it provides real value...
PS: Thanks for the heads up about Eto forms! I'll give it a spin and see how it fares against Xamarin's XWT.
System.FormatException: Guid should contain 32 digits with 4 dashes (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx).
   at System.Guid.GuidResult.SetFailure(ParseFailureKind failure, String failureMessageID, Object failureMessageFormatArgument, String failureArgumentName, Exception innerException)
   at System.Guid.TryParseGuidWithNoStyle(String guidString, GuidResult& result)
   at System.Guid.TryParseGuid(String g, GuidStyles flags, GuidResult& result)
   at System.Guid..ctor(String g)
   at Manager.Objects.Get(String entityId)
   at Manager.HttpHandlers.File.Upgrade.Get()
   at HttpFramework.HttpModule.ProcessRequest(HttpRequest request)
   at Manager.HttpModule.ProcessRequest(HttpRequest request)
1. It's free (how does the company backing it plan to stay in business?).
2. It's free (as in beer), so I/the community cannot take over in case the product/company ever goes under.
Thanks for your work.
One question, can bank accounts be linked for realtime importing or is it based on importing csv's only etc?
It's disingenuous to say Google has "launched and succeeded" against Microsoft in the PC OS space.
Windows XP is version 5.1 of Windows NT. I think it is really dumb to try to see "laws" in product versions.
The next CEO needs to find the right balance between focusing on the assets they have as well as picking the right fights with emerging technologies. That said, hindsight is a wonderful thing.
Anyway, a clarification of the headline: not only 'unopened' snaps, but also those that have not yet been opened by all nominated recipients. So if a law enforcement agent can have themselves invited into a Snapchat circle they can take advantage of that fact to persist the image on the server until it is legally demanded.
I'm glad they don't actually store for longer than is necessary tbh.
Obviously you can do a lot more with it than just listen to cops, and even this tutorial makes it look a LOT more complicated than it is.
Plug the device in, and select a frequency. It's pretty much that easy.
I use it for listening to everything from ham radio operators to airplanes. Seriously have had every penny worth of fun playing with this.
Here is everything you need, available on amazon prime: http://www.amazon.com/Receiver-RTL2832U-Compatible-Packages-...
(Although I recommend a better antenna. Dig one out of your garage, that is what I did.)
Can the same USB receiver function simultaneously as the signal and control receiver? If not, then the HW cost is $19 x 2.
As long as you have a $25 software package.
That runs on your $100+ Windows installation.
On your multi-hundred-dollar laptop.
Perhaps I've romanticized journalism in my mind and see the past through rose-coloured glasses, as we all tend to from time to time, but I feel like journalists used to have integrity, and that we could once trust that when they put pen to paper, we could believe what they wrote... or perhaps we never could, and it's only as age removes our naivety that we see the world's media for what it is: a sham designed to further political and commercial interests.
I, for one, should like to see a news agency that spends its time chasing down the facts like CSIs to present the cold hard truth, rather than some dumbed-down version of events designed to have political sway. Let's hope the vision for whatever venture this may be is exactly that. Perhaps I can dream.
As much as I dislike paying for news, I'd far rather see them model themselves after the Economist than CNN.
Well good luck with that. To lose faith in humanity, one has to look no further than comment threads on news sites. The current comments on the Reuters article are already facepalm-inducing.
And Google, and Microsoft, and Adobe, and Facebook, and Oracle, and Pfizer, and Starbucks, and General Electric.
But Apple makes the headline. Normally this would be headline whoring by the predictable media, except that this is the BBC, and I thought Starbucks was made the whipping boy for tax evasion in the UK.
* Vagrant is being updated to work with 4.3 now. A release should be out today or at the latest tomorrow. (UPDATE: Vagrant 1.3.5 is now out and supports VirtualBox 4.3)
* VirtualBox 4.3 doesn't run at all on Mavericks (10.9) because their kernel extensions aren't signed. OS X 10.9 requires signed kexts now. So the changelog where they said "limited" Mavericks support they actually should've said "no support". (UPDATE: Some people are reporting it is working for them on Mavericks. I can't get it to work. YMMV)
Based on these two bullet points, I would stick with VirtualBox 4.2 for the time being. The bullet point that says "rewritten VT-x and AMD-V code" is especially vague and could be "super unstable virtual machine manager" just as easily as it can mean "slight performance improvements." So be careful.
Other than that, it is good to see VirtualBox have some sort of big release. This is their first major ("4.x") release in over a year.
I also want to note that if you are on Mavericks, VMware Fusion works perfectly. As a disclaimer to this sentence, though: I make money from Vagrant + VMware users. I'm not trying to advertise that; I just want to state that Vagrant _itself_ is fine. VirtualBox 4.3 doesn't work. VirtualBox 4.2 does. VMware Fusion 5 and 6 do.
The big releases tend to have a way of introducing bugs that, although they get fixed, can wreck your day if you depend on the software.
I still wish they'd consider adding retina support on OS X; that's the one thing keeping me from using it 24/7 :(.
This feature alone is worth the upgrade.
I've switched over to Parallels 9 on the Mac, and it is _much_ faster overall (not to mention better USB support).
Also, I highly recommend that Vagrant users look into the vagrant-lxc plugin; that, too, is one heck of a lot faster than using either the VirtualBox or VMware providers.
I've documented my setup here: http://the.taoofmac.com/space/HOWTO/Vagrant
I hope Vbox will catch up with VMware player on 3d acceleration soon.
1. When cloning VMs I would always have networking issues. The fix was known and simple (https://www.virtualbox.org/ticket/660) but not intuitive to a casual user.
2. Installing the guest additions (drag-and-drop file support, shared clipboard, basically stuff you really want) as a kernel module can be a huge pain in the ass depending on what kernel you run. I never had any issues with a "stable" 2.x kernel, but with 3.x I had a difficult time finding the correct kernel headers and putting them in the correct place.
"They ask: If court orders are legitimate, why should we allow engineers to design services that protect users against court-ordered access?"
Why on earth would you allow that a court order is legitimate? Both the tactic and the execution are of questionable legality, and critically that legality is flexible.
Remember that the clearly illegal complicit acts of telecommunication companies were made retroactively legal through a grant of immunity. The idea that you would begin your article by allowing the emperor to retain the assumption of clothes undermines the credibility of your remaining contentions. Courts having the right to demand things from citizens that they do not wish to divulge is not some force of nature, it has not always existed, does not exist in all systems, and need not exist in the way it currently does.
So cut the crap trying to justify the moral, conscientious, and brave action of a small company and force them to justify their existence.
1. Court orders can be freely targeted.
It's incredibly hard and costly to make a system resistant to inside attacks from everyone. Not just costly from a technical implementation perspective, but from a business operations perspective. For example, software engineers might occasionally want to look at some user data in order to diagnose a bug. Not having access to the data would make their lives much harder. Certain analytics might not be able to be generated which leaves the business flying blind.
Instead, an acceptable tradeoff is that access is restricted and managed to mitigate risk. For example, access is only granted when necessary and sensitive operations might require two separate people to sign off. This makes it significantly more difficult for a malicious actor to bribe the right people but makes it no more difficult for law enforcement. Law enforcement can legally compel bypasses around all the safeguards.
2. Court Orders don't care about being detected.
Instead of making it technically impossible, it's often far more effective to deter inside attacks through robust detection. Audit logs, clear policies and dire consequences are usually enough to shift the calculus of inside attacks into "not being worth it". Such a calculus does not apply to court orders because they don't care about being detected, because they're not doing anything "wrong".
On the surface, court orders and inside attacks might seem very similar technically, but viewed from an overall business perspective they are vastly different, and the comparison between the two is unhelpful.
> From a purely technological standpoint, these two scenarios are exactly the same [...] Neither of these differences is visible to the company's technology - it can't read the employee's mind to learn the motivation[...]. Technical measures that prevent one access scenario will unavoidably prevent the other one.
Emphasis on the last sentence: it only holds because of the implementation chosen in the example.
As a counterpoint, consider a system that allows user data access only after a request has been made, the request is recorded in a request-log system of some sort, and approval for the request passes the appropriate checks (legal and procedural), at which point it's signed off on and data access can occur.
(The counter-counter-argument is that technology isn't perfect and someone with the right access could potentially get around it ... but enterprise key management is a real thing, folks)
In this sort of system, the "intent of the employee" piece is encoded in the checks/approval piece as long as you make sure the same employee making the request is not the one with approval rights and that legal representation gets included in the loop for these types of accesses.
In this situation the hypothetical criminal syndicate would have to mount a larger and larger attack involving more people and greatly reducing the chance of it happening.
A government, however, would just pile on the legal requests and increase the number of employees involved until the request could potentially be satisfied. By doing it this way, you make it unlikely for the government to il/legally pressure a single individual; instead you involve your company's legal representation and a larger portion of the government's legal apparatus in determining whether the request is valid - and in the meantime you create some sort of documentation of the event (even if you can't publish / talk about the documentation while you're going through the courts).
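A toy sketch of that request-and-approval gate (structure and names are mine, purely illustrative): every access starts as a logged request, the approver must be a different person from the requester, and legal signs off before anything is released.

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    requester: str
    reason: str
    approvals: set = field(default_factory=set)
    legal_ok: bool = False

class AccessGate:
    def __init__(self):
        self.log = []  # audit trail of every action taken

    def request(self, who, reason):
        req = AccessRequest(who, reason)
        self.log.append(("REQUEST", who, reason))
        return req

    def approve(self, req, approver):
        # Two-person rule: the requester can never approve themselves.
        if approver == req.requester:
            raise PermissionError("requester cannot approve own request")
        req.approvals.add(approver)
        self.log.append(("APPROVE", approver, req.reason))

    def legal_signoff(self, req, counsel):
        req.legal_ok = True
        self.log.append(("LEGAL", counsel, req.reason))

    def granted(self, req):
        # Data access only after a distinct approver AND legal review.
        return len(req.approvals) >= 1 and req.legal_ok
```

The point isn't that this is hard to build; it's that the audit log and the multi-party sign-off encode the "intent" piece that the quoted article says technology can't see.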
The only advantage in defensive design where you literally cannot access your customer's information is that it absolves you of knowledge of what any one specific customer is doing. However, you increase your risk exposure to your services being used for illicit purposes (as defined by whoever is bringing a lawsuit against you), potentially being shut down, and potentially losing money as a result.
Some companies are ok with accepting that cost (in return for something that you can't put a price on) - most aren't.
There is a big difference between no employee can access the data and no single employee can access the data.
Had Lavabit had in place measures to prevent disclosure of its master key, it would have been unable to comply with the ultimate court order.
Edit: Upon thinking about this further, couldn't a solution to the byzantine general's problem like how bitcoin works solve this?
e.g. Thinking outside the box here: what if every person who uses the mail system had to collectively solve some hashing problem based on the source code or system change, where the solution allows the software patch or upgrade to be applied? If 50% or more of the users solve the hashing problem after inspecting the code, the patch would be applied.
Why wasn't Lavabit setup in a similar fashion? Why isn't this more widely practiced?
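A toy version of that threshold idea (note this is simple majority approval over a patch hash, not Bitcoin's actual proof-of-work consensus; all names are made up): each user inspects the patch, records the digest they approved, and the patch applies only when at least half of all users approved that exact digest.

```python
import hashlib

def patch_digest(patch_text: str) -> str:
    # Everyone hashes the exact bytes they inspected, so a swapped-in
    # malicious patch produces a different digest and gathers no votes.
    return hashlib.sha256(patch_text.encode()).hexdigest()

def may_apply(patch_text: str, approvals: dict, total_users: int) -> bool:
    """approvals: mapping of user -> digest that user approved."""
    digest = patch_digest(patch_text)
    yes = sum(1 for d in approvals.values() if d == digest)
    return yes * 2 >= total_users  # 50% or more of all users
```

A real deployment would need signed approvals rather than a trusted tally, which is where it starts to resemble the harder distributed-consensus problem.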
What is the current state of the art on homomorphic encryption? Does it still cost an 'ARM and a leg' of CPU cycles?
For another great read about a would-be Amazon that didn't turn out quite the same way, and an entertaining look into the internet business environment at the turn of the millennium, I recommend Dot Bomb by J. David Kuo.
The long narrative history of older meanings of the two words is also not very helpful. Here's a much better summary (http://www.nobleharbor.com/tea/chado/WhatIsWabi-Sabi.htm) that reads in part:
"So now we have wabi, which is humble and simple, and sabi, which is rusty and weathered. And we've thrown these terms together into a phrase that rolls off the tongue like Ping-Pong. Does that mean, then, that the wabi-sabi house is full of things that are humble, plain, rusty, and weathered? That's the easy answer. The amalgamation of wabi and sabi in practice, however, takes on much more depth."
And I don't think ikebana (floral arrangement) is generally a reflection of wabi-sabi, much less haiku (section entitled "Wabi-sabi in Japanese arts").
Anyway, kind of a mess. Someone should fix that ;-)
To plug a book on Japanese culture (as it was, rather than as it is now): to anyone with the time or interest, I highly recommend Bernard Rudofsky's 1965 book 'The Kimono Mind'.
'Then my favorite fellow Bubba, Jesse Gutierrez, came by to swallow some beers. He told me that he's been studying Japanese esthetics, and that what I was talking about was "wabi-sabi," the beauty of the humble and the imperfect. Wabi-sabi, declaimed Jesse, his thumbs hooked into the straps of his overalls, was developed to its height by 15th-century tea masters who found that the finish of Chinese Ming porcelain began to cloy. They started buying and exalting absolutely plain Korean peasant ware, stuff that was cracked, distressed, flawed. It reminded them of the beauty of nature, autumn leaves on a stone path.'
When he was a teen-ager, Dorsey told me, he read a book about tea ceremonies and was impressed by the Japanese precept of wabi-sabi, which holds that the greatest beauty comes from organization with a dash of disorder. The monks rake up leaves, then they sprinkle a few leaves back, he explained.
That's such a simple and necessary feature of any reverse proxy that it should obviously be included in the free version.
So, are they going to avoid ever implementing it in the free version? Would they turn away patches to add that functionality? I know there was an open source patch that never made it in, but I don't know why.
And what about staging/dev environments? Do you really have to pay full price to get basic features for internal testing servers?
Seems like a less open-source-friendly business model than Trolltech (Qt), Red Hat, MySQL...
I wonder how much traffic is generated by this 16%? I would assume it's more than 16% of the total, probably above 50%.
As it stands right now, a lot of users aren't able to make use of 3rd party modules because of the overhead (recompiling). Once dynamic modules are supported the community should be able to fill in the most desired features.
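For reference, nginx's eventual dynamic-module support works roughly like this (the GeoIP2 module here is just an example):

```nginx
# Compile the module separately from the core binary:
#   ./configure --add-dynamic-module=/path/to/ngx_http_geoip2_module
#   make modules
# Then load it at startup, from the top-level context of nginx.conf:
load_module modules/ngx_http_geoip2_module.so;
```

That removes the recompile-the-whole-server overhead that keeps most users away from third-party modules today.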
HEADLINES FROM NEXT WEEK:
cat raises $750M at valuation of $25B
The venerable gnu shell program 'cat', which is an integral part of 70 million developers' toolchains and helps power shell scripts on 70% of the world's Internet servers, has completed raising a seed round of $750M at a valuation of $25B.
"We could have raised more", said Richard Stallman, "but by only parting with 3.3% we are leaving ourselves room to grow. More and more people are starting to use Gnu/GNU utilities, and our eventual market base is 7 billion people. We have no plans to monetize."
There were rumors that he had plans to rewrite big parts of the server, a so-called nginx2. If this Nginx Plus is what they're doing instead, that is a pity.
In my humble opinion, while nginx's core is brilliant, the module system is over-engineered and too complicated, and there is a lot of room for improvement.
I know they use it; it just looks kind of odd compared to the rest of the young (I mean less than 5 years old) web companies. No Ubuntu, nginx, Node, or JVM, but C# instead. I don't know, it just stands out.
If so, are there screenshots ?
Has anyone built a very simple solution to start/stop an arbitrary set of Windows services across several boxes, in a specific order? It'd be nice to have a simple GUI for this sort of thing. I've started working on it, but I suck at desktop programming (well, at programming in general, probably)...
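A minimal sketch of the ordered start/stop part, assuming the stock Windows `sc` tool and placeholder host/service names (everything here is illustrative, not an existing tool):

```python
import subprocess

def service_commands(action, plan):
    """plan: ordered list of (host, service) pairs, in startup order."""
    assert action in ("start", "stop")
    # Start in dependency order; stop in the reverse order.
    ordered = plan if action == "start" else list(reversed(plan))
    return [["sc", rf"\\{host}", action, service] for host, service in ordered]

def run(action, plan, dry_run=True):
    for cmd in service_commands(action, plan):
        if dry_run:
            print(" ".join(cmd))      # preview what would run
        else:
            subprocess.check_call(cmd)  # requires admin rights on each box
```

A GUI would just be a thin layer over a list like `[("app1", "MyQueue"), ("app2", "MyWorker")]`; the ordering logic is the only interesting part.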
Does anybody know if it works with Mono?
Currently I'm running a centralised logstash server and using a logstash shipper on each of my servers to push the exceptions from a standard logfile to it. I was toying with the idea of pushing all my errors at source to an SQL database, but figured that if I was having database problems I'd be missing all the exceptions that I could be using to trigger the alerts that I'm having database problems!
I am staying in Thailand, which may be one of those emerging markets. I will comment on the situation in Thailand, but the situation in other countries may be similar.
The average salary in Thailand is about $350. But that does not make Thailand a price-sensitive market for high-end smartphones. If you want a cheap phone, Chinese iPhone copies are readily available starting from $40 at markets and even malls. The people who do buy a real iPhone don't generally buy it for its features, hardware or apps. They buy it because they have money and want to show it. You also see a lot of German luxury cars on the street, despite the fact that these are taxed at 200%. I heard that if you don't have a Mercedes in Thailand, people don't want to do business with you because they think you are insolvent. A cheaper plastic version of the iPhone isn't going to cut it in these markets; a more expensive gold version will. Gold has a special meaning for Asians, especially the Chinese, who form most of the elite in Thailand.
To a lesser extent I think that also applies to Western countries. My former boss, who drove a Porsche, commented at the iPhone 4S launch that it didn't matter what Apple's latest killer feature would be. People (including himself) will always want to have the latest iPhone.
As for Apple cutting orders due to weak demand, they simply placed a large initial order with their suppliers to get a better price. They probably told them it was going to sell like hot cakes when all along they only needed a modest quantity. They now go back to them and say 'sorry folks we won't be needing so many for the next order' and the suppliers walk away wishing they had charged more per unit with the first order.
Note this article is thin on sources and the angle implied - 'weak demand' is just interpretation.
I'm quite sure that Apple has smart economists. So why do they call something that costs 15% less than something expensive "low price"?
The Monday morning quarterback in me will now say: "in order to differentiate an excellent product (5C) from a premium product (5S), you have to sell the 5C at least under the 400 level. At a 100 price delta, users are 'clearly' price elastic."
If the margins on a 5C at a sub-400 price are not those Apple expects from its products, don't build the 5C.
I think if Apple launched a "budget" phone it would destroy much of the Android market. Maybe it would damage their brand though, who knows.
I think Tim Cook wants her to do the same for Apple. The iPhone needs to be seen as a luxury brand.
I think creating a font from scratch could become a designer's rite of passage. It involves usability, aesthetics, and technical knowledge (kerning, weights, character encoding, horizontal and vertical metrics...). I always thought about creating one myself but usually ended up browsing the web for original and better designed fonts.
You got me questioning my behavior.
Can you please put another file with a link back to your website and the request to donate to the International Justice Mission if used?
If you wish to compare it to something, have a look at these free fonts: http://www.exljbris.com/ They're free for the Roman, Bold, Heavy, Italic and small-caps variants, but if you want more, say a Heavy Italic, you pay a small fee.
1. Since gravity acts on all points of the slinky equally, you can aggregate this by saying that gravity acts on the slinky's center of mass.
2. The slinky acts like a spring. Since it is being held stationary, the forces on the bottom part of the slinky equal out. There is a force of gravity going down which equals an upward spring force.
Therefore when it is dropped, the center of mass falls at g=9.8 m/s^2, while the bottom part initially experiences no net forces.
You can also show why the net force on the bottom of the spring remains 0 (0 = mg - F_spring) for a spring obeying Hooke's law (F = k*d): the tension at the bottom depends only on the local stretch d, which doesn't change until the collapse of the coils above reaches it.
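The two points above are easy to check numerically. Here's a toy mass-and-spring sketch (all parameters are arbitrary illustrative choices, not measurements of a real slinky): set up a chain of masses in hanging equilibrium, release the top, and integrate. The springs are tension-only, since a slinky's coils can pull but not push.

```python
G = 9.8      # gravity (m/s^2)
N = 20       # number of point masses
M = 0.01     # mass of each point (kg)
K = 1.0      # spring constant of each segment (N/m)
DT = 1e-4    # time step (s)

def hanging_positions():
    # Equilibrium: the segment above mass i carries the weight of
    # masses i..N-1, so its extension is (N - i) * M * G / K.
    x = [0.0]  # downward position of each mass, top first
    for i in range(1, N):
        x.append(x[-1] + (N - i) * M * G / K)
    return x

def drop(t_total):
    x = hanging_positions()
    v = [0.0] * N
    for _ in range(int(t_total / DT)):
        # Tension-only springs: coils pull but never push.
        t = [max(0.0, K * (x[j + 1] - x[j])) for j in range(N - 1)]
        for i in range(N):
            f = M * G                 # gravity (downward positive)
            if i > 0:
                f -= t[i - 1]         # segment above pulls up
            if i < N - 1:
                f += t[i]             # segment below pulls down
            v[i] += (f / M) * DT      # semi-implicit Euler
        for i in range(N):
            x[i] += v[i] * DT
    return x
```

In this toy model, after 0.3 s of falling the bottom mass has barely moved, while the centre of mass has fallen the free-fall distance 0.5*g*t^2 (about 0.44 m) and the top has fallen farther than free fall, exactly as points 1 and 2 predict.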
More explorations of falling slinkys:
* http://www.wired.com/wiredscience/2011/09/modeling-a-falling... - a different way to model it.
* http://www.youtube.com/watch?v=b9-XgSYLxDk - video analysis.
* http://wamc.org/post/dr-mike-wheatland-university-sydney-phy... + http://www.physics.usyd.edu.au/~wheat/slinky/ - more experimenting and a formal paper.
To explain: if you had a button a light year away and the option to press it via a remote hand, the fastest we could tell the remote hand to press it would be one year, since the signal travels at most at the speed of light.
However, there has been the question of whether pushing on a stick one light year long, in lieu of sending a signal to the remote hand, could be faster. The way the slinky moves demonstrates that the giant stick would not beat the speed of light: the motion exerted on one end actually travels through the material much more slowly, as a compression wave.
The slinky is quite useful for demonstrating movement of objects. :)
At one end he had a geared down variable-speed motor that pushed the slinky with a sine wave motion. I believe the other end was fixed. In between, he had painted various parts of the slinky blue. The blue parts seemed to be completely still, even as the areas between the blue pulsated back and forth.
He said it was a demonstration he built "to show my nincompoop investors about standing waves".
When you hold the slinky in the air by the top, the weight of the bottom is equal to the force of the tension of the spring; otherwise the bottom wouldn't remain stationary.
Once you let go, the force from the spring should decrease as the top falls downward. But the top gathers up more of the bottom as it falls, so the smaller bottom needs less force to hold it in place. Apparently the decrease in mass of the bottom and the decrease in tension of the spring exactly cancel each other out.
He could just let people download their data and decrypt it locally. Instead the site is prompting you for a password which it could freely capture.
Does this not mean that the NSA could patiently log all the traffic going in and out of the site over the next few days, then get a court order for this new SSL private key, then decrypt the traffic they collected?
I may have misunderstood, but doesn't that make this something of a trojan horse? Many users will login and try to download all their email, and for everyone who does, when the NSA (very likely) get a court order for the new SSL key, they'll have that large amount of private email everyone tried to copy from the site?
What? If an active attacker is changing certificates on the fly, he's also surely able to change the values in the HTML content of the page.
This will add absolutely no security for the users, only false sense of security via complex-looking measures, and he should know this.
Now you'll just wait to see what happens, and be as surprised by the outcome as you've been by this year's surveillance revelations.
Brave new world.
I hope that's not confusing enough to cause problems for either of them.
Note: Learned the basics of web programming from Codecademy - extremely helpful service.