WordStar: Used "X" to Exit to system in its main menu (https://www.flickr.com/photos/markgregory/6946218793/?rb=1) - I do not know the revision shown in the screen shot.
According to Wikipedia (http://en.wikipedia.org/wiki/WordStar), WordStar was released in 1978, which pushes the use of X for exit back to at least 1978.
However, there is possibly a very simple explanation that the blog post overlooked. In text menus such as WordStar's, which were quite common in software of that era, using the word "Exit" to mean "leave this program/application" was also common. When one goes looking for a single-character mnemonic for "Exit" to build in as the keystroke that activates the "Exit" command from the menu, one has four choices: [e] [x] [i] [t]
Since [x] is an uncommon letter, while e, i, and t are more common and therefore more likely to be used to trigger other commands in the menu(s), choosing [x] to mean exit meant that the same character could likely serve as a universal "leave this menu" command key across all the menus.
Which would then lead to the common _F_ile->E_x_it command accelerators in drop down style menus (whether in a GUI or in a text menuing system). [x] was unlikely to have been used for the keyboard accelerator for other entries in the "file" menu, so picking e[x]it was a safe choice.
It is not a far reach from _F_ile->E_x_it, with [x] as its accelerator key, to labeling the title bar button that performs the same function with an X as well, to take advantage of whatever familiarity users might have with the drop-down menu accelerators.
That said, since the "X" in this case is white on a black background, I always interpreted the icon as four arrows pointing inward to indicate a shrinking/disappearing motion. In fact, when you closed a window, GEM would play an (inelegant) animation akin to the Macintosh of the time, composed of a sequence of boxes first shrinking from the size of the window to a small box and then shuffling that off to the top left of the screen.
As bemmu points out, the maximize button (at the top right in a GEM/TOS window) is four arrows pointing outward. Incidentally, GEM did not have a notion of "minimize."
Put another way, although I find the Japanese inspiration argument interesting, I don't think there's a whole lot to it. I think it's a fun coincidence.
In any event, thank you for the trip down memory lane and for the fun screen grabs!
NextStep 0.8, '88 vintage.
I wonder where the author got the idea that the [-] button at the top-left was a close icon. It was the "Control Box", a menu icon. AFAIK it's still there, just invisible -- hit alt+space to open it.
Disclaimer: I'm currently unable to test that.
I clearly remember that for closing windows one could do alt+f4 (which was itself a shortcut to Close) or open the file menu (Alt+F) and select eXit.
I can't check, but I believe it was the same for Write and Notepad, as well as any other programs that had the Exit option.
So maybe that's where the Windows 95 developers took inspiration for the X icon.
Maybe it's because I was used to Windows 3.11, where you had to actually double-click the [-] button to exit an application.
If the icons in upper left and right are also like that, then the upper left icon is actually four little triangles pointing inwards and not an X. The one on the right is four little triangles pointing outwards.
(Or it could be an X)
Edit: it seems WordStar used X too, probably starting in 1978.
A behavior still present in modern versions.
Or crossing-out an item to "delete" it on the page?
One quick thing: IIRC in Windows 2.0 and 3.0, the '-' button in the upper left wasn't "close". It was a small menu that happened to have Close as an option.
edit - Here's Arthur, the precursor to RiscOS in ~ 1986 - http://www.rougol.jellybaby.net/meetings/2012/PaulFellows/10... - It has nice x icons.
No [x] to close these 1980's text editors either. X was commonly used to delete characters in-line, but not to close the program.
Hmm... I've used :x to write+quit in Vim for years. And, :X is to encrypt+quit. Don't have a year when that was added though. Could be fun to try and dig that up.
[NB A fisherman explained this to me in Mallaig Scotland, none of the fishing boats I've been on were so high tech!]
While doing a little googling, I found an article  claiming $7 million in fish ladder work after structural damage forced a reduction in water level. So perhaps this solution could be cost effective or quickly put in place in case damage occurs just before a run.
http://damnationfilm.com/
http://www.columbian.com/news/2014/apr/12/crack-in-dam-force...
Sorry about that.
No problem. The Pirate Bay is still available in my country.
> Purchases are not available
>
> Channels linked with a Google+ page, such as codemac, can't make purchases on YouTube.
>
> To watch this video, you'll need to switch to your firstname.lastname@example.org account.
What a farce.
Some of these movies seem quite new. Snowpiercer is still in theaters I believe.
): Nothing sadder than profiting off death.
A simple pay/purchase approach works best for digital goods, and can more naturally remain DRM-free.
It has the "Buy" option there by the way. Is it DRM-free? Why can't they just make one option with lower price for "Buy"?
I run the Ghostery extension and a year or so ago I noticed that when visiting YouTube ~15 analytic trackers were being blocked. Turns out a couple of extensions were injecting tens of trackers into popular sites (without my express permission), and I would have had no idea unless I had another extension to block and report this activity.
My girlfriend's computer is worse - her extensions seem to inject actual adverts into lots of her pages. I asked her why there was an obnoxious "click the bottle to win 1000000$" flash advert on Facebook and she thought it was just how Facebook is. Same thing for YouTube and other popular sites.
Usually stay wary of signing up for anything which tilts towards 3.
Is that part of SimilarWeb Pro? It's not clear from the website how their service could be used to monitor the web client traffic of specific companies. An independent reference on the quoted claim would be helpful.
I've seen a lot of horseshit patents asserted against start-ups. If there was an organization that followed the troll around and offered defense services to all of its defendants, it would make trolling a lot harder, and might reduce the number of these parasitic lawyers involved in this shameless trade.
I just read about a Fish & Richardson patent partner who started filing his own "inventions" with the patent office, based on slight modifications of the patents he was filing for clients, and then sold those patents to trolls for huge sums. It's actually really easy to write patents aimed at sabotaging your clients if you are a lawyer and become familiar with their future roadmaps.
I know a bunch of trivial claims I could write right now that would be worth a few million in a couple of years, because Google, Facebook, and others will have to move in that direction (related to machine learning and image recognition).
All you have to do is follow conferences, understand the papers, and then write some trivial and obvious evolutions of those techniques. Obviousness is something defendants find extremely difficult to prove for highly complex technology, because the juries are made of people who have no idea what programming is, much less machine learning, and the judge is probably some moron who thinks he is really smart and assumes the patent office is full of diligent geniuses ... and so he will give a lot of weight to the plaintiff's "USPTO certified" claims.
All it takes is for a programmer to be involved in one patent litigation to see the patent system for what it is: a colossal system of giant, continuous, expensive injustice implemented in the hope of preventing an extremely rare form of injustice (when a true original inventor is cheated by a shameless larger company).
Imagine we institute an expensive system of highly trained commandos to follow every nerd in America around in high schools across the country, to protect them from bullying and to be their friends. It would certainly stop all physical bullying. But would it be worth the giant overhead/expense?
That is what we have to start asking ourselves. Even if the patent system prevents some rare injustices - WTF, is this continuous and overwhelming cloud of uncertainty hanging over every start-up and company worth it?
I feel like China and India are doing quite alright without overburdensome patent protection. And Europe seems fine with a hamstrung software patent system. And even in the US, Microsoft, Oracle, Adobe, IBM, and Apple got their start before software became patentable ... and they all did fine, and are still doing fine.
If you see someone arguing for patents, they are almost always some fucking lawyer, troll, or someone sitting on a giant portfolio. The people actually making software every day don't want this shit system. VCs that fund start-ups don't want it ... even though you would expect them to want it, to protect their investments.
- Prof. Jonathan Askin - @jaskin - runs the clinic, and trusted us to try this experiment.
- Maegan Fuller - @mafuller21 - did the lion's share of research and writing. Brilliant and dedicated student. She just took the bar exam.
- Jorge Torres - @jorgemtorres - Guy who actually knows patent litigation. Too bad he dropped out of law to be a VC. Pitch him :-)
"After reading it, and weighing the recent Supreme Court decisions, the troll simply dropped its case against CarShield. After months of dedicated work, the clinic students deserved a gavel-banging judicial decision in their favor. All they got was a quiet withdrawal. But I think we can still chalk it up in the win column. The case is dismissed (for now), the students learned real patent litigation skills."
Does the decision encouraging "fee shifting" require that the case go to trial? Does it require that the fees actually be paid by the defendant? Or might the law school students still be able to receive payment by the troll for their pro bono defense? It seems like the "new standard" would be much more effective if it also applied in cases like this.
This is a little disingenuous. You know the patent doesn't cover the "gist" or any particular figure. It covers the claims (which you don't mention at all, even in passing). And for some reason, you don't even tell us what the patent number is so we can look at it for ourselves!
From a little googling, I suspect that we're talking about Pat. No. 6,775,356. But why hide the ball and characterize the patent as "not particularly innovative" when you could just let people see it for themselves?
When reading articles on Medium I feel like I am not only not wasting time but acquiring knowledge at an extremely fast pace.
I wonder: is invalidating patents that trolls commonly use a good use of a law student's time?
Yet the startup and the judicial system already lost time on this. There should be a fee for withdrawing cases like this.
I'm glad they didn't have to pay the troll, but I also hate when the troll doesn't get what it deserves, either: losing.
However, it shouldn't be that way anymore--not after Citizens United. I'm fighting a lawsuit about this issue right now, and if I win (however unlikely), corporations will be able to represent themselves against patent trolls.
Is it difficult, confusing and complex work? Yes. Is it any harder than programming, or anything else a serious startup would do? Not really. And it beats paying a law firm six or seven figures.
The case is:
Question 1: has the lawsuit been filed in an odd/irrelevant place? Followed by some subquestions to be more precise. If so, fill out this form, include the addresses of .. and ... and we'll send a form letter to them for you, asking for a dismissal.
Question 2-5: keep stalling and asking for dismissals based on various reasons.
Question 6-10: try some other ways to get the troll to drop it, for instance by presenting an example of obvious prior art.
Of course all letters include repeated references to relevant higher-court decisions.
While helping out gives the students experience, it's not reasonable to consider this any sort of real option beyond the occasional situation in which a startup can solicit a law student who takes on a single case as part of their curriculum.
It does provide a very clear rationale for why C++ became the way it was (C++98), given the design goals and the constraint of being as C-compatible as possible, both in language and compiler toolchains.
That's from his "Design Support Rules" section early on in the paper. I am still going through the paper, but this seems like a good rule and a reason why pointers are still allowed despite the protestations of people who hate C++. Sure, you can get into a pickle with them but that's not the language's fault.
Of course, there has been the rule "use references where you can, pointers where only absolutely necessary" but it's a difficult habit to break...
As a musician, I find it very inspiring when someone pours so much of their personality and time into crafting such masterpieces. Say what you will about the font, but it's coherent and shows you a complete aesthetic world of its own. Beethoven's symphonies may have taken longer to write, and may have been more difficult, but the love Runge put into crafting something that at the end of the day can be replaced by Times New Roman is amazing and, frankly, comparable to Beethoven's.
So much respect.
I would like to set a few projects with it to really get a feel for it and see if there are any issues.
Unless I missed it in the article, I'd be curious to know how many hours / weeks / months it took from v1 to final version?
> Update: Your author is a moron. Microsoft did in fact promise this [running Metro apps in windowed-fashion] in the future. I had my wires crossed.
Imaging in general is a field of medicine of growing importance - and as the article suggests, it is increasingly limited by the resolution of the images. I personally see patient-specific diagnostics from image processing as one of the most promising medical advances to expect in the coming decades. Image resolution is a major thing holding this back; it probably needs to improve ~2-5X for many applications.
This kind of thing is exciting, but probably at least a decade from clinical use, if this particular technique gets there at all. Use in research is itself quite interesting, in that, at least to my knowledge, microvessels can currently only be observed by micro-dissection, which disturbs the tissue.
Being able to better observe microvessels clinically could have pretty big implications for heart attacks, strokes, cancers, and kidney functions.
The experiment (from the paper) was done on an in vitro (removed) pig heart (and kidneys, reported in the paper but not the Medium article). In vitro studies often give much better images than in vivo because there is less motion.
I am sceptical of the ability to retrieve the injected gallium, although I am not terribly familiar with its properties. I believe it is a quite rare and expensive metal, and the volumes needed for this would be small but not insignificant (maybe 20 mL per organ imaged?). I could see retrieving it being an issue, and particularly in the heart or brain, blood flow would need to be restored within a few minutes.
Iodine contrast agent, which this technique is compared against, is pretty nasty stuff. It gets filtered out by the kidneys, and is toxic to them. If anything, there is a movement to get away from contrast-based CT imaging of vessels and towards MRI or ultrasound, where contrast isn't required.
A quick search for a gallium MSDS confirms that it isn't exactly regarded as medically safe right now: https://www.rotometals.com/v/vspfiles/downloadables/MSDS_GAL...
The most curious part of the story isn't covered: how do you suck out the metal afterwards? What percentage are you able to reliably recover?
"When I click print I get nothing." -Tuesday, August 5, 2008
"I downloaded those updates and Open Office Still prints." -Friday, August 8, 2008
"Open Office stopped printing today." -Tuesday, August 12, 2008
"I just updated and still print." -Monday, August 18, 2008
"I stand corrected, after a boot cycle Open Office failed to print." -Tuesday, August 19, 2008
I had a friend who was Manjul's TA for a probability class our senior year. I still remember him exclaiming, "There's no way that series telescoped!"  when Manjul solved one problem in a particularly clever way.
The strange thing is that it was hard to tell at first if Manjul was particularly smart. Two of our other mathematically accomplished classmates, Lenny Ng and Kiran Kedlaya (both of whom I knew much better than Manjul), were obviously brilliant, but with Manjul it took a lot longer to figure out he was a genius.
I think now the secret is out.
Artur Avila - A Brazilian Wunderkind Who Calms Chaos: http://www.simonsfoundation.org/quanta/20140812-a-brazilian-...
Manjul Bhargava - The Musical, Magical Number Theorist: http://www.simonsfoundation.org/quanta/20140812-the-musical-...
Martin Hairer - In Noisy Equations, One Who Heard Music: http://www.simonsfoundation.org/quanta/20140808-in-mathemati...
Maryam Mirzakhani - A Tenacious Explorer of Abstract Surfaces: http://www.simonsfoundation.org/quanta/20140812-a-tenacious-...
Subhash Khot - A Grand Vision for the Impossible: http://www.simonsfoundation.org/quanta/20140812-a-grand-visi...
Here's a nice piece NPR did on him: http://www.npr.org/templates/story/story.php?storyId=4111253
I can't help but see this and remember once more Robin Williams (RIP) in 'Good Will Hunting', where I first learned about the Fields Medal; the magic of maths was a recurring theme in that fantastic movie.
Fairly obvious but whatever.
Weak people conjecture and then get results "if my conjecture stands". This guy is a black belt: he makes a conjecture, gets a Fields Medal for it, and then removes it from his proof ("just joking, it works whether it's true or not").
Still gets the medal for the now useless conjecture and not for the ultimate proof.
And Subhash, winner of the Rolf Nevanlinna prize: http://www.simonsfoundation.org/quanta/20140812-a-grand-visi...
I am actually a fan of the simple, anti-powerpoint editor - that is why I use slides.
I hope you keep the old editor as an option so there is still an easy way to make WOW slides!
(Disclaimer: I'm working on an airplane printer. You turn a handle, feed in raw materials, and get a flying product at the other end of the box. I'm focusing on the Kline-Fogleman wing style, since it's a very easy and workable camber technique .. hope to have something to report to HN about the project soon..)
I first heard about transactional memory when Sun had plans to implement it for its UltraSPARC Rock processor. There is a decent overview of the concept at http://en.wikipedia.org/wiki/Transactional_memory
Say I bought a TSX enabled CPU specifically for that feature, I wonder if Intel will give me my money back... (they can have their broken CPU of course too)
Given that TSX is one of the features that distinguishes some of the more expensive Haswell SKUs, is Intel going to issue a refund for affected customers?
I see two about TSX:
HSD87 (No Fix) - Intel TSX Instructions May Cause Unpredictable System Behavior
  Problem: Under certain system conditions, Intel TSX (Transactional Synchronization Extensions) instructions may result in unpredictable system behavior.
  Implication: Due to this erratum, use of Intel TSX may result in unpredictable behavior.
  Workaround: It is possible for the BIOS to contain a workaround for this erratum.
  Status: For the steppings affected, see the Summary Table of Changes.

HSD114 (No Fix) - Intel TSX Instructions May Cause Unpredictable System Behavior
  Problem: Under a complex set of internal timing conditions and system events, software using the Intel TSX (Transactional Synchronization Extensions) instructions may observe unpredictable system behavior.
  Implication: This erratum may result in unpredictable system behavior. Intel has not observed this erratum with any commercially available system.
  Workaround: It is possible for the BIOS to contain a workaround for this erratum.
  Status: For the steppings affected, see the Summary Table of Changes.
It was last updated in June, so I guess it doesn't contain this latest erratum. Can't wait until it's updated... though I don't know if Intel is likely to disclose details.
Or is this such a non-issue that nobody cares?
Install the microcode by running:

    sudo apt-get install intel-microcode
Alternatively install a BIOS update once available.
Are there any scenarios where transactional memory is useful in a consumer environment?
That was the whole point of STARTTLS - to allow a way to start a tunnel but be backwards compatible to older clients.
The real problem was that they didn't account for MITM attacks.
This is not a protocol vulnerability though. Poorly written programs cause problems with any protocol.
About 13 years ago, I wrote my own programming language expressly for the purpose of implementing network stacks, and had a complete TCP in it; I didn't have this problem. But I do have this problem all the time when I write direct network code and forget about buffering.
Similarly: "Python is so slow that Google reset the network connection" seems a bit unlikely too. Google, and TCP in general, deals with slower, less reliable senders than you. :)
What's the time between your SYN and their SYN+ACK?
If you're interested in writing a TCP/IP stack in Python I would recommend you use Python raw sockets, or possibly dnet or pcapy. The Scapy level of abstraction is too high for your needs.
I agree with other posters who mention buffering in libpcap. Read the man page for pcap_dispatch to get a better idea of how buffering works in libpcap. Also try capturing packets with tcpdump with and without the '-l' switch. You'll see a big difference if your pkts/sec is low.
Don't do arp spoofing. If you're writing a TCP/IP stack then you need to also code up iparp. If you don't want to do that, then use raw sockets and query the kernel's arp cache.
On second thought you really need to use raw sockets if you want this to work. Using anything pcap based will still leave the kernel's TCP/IP stack involved, which is not what you want.
http://libdnet.sourceforge.net/pydoc/private/dnet-module.htm...
http://corelabs.coresecurity.com/index.php?module=Wiki&actio...
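For the raw-socket route, a minimal receive-side sketch (my illustration, not from the linked docs) looks roughly like this - note that on Linux, receiving on a SOCK_RAW/IPPROTO_TCP socket hands you the IP header too, so you skip past it yourself:

    # Minimal sketch (requires root): a raw IPv4 socket that sees TCP
    # segments directly, bypassing normal socket demultiplexing.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
    while True:
        pkt, addr = s.recvfrom(65535)
        ihl = (pkt[0] & 0x0F) * 4   # IP header length, in bytes
        tcp = pkt[ihl:]             # TCP header + payload start here
        print("segment from %s, %d bytes" % (addr[0], len(tcp)))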
Here is my two cents on the experiment:
1. You don't really have to ACK every packet; you have to order them, drop duplicates, and ACK the last one - ACKs are cumulative (see the sketch after this list).
2. Google ignores the usual TCP congestion-control ramp-up and sends the first few hundred packets very quickly without waiting for ACKs. They do this to beat the latency of slow start. That's why you end up with so many packets on your side. Try anything but Google and you would probably see a less insane packet storm.
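To illustrate point 1, a toy sketch of receiver-side reassembly with a single cumulative ACK (hypothetical names, just an illustration):

    # 'segments' maps starting sequence number -> payload bytes that we
    # buffered out of order (duplicates dropped before insertion).
    def advance_ack(ack, segments):
        # Slide the ACK forward past every contiguous buffered segment.
        while ack in segments:
            payload = segments.pop(ack)  # deliver in order
            ack += len(payload)
        return ack

The single returned ACK covers everything received in order so far, which is why you don't need one ACK per packet.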
The author's problems arise because she is not using raw IP sockets.
Making TCP packets using Python's struct module is a breeze. I can post specific examples in code if anyone is interested.
Finally, you can write a proper TCP stack in Python; there is no reason not to. Your user-space interpreted stack won't be as fast as a compiled kernel-space one, but it won't be short on features.
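For instance, a rough sketch (my example - addresses and ports are placeholders) of packing a TCP SYN header per RFC 793:

    import struct
    import socket

    def checksum(data):
        # Internet checksum: ones'-complement sum of 16-bit words.
        if len(data) % 2:
            data += b"\x00"
        s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        s = (s >> 16) + (s & 0xFFFF)
        s += s >> 16
        return ~s & 0xFFFF

    def tcp_syn(src_ip, dst_ip, src_port, dst_port, seq=0):
        offset_flags = (5 << 12) | 0x02          # data offset 5 words, SYN bit
        header = struct.pack("!HHIIHHHH",
                             src_port, dst_port,
                             seq, 0,             # sequence, ack numbers
                             offset_flags,
                             8192,               # receive window
                             0, 0)               # checksum placeholder, urgent ptr
        # The TCP checksum covers a pseudo-header (addresses, protocol,
        # length) followed by the TCP header itself.
        pseudo = struct.pack("!4s4sBBH",
                             socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                             0, socket.IPPROTO_TCP, len(header))
        csum = checksum(pseudo + header)
        return header[:16] + struct.pack("!H", csum) + header[18:]

    pkt = tcp_syn("192.0.2.1", "192.0.2.2", 54321, 80)

Pair it with a raw socket (with IP_HDRINCL, plus an IP header built the same way) to actually put it on the wire.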
PS: I guess Google is probably sending her an SSL/TLS handshake which she isn't handling.
Edit: Corrected author's gender as mentioned by kind poster.
I've not used it yet, but I've read over the documentation and am itching for an opportunity to do so.
Anyone know why the Python program is so slow? I'm looking at the code and my first guess would be this part but I can't explain why, overall, it would be so slow that a remote host would close the connection.
An alternative method I've used in the past is to add an iptables rule to silently drop the incoming packets. libpcap sees the packets before they are dropped, so you'll still be able to react to them, but the kernel will ignore them (and therefore won't attempt to RST them).
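For example (the port is a placeholder for whatever traffic your user-space stack owns):

    # Drop inbound TCP on our port before the kernel's stack sees it;
    # libpcap still gets a copy, so the user-space stack can respond.
    sudo iptables -A INPUT -p tcp --dport 8000 -j DROP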
In uni we had a networking course where we built a web server's network stack from the bottom up, starting with Ethernet/MAC, then TCP/IP, and HTTP on top, all flashed onto a small network device (IIRC it was originally a CD-ROM server). It was an extremely enlightening exercise. I recommend you go deeper instead of just using a premade Python library for TCP!
---- SYN ---->
<-- SYN/ACK --
---- ACK ---->
rather than having the client send two SYNs to the server?
It's really surprising to me that lots of ppl are using scapy for things that require performance, but then again if you look at the scapy website or the docs, it's not immediately apparent that their tool is not meant for this. Which I guess says a lot about the scapy developers rather than the scapy users.
tl;dr Scapy is a joke, performance-wise.
The bottleneck for such processes is typically network I/O, and I can imagine that taking control of the network in user space might offer some modest to significant wins. For Hadoop in particular, network packets need to traverse quite a few layers before they are accessible to the application.
Has anyone done this sort of thing for MapReduce? Pointers to any writeups would be awesome.
In fact TCP itself might be overkill for MapReduce. The reduce functions used are typically associative and commutative, so as long as the keys and values are contained entirely within a data packet, proper sequencing is not even needed; any sequence would suffice.
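A quick sketch of why ordering doesn't matter for such reducers - summing is associative and commutative, so any arrival order gives the same answer:

    # Order-independence of an associative + commutative reduce (sum):
    # shuffling the "packets" doesn't change the reduced value.
    import random
    from functools import reduce

    values = [3, 1, 4, 1, 5, 9, 2, 6]
    shuffled = random.sample(values, len(values))
    assert reduce(lambda a, b: a + b, values) == \
           reduce(lambda a, b: a + b, shuffled)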
Tangent: One of my favorite interview questions is to ask how traceroute works. The question works best when the candidate doesn't actually know. Then we can start to discuss bits and pieces of how TCP/IP works, until they can puzzle out the answer.
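For the curious, the punchline is TTL: send probes with TTL = 1, 2, 3, ..., and each hop that decrements the TTL to zero answers with an ICMP Time Exceeded, revealing itself. A minimal sketch of the classic UDP-probe variant (requires root; my illustration, not production code):

    import socket

    def traceroute(dest, max_hops=30, port=33434):
        dest_ip = socket.gethostbyname(dest)
        for ttl in range(1, max_hops + 1):
            recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                 socket.IPPROTO_ICMP)
            send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            recv.settimeout(2.0)
            send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            send.sendto(b"", (dest_ip, port))
            try:
                _, addr = recv.recvfrom(512)   # ICMP arrives from the hop
                hop = addr[0]
            except socket.timeout:
                hop = "*"
            finally:
                send.close()
                recv.close()
            print("%2d  %s" % (ttl, hop))
            if hop == dest_ip:                 # destination sends Port Unreachable
                break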
Somebody's oversubscribed $3/month shared PHP hosting might not ramp up the speed as quickly.
I also learned a lot about networking by writing a TCP/IP stack in Common Lisp. http://lukego.livejournal.com/4993.html if you are interested.
sshuttle is a pure-python one-way TCP VPN solution that is very well behaved and quite efficient. The source is highly readable as well. +1 to everything Avery Pennarun has released (including wvdial, bup)
C++, but very detailed articles.
Hopefully someone will blog a response post that gets popular on HN proving just how wrong it is.
In terms of using Scapy for your packet crafting, here are some guides with examples that may help you work around your issues. (Hint: use the Scapy-defined sending and receiving routines and don't implement your own, or stop using Scapy and implement your own raw packet sockets) http://securitynik.blogspot.com/2014/05/building-your-own-tc... https://github.com/yomimono/client-fuzzball/blob/master/fuzz... https://www.sans.org/reading-room/whitepapers/detection/ip-f... http://www.lopisec.com/2011/12/learning-scapy-syn-stealth-po...
And that's just the top 30 networks; if every network cleaned up its announcements, it would eliminate ~232,000 routes (~45% of the table).
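As a toy illustration of what that cleanup means (my sketch, not from the cited analysis): contiguous more-specific announcements collapse into a single covering route.

    # Aggregating contiguous prefixes with Python's ipaddress module:
    # four /24 announcements become one covering /22.
    import ipaddress

    announced = [ipaddress.ip_network("198.51.%d.0/24" % i)
                 for i in range(100, 104)]
    aggregated = list(ipaddress.collapse_addresses(announced))
    print(aggregated)  # [IPv4Network('198.51.100.0/22')] - 4 routes become 1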
Adding to the deaggregation problem is the inability to easily filter out route announcements based on RIR minimum allocations without having to add tons of exceptions for CDNs that operate as islands of connectivity and carve out IP space for each island from a single address space allocation. (There's no covering route for the islands of connectivity since these CDNs have no "backbone" connecting the islands, so if you filter out those smaller announcements, you lose connectivity to those islands.)
There are many people who think this problem will just magically go away as IPv6 adoption increases, but all increased IPv6 adoption will do is make limited CAM space even more limited as network engineers have to balance dividing precious CAM space between a ballooning-quickly IPv4 route table and a ballooning-slightly-less-quickly IPv6 route table.
(To be clear: I think ubiquitous, functioning, end-to-end native IPv6 connectivity needs to happen sooner than later, but it's not a magic bullet for the Internet's technical problems.)
Takeaways: a) 512K routes isn't necessarily a hardware limitation, it's the default TCAM allocation for IPv4, and b) most people most of the time don't need their routers to take a full BGP feed's worth of routes - and I hope those that do aren't running 6500s in Q3 2014 ;)
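For reference, the commonly cited workaround on the 6500/Sup720 reallocates that TCAM carve-out - verify against Cisco's documentation for your platform before applying, since it requires a reload:

    ! Raise the IPv4 route allocation (value is in units of 1K routes);
    ! the default split is what made ~512K the effective ceiling.
    mls cef maximum-routes ip 1000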
I find the economics of the routing table to be fascinating. When someone announces a route, it makes use of a constrained (and often expensive, TCAM-based) resource on routers all over the world. More discussion:
edit: I'll take it by the downvotes without responses that's a "no"?
OH WAIT, WE DIDNT FKIN THINK OF THIS IN OUR RUSH TO PUSH A BROKEN INCOMPLETE SOLUTION.
I'm really intrigued by the tech for this whole area - clearly these guys are absurd 0.001% outliers, but I wonder if you could make a (lethal or non-lethal) self-defense device for reduced-vision or blind people which used image recognition. Any encounter is going to happen at physical-contact distances, so what about some combination of camera, pepper spray or firearm, and a safety interlock (like a nailgun, where you press it against the material to allow it to fire)?
Ideally something which requires even less physical strength, vision, or other ability than a firearm (handguns are actually hard to use in self-defense in a variety of situations). A person with impaired mobility and vision totally deserves some kind of protection, and probably needs it more than someone who can see trouble coming from a distance, or run away.
In my case, the knees are fine; the joint between my right thigh and body got hurt badly.
So, do not stand a lot, but do not sit all day long either.
Do drink water regularly (kidney stones are a risk for sitting programmers), but note that too much water can cause kidney problems too.
As for water bottles, I'm a fan of the stainless steel Klean Kanteen with the bamboo cap.
Watch out: many times aluminum bottles are lined with a BPA coating, too.
An OXO bottle brush is pretty essential to cleaning; I then use boiling water, soap, and sometimes bleach to clean them out.
I wish I could get a run of them custom printed as promotional items, and ideally cerakoted, but that would be horrible overkill.
(they're also a good way to transport high quality alcohol into settings where alcohol may not be so permissible...)
I hope it's up to date!