PLEASANT GROVE, Utah — A construction project in Pleasant Grove is taking some former elementary school students on an unexpected trip down memory lane.
A Pleasant Grove Streets Department worker hit a time capsule while the crew worked on excavating Discovery Park.
The park is being rebuilt from the ground up.
Bobbie Dickey remembers that park; she used to play there after it opened with an exciting new play area in 1996.
On Tuesday, Dickey read poems in her living room from that time in her life.
“A super good athlete, a fine student too. You need a cool picture, he’ll draw one for you,” she recited.
The poems, written by her 6th Grade teacher Mr. Bagley, offer her a piece of nostalgia. He wrote the simple rhymes for each student in his class, and she said he read them to students on the last day of school.
“He was one of my favorite teachers,” she said.
It’s fun to think back on the memories—the “Sham Battles” (aka dodge ball) she played with classmates, how Mr. Bagley held monthly drawings. Bobbie won two of those drawings.
These are things she hadn’t thought about in a long time, until a former classmate sent her a photo of the poems on social media.
“I had forgotten about it, until it just barely came up,” she said.
Came up, or rather, dug up—by accident.
Nate Lloyd said his co-worker was digging near what used to be the sand box at Discovery Park, when he hit something about a foot deep in the ground.
It was a blue tube with a white end cap.
“He thought it was a pipe, so he was like, ‘Great. I just hit a pipe,’” Lloyd said.
That is until his co-worker read the lettering on the side.
“Alpine School District,” it reads. “May 2, 1996. Open on May 2, 2096.”
Unfortunately, the heavy equipment already broke the capsule open, so his co-worker brought the capsule to the shop.
“He said, ‘Hey I found this time capsule.’ So we went through it,” Lloyd recounted.
He and his co-worker started sifting through a 90s gold mine. The capsule had been buried when the park was originally built and dedicated.
They found Star Wars memorabilia, a Goosebumps book, 1996 Olympic basketball Dream Team magazine, newspapers, video tapes, and class pictures from Central Elementary School.
The find came as a total surprise to Pleasant Grove City leaders and the people involved with the Rediscover Discovery Park Committee.
The find also came as a total surprise to Lloyd.
“When I saw the Central School shirt, that was like ‘Hey– that’s… I could be in this!’”
Nate went to Central the year before the capsule went into the earth. As he looked through the pictures, he recognized teachers and students.
“These are the kids that I went to school with,” he said.
His favorite item in the capsule—a t-shirt with the logo for the Central Elementary school mock space center.
“We would do missions, and people would come visit it from all over,” Nate remembered. “And, it was a great experience.”
One of the pieces of paper included all of Mr. Bagley’s poems for his students, along with a class picture.
Bobbie Dickey is pictured near the center top row.
Her poem reads:
“In all that she does, she just gives her best, from artwork to math or spelling test, she’s won, you know, 2 monthly draws. Let’s all stand up, class, and give some applause.”
Memories once forgotten, now discovered again.
“Kind of neat to see those memories come back up, for sure,” Dickey said.
The Rediscover Discovery Park Committee said they plan to display the time capsule and its contents. They’ll add items from today and rebury it after the new park is built—this time with a plaque to mark the spot.
WASHINGTON COUNTY, Utah — The Washington County Sheriff’s Office reported late Tuesday night that the body of a missing mother had been found in Washington County.
Sarah Frances Cox (Yasuda), 33, left her family’s campsite in the Little Creek Mountain area, southwest of the town of Apple Valley, on Sunday night and didn’t return.
Search and Rescue volunteers found the woman’s body around 3:30 p.m. Tuesday.
The Medical Examiner will determine the cause of death, but early indications are she died of a self-inflicted gunshot wound.
Detectives are still looking into the circumstances leading up to her death, but do not suspect foul play at this time.
(This is a repost of the answer I posted on stackoverflow for this question. This answer immediately became my most ever upvoted answer on stackoverflow with 516 upvotes during the 48 hours it was up before a moderator deleted it for unspecified reasons. It had then already been marked “on hold” for being “primarily opinion-based” and then locked but kept: “exists because it has historical significance”. But apparently that wasn’t good enough. I’ve saved a screenshot of the deletion. Debated on meta.stackoverflow.com)
I’m Daniel Stenberg.
I made curl
I founded the curl project back in 1998. I wrote the initial curl version and I created libcurl. I’ve written more than half of all the 24,000 commits done in the source code repository up to this point in time. I’m still the lead developer of the project. To a large extent, curl is my baby.
I shipped the first version of curl as open source since I wanted to “give back” to the open source world that had given me so much code already. I had used so much open source and I wanted to be as cool as the other open source authors.
Thanks to it being open source, literally thousands of people have been able to help us out over the years and have improved the products, the documentation, the web site and just about every other detail around the project. curl and libcurl would never have become the products that they are today were they not open source. The list of contributors now surpasses 1900 names and currently grows by a few hundred names per year.
Thanks to curl and libcurl being open source and liberally licensed, they were immediately adopted in numerous products and soon shipped by operating systems and Linux distributions everywhere thus getting a reach beyond imagination.
Thanks to them being “everywhere”, available and liberally licensed, they got adopted and used everywhere and by everyone. It created a de facto transfer library standard.
At an estimated six billion installations worldwide, we can safely say that curl is the most widely used internet transfer library in the world. It simply would not have gotten there had it not been open source. curl runs in billions of mobile phones, a billion Windows 10 installations, half a billion games and several hundred million TVs – and more.
Should I have released it under a proprietary license instead and charged users for it? It never occurred to me, and it wouldn’t have worked because I would never have managed to create this kind of stellar project on my own. And projects and companies wouldn’t have used it.
Why do I still work on curl?
Now, why do I and my fellow curl developers still continue to develop curl and give it away for free to the world?
- I can’t speak for my fellow project team members. We all participate in this for our own reasons.
- I think it’s still the right thing to do. I’m proud of what we’ve accomplished and I truly want to make the world a better place and I think curl does its little part in this.
- There are still bugs to fix and features to add!
- curl is free but my time is not. I still have a job and someone still has to pay me every month so that I can put food on the table for my family. I charge customers and companies to help them with curl. You too can get my help for a fee, which then indirectly helps make sure that curl continues to evolve, remain free and stay the kick-ass product it is.
- curl was my spare time project for twenty years before I started working with it full time. I’ve had great jobs and worked on awesome projects. I’ve been in a position of luxury where I could continue to work on curl on my spare time and keep shipping a quality product for free. My work on curl has given me friends, boosted my career and taken me to places I would not have been at otherwise.
- I would not do it differently if I could go back and do it again.
Am I proud of what we’ve done?
Yes. So insanely much.
But I’m not satisfied with this and I’m not just leaning back, happy with what we’ve done. I keep working on curl every single day, to improve, to fix bugs, to add features and to make sure curl keeps being the number one file transfer solution for the world even going forward.
We make mistakes along the way. We make the wrong decisions and sometimes we implement things in crazy ways. But winning in the end and conquering the world is about patience and endurance: constantly going back to reconsider previous decisions and correct previous mistakes. To continuously iterate, polish off rough edges and gradually improve over time.
Never give in. Never stop. Fix bugs. Add features. Iterate. To the end of time.
Yeah. For real.
Do I ever get tired? Is it ever done?
Sure I get tired at times. Working on something every day for over twenty years isn’t a paved downhill road. Sometimes there are obstacles. At times things are rough. Occasionally people are just as ugly and annoying as people can be.
But curl is my life’s project and I have patience. I have thick skin and I don’t give up easily. The tough times pass and most days are awesome. I get to hang out with awesome people, and knowing that my code helps drive the Internet revolution everywhere is an ego boost above the ordinary.
curl will never be “done” and so far I think work on curl is pretty much the most fun I can imagine. Yes, I still think so even after twenty years in the driver’s seat. And as long as I think it’s fun I intend to keep at it.
As a reader of my blog you know curl. You also most probably already
know why you would use curl and if I’m right, you’re also a fan of using
the right tool for the job. But do you know why others use
curl and why they switch from other solutions to relying on curl for
their current and future data transfers? Let me tell you the top reasons
I’m told by users.
Logging and exact error handling
What exactly happened in the transfer, and why, are terribly important questions to some users, and with curl you have the tools to figure that out and also be sure that curl either returns failure or the command worked. This clear and binary distinction is important to users for whom every single file transfer matters. For example, some of the largest and most well-known banks in the world use curl in their back-ends, where each file transfer can mean a transfer of extremely large sums of money.
A few years ago I helped a money transaction service switch to curl
to get that exact line in the sand figured out. To know exactly and with
certainty if money had been transferred – or not – for a given
operation. Vital for their business.
curl does not have the browsers’ lenient approach of “anything goes as long as we get something to show” when it comes to the Internet.
curl’s verbose output options allow users to see exactly what curl sends and receives in a quick and uncomplicated way. This is invaluable for developers figuring out what’s happening and what’s wrong, at either end of the data transfer.
curl’s verbose options allow developers to see all sent and received data even when encryption is used. And if that is not enough, its SSLKEYLOGFILE support lets you take it to the next level when you need to!
Same behavior over time
Users sometimes upgrade their curl installations after several years of not having done so. Bumping a software’s version after several years and many releases, any software really, can be a bit of a journey and adventure, as things have changed, behavior is different, and things that previously worked no longer do.
With curl however, you can upgrade to a version that is a decade
newer, with lots of new fancy features and old crummy bugs fixed, only
to see that everything that used to work back in the day still works –
the same way. With curl, you can be sure that there’s an enormous focus on maintaining old functionality when going forward.
Present on all platforms
The fact that curl is highly portable means our users can have curl and use curl on just about any platform you can think of, with the same options and behaviors across them all. Learn curl on one platform, then continue to use it the same way on the next system. Platforms and their individual popularity vary over time and we enjoy letting users pick the ones they like – and you can be sure that curl will run on them too.
Performance
When doing the occasional file transfer every once in a while, raw transfer performance doesn’t matter much. Most of the time will then just be waiting on the network anyway. You can easily get away with your Python and Java frameworks’ multiple levels of excessive overhead.
Users who scan the Internet or otherwise perform many thousands of
transfers per second from a large number of threads and machines realize
that they need fewer machines that spend less CPU time if they build
their file transfer solutions on top of curl. In curl we have a focus on
only doing what’s required and it’s a lean and trimmed solution with a
well-documented API built purely for Internet data transfers.
The features you want
The author of a banking application recently explained to us that one of the top reasons why they switched to using curl for their Internet data transfers is curl’s ability to keep the file name from
curl is a feature-packed tool and library that most likely already supports the protocols you need and provides the power features you want, with a healthy amount of “extension points” where you can extend it or hook in your own custom solutions.
Support and documentation
No other tool or library for internet transfers has even close to the same amount of documentation, examples available on the net, existing user base that can help out and friendly users to support you when you run into issues. Ask questions on the mailing lists, post a bug on the bug tracker or even show your non-working code on stackoverflow to further your project.
curl is really the only Internet transfer option available that is old and battle-proven by the giants of the industry, that is trustworthy and high-performing, and for which you can also buy commercial support today.
This blog post was also co-posted on wolfssl.com.
There seems to be no end to updates about bug bounties in the curl project these days. Not long ago I mentioned the then new program that sadly enough was cancelled only a few months after its birth.
Now we are back with a new and refreshed bug bounty program! The curl bug bounty program reborn.
This new program, which hopefully will manage to survive a while, is set up in cooperation with the major bug bounty player out there: HackerOne.
If you find or suspect a security related issue in curl or libcurl, report it! (and don’t speak about it in public at all until an agreed future date.)
You’re entitled to ask for a bounty for each and every valid and confirmed security problem that wasn’t already reported and that exists in the latest public release.
The curl security team will then assess the report and the problem and will then reward money depending on bug severity and other details.
Where does the money come from?
We intend to use funds and money from wherever we can. The HackerOne Internet Bug Bounty program helps us, donations collected over at opencollective will be used, and so will dedicated company sponsorships.
We will of course also greatly appreciate any direct sponsorships from companies for this program. You can help curl get even better by adding funds to the bounty program and help us reward hard-working researchers.
Why bounties at all?
We compete for the security researchers’ time and attention with other projects, both open and proprietary. The projects that can help put food on these researchers’ tables might have a better chance of getting them to use their tools, time, skills and fingers to find our problems instead of someone else’s.
Finding and disclosing security problems can be very time and resource consuming. We want to make it less likely that people give up their attempts before they find anything. We can help full and part time security engineers sustain their livelihood by paying for the fruits of their labor. At least a little bit.
Only released code?
The state of the code repository in git is not subject to bounties. We need to allow developers to make mistakes and to experiment a little in the git repository, while we expect and want every actual public release to be free from security vulnerabilities.
So yes, the obvious downside with this is that someone could spot an issue in git and decide not to report it since it doesn’t pay any money, hope that the flaw lingers around and ships in the release – and then report it and claim reward money. I think we just have to trust that this will not become standard practice, and if we in fact notice that someone tries to exploit the bounty in this manner, we can consider counter-measures then.
How about money for the patches?
There’s of course always a discussion as to why we should pay anyone for bugs, and then why pay only for reported security problems and not for the heroes who authored the code in the first place, nor for the good people who write the patches to fix the reported issues. Those are valid questions and we would of course rather pay every contributor a lot of money, but we don’t have the funds for that. And getting funding for this kind of dedicated bug bounty seems doable, whereas a generic pay-the-contributors fund is trickier: it is harder to attract money for it, and it is also really hard to distribute fairly in an open project of curl’s nature.
How much money?
At the start of this program the award amounts are as follows. We reward up to this amount of money for vulnerabilities of the following security levels:
Critical: 2,000 USD
High: 1,500 USD
Medium: 1,000 USD
Low: 500 USD
Depending on how things go, how fast we drain the fund and how much companies help us refill, the amounts may change over time.
Found a security flaw?
We’re running a short poll asking people about where and how we should organize curl up 2020 – our annual curl developers conference. I’m not making any promises, but getting people’s opinions will help us when preparing for next year.
I’ll leave the poll open for a couple of days so please respond asap.
curl supports some twenty-three protocols (depending on exactly how you count).
In order to properly test and verify curl’s implementations of each of these protocols, we have a test suite. In the test suite we have a set of handcrafted servers that speak the server-side of these protocols. The more used a protocol is, the more important it is to have it thoroughly tested.
We believe in having test servers that are “stupid” and that offer buttons, levers and thresholds for us to control and manipulate how they act and how they respond for testing purposes. The control of what to send should be dictated as much as possible by the test case description file. If we want a server to send back a slightly broken protocol sequence to check how curl supports that, the server must be open for this.
In order to do this with a large degree of freedom and without restrictions, we’ve found that using “real” server software for this purpose is usually not good enough. Testing the broken and bad cases is typically not easily done that way. Actual server software tries hard to do the right thing and obey standards and protocols, while we rather don’t want the server to make any decisions by itself at all but just send exactly the bytes we ask it to. Simply put.
Of course we don’t always get what we want, and some of these protocols are fairly complicated, which makes it challenging to stick to this policy all the way. Then we need to be pragmatic and go with what’s available and what we can make work. Having test cases run against a real server is still better than no test cases at all.
“SOCKS is an Internet protocol that exchanges network packets between a client and server through a proxy server. Practically, a SOCKS server proxies TCP connections to an arbitrary IP address, and provides a means for UDP packets to be forwarded.”
(according to Wikipedia)
Recently we fixed a bug in how curl sends credentials to a SOCKS5 proxy as it turned out the protocol itself only supports user name and password length of 255 bytes each, while curl normally has no such limits and could pass on credentials with virtually infinite lengths. OK, that was silly and we fixed the bug. Now curl will properly return an error if you try such long credentials with your SOCKS5 proxy.
As a general rule, fixing a bug should mean adding at least one new test case, right? Up to this time we had been testing the curl SOCKS support by firing up an ssh client and having it set up a SOCKS proxy that connects to the other test servers.
curl -> ssh with SOCKS proxy -> test server
Since this setup doesn’t support SOCKS5 authentication, it turned out complicated to add a test case to verify that this bug was actually fixed.
This test problem was fixed by the introduction of a newly written SOCKS proxy server dedicated to the curl test suite (which I simply named socksd). It does the basic SOCKS4 and SOCKS5 protocol logic and also supports a range of commands to control how it behaves and what it allows, so that we can now write test cases against this server and ask it to misbehave or otherwise do fun things, making really sure curl supports those cases as well.
It also has the additional bonus that it works without ssh being present so it will be able to run on more systems and thus the SOCKS code in curl will now be tested more widely than before.
curl -> socksd -> test server
Going forward, we should also be able to create even more SOCKS tests with this and make sure to get even better SOCKS test coverage.
In January 2002, we added support for a global DNS cache in libcurl. All transfers set to use it would share and use the same global cache.
We rather quickly realized that having a global cache without locking was error-prone and not really advisable, so already in March 2004 we added comments in the header file suggesting that users should not use this option.
It remained in the code and time passed.
In the autumn of 2018, another fourteen years later, we finally addressed the issue when we announced a plan for this option’s deprecation. We announced a date for when it would become deprecated and disabled in code (7.62.0), and said that six months later, if no major incidents or outcries occurred, we would delete the code completely.
That time has now arrived. All code supporting a global DNS cache in curl has been removed. Any libcurl-using program that sets this option from now on will simply not get a global cache and instead proceed with the default handle-oriented cache, and the documentation is updated to clearly indicate that this is the case. This change will ship in curl 7.65.0 due to be released in May 2019 (merged in this commit).
If a program still uses this option, the only really noticeable effect should be a slightly worse name resolving performance, assuming the global cache had any point previously.
Programs that want to continue to have a DNS cache shared between multiple handles should use the share interface, which allows shared DNS cache and more – with locking. This API has been offered by libcurl since 2003.
HTTP/1.1 Pipelining is the protocol feature where the client sends off a second HTTP/1.1 request already before the answer to the previous request has arrived (completely) from the server. It is defined in the original HTTP/1.1 spec and is a way to avoid waiting times. To reduce latency.
HTTP/1.1 Pipelining was badly supported by curl for a long time, in the sense that we had a series of known bugs and it was a fragile feature without enough tests. Also, pipelining is fairly tricky to debug due to its timing sensitivity, so very often enabling debug output or similar completely changes the nature of the behavior and the problem no longer reproduces!
HTTP pipelining was never enabled by default by the large desktop browsers due to all the issues with it, like broken server implementations and the like. Both Firefox and Chrome dropped pipelining support entirely long ago. curl did in fact over time become more and more lonely in supporting pipelining.
The bad state of HTTP pipelining was a primary driving factor behind HTTP/2 and its multiplexing feature. HTTP/2 multiplexing is truly and really “pipelining done right”. It is way more solid, practical and solves the use case in a better way with better performance and fewer downsides and problems. (curl enables multiplexing by default since 7.62.0.)
In 2019, pipelining should be abandoned and HTTP/2 should be used instead.
Starting with this commit, to be shipped in release 7.65.0, curl no longer has any code that supports HTTP/1.1 pipelining. It has been disabled in the code since 7.62.0 already, so applications and users on a recent version should not notice any difference.
Pipelining was always offered on a best-effort basis and there was never any guarantee that requests would actually be pipelined, so we can remove this feature entirely without breaking API or ABI promises. Applications that ask libcurl to use pipelining can still do that, it just won’t have any effect.
(I will update this blog post with more links to videos and PDFs to presentations as they get published, so come back later in case your favorite isn’t linked already.)
The third curl developers conference, curl up 2019, is now history. We gathered in the lovely Charles University in central Prague, where we sat down in an excellent classroom. After the HTTP symposium on the Friday, we spent the weekend diving deeper into protocols and curl details.
I started off the Saturday with The state of the curl project (youtube). An overview of how we’re doing right now in terms of stats, graphs and numbers from different aspects, then something about what we’ve done the last year, and a quick look at what’s not so good and what we could work on going forward.
James Fuller took the next session with his Newbie guide to contributing to libcurl presentation. Things to consider and general best practices that could make your first steps into the project more likely to be pleasant!
Long term curl hacker Dan Fandrich (also known as “Daniel two” out of the three Daniels we have among our top committers) followed up with Writing an effective curl test, where he detailed what different tests we have in curl, what they’re for and a little about how to write such tests.
After that I was back behind the desk in the classroom that we used for this event and I talked The Deprecation of legacy crap (Youtube). How and why we are removing things, some things we are removing and will soon remove and finally a little explainer on our new concept and handling of “experimental” features.
Igor Chubin then explained his new project for us: curlator: a framework for console services (Youtube). It’s a way and tooling that makes it easier to provide access to shell and console oriented services over the web, using curl.
Me again. Governance, money in the curl project and someone offering commercial support (Youtube) was a presentation about how we intend for the project to join a legal entity SFC, and a little about money we have, what to spend it on and how I feel it is good to keep the project separate from any commercial support ventures any of us might do!
While the list above might seem like more than enough, the day wasn’t over. Christian Schmitz also did his presentation on Using SSL root certificate from Mac/Windows.
Our local hero organizer James Fuller then spoiled us completely when we got around to have dinner at a monastery with beer brewing monks and excellent food. Good food, good company and curl related dinner subjects. That’s almost heaven defined!
Daylight saving time morning and you could tell. I’m sure it was not at all related to the beers from the night before…
Robin Marx then put in the next gear and entertained us for another hour with a protocol deep dive titled HTTP/3 (QUIC): the details (slides). For me personally this was exactly what I needed, as Robin has clearly kept up with more details and specifics in the QUIC and HTTP/3 protocol specifications than I’ve managed, and his talk helped the rest of the room get at least a little bit more in sync with current development.
Then I was up again and I got to explain to my fellow curl hackers about HTTP/3 in curl. Internal architecture, 3rd party libs and APIs.
Jakub Klímek explained to us in very clear terms the current and existing problems in his talk IRIs and IDNs: Problems of non-ASCII countries. Some of the problems involve curl, and while most of them have their clear explanations, I think we have two lessons to learn from this: URLs are still as messy and undocumented as ever before, and we might have some issues to fix in this area in curl.
To bring my fellow curl hackers up to speed on the details of the new API introduced last year, I then made a presentation called The new URL API.
Clearly overdoing it for a single weekend, I then got the honors of doing the last presentation of curl up 2019, and for an audience that was about to die from exhaustion I talked Internals. A walk-through of the architecture and what libcurl does when doing a transfer.
I ended up doing seven presentations during this single weekend. Not all of them stellar or delivered with elegance, but I hope they were still valuable to some. I did not steal anyone else’s time slot, as I would gladly have given up time if other speakers had wanted to say something. Let’s aim for more non-Daniel talkers next time!
A weekend like this is such a boost for inspiration, for morale and for my ego. All the friendly faces with the encouraging and appreciating comments will keep me going for a long time after this.
Thank you to our awesome and lovely event sponsors – shown in the curl up logo below! Without you, this sort of happening would not happen.
curl up 2020
I will of course want to see another curl up next year. There are no plans yet and we don’t know where to host. I think it is valuable to move it around but I think it is even more valuable that we have a friend on the ground in that particular city to help us out. Once this year’s event has sunken in properly and a month or two has passed, the case for and organization of next year’s conference will commence. Stay tuned, and if you want to help hosting us do let me know!