Category: Mozilla

Mozilla Future Releases Blog: Moving Firefox to a faster 4-week release cycle

This article is cross-posted from Mozilla Hacks

Overview

We typically ship a major Firefox browser (Desktop and Android) release every 6 to 8 weeks. Building and releasing a browser is complicated and involves many players. To optimize the process, and make it more reliable for all users, over the years we’ve developed a phased release strategy that includes ‘pre-release’ channels: Firefox Nightly, Beta, and Developer Edition. With this approach, we can test and stabilize new features before delivering them to the majority of Firefox users via general release.

Today’s announcement

Today we’re excited to announce that we’re moving to a four-week release cycle! We’re adjusting our cadence to increase our agility and bring you new features more quickly. In recent quarters, we’ve had many requests to take features to market sooner, and feature teams are increasingly working in sprints that align better with shorter release cycles. Considering these factors, it’s time to change our release cadence.

Starting in Q1 2020, we plan to ship a major Firefox release every 4 weeks. The Firefox ESR (Extended Support Release, for the enterprise) cadence will remain the same. In the years to come, we anticipate a major ESR release every 12 months, with a 3-month support overlap between the new ESR and the end of life of the previous ESR. The next two major ESR releases will be ~June 2020 and ~June 2021.

Shorter release cycles provide greater flexibility to support product planning and priority changes due to business or market requirements. With four-week cycles, we can be more agile and ship features faster, while applying the same rigor and due diligence needed for a high-quality and stable release. We also get new features and implementations of new Web APIs into the hands of developers more quickly. (This is what we’ve been doing recently with CSS spec implementations and updates, for instance.)

In order to maintain quality and minimize risk in a shortened cycle, we must:

  • Ensure Firefox engineering productivity is not negatively impacted.
  • Speed up the regression feedback loop from rollout to detection to resolution.
  • Be able to control feature rollout based on release readiness.
  • Ensure adequate testing of larger features that span multiple release cycles.
  • Have clear, consistent mitigation and decision processes.

Firefox rollouts and feature experiments

Given a shorter Beta cycle, support for our pre-release channel users is essential, including developers using Firefox Beta or Developer Edition. We intend to roll out fixes to them as quickly as possible. Today, we produce two Beta builds per week. Going forward, we will move to more frequent Beta builds, similar to what we have today in Firefox Nightly.

Staged rollouts of features will be a continued best practice. This approach helps minimize unexpected (quality, stability or performance) disruptions to our release end-users. For instance, if a feature is deemed high-risk, we will plan for slow rollout to end-users and turn the feature off dynamically if needed.
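
As a rough illustration of the general idea (this is not Mozilla’s actual rollout machinery; the names and percentages below are made up), a staged rollout boils down to deterministically sampling a fraction of the population plus a remotely controllable kill switch:

import hashlib

ROLLOUT_PERCENT = 10      # hypothetical: expose the feature to 10% of users at first
KILL_SWITCH_ON = False    # hypothetical: flipped remotely if the feature misbehaves

def feature_enabled(client_id: str) -> bool:
    """Deterministically bucket a client into 0-99 and honor the kill switch."""
    if KILL_SWITCH_ON:
        return False
    bucket = int(hashlib.sha256(client_id.encode()).hexdigest(), 16) % 100
    return bucket < ROLLOUT_PERCENT

The hash keeps each user in the same bucket across sessions, so ramping the rollout up, or switching the feature off, only requires changing the remotely delivered values.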

We will continue to foster a culture of feature experimentation and A/B testing before rollout to release. Currently, the duration of experiments is not tied to the release cycle length and is therefore not impacted by this change. In fact, experiment length is predominantly a function of the time needed for user enrollment, the time to trigger the study or experiment and collect the necessary data, and the data analysis needed to make a go/no-go decision.

Despite the shorter release cycles, we will do our best to localize all new strings in all locales supported by Firefox. We value our end-users from all across the globe. And we will continue to delight you with localized versions of Firefox.

Firefox release schedule 2019 – 2020

Firefox engineering will deploy this change gradually, starting with Firefox 71. We aim to reach a 4-week release cadence by Q1 2020. The table below lists Firefox versions and planned launch dates. Note: these dates are subject to change for business reasons.

[Table: release dates for Firefox GA and pre-release channels, 2019-2020. See the Release Calendar link at the end of this post for the data.]

Process and product quality metrics

As we gradually reduce our release cycle length from 7 weeks down to 6, then 5, then 4, we will monitor the process closely. We’ll watch aspects like release scope change; developer productivity impact (tree closures, build failures); beta churn (uplifts, new regressions); and overall release stabilization and quality (stability, performance, carryover regressions). Our main goal is to identify bottlenecks that prevent us from being more agile in our release cadence. Should our metrics highlight an unexpected trend, we will put appropriate mitigations in place.

Finally, projects that consume Firefox mainline or ESR releases, such as SpiderMonkey and Tor, now have more frequent releases to choose from. These 4-week releases will be the most stable, fastest, and best-quality Firefox builds.

In closing, we hope you’ll enjoy the new faster cadence of Firefox releases. You can always refer to https://wiki.mozilla.org/Release_Management/Calendar for the latest release dates and other information. Got questions? Please send email to release-mgmt@mozilla.com.

The post Moving Firefox to a faster 4-week release cycle appeared first on Future Releases.


Alexandre Poirot: Trabant Calculator – A data visualization of TreeHerder Jobs durations

Link to this tool (its sources)

What is this tool about?

Its goal is to give a better sense of how much computation is going on in Mozilla automation.
The current TreeHerder UI surfaces job durations, but only per job. To get a sense of how much we stress
our automation, we have to click on each individual job and sum the durations manually.
This tool does that sum for you.
Well, it also tries to rank the jobs by their durations. I would like to open minds about the possible impact we may have on the environment here.
For that, I translate these durations into something fun that doesn’t necessarily make any sense.

What is that car GIF?

The car is a Trabant. This car is often seen as symbolic of the former East Germany and the collapse of the Eastern Bloc in general. This part of the tool is just a joke. You should only rely on the durations, which are meant to be trustworthy data. Translating a worker’s runtime into CO2 emissions is almost impossible to get right. And yet that’s what I do here: translate worker duration into a potential energy consumption, which I translate into a potential CO2 emission, before finally translating that CO2 emission into the equivalent emission of a Trabant over a given distance in kilometers.

Power consumption of an AWS worker per hour

Here is a very rough computation of Amazon AWS CO2 emissions for a t4.large worker.
The power usage of the machines these workers run on could be around 0.6 kW.
Such a worker uses 25% of one of these machines.
Then let’s say that Amazon’s Power Usage Effectiveness (PUE) is 1.1.
This means that one hour of a worker consumes 0.165 kWh (0.6 * 0.25 * 1.1).

CO2 emission of electricity per kWh

Based on the US Environmental Protection Agency (source), the average CO2 emission is 998.4 lb/MWh.
So 998.4 * 453.59237 g/lb = 452,866 g/MWh, and 452,866 / 1000 ≈ 452 g of CO2 per kWh.
Unfortunately, the data is already old. It comes from a 2018 report, which seems to be about 2017 data.

CO2 emission of a Trabant per km

A Trabant emits 170 g of CO2 per km (source). (Another source reports 140 g, but let’s assume it emits a lot.)

Final computation

Trabant kilometers = "Hours of computation" * "Power consumption of a worker per hour"
                     * "CO2 emission of electricity per kWh"
                     / "CO2 emission of a Trabant per km"
Trabant kilometers = "Hours of computation" * 0.165 * 452 / 170
Trabant kilometers = "Hours of computation" * 0.4387058823529412
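
For convenience, here is the same arithmetic as a small Python sketch; the constants are the estimates from this post, and the function name is just illustrative:

WORKER_KWH_PER_HOUR = 0.6 * 0.25 * 1.1   # machine kW * worker share * PUE = 0.165 kWh
GRID_CO2_G_PER_KWH = 452                 # 998.4 lb/MWh * 453.59237 g/lb / 1000, truncated
TRABANT_CO2_G_PER_KM = 170               # high-end estimate used above

def trabant_km(hours_of_computation):
    """Convert hours of automation compute into equivalent Trabant kilometers."""
    co2_grams = hours_of_computation * WORKER_KWH_PER_HOUR * GRID_CO2_G_PER_KWH
    return co2_grams / TRABANT_CO2_G_PER_KM

print(round(trabant_km(300), 1))  # 300 hours of computation is roughly 131.6 km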

All of this must be wrong

Except the durations! Everything else is highly subject to debate.
Sources are here, and contributions or feedback are welcome.

The Mozilla Blog: Examining AI’s Effect on Media and Truth

Mozilla is announcing its eight latest Creative Media Awards. These art and advocacy projects highlight how AI intersects with online media and truth — and impacts our everyday lives.

 

Today, one of the biggest issues facing the internet — and society — is misinformation.

It’s a complicated issue, but this much is certain: The artificial intelligence (AI) powering the internet is complicit. Platforms like YouTube and Facebook recommend and amplify content that will keep us clicking, even if it’s radical or flat out wrong.

Earlier this year, Mozilla called for art and advocacy projects that illuminate the role AI plays in spreading misinformation. And today, we’re announcing the winners: Eight projects that highlight how AI like machine learning impacts our understanding of the truth.

These eight projects will receive Mozilla Creative Media Awards totalling $200,000, and will launch to the public by May 2020. They include a Turing Test app; a YouTube recommendation simulator; educational deepfakes; and more. Awardees hail from Japan, the Netherlands, Uganda, and the U.S. Learn more about each awardee below.

Mozilla’s Creative Media Awards fuel the people and projects on the front lines of the internet health movement. Past Creative Media Award winners have built mock dating apps that highlight algorithmic discrimination; they’ve created games that simulate the inherent bias of automated hiring; and they’ve published clever tutorials that mix cosmetic advice with cybersecurity best practices.

These eight awards align with Mozilla’s focus on fostering more trustworthy AI.


The winners

 

[1] Truth-or-Dare Turing Test | by Foreign Objects in the U.S.

This project explores deceptive AI that mimics real humans. Users play truth-or-dare with another entity and, at the conclusion of the game, must guess whether they were playing with a fellow human or an AI. (“Truths” are played out using text, and “dares” are played out using an online sketchpad.) The project also includes a website outlining the state of mimicry technology, its uses, and its dangers.

 


[2] Swap the Curators in the Tube | by Tomo Kihara in Japan

This project explores how recommendation engines present different realities to different people. Users will peruse the YouTube recommendations of five wildly different personas — including a conspiracist and a racist persona — to experience how their recommendations differ.

 


[3] An Interview with ALEX | by Carrie Wang in the U.S.

The project is a browser-based experience that simulates a job interview with an AI in a future of gamified work and total surveillance. As the interview progresses, users learn that this automated HR manager is covering up the truth of this job, and using facial and speech recognition to make assumptions and decisions about them.

 

[4] The Future of Memory | by Xiaowei Wang, Jasmine Wang, and Yang Yuting in the U.S.

This project explores algorithmic censorship, and the ways language can be made illegible to such algorithms. It reverse-engineers how automated censors work, to provide a toolkit of tactics using a new “machine resistant” language, composed of emoji, memes, steganography and homophones. The project will also archive censored materials on a distributed, physical network of offline modules.

 

[5] Choose Your Own Fake News | by Pollicy in Uganda

This project uses comics and audio to explore how misinformation spreads across the African continent. Users engage in a choose-your-own-adventure game that simulates how retweets, comments, and other digital actions can sow misinformation, and how that misinformation intersects with gender, religion, and ethnicity.

 

 

[6] Deep Reckonings | by Stephanie Lepp in the U.S.

This project uses deepfakes to address the issue of deepfakes. Three false videos will show public figures — like tech executives — reckoning with the dangers of synthetic media. Each video will be clearly watermarked and labeled as a deepfake to prevent misinformation.

 

[7] In Event of Moon Disaster | by Halsey Burgund, Francesca Panetta, Magnus Bjerg Mortensen, Jeff DelViscio and the MIT Center for Advanced Virtuality

This project uses the 1969 moon landing to explore the topic of modern misinformation. Real coverage of the landing will be presented on a website alongside deepfakes and other false content, to highlight the difficulty of telling the two apart. And by tracking viewers’ attention, the project will reveal which content captivated viewers more.

 

[8] Most FACE Ever | by Kyle McDonald in the U.S.

This project teaches users about computer vision and facial recognition technology through playful challenges. Users will enable their webcam, engage with a facial recognition AI, and try to “look” a certain way — say, “criminal,” or “white.” The game reveals how inaccurate and biased facial recognition can often be.


These eight awardees were selected based on quantitative scoring of their applications by a review committee and a qualitative discussion at a review committee meeting. Committee members included Mozilla staff, current and alumni Mozilla Fellows and Awardees, and outside experts. The selection criteria are designed to evaluate the merits of the proposed approach. Diversity in applicant background, past work, and medium was also considered.

These awards are part of the NetGain Partnership, a collaboration between Mozilla, Ford Foundation, Knight Foundation, MacArthur Foundation, and the Open Society Foundation. The goal of this philanthropic collaboration is to advance the public interest in the digital age.

Also see (May 2019): Seeking Art that Explores AI, Media, and Truth

The post Examining AI’s Effect on Media and Truth appeared first on The Mozilla Blog.

William Lachance: mozregression update: python 3 edition

For those who are still wondering, yup, I am still maintaining mozregression, though increasingly reluctantly. Given how important this project is to the development of Firefox (getting a regression window using mozregression is standard operating procedure whenever a new bug is reported in Firefox), it feels like this project is pretty vital, so I continue out of some sense of obligation — but really, someone more interested in Mozilla’s build, automation and testing systems would be better suited to this task: over the past few years, my interests/focus have shifted away from this area to building up Mozilla’s data storage and visualization platform.

This post will describe some of the things that have happened in the last year and where I see the project going. My hope is to attract some new blood to add some needed features to the project and maybe take on some of the maintainership duties.

python 3

The most important update is that, as of today, the command-line version of mozregression (v3.0.1) should work with python 3.5+. The modernize tool did most of the work for us, though there were some unit tests that needed updating; special thanks to @gloomy-ghost for helping with that.

For now, we will continue to support python 2.7 in parallel, mainly because the GUI has not yet been ported to python 3 (more on that later) and we have CI to make sure it doesn’t break.
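
For anyone who wants to try the Python 3 client, a typical bisection run looks roughly like the sketch below. This is only an illustration (the dates are arbitrary, and the flags should be double-checked against mozregression --help); it is driven from Python merely to keep the example self-contained:

import subprocess

# Install or upgrade the command-line client (assumes pip for a Python 3.5+ interpreter).
subprocess.run(["pip3", "install", "--upgrade", "mozregression"], check=True)

# Bisect a regression window between a known-good and a known-bad nightly date.
subprocess.run(["mozregression", "--good", "2019-06-01", "--bad", "2019-09-01"], check=True)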

other updates

The last year has mostly been one of maintenance. Thanks in particular to Ian Moody (:kwan) for his work throughout the year — including patches to adapt mozregression support to our new updates policy and shippable builds (bug 1532412), and Kartikaya Gupta (:kats) for adding support for bisecting the GeckoView example app (bug 1507225).

future work

There are a bunch of things I see us wanting to add or change with mozregression over the next year or so. I might get to some of these if I have some spare cycles, but probably best not to count on it:

  • Port the mozregression GUI to Python 3 (bug 1581633). As mentioned above, the command-line client works with python 3, but we have yet to port the GUI. We should do that. This probably also entails porting the GUI to use PyQt5 (which is pip-installable and thus much easier to integrate into a CI process), see bug 1426766.
  • Make self-contained GUI builds available for MacOS X (bug 1425105) and Linux (bug 1581643).
  • Improve our mechanism for producing a standalone version of the GUI in general. We’ve used cx_Freeze, which mostly works OK but has a number of problems (e.g. it pulls in a bunch of unnecessary dependencies, which bloats the size of the installer). Upgrading the GUI to use python 3 may alleviate some of these issues, but it might be worth considering other options in this space, like Gregory Szorc’s pyoxidizer.
  • Add some kind of telemetry to mozregression to measure usage of this tool (bug 1581647). My anecdotal experience is that this tool is pretty invaluable for Firefox development and QA, but this is not immediately apparent to Mozilla’s leadership and it’s thus very difficult to convince people to spend their cycles on maintaining and improving this tool. Field data may help change that story.
  • Supporting new Mozilla products which aren’t built (entirely) out of mozilla-central, most especially Fenix (bug 1556042) and Firefox Reality (bug 1568488). This is probably rather involved (mozregression has a big pile of assumptions about how the builds it pulls down are stored and organized) but that doesn’t mean that this work isn’t necessary.

If you’re interested in working on any of the above, please feel free to dive in on one of the above bugs. I can’t offer formal mentorship, but am happy to help out where I can.

William Lachance: Time for some project updates

I’ve been a bit bad about updating this blog over the past year or so, though this hasn’t meant there haven’t been things to talk about. For the next couple weeks, I’m going to try to give some updates on the projects I have been spending time on in the past year, both old and new. I’m going to begin with some of the less-loved things I’ve been working on, partially in an attempt to motivate some forward-motion on things that I believe are rather important to Mozilla.

More to come.

Mozilla Open Policy & Advocacy Blog: Governments should work to strengthen online security, not undermine it

On Friday, Mozilla filed comments in a case brought by Privacy International in the European Court of Human Rights involving government “computer network exploitation” (“CNE”)—or, as it is more colloquially known, government hacking.

While the case focuses on the direct privacy and freedom of expression implications of UK government hacking, Mozilla intervened in order to showcase the further, downstream risks to users and internet security inherent in state CNE. Our submission highlights the security and related privacy threats from government stockpiling and use of technology vulnerabilities and exploits.

Government CNE relies on the secret discovery or introduction of vulnerabilities—i.e., bugs in software, computers, networks, or other systems that create security weaknesses. “Exploits” are then built on top of the vulnerabilities. These exploits are essentially tools that take advantage of vulnerabilities in order to overcome the security of the software, hardware, or system for purposes of information gathering or disruption.

When such vulnerabilities are kept secret, they can’t be patched by companies, and the products containing the vulnerabilities continue to be distributed, leaving people at risk. The problem arises because no one—including government—can perfectly secure information about a vulnerability. Vulnerabilities can be and are independently discovered by third parties and inadvertently leaked or stolen from government. In these cases where companies haven’t had an opportunity to patch them before they get loose, vulnerabilities are ripe for exploitation by cybercriminals, other bad actors, and even other governments,[1] putting users at immediate risk.

This isn’t a theoretical concern. For example, the findings of one study suggest that within a year, vulnerabilities undisclosed by a state intelligence agency may be rediscovered up to 15% of the time.[2] Also, one of the worst cyber attacks in history was caused by a vulnerability and exploit stolen from the NSA in 2017 that affected computers running Microsoft Windows.[3] The devastation wreaked through use of that tool continues apace today.[4]

This example also shows how damaging it can be when vulnerabilities impact products that are in use by tens or hundreds of millions of people, even if the actual government exploit was only intended for use against one or a handful of targets.

As more and more of our lives are connected, governments and companies alike must commit to ensuring strong security. Yet state CNE significantly contributes to the prevalence of vulnerabilities that are ripe for exploitation by cybercriminals and other bad actors and can result in serious privacy and security risks and damage to citizens, enterprises, public services, and governments. Mozilla believes that governments can and should contribute to greater security and privacy for their citizens by minimizing their use of CNE and disclosing vulnerabilities to vendors as they find them.

————————
[1] https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/
[2] https://www.belfercenter.org/sites/default/files/files/publication/Vulnerability Rediscovery (belfer-revision).pdf
[3] https://en.wikipedia.org/wiki/WannaCry_ransomware_attack
[4] https://www.nytimes.com/2019/05/25/us/nsa-hacking-tool-baltimore.html

The post Governments should work to strengthen online security, not undermine it appeared first on Open Policy & Advocacy.

QMO: Firefox 70 Beta 6 Testday Results

Hello Mozillians!

As you may already know, on Friday, September 13th, we held a new Testday event for Firefox 70 Beta 6.

Thank you all for helping us make Mozilla a better place: Gabriela (gaby2300), Dan Caseley (Fishbowler) and Aishwarya Narasimhan!

Result: Several test cases were executed for Protection Report and Privacy Panel UI Updates.

Thanks for another awesome testday, we appreciate your contribution! 🙂

We hope to see you all at our next events; keep an eye on QMO.

We will make announcements as soon as something shows up!

 

Onno Ekker: Checklist

Because I always seem to forget one step or another when creating a new version of my add-on, I decided this time to make myself a nice checklist. Let’s hope that when I find out I forgot something this time too, I remember to update the checklist 🙂

☑ Fix bug or implement a new feature.
☑ Update status of tickets on SourceForge.
☑ Git push the changes to SourceForge and GitHub.
☑ Document the changes in the release notes and in the version history.
☑ Upload alpha/beta version to BabelZilla and Crowdin so localizers can translate it.
☑ Create screenshots showing where new strings are used to help localizers and upload them to Crowdin.
☑ Verify that the new version works as expected on Windows, Linux, and Mac.
☑ Verify that the new version works in the latest Thunderbird and is also compatible with older versions.
☑ Verify that the new version works in the latest SeaMonkey and is also compatible with older versions.
☑ Verify that I have downloaded and merged all language changes and synchronize them also on BabelZilla and Crowdin.
☑ Verify that all strings in all languages are localized and if not, urge localizers to do so.
☑ Decide on the version number of the new version x.y.z. Only bug fixes? Increase z. New function? Increase y.
☑ Finalize the release notes and version history and update the compatibility info.
☑ Build the new version for SourceForge and for addons.thunderbird.net.
☑ Generate a sha256 checksum and gpg signature for the new version for SourceForge (see the sketch after this list).
☑ Git push the new version and gpg signature to SourceForge.
☑ Upload the new version to SourceForge and move old version to archive.
☑ Wait a bit until the new version has been copied to SourceForge’s mirrors and shows up as the latest version.
☑ Update the website with the release notes, version history, new download link, and auto update information.
☑ Verify that auto update from older versions of Mail Redirect to new version works.
☑ Register the new version on SourceForge’s ticket system, close the old version and create a new target version.
☑ Git push the website to SourceForge to save version history.
☑ Draft the release also on GitHub and make it available.
☑ Write a message to the mailing list to tell people about the new release, functions, and bug fixes.
☑ Do the same in a blog post.
☑ Wait a bit for reactions.
☑ Submit the new version of Mail Redirect to the review queue of addons.thunderbird.net.
☑ Keep fingers crossed that it isn’t rejected by the automated validation process because of an error I made.
☑ Wait a couple of days until the add-on is reviewed and accepted by one of the volunteer reviewers.
☑ Wait a couple more days, then ping a volunteer reviewer on IRC and ask him/her to review my add-on.
☑ Keep an eye on the download and usage statistics to see the uptake of the new version and wonder about how many people are still using an older version even though Mail Redirect is backwards compatible.
☐ Ask for donations to support the continued development and show how happy you are to use this add-on.
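
A minimal Python sketch of the checksum-and-signature step mentioned above; the file name is only an example, and the gpg call assumes gpg is installed with a default signing key configured:

import hashlib
import subprocess

xpi = "mailredirect-x.y.z.xpi"  # example name; substitute the real build artifact

# sha256 checksum, in the same "hash  filename" format that sha256sum produces
digest = hashlib.sha256(open(xpi, "rb").read()).hexdigest()
with open(xpi + ".sha256", "w") as f:
    f.write("{}  {}\n".format(digest, xpi))

# Detached, ASCII-armored GPG signature (creates mailredirect-x.y.z.xpi.asc)
subprocess.run(["gpg", "--armor", "--detach-sign", xpi], check=True)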