Month: June 2017

Recover Exchange Server after total loss

Being able to recover an Exchange server is key to business continuity. This is magnified in environments with only a single Exchange server, where rebuilding the environment from scratch would be an arduous and monumental task. Luckily, Exchange saves much of its configuration in Active Directory. As long as Active Directory is healthy you can recover an Exchange server to its former configuration, which saves a massive amount of time.

That said, there are some items that are not stored in Active Directory. These include the databases where user and public folder data is stored, third-party certificates, and customizations made outside of the Exchange management tools.

Certificates are easy. If you don’t have a backup you can have your certificate re-keyed by your provider. This may take some time so it is much better to export your Exchange server certificate and save it to a safe location. Then it can be quickly imported in the event of a failure. This will reduce downtime.

Databases are a little more difficult. Depending on the nature of the failure they may need to be restored from backup, and the time required largely depends on the size of the database. With the Exchange Standard license you get five databases, so rather than one large database, go with five smaller ones. This greatly aids your recovery time objective (RTO). Exchange Enterprise allows up to one hundred databases, giving you even greater capacity.

I always recommend that you architect a database availability group (DAG) where possible. Even if your budget can only cover two Exchange servers, creating a two-member DAG with two copies of each database will put you miles ahead when it comes to disaster recovery. The instructions to recover a DAG member differ, and we will cover that in a later article.

Configuration outside of the Exchange tools is going to be a little tougher. You will either need documentation so the changes can be repeated, or a backup of the changes. Customizations outside of Exchange can include the registry, IIS, or text-based configuration files.

Note: It can help to keep a copy of the Exchange install files in a safe location. Microsoft generally only makes the last two cumulative updates publicly available. You can recover using the latest cumulative update. However, note that the build number within the Exchange admin tools will reflect the version prior to the failure. This will update as soon as you run the next cumulative update.

Total loss of an Exchange Server

For this article I have a virtual machine running Exchange Server 2016 CU5, named EX16, running on Windows Server 2016. This virtual machine is going to be deleted from its virtual storage. This could mimic failed storage, a virtual machine becoming corrupt, or the operating system refusing to boot. This Exchange server is not part of a Database Availability Group (DAG).

Note: You can also use this process with older versions of Exchange.

So, let’s “accidentally” delete our virtual machine from its storage.


Gathering information for the recovery process

For the recovery, we will need to know the operating system and directory path our Exchange server used. Without this information, the recovery procedure will fail. For our lab, we knew all this information. But what if we didn’t have this information?

To find the operating system we can use Active Directory Users and Computers.

Open Active Directory Users and Computers and locate the computer account for the failed Exchange Server. Right-click on the account and select Properties from the context menu. Select the Operating System tab. In our case, we can see we were running Windows Server 2016 Datacenter.

To find the install directory for Exchange you will need to use ADSI Edit.

Open ADSI Edit. Right-click on the ADSI Edit node and select Connect to from the context menu. Under Select a well known naming context pick Configuration from the drop-down menu. Click Ok.

Expand Configuration, CN=Configuration,DC=,DC=, CN=Services, CN=Microsoft Exchange, CN=, CN=Administrative Groups, CN=Exchange Administrative Group, CN=Servers and right-click on the name of the failed server and select Properties.

Double-click the attribute named msExchInstallPath to see the Exchange install path on the failed server. In our case Exchange was installed under the default path of C:\Program Files\Microsoft\Exchange Server\V15. If your value is different than the default install path you will need to specify this during recovery.

If you want to restore using the exact same cumulative update you can find that with ADSI Edit.

Using the instructions above, right-click on the name of the failed server and select Properties. Select the attribute named serialNumber. The number 15.1 designates Exchange 2016. In parentheses, we can see the build is 30845.34. If we remove the first two digits (30) we have a build number of 845.34. If we look up this build number we can see we are on Cumulative Update 5.

Recover Exchange Server

Now that we know the operating system and install path let’s recover our server.

The first decision is whether you are recovering to the same or different hardware. If the same hardware then you should not have to worry about sizing. If new hardware you need to make sure it meets both the system requirements to run Exchange and that you have also sized the new hardware in accordance with the Exchange Server Role Requirements Calculator.

For our scenario, we have a single server environment and will be recovering to a new virtual machine on the existing virtual infrastructure. We will use the same sizing as our old virtual machine.

Once the hardware is ready we need to reinstall our operating system. It is imperative we use the same operating system and service pack level as before. The recovery procedure will not allow an in-place upgrade of the operating system. For our scenario, we will be reinstalling Windows Server 2016.

While we are waiting for the operating system to reinstall let’s reset the Active Directory computer account for the failed Exchange server. Resetting the computer account permits us to do two things. First, it allows us to rejoin the new server to Active Directory under the old computer name. Second, and this is the most important part, it allows the recovery process to retrieve all configuration data for the failed Exchange server from Active Directory. It is critical we do not delete the computer account during the recovery process.

To reset the computer account open Active Directory Users and Computers and locate the computer account for the failed Exchange Server. Right-click on the account and select Reset Account from the context menu. Click Yes to confirm. You will receive a notification that the account was successfully reset. Click Ok.

Note: It may take some time for this reset to replicate throughout Active Directory.

Once the operating system is installed be sure to give the server a static IP (I recommend reusing the same one), the same computer name, and join it to the domain. If the domain join fails you may need to allow more time for replication. In our case, we set a static IP, named the new server back to EX16, and rejoined our domain SKARO.LOCAL.

Next, let’s get the Exchange 2016 prerequisites installed. To do this open a PowerShell console as the administrator.

Then issue the following command.

 C:\> Install-WindowsFeature Server-Media-Foundation, NET-Framework-45-Features, RPC-over-HTTP-proxy, RSAT-Clustering, RSAT-Clustering-CmdInterface, RSAT-Clustering-Mgmt, RSAT-Clustering-PowerShell, Web-Mgmt-Console, WAS-Process-Model, Web-Asp-Net45, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-Dir-Browsing, Web-Dyn-Compression, Web-Http-Errors, Web-Http-Logging, Web-Http-Redirect, Web-Http-Tracing, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Lgcy-Mgmt-Console, Web-Metabase, Web-Mgmt-Console, Web-Mgmt-Service, Web-Net-Ext45, Web-Request-Monitor, Web-Server, Web-Stat-Compression, Web-Static-Content, Web-Windows-Auth, Web-WMI, Windows-Identity-Foundation, RSAT-ADDS

Next, verify you have the necessary version of the .NET Framework installed. This will vary based on the version of Exchange and operating system you need to recover.

Next download and install the Unified Communications Managed API (UCMA) 4.0.

Note: Your prerequisites will vary depending on the version of Exchange and operating system you are installing. For example, we are installing Exchange 2016 on Windows Server 2016 which requires we install this patch.

Now is a good time to reboot the server. Once rebooted, open a Command Prompt, change to the location of your Exchange setup files, and issue the following command. The recovery process cannot be completed via the GUI setup.

If you installed Exchange at a different install path you will need to specify this with the /TargetDir switch.

D:\> Setup /m:RecoverServer /IAcceptExchangeServerLicenseTerms

Welcome to Microsoft Exchange Server 2016 Cumulative Update 5 Unattended Setup

Copying Files...
File copy complete. Setup will now collect additional information needed 
for installation.

Mailbox role: Transport service
Mailbox role: Client Access service
Mailbox role: Unified Messaging service
Mailbox role: Mailbox service
Management tools
Mailbox role: Client Access Front End service
Mailbox role: Front End Transport service

Performing Microsoft Exchange Server Prerequisite Check

Configuring Prerequisites                                    COMPLETED
Prerequisite Analysis                                        COMPLETED

Configuring Microsoft Exchange Server

Preparing Setup                                              COMPLETED
Stopping Services                                            COMPLETED
Copying Exchange Files                                       COMPLETED
Language Files                                               COMPLETED
Restoring Services                                           COMPLETED
Language Configuration                                       COMPLETED
Mailbox role: Transport service                              COMPLETED
Mailbox role: Client Access service                          COMPLETED
Mailbox role: Unified Messaging service                      COMPLETED
Mailbox role: Mailbox service                                COMPLETED
Exchange Management Tools                                    COMPLETED
Mailbox role: Client Access Front End service                COMPLETED
Mailbox role: Front End Transport service                    COMPLETED
Finalizing Setup                                             COMPLETED

The Exchange Server setup operation completed successfully.
Setup has made changes to operating system settings that require a reboot 
to take effect. Please reboot this server prior to placing it into 
production.
When setup completes you will need to reboot. Once rebooted you will be able to log back into the Exchange Admin Center and confirm all settings were restored from Active Directory.

Wrapping it all up

Once you have successfully restored your Exchange server you will need to reapply your Exchange certificate. If you have backed this up you can easily import it. If not, you will need to get your certificate re-keyed with your certificate provider, which will mean generating a new certificate. This can take some time. As mentioned previously it is a good idea to export your certificate and store it in a safe place.

If you made any customizations to the Exchange server these will need to be reconfigured. Customizations are changes you made outside of the Exchange Admin Center or Exchange Management Shell that Exchange has no knowledge of. For example, custom registry changes, IIS modifications, or manual edits to web.config files will all need to be redeployed. Any setting you configured inside the Exchange tools, such as custom connectors, virtual directories, email address policies, or transport rules, will still be present.

Lastly, depending on the nature of your failure you may also need to recover a database from backup. For example, if this was just a matter of an operating system that would not boot, and your databases and logs were on a separate drive, you probably will not have to restore but instead will be dealing with a dirty shutdown. If the database and logs were lost you will need to recover them from backup. As mentioned earlier, if your budget can cover two Exchange servers then I would recommend creating a DAG. This will greatly improve your business continuity after a single server failure.


Have you ever had to recover an Exchange Server? How did it go? Join the conversation on Twitter @SuperTekBoy.

The post Recover Exchange Server after total loss appeared first on SuperTekBoy.

“SED” Options and its usage

Sed – The Stream Editor

A stream editor performs basic text transformations on an input file. The sed command is most often used to replace text in a file, but it is a powerful general text-processing tool.

Some uses of sed command are explained below with examples:

Consider the text file “test” as an example

1) Replace all occurrences of a pattern in a line

sed 's/system/software/g' test

Here sed is used to replace all occurrences of “system” with “software” within the file test.

Let us have a look at the switches used for this task:

 s  Substitution operation
 /  Delimiter
 system  Search pattern
 software  Replacement string
 g  Global Replacement flag

By default, sed replaces only the first occurrence of the pattern on each line. The g flag tells sed to replace all occurrences of the pattern on each line.
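For example, piping in a line that contains the pattern twice shows the difference:

```shell
# Without g: only the first "system" on the line changes
printf 'a system within a system\n' | sed 's/system/software/'
# prints: a software within a system

# With g: every "system" on the line changes
printf 'a system within a system\n' | sed 's/system/software/g'
# prints: a software within a software
```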

The table below shows some more options:

 Command  Purpose
 sed 's/system/software/' test  Replace the first occurrence of a string in each line
 sed 's/system/software/n' test  Replace the nth occurrence of a pattern in a line
 sed 's/system/software/ng' test  Replace from the nth occurrence through all remaining occurrences in a line
 sed 'n s/system/software/' test  Replace the string on a specific line number (the nth line)
 sed 'n,$ s/system/software/' test  Replace the string on a range of lines (from the nth line to the last)
 sed '/as/ s/system/software/' test  Replace on lines which match a pattern (here the pattern is "as")
 sed 's/system/software/w test2' test  Write replaced lines to another file
 sed '/your/ c Replaced line' test  Change lines which contain a pattern (here, each line containing "your" is replaced with "Replaced line")
 sed '/your/ a Replaced line' test  Append a new line after a match
 sed '/your/ i Replaced line' test  Insert a new line before a match
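A couple of the table entries in action, with n replaced by a concrete number:

```shell
# Replace only the 2nd occurrence on the line
printf 'system system system\n' | sed 's/system/software/2'
# prints: system software system

# Replace from the 2nd occurrence through the end of the line
printf 'system system system\n' | sed 's/system/software/2g'
# prints: system software software

# Apply the substitution only on line 2 of the input
printf 'system\nsystem\nsystem\n' | sed '2 s/system/software/'
# prints lines: system, software, system
```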


2) Use a different delimiter

sed 's|system|software|' test

Instead of ‘|’ we can also use ‘_’ or ‘@’ as the delimiter. This is mainly useful when the search pattern or the replacement string is a URL, which would otherwise require escaping every slash.
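A quick sketch of why this matters with URLs (example.com paths are illustrative):

```shell
# With | as the delimiter, the slashes inside the URLs need no escaping
echo 'see http://example.com/docs/old for details' | sed 's|http://example.com/docs/old|http://example.com/docs/new|'
# prints: see http://example.com/docs/new for details
```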


3) Use & as the replacement string

sed 's/system/& and &' test

Here ‘&’ represents whatever the search pattern matched, so sed replaces ‘system’ with ‘system and system’.
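The same idea is handy for wrapping a match in markers without retyping it:

```shell
# & is replaced by the matched text ("error"), so the match is reused
echo 'error on line 3' | sed 's/error/** & **/'
# prints: ** error ** on line 3
```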

4) Duplicate the replaced line

sed 's/system/software/p' test

The p (print) flag prints each replaced line a second time in the terminal. Lines that are not replaced are printed only once.

The following command generates a duplicate of each line in the file:

sed 'p' test


5) Print only the replaced lines

sed -n 's/system/software/p' test

Here the -n option suppresses sed’s automatic printing of every line, so only the lines printed by the p flag appear.

If we use the -n option alone, without p, sed does not print anything.
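Combined, -n and p make a simple filter that emits only the lines where a substitution happened:

```shell
# Only the line containing "system" is substituted and printed
printf 'a system here\nnothing here\n' | sed -n 's/system/software/p'
# prints: a software here
```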


6) Delete lines

sed '2 d' test

In this case, it deletes the second line from the file ‘test’.

sed '2,$ d' test

Here it deletes a range of lines, i.e., from the second line through the last line.
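A bounded range works the same way; for example, deleting lines 2 through 3 of a four-line stream:

```shell
printf 'one\ntwo\nthree\nfour\n' | sed '2,3 d'
# prints lines: one, four
```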

7) Run multiple sed commands

sed -e 's/An/The/' -e 's/is/was/' test

In order to run multiple sed commands we can either pipe the output of one sed command into another, or use the ‘-e’ option.
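Both forms produce the same result; the expressions are applied left to right on each line:

```shell
# Two expressions in one sed invocation
echo 'An apple is red' | sed -e 's/An/The/' -e 's/is/was/'
# prints: The apple was red

# Equivalent pipeline of two sed invocations
echo 'An apple is red' | sed 's/An/The/' | sed 's/is/was/'
# prints: The apple was red
```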

8) Sed as grep command

Case 1:
cat test
An operating system is a vital component of the system.
The operating system controls your computer's tasks and system resources to optimize performance.
An operating system is a collection of softwares.
An operating system is abbreviated as OS.

Case 2:
sed -n '/vital/ p' test
An operating system is a vital component of the system.

Case 3:
sed -n '/vital/ !p' test
The operating system controls your computer's tasks and system resources to optimize performance.
An operating system is a collection of softwares.
An operating system is abbreviated as OS.

Here in the above example,

Case 1 shows the contents of the file “test”

Case 2 runs sed to print the lines which match the pattern “vital”. This is the same as the grep command.

Case 3 runs sed to print the lines which do not match the pattern “vital”. This is the same as grep -v.


9) Sed as tr command

sed 'y/ATp/atP/' test

The sed command with the y flag acts like the tr command. Here sed uses the ‘y’ flag to replace ‘A’ with ‘a’, ‘T’ with ‘t’, and ‘p’ with ‘P’, character for character, which is the same action tr performs.
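A side-by-side comparison makes the character-for-character mapping visible:

```shell
# y maps each character in the first set to its counterpart: l -> 0, o -> 1
echo 'Hello World' | sed 'y/lo/01/'
# prints: He001 W1r0d

# tr performs the identical translation
echo 'Hello World' | tr 'lo' '01'
# prints: He001 W1r0d
```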


10) Edit the source file

sed -i 's/system/software/' test

By default, sed does not edit the source file, which is a safety feature; with the ‘-i’ option the source file is edited in place.

11) Take backup of source file prior to editing

sed -i.back 's/system/software/' test

In the above command, sed backs up ‘test’ as ‘test.back’ before editing the source file.
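A self-contained run of the same idea, using a scratch directory so no real file is touched (the file name is illustrative):

```shell
tmpdir=$(mktemp -d)
printf 'old text\n' > "$tmpdir/test"

# -i.back edits the file in place and keeps the original as test.back
sed -i.back 's/old/new/' "$tmpdir/test"

cat "$tmpdir/test"        # prints: new text
cat "$tmpdir/test.back"   # prints: old text

rm -r "$tmpdir"
```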

The sed command is case-sensitive by default. Use the I flag (a GNU sed extension) for a case-insensitive search.
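For example, with GNU sed the I flag can be combined with g to replace every match regardless of case:

```shell
echo 'System uses SYSTEM calls' | sed 's/system/software/gI'
# prints: software uses software calls
```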


There are many more uses of the sed command; I chose these examples to illustrate the basic concepts. This concludes my tutorial on sed, and I hope you enjoyed it.


The post “SED” Options and its usage appeared first on SupportSages – Your IT Infrastructure partner !!.

Enable SSH login to vCenter 6.5 in WebUI

Enabling vCenter SSH access from Web User Interface

1. Log in to WebUI

2. On the vSphere Web Client Home page, click System Configuration.

3. Under System Configuration, click Nodes.

4. Under Nodes, select a node and click the Manage tab.

5. Under Common, select Access and click Edit.

(I already had SSH enabled)

6. Check the box for SSH and click OK to save the settings.

Quickly copy an Anonymous Receive Connector “Relay” between servers

In this article, we are going to take a look at just how easy it can be to copy an anonymous receive connector from one server to another using PowerShell.

This is especially important in scenarios where a receive connector may have dozens, if not hundreds, of IPs. Adding each IP using the graphical user interface would be insanely time-consuming. It would also be prone to human error. This challenge only multiplies if you have many servers to repeat this process on. With PowerShell, we can cut this task down to mere seconds.

The first part of this article is a primer on how to configure an anonymous receive connector. If you are just interested in how to copy all IPs from one connector to another jump to the section titled Copying an Anonymous “Relay” Connector between servers.

Note: While this article focuses on moving an anonymous receive connector it can be adapted for any custom receive connector you have created.

A quick primer on anonymous receive connectors

Before we explore how to move a receive connector let’s take a refresher on how we create a receive connector with PowerShell. For this task, we use the New-ReceiveConnector cmdlet. For example, to create an anonymous receive connector our command might look like this.

 C:\> New-ReceiveConnector -Name "Anonymous Relay" -Server EX16-01 -Usage Custom -TransportRole FrontEndTransport -PermissionGroups AnonymousUsers -Bindings 0.0.0.0:25 -RemoteIPRanges 192.168.1.10,192.168.1.11,192.168.2.0/24

In this command, we create a receive connector named “Anonymous Relay”. We use the -Server parameter to identify which server we want the connector to be created on. We identify that the -Usage of the connector will be Custom. Custom is one of five connector types and is used for anonymous relays.

The -TransportRole identifies whether this connector should be a FrontEndTransport or a HubTransport. Front-end transport is a connector that accepts messages from client connections. When I say client connections this is anything external to the Exchange servers coming in. Hub transport is designed solely to accept messages from another Exchange server. Whenever configuring a relay you will always go with FrontEndTransport.

The -PermissionGroups identifies what type of connections this connector will accept. For the purposes of our relay, we went with AnonymousUsers. Assigning this permission grants the NT Authority\Anonymous Logon account several permissions on the connector, including ms-Exch-SMTP-Accept-Any-Sender. For a full list of connector permissions and permission groups check out this article from TechNet.

The -Bindings parameter configures the IP and port number Exchange server should listen on. Specifying a string of zeroes instructs Exchange to listen on all its assigned IPs. We left the port number at 25 as this is the default for SMTP. However, it is possible to configure your relay on a custom port.

The -RemoteIPRanges lists all the IPs that will be permitted to relay through this connector. You can specify individual IPs or IP ranges. This parameter will accept a comma-separated list of IPs and IP ranges. In our example, we went with two individual IPs and one IP range.

If you want your anonymous receive connector to be able to relay to email addresses outside your organization you will need to add one additional permission. This is MS-Exch-SMTP-Accept-Any-Recipient.

 C:\> Get-ReceiveConnector "EX16-01\Anonymous Relay" | Add-ADPermission -User 'NT AUTHORITY\Anonymous Logon' -ExtendedRights MS-Exch-SMTP-Accept-Any-Recipient

Identity                 User                            Deny    Inherited
--------                 ----                            ----    ---------
EX16-01\Anonymous Relay  NT AUTHORITY\ANONYMOUS LOGON    False   False

Copying an Anonymous “Relay” Connector between servers

While PowerShell is certainly much easier than adding IPs via the graphical user interface the previous example is still cumbersome. In this section, we will take New-ReceiveConnector to the next level. But first, let’s take a look at the Get-ReceiveConnector cmdlet.

Get-ReceiveConnector allows us to return all attributes from a specific receive connector. To examine the connector we just created our command would look like this. Remember that if the name of your connector has spaces you will need to enclose it in quotation marks.

 C:\> Get-ReceiveConnector -Identity "EX16-01\Anonymous Relay" | Format-List

If we wanted to return just the information on the IP addresses we could issue a command such as this.

 C:\> (Get-ReceiveConnector -Identity "EX16-01\Anonymous Relay").RemoteIPRanges

LowerBound :
UpperBound :
Netmask :
CIDRLength :
RangeFormat : LoHi
Size : ::a
Expression :

LowerBound :
UpperBound :
Netmask :
CIDRLength :
RangeFormat : SingleAddress
Size : ::1
Expression :

LowerBound :
UpperBound :
Netmask :
CIDRLength :
RangeFormat : SingleAddress
Size : ::1
Expression :

The output of the previous command can be used as the source data for a new connector on another server.

To do this we will embed the Get-ReceiveConnector command with New-ReceiveConnector.

For example, to create a new receive connector on server EX16-02, using all remote IP data from EX16-01 our command would look as follows.

 C:\> New-ReceiveConnector -Name "Anonymous Relay" -Server EX16-02 -Usage Custom -TransportRole FrontEndTransport -PermissionGroups AnonymousUsers -Bindings 0.0.0.0:25 -RemoteIPRanges (Get-ReceiveConnector "EX16-01\Anonymous Relay").RemoteIPRanges

Identity                          Bindings               Enabled
--------                          --------               -------
EX16-02\Anonymous Relay           {}           True

In this command, you can see we replace the comma-separated list of IPs and IP ranges with our Get- cmdlet. We can confirm all IPs came across by rerunning the Get-ReceiveConnector cmdlet against the new server. You can repeat this command for each server you need to copy this connector.

Don’t forget that if you need your anonymous relay to be able to send an email outside your organization you will need to add the MS-Exch-SMTP-Accept-Any-Recipient permission.

 C:\> Get-ReceiveConnector "EX16-02\Anonymous Relay" | Add-ADPermission -User 'NT AUTHORITY\Anonymous Logon' -ExtendedRights MS-Exch-SMTP-Accept-Any-Recipient


How have you copied anonymous relays to other servers? Join the conversation on Twitter @SuperTekBoy.

The post Quickly copy an Anonymous Receive Connector “Relay” between servers appeared first on SuperTekBoy.

How to install YARA and write basic YARA rules to identify malware

YARA is described as “The pattern matching Swiss knife for malware researchers (and everyone else)”. Think of it as like grep, but instead of matching based on one pattern, YARA matches based on a set of rules, with each rule capable of containing multiple patterns and complex condition logic for further refining matches. It’s a very useful tool. Let’s go over some practical examples of how to use it.

Installing YARA

Official Windows binaries can be found here. Unfortunately, as of the time of this writing, practically every Linux distribution’s repository contains an out-of-date version of YARA that has one or more security vulnerabilities. Follow the instructions below to compile and install the latest release with all features enabled on a Debian or Ubuntu system. The steps should be similar in other Linux distributions.

Download the source code .tar.gz for the latest stable release.

Install the dependencies

sudo apt-get install -y automake libtool make gcc flex bison libssl-dev libjansson-dev libmagic-dev

Build the project

tar -zxf v3.9.0.tar.gz
rm v3.9.0.tar.gz
cd yara-3.9.0
./bootstrap.sh
./configure --with-crypto --enable-profiling --enable-macho --enable-dex --enable-cuckoo --enable-magic --enable-dotnet
make

Install as a Debian package

sudo apt-get install -y checkinstall
sudo apt-get remove -y libyara3 yara python-yara # Remove any existing install from distro repos
sudo checkinstall -y --deldoc=yes

# Cleanup
cd ..
rm -rf yara-3.9.0/

Install the Python package

sudo apt-get install -y python-pip python3-pip
sudo -H pip install -U pip
sudo -H pip3 install -U pip
sudo -H pip install -U git+
sudo -H pip3 install -U git+

Introduction to YARA rules

Let’s start by looking at the different components that can be part of a rule.

At a minimum, a rule must have a name, and a condition. The simplest possible rule is:

rule dummy { condition: false }

That rule does nothing. Inversely, this rule matches on anything:

rule dummy { condition: true }

Here’s a slightly more useful example that will match on any file over 500 KB:

rule over_500kb {condition: filesize > 500KB}

Most often though, you’ll write rules with a meta section, a strings section, and a condition section:

rule silent_banker : banker {
  meta:
    description = "This is just an example"
    threat_level = 3
    in_the_wild = true
  strings:
    $a = {6A 40 68 00 30 00 00 6A 14 8D 91}
    $b = {8D 4D B0 2B C1 83 C0 27 99 6A 4E 59 F7 F9}
  condition:
    all of them
}

The : after the rule name indicates the start of a list of rule tags, which are separated by spaces. These tags are not used frequently, but you should be aware that they exist. C-style comments can be used anywhere.

The meta section consists of a set of arbitrary key-value pairs that can be used to describe the rule, and/or the type of content that it matches. Meta values can be strings, integers, decimals, or booleans. The meta values can be viewed by the application that is using YARA when a match occurs.

The strings section defines variables as content to be matched. These can be:

  • Hexadecimal byte patterns (in {}, with support for wildcards and jumps). Often used to identify unique code, such as an unpacking mechanism
  • Text strings
  • Regular expressions (between //)

The condition section is where the true power and flexibility of YARA can be found. Here are a few common example condition statements:

Condition Meaning
any of them The rule will match on anything containing any of the strings defined in the rule
all of them The rule will only match if all of the defined strings are in the content
3 of them The rule will match anything containing at least three of the defined strings
$a and 3 of ($s*)  Match content that contains string $a and at least three strings whose variable begins with $s
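As a sketch of how these forms combine, a hypothetical rule (the name and strings below are invented purely for illustration) might require one mandatory marker plus a quorum of weaker indicators, as in the last row of the table:

```yara
rule example_quorum : demo {
  meta:
    description = "Hypothetical: one required string plus any three weak indicators"
  strings:
    $a = "required-marker"
    $s1 = "indicator-one"
    $s2 = "indicator-two"
    $s3 = "indicator-three"
    $s4 = "indicator-four"
  condition:
    // Match only if the mandatory string is present AND at least
    // three of the four $s* indicator strings are also present
    $a and 3 of ($s*)
}
```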

Practical examples

It would be very useful to check the attachments of suspected phishing emails reported to you by your users. PDF attachments with a link to a phishing site have become a common tactic, because many email gateways still do not check URLs in attached files. This rule checks for links in PDFs:

rule pdf_1_7_contains_link {
  meta:
    author = "Sean Whalen"
    last_updated = "2017-06-08"
    tlp = "white"
    category = "informational"
    confidence = "high"
    description = "A PDF v1.7 that contains a link or external content"
  strings:
    $pdf_magic = {25 50 44 46}
    $s_anchor_tag = "<a " ascii wide nocase
    $s_uri = /URI ?\(.+\)/ ascii wide nocase
  condition:
    $pdf_magic at 0 and any of ($s*)
}

The first part of the condition checks if the first few bytes of the file match the magic numbers (list here) for a PDF, allowing the rule to quickly disregard anything that isn’t a PDF. Filters like these can greatly increase speed when scanning a large amount of data. The second part of the condition matches any of the strings whose variable name begins with $s.

$s_uri is a regular expression that matches any URI/URL in parentheses, which will match the PDF standard for URI actions and URLs in forms.

$s_anchor_tag matches any HTML anchor tag, which some PDF converters may leave in a document converted from HTML.

The ascii, wide, and nocase keywords tell YARA to search for both ASCII and wide strings, and to be case-insensitive. By default, it will only search for ASCII strings, including substrings, and it will be case-sensitive. There are many more keywords for matching other kinds of strings.

But lots of legitimate PDFs (brochures, invoices, etc.) contain links, so a better indicator of badness may be a PDF that contains only a single link. Unfortunately, most PDF generators like Microsoft Office will save the PDF as multiple “versions” in the same file, so we should give the rule a little flexibility and allow for up to two URIs in a PDF.

rule pdf_1_7_contains_few_links {
  meta:
    author = "Sean Whalen"
    last_updated = "2017-06-08"
    tlp = "white"
    category = "malicious"
    confidence = "medium"
    killchain_phase = "exploit"
    description = "A PDF v1.7 that contains one or two links - a common phishing tactic"
  strings:
    $pdf_magic = {25 50 44 46}
    $s_anchor_tag = "<a " ascii wide nocase
    $s_uri = /URI ?\(.+\)/ ascii wide nocase
  condition:
    $pdf_magic at 0 and ((#s_anchor_tag > 0 and #s_anchor_tag < 3) or (#s_uri > 0 and #s_uri < 3))
}

It’s good to keep both of these rules; that way you have an informational one that should always match any PDF with any number of links, and another that provides higher confidence of badness.

Also, these rules only match PDFs with links that were generated according to the latest PDF standard (1.7). Any suggestions for older versions are appreciated.

Viewing strings in a file

To view a list of strings in a file, simply run the strings command on a Linux/MacOS/BSD or other UNIX-like system. You can pipe the output to less to view it one page at a time. For example:

strings rat.exe | less

Strings vs Bytes

The strings section in YARA rules can be made up of any combination of strings, bytes, and regular expressions. Most YARA rules are made up entirely of strings. These kinds of rules are relatively simple to write, but it is also very easy for malware authors to change or obscure strings in order to avoid detection in future builds. If a sample has few or no usable strings, that sample has likely been packed, meaning that any strings are built or decoded at runtime. YARA can scan processes, and you would probably have better luck scanning active memory for strings, but that won’t help if your goal is to identify samples at rest.

Bytes in YARA are represented as hexadecimal strings, and can include wildcards and/or jumps. Bytes can be used to identify specific variations of code, such as a unique method of unpacking. Writing signatures based on bytes requires some knowledge of assembly, the APIs provided by the OS, and specialized software.
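For example, a byte pattern with a wildcard and a jump might look like this (the bytes here are arbitrary, not taken from any real sample):

```yara
rule example_byte_pattern {
  strings:
    // ?? matches any single byte; [4-6] skips over 4 to 6 arbitrary bytes
    $push_call = { 6A 40 68 ?? 30 00 00 [4-6] E8 }
  condition:
    $push_call
}
```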

IDA Pro is the industry-standard platform for software reverse engineering. It is also very expensive. Currently, the Starter Edition of IDA (which can only process 32-bit files) for one named user, along with the x86 Hex-Rays decompiler, costs about $2,700. If you want to be able to decompile AMD64 files as well, the cost is about $4,400 for IDA Pro plus the x86 and x64 decompilers for one named user. Fortunately, a couple of open source alternatives exist.

In early 2019, the NSA released an open source Software Reverse Engineering (SRE) suite called Ghidra. It includes a decompiler, and has a very similar feel to IDA.

The open source Radare framework provides many of the same features as IDA (and a few more) for free, under a GNU GPL license. Radare does not currently have a decompiler, though.

If you’re new to assembly, check out this Crash Course in x86 Assembly for Reverse Engineers.

Testing YARA rules

To test your rules against some sample files, run a command like this:

yara -rs dev/yararules/files.yar samples/pdf

Where dev/yararules/files.yar is the path to the file containing your rules and samples/pdf is the path to a directory containing sample files to test against. This will output a list of matching files, and the strings that make up each match.

To find samples that do not match your rules, run a command like this:

yara -rn dev/yararules/files.yar samples/pdf

Note that this will give a result for each rule that each file does not match. Use grep to narrow this down. For example, if you only wanted to see the results for rules with a name that contains pdf_, you could run:

yara -rs dev/yararules/files.yar samples/pdf | grep pdf_

It’s good practice to keep your rule names consistent. That makes testing much easier.

Generating rules

yarGen is an open source utility by Florian Roth that generates YARA rules for a given set of samples. It’s not magic, and generally won’t do a better job than writing a rule manually. But, if you are mostly making rules out of strings rather than bytes, it can give you a great starting point to tweak and tune into a better rule. I use yarGen when I’ve come across a set of samples that I know are related, but have seemingly very different strings.

yarGen will generate many different rules for a given set of files, and the condition for each rule will likely be very narrow (i.e. all of them). I usually combine these rules based on what I suspect are the most reliable strings, and group the strings as needed in the condition statement.
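A combined rule might look something like this sketch, where the strings are placeholders standing in for the most reliable strings picked out of yarGen’s output:

```yara
rule combined_family_example {
  strings:
    $mz = { 4D 5A }              // PE magic at the start of the file
    $u1 = "placeholder_string_1"
    $u2 = "placeholder_string_2"
    $u3 = "placeholder_string_3"
  condition:
    // require the file header plus most, but not all, of the strings
    $mz at 0 and 2 of ($u*)
}
```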

You can download yarGen from its GitHub repository.

Learning more

Take some time to review the comprehensive official documentation for YARA. There, you will find complete details on writing rules, and using the command-line program, the Python API, and the C API.

Once you feel comfortable working with YARA, consider joining the YARA Exchange. The Exchange is a very active and helpful group of information security professionals that share rules and tips for writing them. You can even get access to VirusTotal Intelligence for free through the exchange, if your company doesn’t already pay for it.  In return, they only ask that you participate, share with others, and don’t hog the VirusTotal Intelligence monthly quotas that are shared across the whole group.

With VirusTotal Intelligence access, you can set up alerts to notify you when samples are uploaded to VirusTotal that match your YARA rules, without using any quotas. It’s a fantastic way to stress test your rules (your inbox/alert queue will fill up very quickly if you match false-positives), and for identifying new samples, and new waves of attacks. Quotas are used for searching, downloading, and using Retro Hunt on VirusTotal Intelligence. Retro Hunt takes a set of YARA rules, runs a scan on about the last three months of all files that were uploaded to VirusTotal (literally terabytes of data), and returns a list of file hashes that matched your rules, and which rules they matched.

Check out this great talk on getting the most out of VirusTotal Intelligence and YARA by Wyatt Roersma at GrrCon 2016:


Other resources

The post How to install YARA and write basic YARA rules to identify malware appeared first on

Custom user mappings in LXD containers



As you may know, LXD uses unprivileged containers by default.
The difference between an unprivileged container and a privileged one is whether the root user in the container is the “real” root user (uid 0 at the kernel level).

The way unprivileged containers are created is by taking a set of normal UIDs and GIDs from the host, usually at least 65536 of each (to be POSIX compliant) and mapping those into the container.

The most common example and what most LXD users will end up with by default is a map of 65536 UIDs and GIDs, with a host base id of 100000. This means that root in the container (uid 0) will be mapped to the host uid 100000 and uid 65535 in the container will be mapped to uid 165535 on the host. UID/GID 65536 and higher in the container aren’t mapped and will return an error if you attempt to use them.
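The arithmetic is easy to sanity-check in a shell; the base and size below are the common defaults described above:

```shell
# default map: container uids 0-65535 -> host uids 100000-165535
base=100000   # host uid where the container's root (uid 0) lands
size=65536    # number of uids mapped into the container
cuid=65535    # highest mapped uid inside the container
echo "$((base + cuid))"   # prints 165535, the matching host uid
```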

From a security point of view, that means that anything which is not owned by the users and groups mapped into the container will be inaccessible. Any such resource will show up as being owned by uid/gid “-1” (rendered as 65534 or nobody/nogroup in userspace). It also means that should there be a way to escape the container, even root in the container would find itself with just as much privileges on the host as a nobody user.

LXD does offer a number of options related to unprivileged configuration:

  • Increasing the size of the default uid/gid map
  • Setting up per-container maps
  • Punching holes into the map to expose host users and groups

Increasing the size of the default map

As mentioned above, in most cases, LXD will have a default map that’s made of 65536 uids/gids.

In most cases you won’t have to change that. There are however a few cases where you may have to:

  • You need access to uid/gid higher than 65535.
    This is most common when using network authentication inside of your containers.
  • You want to use per-container maps.
    In which case you’ll need 65536 available uid/gid per container.
  • You want to punch some holes in your container’s map and need access to host uids/gids.

The default map is usually controlled by the “shadow” set of utilities and files. On systems where that’s the case, the “/etc/subuid” and “/etc/subgid” files are used to configure those maps.

On systems that do not have a recent enough version of the “shadow” package, LXD will assume that it doesn’t have to share uid/gid ranges with anything else and will therefore assume control of a billion uids and gids, starting at the host uid/gid 100000.

But the common case is a system with a recent version of shadow.
An example of what the configuration may look like is:

stgraber@castiana:~$ cat /etc/subuid

stgraber@castiana:~$ cat /etc/subgid

The maps for “lxd” and “root” should always be kept in sync. LXD itself is restricted by the “root” allocation. The “lxd” entry is used to track what needs to be removed if LXD is uninstalled.
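For reference, on a typical system both files contain matching entries along these lines (the exact ranges vary between systems, so treat the values here as an example only):

```
lxd:100000:65536
root:100000:65536
```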

Now if you want to increase the size of the map available to LXD, simply edit both of the files and bump the last value from 65536 to whatever size you need. I tend to bump it to a billion just so I don’t ever have to think about it again:

stgraber@castiana:~$ cat /etc/subuid

stgraber@castiana:~$ cat /etc/subgid

After altering those files, you need to restart LXD to have it detect the new map:

root@vorash:~# systemctl restart lxd
root@vorash:~# cat /var/log/lxd/lxd.log
lvl=info msg="LXD 2.14 is starting in normal mode" path=/var/lib/lxd t=2017-06-14T21:21:13+0000
lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2017-06-14T21:21:13+0000
lvl=info msg="Kernel uid/gid map:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - u 0 0 4294967295" t=2017-06-14T21:21:13+0000
lvl=info msg=" - g 0 0 4294967295" t=2017-06-14T21:21:13+0000
lvl=info msg="Configured LXD uid/gid map:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - u 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
lvl=info msg=" - g 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
lvl=info msg="Connecting to a remote simplestreams server" t=2017-06-14T21:21:13+0000
lvl=info msg="Expiring log files" t=2017-06-14T21:21:13+0000
lvl=info msg="Done expiring log files" t=2017-06-14T21:21:13+0000
lvl=info msg="Starting /dev/lxd handler" t=2017-06-14T21:21:13+0000
lvl=info msg="LXD is socket activated" t=2017-06-14T21:21:13+0000
lvl=info msg="REST API daemon:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket t=2017-06-14T21:21:13+0000
lvl=info msg=" - binding TCP socket" socket=[::]:8443 t=2017-06-14T21:21:13+0000
lvl=info msg="Pruning expired images" t=2017-06-14T21:21:13+0000
lvl=info msg="Updating images" t=2017-06-14T21:21:13+0000
lvl=info msg="Done pruning expired images" t=2017-06-14T21:21:13+0000
lvl=info msg="Done updating images" t=2017-06-14T21:21:13+0000

As you can see, the configured map is logged at LXD startup and can be used to confirm that the reconfiguration worked as expected.

You’ll then need to restart your containers to have them start using your newly expanded map.

Per container maps

Provided that you have a sufficient amount of uid/gid allocated to LXD, you can configure your containers to use their own, non-overlapping allocation of uids and gids.

This can be useful for two reasons:

  1. You are running software which alters kernel resource ulimits.
    Those user-specific limits are tied to a kernel uid and will cross container boundaries leading to hard to debug issues where one container can perform an action but all others are then unable to do the same.
  2. You want to know that should there be a way for someone in one of your containers to somehow get access to the host that they still won’t be able to access or interact with any of the other containers.

The main downsides to using this feature are:

  • It’s somewhat wasteful, using 65536 uids and gids per container.
    That being said, you’d still be able to run over 60000 isolated containers before running out of system uids and gids.
  • It’s effectively impossible to share storage between two isolated containers as everything written by one will be seen as -1 by the other. There is ongoing work around virtual filesystems in the kernel that will eventually let us get rid of that limitation.

To have a container use its own distinct map, simply run:

stgraber@castiana:~$ lxc config set test security.idmap.isolated true
stgraber@castiana:~$ lxc restart test
stgraber@castiana:~$ lxc config get test volatile.last_state.idmap

The restart step is needed to have LXD remap the entire filesystem of the container to its new map.
Note that this step will take a varying amount of time depending on the number of files in the container and the speed of your storage.

As can be seen above, after restart, the container is shown to have its own map of 65536 uids/gids.

If you want LXD to allocate more than the default 65536 uids/gids to an isolated container, you can bump the size of the allocation with:

stgraber@castiana:~$ lxc config set test security.idmap.size 200000
stgraber@castiana:~$ lxc restart test
stgraber@castiana:~$ lxc config get test volatile.last_state.idmap

If you’re trying to allocate more uids/gids than are left in LXD’s allocation, LXD will let you know:

stgraber@castiana:~$ lxc config set test security.idmap.size 2000000000
error: Not enough uid/gid available for the container.

Direct user/group mapping

The fact that all uids/gids in an unprivileged container are mapped to a normally unused range on the host means that sharing of data between host and container is effectively impossible.

Now, what if you want to share your user’s home directory with a container?

The obvious answer to that is to define a new “disk” entry in LXD which passes your home directory to the container:

stgraber@castiana:~$ lxc config device add test home disk source=/home/stgraber path=/home/ubuntu
Device home added to test

So that was pretty easy, but did it work?

stgraber@castiana:~$ lxc exec test -- bash
root@test:~# ls -lh /home/
total 529K
drwx--x--x 45 nobody nogroup 84 Jun 14 20:06 ubuntu

No. The mount is clearly there, but it’s completely inaccessible to the container.
To fix that, we need to take a few extra steps:

  • Allow LXD’s use of our user uid and gid
  • Restart LXD to have it load the new map
  • Set a custom map for our container
  • Restart the container to have the new map apply
stgraber@castiana:~$ printf "lxd:$(id -u):1nroot:$(id -u):1n" | sudo tee -a /etc/subuid

stgraber@castiana:~$ printf "lxd:$(id -g):1nroot:$(id -g):1n" | sudo tee -a /etc/subgid

stgraber@castiana:~$ sudo systemctl restart lxd

stgraber@castiana:~$ printf "uid $(id -u) 1000ngid $(id -g) 1000" | lxc config set test raw.idmap -

stgraber@castiana:~$ lxc restart test

At which point, things should be working in the container:

stgraber@castiana:~$ lxc exec test -- su ubuntu -l
ubuntu@test:~$ ls -lh
total 119K
drwxr-xr-x 5  ubuntu ubuntu 8 Feb 18 2016 data
drwxr-x--- 4  ubuntu ubuntu 6 Jun 13 17:05 Desktop
drwxr-xr-x 3  ubuntu ubuntu 28 Jun 13 20:09 Downloads
drwx------ 84 ubuntu ubuntu 84 Sep 14 2016 Maildir
drwxr-xr-x 4  ubuntu ubuntu 4 May 20 15:38 snap


User namespaces, the kernel feature that makes those uid/gid mappings possible, is a very powerful tool which finally made containers on Linux safe by design. It is, however, not the easiest thing to wrap your head around, and all of that uid/gid map math can quickly become a major issue.

In LXD we’ve tried to expose just enough of those underlying features to be useful to our users while doing the actual mapping math internally. This makes things like the direct user/group mapping above significantly easier than it otherwise would be.

Going forward, we’re very interested in some of the work around uid/gid remapping at the filesystem level. This would let us decouple the on-disk user/group map from the one used for processes, making it possible to share data between differently mapped containers and to alter the various maps without needing to also remap the entire filesystem.

Extra information

The main LXD website is at:
Development happens on Github at:
Discussion forum:
Mailing-list support happens on:
IRC support happens in: #lxcontainers on
Try LXD online:

Some useful switches for “du – disk usage”

As we know, the du command is used to check the disk usage of files and folders on a Linux system. There are a lot of switches available for du, and here I am trying to explain the most commonly used ones.

As you know, one can easily check the switches available for du by looking at the man page for du or by executing the command du --help.

Here I am trying to provide some basic guidelines about commonly used switches so that you can use them whenever needed.

Most commonly used form

# du -sch

h -> human-readable sizes (e.g. 209M instead of a block count).
c -> display a grand total at the end of the output.
s -> display only a summary: the total size of a file, or of all files in a directory.


root@server [/home/abl]# du -s /home/abl
213932 /home/abl
root@server [/home/abl]# du -sh /home/abl
209M /home/abl
root@server [/home/abl]# du -sch /home/abl
209M /home/abl
209M total
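If you want to experiment safely, you can reproduce the pattern above on a throwaway directory (the paths below are examples):

```shell
tmp=$(mktemp -d)                 # scratch directory for the demo
mkdir -p "$tmp/sub"
printf 'hello\n' > "$tmp/sub/file.txt"
du -s  "$tmp"                    # summary only, size in blocks
du -sh "$tmp"                    # summary, human-readable
du -sch "$tmp"                   # summary plus a final "total" line
rm -rf "$tmp"                    # clean up
```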

Listing all files and directories, switch “a”

# du -ah

a -> this switch displays the disk usage of each individual file and directory.


root@server [/home/abl/etc]# du -ah /home/abl/etc | tail
0 /home/abl/etc/
4.0K /home/abl/etc/
0 /home/abl/etc/ftpquota
0 /home/abl/etc/quota
4.0K /home/abl/etc/cacheid
0 /home/abl/etc/
0 /home/abl/etc/
0 /home/abl/etc/
4.0K /home/abl/etc/
28K /home/abl/etc

Exclude something from the command output, using --exclude

# du --exclude

--exclude -> This switch skips any file whose name matches the pattern we have mentioned.
In the example below, du -ah skips files named quota (--exclude="quota")

root@server [/home/abl/etc]# du -ah --exclude="quota" /home/abl/etc | tail
4.0K /home/abl/etc/
0 /home/abl/etc/
0 /home/abl/etc/
4.0K /home/abl/etc/
0 /home/abl/etc/ftpquota
4.0K /home/abl/etc/cacheid
0 /home/abl/etc/
0 /home/abl/etc/
4.0K /home/abl/etc/
28K /home/abl/etc
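The same behaviour is easy to verify on a scratch directory; the file names here are arbitrary:

```shell
tmp=$(mktemp -d)
touch "$tmp/app.log" "$tmp/notes.txt"
# notes.txt matches the pattern and is left out of the listing
du -ah --exclude="*.txt" "$tmp"
rm -rf "$tmp"
```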

Display modification time of files/folders, using --time

# du --time

--time -> This option shows the modification date and time of files/directories.

root@server [/home/user/folder]# du -sch --time *
1.8M    2017-05-19 00:32    Folder 1
248M    2017-05-06 07:35   Folder 2
40K       2016-12-15 15:05   Folder 3
32K       2017-04-27 15:46   File.pdf
250M    2017-05-19 00:32    total

These switches should help you get the disk usage of the files in any directory.

There are a few other options for du which are worth trying. You can view their details by checking the man page of du.


The post Some useful switches for “du – disk usage” appeared first on SupportSages – Your IT Infrastructure partner !!.

Moving a WordPress Site to the Root Directory


There are a couple of common reasons people will want to move their WordPress website from a directory to the root directory.  The most common we’ve seen are:

  1. They installed WordPress in Softaculous’ default /wp directory, and don’t want to have to set up a forwarder for their website.
  2. They developed their site in a directory so that their website would not be offline or interfered with and now want to replace their existing site with the WordPress site.

While you can use the Duplicator plugin to create a zip archive of your website and then restore it in the root, there really is no need to create a large file to download and then re-upload.  The advantage of the plugin is that it will rewrite all of the URLs that need to be updated.  The disadvantage is that it is much slower, and potentially creates a very large file you will need to download and then re-upload.

Here’s how to do the move without the plugin:

1 – Backup Your Site

Before doing the move, to be safe, we recommend you use the Backup Wizard to create a backup of your home directory and MySQL database.  The Backup Wizard will download the two backup files to your local hard drive.

  • Log in for your domain and open your cPanel
  • Click on Backup Wizard in the Files section
  • Click Backup, and then Home Directory and MySQL Database

2 – Remove Old Files from the Root Directory

You can do this in a number of ways: deleting the files, or moving them to a directory called something like “old_site”.

  • It is important that you delete or move the file called index.htm or index.html.  WordPress uses a file named index.php as its starting index file.  Most web servers, including ours, will load the html file before the php file.
  • Make certain there are no files in the root directory that would have the same name as any files that are in the directory where your WordPress website is currently.
  • If your WordPress site is currently using a caching plugin, deactivate the plugin and remove any cached files.

3 – Update WordPress’ Target Address

Log into your WordPress dashboard, go to Settings > General, and update the WordPress Address and Site Address so that the directory is removed.

When you click the Save button, you will immediately see a 404 Not Found error page.  Do not be alarmed, that is normal.  You will be able to log in once you move your files in the next step.

4 – Move the Files

Using either FTP or the File Manager, you’re going to move the files from the directory WordPress is in to the root directory for your website.

Using FTP

Connect to your site using your usual FTP program (FileZilla shown here).  You should be connected with the /public_html directory as the current directory.  In our example, your computer’s file system is in the window on the left, and the server’s on the right.

  • On the server’s side, double click on the directory where the WordPress site is.
  • Click the very top file, scroll to the very bottom, and shift-click to select all of the files.
  • Drag and drop the files on the top icon which shows a yellow folder followed by two dots (..).

Using the cPanel’s File Manager

  • Open up the cPanel for your domain, and click the File Manager icon in the FILES section of cPanel.
  • On the left column, click the plus sign next to public_html.  This will show you on the left column what directories are in the root public_html folder.
  • Click on the directory name on the left column where WordPress is installed.
  • Select the uppermost directory in the right window, and shift-click the bottommost file so all files and directories are highlighted.
  • Click Move on the top menu bar
  • A window will pop up.  Delete the directory name so only /public_html/ is displayed, and click the Move Files button.


5 – Update Permalinks

You now need to log in again to the WordPress dashboard.  Enter in your domain name followed by /wp-admin.  The final step in moving your site is to update the permalinks for your website.  Log into your WordPress dashboard, and navigate to Settings > Permalinks.  Click on Plain, and then the blue Save Changes button.  Once this has been saved, your site will now be fully functional.  It is highly recommended once you’ve completed this step, you update your permalinks again to “Post name” and click the Save Changes button again.  This makes your URLs more readable, and better for indexing by search engines.


6 – Setup 301 Forwarding

If there are external links or your site has already been crawled and ranked by search engines, you’ll want to set up forwarding with a 301 redirect (permanent redirection) so that you can preserve your existing search engine rankings.  The redirection will also ensure anyone that’s linked to your old site’s location will still be able to find you.

  • On the cPanel, click on Redirects in the Domains section
  • Choose the 301 option
  • Enter in the directory where WordPress was installed
  • Type in your domain name where WordPress has been moved to and click the blue Add button.
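Behind the scenes, cPanel implements these redirects with rewrite rules in your .htaccess file.  A hand-written equivalent would look roughly like this sketch (the /wp directory and example.com domain are examples, not your actual values):

```
RewriteEngine On
RewriteRule ^wp/(.*)$ https://example.com/$1 [R=301,L]
```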
