Azure – Random IT Utensils (https://blog.adamfurmanek.pl): IT, operating systems, maths, and more.

Logging in distributed system Part 5 — Parsing logs (13 Jan 2018)
https://blog.adamfurmanek.pl/2018/01/13/logging-in-distributed-system-part-5/

This is the fifth part of the Logging series. For your convenience, you can find the other parts in the table of contents in Part 1 – Correlations.

We start with classes representing logs:

using System;
using System.Globalization;

namespace LogHandler
{
	public class LogEntry
	{
		public string Path { get; set; }
		public string Content { get; set; }
		public string ApplicationName { get; set; }
		public string ServerId { get; set; }
		public string ThreadId { get; set; }
		public string CorrelationId { get; set; }
		public int Year => ParseTime().Year;
		public int Month => ParseTime().Month;
		public int Day => ParseTime().Day;
		public string Date => ParseTime().ToString("yyyy-MM-dd");
		public int Hour => ParseTime().Hour;
		public int Minute => ParseTime().Minute;
		public string Time => ParseTime().TimeOfDay.ToString();
		public string Timestamp { get; set; }
		public string LogLevel { get; set; }
		public string Activity { get; set; }
		public string LogicalTime { get; set; }
		public string LoggerId { get; set; }
		public DateTime GenerationTime => ParseTime();

		private DateTime ParseTime()
		{
			var formats = new[]
			{
				"yyyy-MM-dd HH:mm:ss.fff",
				"yyyy-MM-dd HH.mm.ss.fff"
			};
			DateTime result;
			// If the timestamp matches neither format, result falls back to DateTime.MinValue
			DateTime.TryParseExact(Timestamp, formats, CultureInfo.InvariantCulture, DateTimeStyles.None, out result);

			return result;
		}
	}
}

namespace LogHandler
{
	public class RawLogEntry
	{
		public string Path { get; set; }
		public string Content { get; set; }
		public string LineNumber { get; set; }
	}
}

Since our logs might contain additional data, we do not want to lose anything, so we push two types of logs to OMS: parsed logs and raw logs. A raw log is only split line by line (you could just as well push the whole file), while a parsed log contains the extracted fields, so it is easier to filter and to create alerts on.

Now the parsers:

using System;
using System.Collections.Generic;
using System.Linq;

namespace LogHandler
{
	public static class RawLogParser
	{
		public static IEnumerable<RawLogEntry> ParseLog(string content, string path)
		{
			var entries = content
				.Split(new[] {Environment.NewLine}, StringSplitOptions.RemoveEmptyEntries)
				.Select((line, index) => new RawLogEntry
				{
					Content = line,
					Path = path,
					LineNumber = (index + 1).ToString()
				}).ToArray();

			return entries;
		}
	}
}

using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Text.RegularExpressions;

namespace LogHandler
{
	public class LogParser
	{
		private static readonly string LogHeaderGeneralPattern = "([[].*?[]]){9,}";

		private const string Timestamp = "timestamp";
		private const string ApplicationName = "applicationName";
		private const string ServerId = "serverId";
		private const string ThreadId = "threadId";
		private const string CorrelationId = "correlationId";
		private const string LogLevel = "logLevel";
		private const string Activity = "activity";
		private const string LogicalTime = "logicalTime";
		private const string LoggerId = "loggerId";
		private static readonly string LogHeaderSpecificPattern = string.Join("", new[]
		{
			Timestamp,
			ApplicationName,
			ServerId,
			ThreadId,
			CorrelationId,
			LogLevel,
			Activity,
			LogicalTime,
			LoggerId
		}.Select(group => $"[[](?<{group}>.*?)[]]"));

		public static IEnumerable<LogEntry> ParseLog(string log, string path)
		{
			// Split on '\n' instead of Environment.NewLine so that files with Unix line endings are handled too;
			// trim the trailing '\r' left over from Windows line endings
			var lines = log.Split('\n').Select(l => l.TrimEnd('\r'));

			var buffer = new StringBuilder();
			LogEntry currentEntry = null;

			foreach (var line in lines)
			{
				if (Regex.IsMatch(line, LogHeaderGeneralPattern))
				{
					if (currentEntry != null)
					{
						currentEntry.Content = buffer.ToString();
						yield return currentEntry;
					}

					buffer.Clear();
					var match = Regex.Match(line, LogHeaderSpecificPattern);
					currentEntry = new LogEntry
					{
						Timestamp = match.Groups[Timestamp].Value,
						ApplicationName = match.Groups[ApplicationName].Value,
						ServerId = match.Groups[ServerId].Value,
						ThreadId = match.Groups[ThreadId].Value,
						CorrelationId = match.Groups[CorrelationId].Value,
						LogLevel = match.Groups[LogLevel].Value,
						Activity = match.Groups[Activity].Value,
						LogicalTime = match.Groups[LogicalTime].Value,
						LoggerId = match.Groups[LoggerId].Value,
						Path = path,
					};
				}
				else
				{
					buffer.AppendLine(line);
				}
			}

			if (currentEntry != null)
			{
				currentEntry.Content = buffer.ToString();
				yield return currentEntry;
			}
		}
	}
}
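
To see the parser in action, here is a minimal usage sketch. The sample header below is only an assumption that matches the nine bracketed fields the regular expressions expect, with the message body on the following line:

var sample =
	"[2018-01-13 09:00:27.123][MyApp][WEB-01][42][3f2a9c0e][Info][Request][17][RootLogger]\n" +
	"Request handled successfully";

foreach (var entry in LogParser.ParseLog(sample, @"D:\Logs\MyApp.log"))
{
	// Prints: 2018-01-13 09:00:27.1230000 [Info] Request handled successfully
	Console.WriteLine($"{entry.Date} {entry.Time} [{entry.LogLevel}] {entry.Content.Trim()}");
}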

Now, when pushing data to OMS, do not forget to set the timestamp field header in order to avoid duplicates for parsed logs (the named field must also be present in the serialized JSON entries):

client.DefaultRequestHeaders.Add("time-generated-field", nameof(LogEntry.GenerationTime));

Summary

This short series shows how to implement a logging infrastructure for a distributed system. Please be advised that this is only an initial implementation which you should adapt to your needs.

Sitefinity Part 4 — Turning on Redis in Sitefinity in Azure (18 Nov 2017)
https://blog.adamfurmanek.pl/2017/11/18/turning-on-redis-in-sitefinity-in-azure/

This is the fourth part of the Sitefinity series. For your convenience, you can find the other parts in the table of contents in Sitefinity Part 1 — Capturing logs.

Last time we saw how to change the database connection string for Sitefinity, which can be useful if we need to resolve it at runtime. In the same manner we can enable Redis, which is required when running Sitefinity in Azure (and is not needed on a developer machine). First, add the following to AssemblyInfo.cs:

// Sitefinity version: 10.0.6411.0
// Override configuration files
[assembly: PreApplicationStartMethod(typeof(LoadBalancerConfiguration), "OverrideConnectionString")]

And now we modify the SystemConfig.config file with the Redis settings. The sketch below reads the Redis connection string from app settings; reading it from another source (e.g. Azure Key Vault) works the same way:

using System.Configuration;
using System.IO;
using System.Web.Hosting;
using System.Xml.Linq;

namespace Cms
{
	public class LoadBalancerConfiguration
	{
		public static void OverrideConnectionString()
		{
			var configurationFilePath = HostingEnvironment.MapPath("~/App_Data/Sitefinity/Configuration/SystemConfig.config");

			// The connection string is read from app settings here; it could come from any source (e.g. Azure Key Vault)
			var redisConnectionString = ConfigurationManager.AppSettings["RedisConnectionString"];
			if (string.IsNullOrEmpty(redisConnectionString))
			{
				return;
			}

			var document = XDocument.Load(configurationFilePath);

			var systemConfig = document.Element("systemConfig");
			var loadBalancingConfig = systemConfig.Element("loadBalancingConfig");
			if (loadBalancingConfig == null)
			{
				loadBalancingConfig = new XElement("loadBalancingConfig");
				systemConfig.Add(loadBalancingConfig);
			}

			var redisSettings = loadBalancingConfig.Element("redisSettings");
			if (redisSettings == null)
			{
				redisSettings = new XElement("redisSettings");
				loadBalancingConfig.Add(redisSettings);
			}

			var connectionString = redisSettings.Attribute("ConnectionString");
			if (connectionString == null)
			{
				connectionString = new XAttribute("ConnectionString", "");
				redisSettings.Add(connectionString);
			}

			connectionString.Value = redisConnectionString;

			File.WriteAllText(configurationFilePath, document.ToString());
		}
	}
}
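
After the override runs, the relevant fragment of SystemConfig.config should look roughly like this (the shape is inferred from the elements the code creates; the connection string is a placeholder):

<systemConfig>
  <loadBalancingConfig>
    <redisSettings ConnectionString="contoso.redis.cache.windows.net:6380,password=YOUR_PASSWORD,ssl=True" />
  </loadBalancingConfig>
</systemConfig>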

Tested with Sitefinity version: 10.0.6411.0.

Sitefinity Part 3 — Dynamically changing database connection string in Sitefinity (11 Nov 2017)
https://blog.adamfurmanek.pl/2017/11/11/dynamically-changing-database-connection-string-in-sitefinity/

This is the third part of the Sitefinity series. For your convenience, you can find the other parts in the table of contents in Sitefinity Part 1 — Capturing logs.

Sitefinity stores its database connection string in App_Data\Sitefinity\Configuration\DataConfig.config, so you can easily modify the connection string there. But what if you want to resolve the connection string at runtime, e.g., read it from Azure Key Vault after the application is started?

First, you need to execute some code before the actual web app starts. Add the following to AssemblyInfo.cs:

// Override configuration files
[assembly: PreApplicationStartMethod(typeof(DatabaseConfiguration), "OverrideConnectionString")]

This will run your code before anything related to Sitefinity gets a chance to work, so you can override the connection string in the file:

using System.Web.Hosting;
using System.Xml.Linq;
using System.IO;

namespace Cms
{
    public class DatabaseConfiguration
    {
        public static void OverrideConnectionString()
        {
            var configurationFilePath = HostingEnvironment.MapPath("~/App_Data/Sitefinity/Configuration/DataConfig.config");

            var document = XDocument.Load(configurationFilePath);
            var element = document
                .Element("dataConfig")
                .Element("connectionStrings")
                .Element("add");
            element.Attribute("connectionString").Value = "YOUR_CONNECTION_STRING";
            element.Attribute("dbType").Value = "DB_TYPE_EG_MsSql";

            File.WriteAllText(configurationFilePath, document.ToString());
        }
    }
}
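
For reference, DataConfig.config has roughly this shape (inferred from the elements the code navigates; the values, including the connection name, are placeholders):

<dataConfig>
  <connectionStrings>
    <add connectionString="Data Source=.;Initial Catalog=Sitefinity;Integrated Security=True" dbType="MsSql" name="Sitefinity" />
  </connectionStrings>
</dataConfig>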

The file is modified before the application reads it, so the changes are visible in the application without a restart.

Tested with Sitefinity version: 10.0.6411.0.

Sitefinity Part 1 — Capturing Sitefinity logs and pushing them to Azure Storage (28 Oct 2017)
https://blog.adamfurmanek.pl/2017/10/28/capturing-sitefinity-logs-and-pushing-them-to-azure-storage/

This is the first part of the Sitefinity series. For your convenience, you can find the other parts using the links below (or by guessing the address):
Part 1 — Capturing logs
Part 2 — Dependency Injection
Part 3 — Changing connection string
Part 4 — Turning on Redis

Let’s assume that we have a properly configured Sitefinity instance working in Azure (which includes using an Azure SQL database and an Azure Redis instance). Now there is a question: how do we capture logs and send them to Azure Storage?

By default Sitefinity logs everything to files in the App_Data\Sitefinity\Logs directory. We can easily use the Azure logging facilities (which can be enabled in the Azure Portal in the Diagnostic logs section), but then we need to log using the Trace class. In order to do that, we need to implement a custom listener and configure Sitefinity to use it. Let’s go.

Implementation

First, we need to register for the Sitefinity bootstrapping event which is raised when the logging configuration is initialized. In Global.asax.cs add the following:

using Telerik.Sitefinity.Abstractions;
using Telerik.Sitefinity.Data;

namespace Cms
{
	using System;

	public class Global : System.Web.HttpApplication
	{

		protected void Application_Start(object sender, EventArgs e)
		{
			ObjectFactory.Initialized += ConfigInitialize;
		}

		private void ConfigInitialize(object s, ExecutedEventArgs args)
		{
			if (args.CommandName == "ConfigureLogging")
			{
				LoggingConfig.ReplaceBuiltInTraceListenersWithCustom(args);
			}
		}
	}
}

Sitefinity uses logging classes from the Microsoft Enterprise Library (Entlib) to manage logs. We can extract the configuration at runtime and replace it with custom listeners:

using System.Linq;
using Telerik.Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Telerik.Microsoft.Practices.EnterpriseLibrary.Logging.Configuration;
using Telerik.Sitefinity.Data;
using CustomTraceListenerData = Cms.Logging.CustomTraceListenerData;

namespace Cms
{
	public static class LoggingConfig
	{
		public static void ReplaceBuiltInTraceListenersWithCustom(ExecutedEventArgs args)
		{
			var traceListeners = GetSitefinityTraceListeners(args);
			var listenerNames = traceListeners.Select(t => t.Name).ToArray();

			foreach (var name in listenerNames)
			{
				traceListeners.Remove(name);

				var listenerAdapter = new CustomTraceListenerData(name);

				traceListeners.Add(listenerAdapter);
			}
		}

		private static TraceListenerDataCollection GetSitefinityTraceListeners(ExecutedEventArgs args)
		{
			var builder = args.Data as ConfigurationSourceBuilder;
			return ((LoggingSettings)builder.Get("loggingConfiguration")).TraceListeners;
		}
	}
}

We extract the listeners from Sitefinity internals. It is a named collection describing how to create concrete loggers. There are loggers for errors (which by default log to the Error.log file), debug, trace, etc. Since we would like to redirect all logs to Azure Storage, we need to remove all existing configurations and inject ours: we iterate over all loggers, remove them one by one, and create custom loggers with the same names.

The actual logger looks as follows:

using System;
using System.Diagnostics;
using System.Linq.Expressions;
using Telerik.Microsoft.Practices.EnterpriseLibrary.Logging.Configuration;

namespace Cms.Logging
{
	public class CustomTraceListenerData : TraceListenerData
	{
		public CustomTraceListenerData(string name)
			: base(
				name, typeof (CustomTraceListener),
				TraceOptions.Callstack | TraceOptions.DateTime | TraceOptions.ProcessId | TraceOptions.ThreadId |
				TraceOptions.Timestamp | TraceOptions.LogicalOperationStack, SourceLevels.All)
		{
			ListenerDataType = typeof (CustomTraceListener);
		}

		protected override Expression<Func<TraceListener>> GetCreationExpression()
		{
			return () => new CustomTraceListener();
		}
	}
}

using System.Diagnostics;

namespace Cms.Logging
{
	public class CustomTraceListener : TraceListener
	{
		public override void Write(string message)
		{
			Trace.Write(message);
		}

		public override void WriteLine(string message)
		{
			Trace.WriteLine(message);
		}
	}
}

In the configuration class we request logging of everything on all levels. The actual listener only redirects messages to the Trace class.

Now, we need to configure logging to Azure Storage. First, we enable it in the Azure Portal. Next, we need to add a trace listener for Azure. We add the following to web.config:

<system.diagnostics>
  <trace>
    <listeners>
      <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.8.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
        <filter type="" />
      </add>
      <add name="FileDiagnostics" type="System.Diagnostics.TextWriterTraceListener" initializeData="App_Data/Sitefinity/Logs/Log.txt" />
    </listeners>
  </trace>
</system.diagnostics>

We add two listeners: one for Azure and another logging to disk. The latter is useful when running the application locally; it redirects all logs (errors, SQL changes, etc.) to one file.

You might also need to add the following to web.config:

<compilation debug="true" targetFramework="4.7" numRecompilesBeforeAppRestart="2000">
  <assemblies>
    <add assembly="System.Runtime, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
  </assemblies>
</compilation>

Now you can also configure Log Analytics and have Sitefinity logs in OMS.

Tested with Sitefinity version: 10.0.6411.0.

Capturing Azure Webapp application log in Azure Log Analytics (10 Jun 2017)
https://blog.adamfurmanek.pl/2017/06/10/capturing-azure-webapp-application-log-in-azure-log-analytics/

Let’s assume that you have a web app deployed as an app service in Azure. It would be great if we could utilize Log Analytics to capture its logs. Currently Log Analytics can capture various types of details: Windows metrics, infrastructure logs, etc. However, it is unable to capture application logs. By using a Webjob or an Azure Function we can push the logs to OMS manually. Let’s go.

Webapp setup

First, we need to modify the application to capture logs in storage. We need to reconfigure Trace so Azure can handle its output. In an MVC or WebAPI application we add the listener in the configuration file:

<system.diagnostics>
  <trace>
    <listeners>
      <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.8.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
        <filter type="" />
      </add>
    </listeners>
  </trace>
</system.diagnostics>

Next, we go to the Azure Portal, choose our web app, go to Diagnostic Logs, and enable Application Logging (Blob). We need to configure the storage connection string there (so we need to create a storage account first).

Effect: when you log something using Trace, a file appears in storage. The file is appended to and rolled each hour.
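
For example, any standard Trace call in the application ends up in that blob container:

System.Diagnostics.Trace.TraceInformation("Application started");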

Okay, we have logs stored in Azure. Now it’s time to capture them and push them to OMS.

Capturing

In order to capture the file we can simply use a Webjob or an Azure Function. We can use a storage trigger so the function is executed each time a new file appears or an existing one is modified.

Unfortunately:

  • The storage trigger is not guaranteed to fire every time.
  • There might also be delays.
  • We get no information whether a file is new or was modified.
  • The trigger works by scanning the whole storage container, which might result in slower response times if you have lots of logs. Also, the scan results are stored in the storage account.

The good thing is the dead-letter queue: in case of failure, the trigger executes up to five times, and if we are still unable to process the file, it is moved to a special queue of poisoned blobs.
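
If you want to react to such failures, you can bind another function to the poison queue. A sketch, assuming the WebJobs SDK default poison queue name webjobs-blobtrigger-poison (this would live next to the functions shown below):

// The SDK enqueues a small JSON description of the failed blob once the retries are exhausted
public static void HandlePoisonBlob([QueueTrigger("webjobs-blobtrigger-poison")] string message, TextWriter log)
{
    log.WriteLine($"Blob processing failed permanently: {message}");
}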

Here goes the code for pushing a file to Log Analytics:

using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Runtime.Serialization;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Newtonsoft.Json;

namespace LoggingInfrastructure
{
    public class Functions
    {
        // Update customerId to your Operations Management Suite workspace ID
        static string customerId = "OMS workspace id";

        // For sharedKey, use either the primary or the secondary Connected Sources client authentication key   
        static string sharedKey = "OMS key";

        // LogName is name of the event type that is being submitted to Log Analytics
        static string LogName = "ApplicationLog";

        // You can use an optional field to specify the timestamp from the data. If the time field is not specified, Log Analytics assumes the time is the message ingestion time
        static string TimeStampField = "";

        // Send a request to the POST API endpoint
        public static void PostData(string signature, string date, string json, TextWriter log)
        {
            string url = "https://" + customerId + ".ods.opinsights.azure.com/api/logs?api-version=2016-04-01";

            HttpClient client = new System.Net.Http.HttpClient();
            client.DefaultRequestHeaders.Add("Accept", "application/json");
            client.DefaultRequestHeaders.Add("Log-Type", LogName);
            client.DefaultRequestHeaders.Add("Authorization", signature);
            client.DefaultRequestHeaders.Add("x-ms-date", date);
            client.DefaultRequestHeaders.Add("time-generated-field", TimeStampField);

            HttpContent httpContent = new StringContent(json, System.Text.Encoding.UTF8);
            httpContent.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/json");
            var result = client.PostAsync(new Uri(url), httpContent).Result;

            log.WriteLine(result.StatusCode);
        }



        // Build the API signature
        public static string BuildSignature(string message, string secret)
        {
            var encoding = new System.Text.ASCIIEncoding();
            byte[] keyByte = Convert.FromBase64String(secret);
            byte[] messageBytes = encoding.GetBytes(message);
            using (var hmacsha256 = new System.Security.Cryptography.HMACSHA256(keyByte))
            {
                byte[] hash = hmacsha256.ComputeHash(messageBytes);
                return Convert.ToBase64String(hash);
            }
        }

        // The container and blob name pattern are passed in the BlobTrigger attribute; {name} binds to the blob name
        public static void ProcessWebAppBlob([BlobTrigger("logging/{name}")] Stream myBlob, string name, string blobTrigger, TextWriter log)
        {
            log.WriteLine($"Handling webapp blob:{name}");
            var message = new BlobLogEntry
            {
                Path = blobTrigger
            };

            string connectionString = AmbientConnectionStringProvider.Instance.GetConnectionString("AzureWebJobsStorage");
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
            CloudBlobClient client = storageAccount.CreateCloudBlobClient();
            var blob = client.GetBlobReferenceFromServer(new Uri($"{client.BaseUri}{message.Path}"));

            var entries = new StreamReader(blob.OpenRead())
                .ReadToEnd()
                .Split(new[] {Environment.NewLine}, StringSplitOptions.RemoveEmptyEntries)
                .Select(l => new LogEntry
                {
                    Content = l,
                    Path = message.Path
                }).ToArray();

            PushLogEntry(entries, log);
        }

        private static void PushLogEntry(object[] entries, TextWriter log)
        {
            var json = JsonConvert.SerializeObject(entries);
            var datestring = DateTime.UtcNow.ToString("r");
            string stringToHash = "POST\n" + System.Text.Encoding.UTF8.GetByteCount(json) + "\napplication/json\n" + "x-ms-date:" + datestring + "\n/api/logs"; // the signed content length must be the UTF-8 byte count, not the character count
            string hashedString = BuildSignature(stringToHash, sharedKey);
            string signature = "SharedKey " + customerId + ":" + hashedString;

            PostData(signature, datestring, json, log);
        }
    }

    public class BlobLogEntry
    {
        public string Path { get; set; }
    }

    public class LogEntry
    {
        public string Path { get; set; }
        public string Content { get; set; }
    }
}

You can also introduce Service Bus and push a message there after finding a new blob. Service Bus guarantees that the message will not get lost.
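
A minimal sketch with the WindowsAzure.ServiceBus package; the queue name and connection string are placeholders, and blobPath stands for the path captured by the trigger:

using Microsoft.ServiceBus.Messaging;

var queueClient = QueueClient.CreateFromConnectionString("YOUR_SERVICE_BUS_CONNECTION_STRING", "new-log-blobs");
// A separate consumer dequeues the path and pushes the corresponding log file to OMS
queueClient.Send(new BrokeredMessage(blobPath));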

You can also use an Azure Function instead of a Webjob — it might be simpler.
