Sitefinity – Random IT Utensils https://blog.adamfurmanek.pl IT, operating systems, maths, and more.

Logging in distributed system Part 4 — Passing data to other systems https://blog.adamfurmanek.pl/2018/01/06/logging-in-distributed-system-part-4/ Sat, 06 Jan 2018 09:00:06 +0000

This is the fourth part of the Logging series. For your convenience you can find other parts in the table of contents in Part 1 – Correlations

We can generate correlation data for each request, but we also need to pass it throughout the system. Let’s start with simple REST requests.

REST client

We want to have the following factory:

namespace DomainServices.RestClient
{
    public interface IRestClientFactory
    {
        IRestClient CreateRestClientForExternalService(string baseUrl);
        IRestClient CreateRestClient(string baseUrl);
    }
}

We will use it to create REST clients for communicating with our components and external services. Actual implementation could go as follows:

using System;
using DomainCommons.Correlations;

namespace DomainServices.RestClient
{
	public class RestClientFactory : IRestClientFactory
	{
		private readonly ICorrelator _correlator;

		public RestClientFactory(ICorrelator correlator)
		{
			_correlator = correlator;
		}

		public IRestClient CreateRestClientForExternalService(string baseUrl)
		{
			if (baseUrl == null)
			{
				throw new ArgumentNullException(nameof(baseUrl));
			}

			return new RestSharpRestClient(new RestSharp.RestClient(baseUrl));
		}

		public IRestClient CreateRestClient(string baseUrl)
		{
			if (baseUrl == null)
			{
				throw new ArgumentNullException(nameof(baseUrl));
			}

			return new CorrelationRestClient(new RestSharp.RestClient(baseUrl), _correlator);
		}
	}
}

We use the RestSharp library and wrap it with our custom logic for internal use. Since we do not want to expose all the details, we adapt RestSharp with a custom interface:

using System.Threading.Tasks;
using RestSharp;

namespace DomainServices.RestClient
{
	public interface IRestClient
	{
		Task<IRestResponse<T>> ExecuteTaskAsync<T>(IRestRequest request);
	}
}

We expose just one method. Ideally you would wrap the IRestRequest interface with your own abstraction as well, but for the simplest implementation this is not required.

Now the basic client:

using System.Threading.Tasks;
using RestSharp;

namespace DomainServices.RestClient
{
	internal class RestSharpRestClient : IRestClient
	{
		private readonly RestSharp.RestClient _restClient;

		public RestSharpRestClient(RestSharp.RestClient restClient)
		{
			_restClient = restClient;
		}

		public Task<IRestResponse<T>> ExecuteTaskAsync<T>(IRestRequest request)
		{
			return _restClient.ExecuteTaskAsync<T>(request);
		}
	}
}

This simply delegates to the RestSharp client, so there is nothing interesting here.

The correlating client is a bit harder. We need to add headers to the request and parse the response:

using System.Linq;
using System.Threading.Tasks;
using DomainCommons.Correlations;
using DomainCommons.Loggers;
using RestSharp;

namespace DomainServices.RestClient
{
	internal class CorrelationRestClient : IRestClient
	{
		private readonly RestSharp.RestClient _restClient;
		private readonly ICorrelator _correlator;

		public CorrelationRestClient(RestSharp.RestClient restClient, ICorrelator correlator)
		{
			_restClient = restClient;
			_correlator = correlator;
		}

		public async Task<IRestResponse<T>> ExecuteTaskAsync<T>(IRestRequest request)
		{
			request.AddHeader(Constants.CorrelationIdHeader, _correlator.GetCorrelationId());
			request.AddHeader(Constants.CorrelationCounterHeader, _correlator.GetLogicalTime().ToString());
			
			var result = await _restClient.ExecuteTaskAsync<T>(request);

			var correlationHeader = result.Headers.FirstOrDefault(h => h.Name == Constants.CorrelationCounterHeader);
			if (correlationHeader != null)
			{
				_correlator.UpdateLogicalTime(int.Parse(correlationHeader.Value.ToString()));
			}
			
			return result;
		}
	}
}

So we send the correlation ID and logical time to the service, and on return we parse the logical time and update the correlator. Simple as that.
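Putting it together, a call site could look like this. This is only a sketch: OrderDto, OrderService, and the endpoint URL and path are made up for illustration; only IRestClient and IRestClientFactory come from the code above.

```csharp
using System.Threading.Tasks;
using DomainServices.RestClient;

// Hypothetical DTO, just for the example.
public class OrderDto
{
	public int Id { get; set; }
}

// Sketch: consuming the factory from application code.
public class OrderService
{
	private readonly IRestClient _restClient;

	public OrderService(IRestClientFactory restClientFactory)
	{
		// Internal component, so correlation headers are attached automatically.
		_restClient = restClientFactory.CreateRestClient("https://orders.internal.example.com");
	}

	public async Task<OrderDto> GetOrderAsync(int id)
	{
		var request = new RestSharp.RestRequest($"api/orders/{id}");
		var response = await _restClient.ExecuteTaskAsync<OrderDto>(request);
		return response.Data;
	}
}
```

For an external service you would call CreateRestClientForExternalService instead, so no internal correlation headers leak outside the system.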

Service bus

We also need to pass correlation data to service bus messages. We can store them in metadata:

var client = QueueClient.CreateFromConnectionString(_config.ServiceBusConnectionString, notification.GetType().Name);
var brokeredMessage = new BrokeredMessage(notification);
brokeredMessage.Properties[_messageTypeMetadataField] = notification.GetType().AssemblyQualifiedName;
brokeredMessage.Properties[_correlationIdMetadataField] = _correlator.GetCorrelationId();
brokeredMessage.Properties[_correlationCounterMetadataField] = _correlator.GetLogicalTime();

_logger.Log(LogLevel.Information, $"Sending message: {notification}");
client.Send(brokeredMessage);
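On the receiving side the same metadata can be read back and merged into the correlator. A sketch, assuming the metadata field names and the `_correlator`/`_logger` fields match the sending code above; the surrounding handler class is hypothetical:

```csharp
using System;
using DomainCommons.Loggers;
using Microsoft.ServiceBus.Messaging;

// Sketch: the receiving side of the queue, restoring correlation data.
public partial class NotificationHandler
{
	public void OnMessageReceived(BrokeredMessage message)
	{
		if (message.Properties.TryGetValue(_correlationCounterMetadataField, out var counter))
		{
			// Merge the sender's logical time into our clock.
			_correlator.UpdateLogicalTime(Convert.ToInt32(counter));
		}

		if (message.Properties.TryGetValue(_correlationIdMetadataField, out var correlationId))
		{
			_logger.Log(LogLevel.Information, $"Handling message correlated with {correlationId}");
		}
	}
}
```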

Summary

We now pass correlation data throughout the system. The only missing piece is parsing logs. We will handle this next time.

Logging in distributed system Part 3 — Exceptions https://blog.adamfurmanek.pl/2017/12/30/logging-in-distributed-system-part-3/ Sat, 30 Dec 2017 09:00:20 +0000

This is the third part of the Logging series. For your convenience you can find other parts in the table of contents in Part 1 – Correlations

So we need to handle errors and exceptions. Web jobs are already covered in the previous part so we need to take care of Sitefinity and WebAPI. Let’s begin with the former.

Sitefinity

There are two paths: WebForms and MVC.

WebForms

This is easy; we only need to handle the application error event:

using System;
using System.Web;
using DomainCommons.Loggers;

namespace AzureCommons.Filters
{
	public class WebFormsErrorHandler
	{
		public static void ApplicationError(object sender, EventArgs e)
		{
			var logger = (ILogger)HttpContext.Current.Items[Constants.LoggerItem];

			if (logger == null)
			{
				return;
			}

			logger.Log(LogLevel.Error, $"Application error from {sender}: {e}");

			if (HttpContext.Current?.Response?.Headers == null)
			{
				return;
			}

			if (HttpContext.Current.Response.HeadersWritten)
			{
				return;
			}

			HttpContext.Current.Response.AddHeader(Constants.CorrelationIdHeader, logger.Correlator.GetCorrelationId());
			HttpContext.Current.Response.AddHeader(Constants.CorrelationCounterHeader, logger.Correlator.GetLogicalTime().ToString());
		}
	}
}

We handle the case of headers already sent to the client and add ours only when possible. Now you need to call this method from the Application_Error handler in Global.asax.cs and we are done.
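The Global.asax.cs wiring could look like this (a sketch; it just forwards the event to the handler above):

```csharp
using System;
using AzureCommons.Filters;

namespace Cms
{
	public class Global : System.Web.HttpApplication
	{
		// Sketch: forward the global error event to WebFormsErrorHandler.
		protected void Application_Error(object sender, EventArgs e)
		{
			WebFormsErrorHandler.ApplicationError(sender, e);
		}
	}
}
```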

MVC

This time we need to create another action filter:

using System.Web.Mvc;
using DomainCommons.Correlations;
using DomainCommons.Loggers;

namespace AzureCommons.Filters
{
	public class MvcExceptionFilter : IExceptionFilter
	{
		private readonly ILogger _logger;
		private readonly ICorrelator _correlator;

		public MvcExceptionFilter(ILogger logger, ICorrelator correlator)
		{
			_logger = logger;
			_correlator = correlator;
		}

		public void OnException(ExceptionContext filterContext)
		{
			_logger.Log(LogLevel.Error, $"Unhandled exception: {filterContext.Exception}");

			if (filterContext.HttpContext?.Response?.Headers == null)
			{
				return;
			}

			if (filterContext.HttpContext.Response.HeadersWritten)
			{
				return;
			}

			filterContext.HttpContext.Response.AddHeader(Constants.CorrelationIdHeader, _correlator.GetCorrelationId());
			filterContext.HttpContext.Response.AddHeader(Constants.CorrelationCounterHeader, _correlator.GetLogicalTime().ToString());
		}
	}
}

Same idea as before. You need to register this filter in your DI container and that is all. You could also consider adding HandleErrorAttribute globally.
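If you do not resolve filters through your DI container, a plain global registration could look like this. A sketch only; how you obtain the ILogger and ICorrelator instances depends on your container:

```csharp
using System.Web.Mvc;
using AzureCommons.Filters;
using DomainCommons.Correlations;
using DomainCommons.Loggers;

// Sketch: global MVC filter registration, e.g. called from Application_Start.
public static class FilterConfig
{
	public static void RegisterGlobalFilters(GlobalFilterCollection filters, ILogger logger, ICorrelator correlator)
	{
		filters.Add(new MvcExceptionFilter(logger, correlator));
		filters.Add(new HandleErrorAttribute());
	}
}
```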

WebAPI2

Now comes the tricky part. WebAPI2 actually has three different ways of handling exceptions: attribute, logger and handler. We can implement all of them:

using System.Net;
using System.Net.Http;
using System.Web.Http.Filters;
using DomainCommons.Correlations;
using DomainCommons.Loggers;

namespace AzureCommons.Filters
{
	public class WebApiExceptionFilterAttribute : ExceptionFilterAttribute
	{
		public override void OnException(HttpActionExecutedContext context)
		{
			var logger = (ILogger) context.Request.GetDependencyScope().GetService(typeof (ILogger));
			logger.Log(LogLevel.Error, $"Unhandled exception: {context.Exception}");

			var correlator = (ICorrelator)context.Request.GetDependencyScope().GetService(typeof(ICorrelator));
			context.Response = context.Response ?? new HttpResponseMessage(HttpStatusCode.InternalServerError);
			context.Response.Headers?.Add(Constants.CorrelationIdHeader, correlator.GetCorrelationId());
			context.Response.Headers?.Add(Constants.CorrelationCounterHeader, correlator.GetLogicalTime().ToString());
		}
	}
}

using System.Threading;
using System.Threading.Tasks;
using System.Web.Http.ExceptionHandling;
using ConfigurationCommons.Configurations;
using DomainCommons.Loggers;

namespace AzureCommons.Filters
{
	[RegisterManually]
	public class WebApiExceptionLogger : IExceptionLogger
	{
		public Task LogAsync(ExceptionLoggerContext context, CancellationToken cancellationToken)
		{
			var logger = (ILogger)context.Request.GetDependencyScope().GetService(typeof(ILogger));
			logger.Log(LogLevel.Error, $"Unhandled exception: {context.Exception}");

			return Task.CompletedTask;
		}
	}
}

using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http.ExceptionHandling;
using ConfigurationCommons.Configurations;
using DomainCommons.Correlations;
using DomainCommons.Loggers;

namespace AzureCommons.Filters
{
	[RegisterManually]
	public class WebApiExceptionHandler : IExceptionHandler
	{
		public Task HandleAsync(ExceptionHandlerContext context, CancellationToken cancellationToken)
		{
			var logger = (ILogger)context.Request.GetDependencyScope().GetService(typeof(ILogger));
			logger.Log(LogLevel.Error, $"Unhandled exception: {context.Exception}");

			var correlator = (ICorrelator)context.Request.GetDependencyScope().GetService(typeof(ICorrelator));

			context.ExceptionContext.Response = context.ExceptionContext.Response ?? new HttpResponseMessage(HttpStatusCode.InternalServerError);
			context.ExceptionContext.Response?.Headers?.Add(Constants.CorrelationIdHeader, correlator.GetCorrelationId());
			context.ExceptionContext.Response?.Headers?.Add(Constants.CorrelationCounterHeader, correlator.GetLogicalTime().ToString());

			return Task.CompletedTask;
		}
	}
}

We log everything we can, set the headers, and create a response if necessary.

You need to register your filters:

configuration.Filters.Add(new WebApiExceptionFilterAttribute());
configuration.Services.Replace(typeof(IExceptionHandler), new WebApiExceptionHandler());
configuration.Services.Insert(typeof(IExceptionLogger), 0, new WebApiExceptionLogger());

Summary

Now we handle exceptions and errors in our applications. In the next part we will see how to implement logic for passing correlation IDs and logical times to other systems using RestSharp.

Logging in distributed system Part 1 — Correlations https://blog.adamfurmanek.pl/2017/12/16/logging-in-distributed-system-part-1/ Sat, 16 Dec 2017 09:00:09 +0000

This is the first part of the Logging in distributed system series. For your convenience you can find other parts using the links below (or by guessing the address):
Part 1 — Correlations
Part 2 — Action filters
Part 3 — Exceptions
Part 4 — Passing data to other systems
Part 5 — Parsing logs

If you are interested in the topic see the talk page

Distributed systems are hard. Patterns of Enterprise Application Architecture says that “You shouldn’t try to break up a single application into Web services that talk to each other unless you really need to.” However, nowadays microservices (and even nanoservices) are very popular. One issue when working with such an architecture comes down to logging: there are things which simply must be reliable, because without them debugging errors is close to impossible.

General idea

Generally it is a good idea to have logs from different components gathered in one place. That’s why there are applications like the ELK stack (Elasticsearch, Logstash, and Kibana) which ingest logs, transform them to a common format, and display them. Search capabilities and built-in alerts are also nice features.

In order to analyse logs, you need to be able to correlate them. That’s why a correlation ID is often a must-have pattern. In this series I am going to show you a simple implementation of a logger handling correlation IDs and Lamport’s happened-before relation for ordering events. Do not consider this the one solution to rule them all; it is just a base implementation ready to be extended as you need.
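As a refresher, Lamport’s scheme is tiny: every node keeps a counter, increments it on each local event or send, and on receive merges the received timestamp with its own. A minimal sketch, independent of the production correlator shown later in this series:

```csharp
using System;

// Minimal Lamport clock sketch; not the production Correlator from this series.
public class LamportClock
{
	private int _time;

	// Local event or message send: advance the clock and return the timestamp.
	public int Tick() => ++_time;

	// Message receive: merge the sender's timestamp with our own and advance.
	public int Receive(int remoteTime)
	{
		_time = Math.Max(_time, remoteTime) + 1;
		return _time;
	}
}
```

If event A happened before event B, A’s timestamp is strictly smaller than B’s; the converse does not hold, but this is enough to order correlated log entries sensibly.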

Components

Let’s assume that our system consists of Sitefinity CMS, an Azure web job, and a REST API based on WebAPI 2. All these components are hosted in Azure. We want to design code for logging events in a uniform way.

Interfaces

First, let’s start with interfaces:

using DomainCommons.Correlations;

namespace DomainCommons.Loggers
{
	public interface ILogger
	{
		void Log(LogLevel level, string message);
		ICorrelator Correlator { get; }
	}
}

We have an interface for our logger. It gives us only one method to log an event, accepting an information level (see below) and a string message. Ideally, the logger implementation should add the necessary infrastructure details to the message (node ID, correlation ID, timestamp, etc.).

namespace DomainCommons.Loggers
{
	public enum LogLevel
	{
		Debug = 1,
		Information = 2,
		Warning = 3,
		Error = 4
	}
}

namespace DomainCommons.Correlations
{
	public interface ICorrelator
	{
		string GetCorrelationId();
		int GetLogicalTime();
		void UpdateLogicalTime(int newTime);
		string Activity { get; set; }
	}
}

Since we need to maintain logical time (Lamport’s happened-before) and the correlation ID, we have an interface to store this data. We also store the activity, which represents the currently executed action.

We also need a bunch of constants for header and context item names:

namespace DomainCommons.Loggers
{
	public static class Constants
	{
		public static readonly string CorrelationIdHeader = "X-Correlation-ID";
		public static readonly string CorrelationCounterHeader = "X-Correlation-Counter";
		public static readonly string CorrelationIdItem = "CorrelationId";
		public static readonly string LoggerItem = "Logger";
	}
}

This is it. Now we need to implement interfaces. Let’s start with correlators.

Correlators

We basically have two entry points: one for the web application and one for the web job. The web application should get the correlation ID from request headers or create a new one. The web job always needs to create the ID on its own. Let’s start with the base correlator:

using System;
using System.Threading;

namespace DomainCommons.Correlations
{
	public abstract class Correlator : ICorrelator
	{
		private int _logicalTime;

		public abstract string GetCorrelationId();
		public abstract string Activity { get; set; }

		public int GetLogicalTime()
		{
			return Interlocked.Increment(ref _logicalTime);
		}

		public void UpdateLogicalTime(int newTime)
		{
			int currentTime, finalTime;

			do
			{
				currentTime = _logicalTime;
				finalTime = Math.Max(currentTime, newTime);
			} while (Interlocked.CompareExchange(ref _logicalTime, finalTime, currentTime) != currentTime);
		}

	}
}

The only logic here is for maintaining logical time. Since we don’t want locks degrading performance, we use interlocked operations to update the logical time.
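UpdateLogicalTime is effectively a lock-free “monotonic max”: the clock can only move forward, even under concurrent updates. The same pattern in isolation, with an early exit added to skip needless CAS attempts:

```csharp
using System.Threading;

public static class AtomicMax
{
	// Lock-free "advance to max" via compare-and-swap; never moves the value backwards.
	public static void Update(ref int target, int candidate)
	{
		int current;
		do
		{
			current = Volatile.Read(ref target);
			if (candidate <= current)
			{
				return; // already up to date, skip the CAS
			}
		} while (Interlocked.CompareExchange(ref target, candidate, current) != current);
	}
}
```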

Let’s implement correlator for web applications:

using System;
using System.Web;
using DomainCommons.Correlations;
using DomainCommons.Loggers;

namespace AzureCommons.Correlations
{
	public class RequestHeadersCorrelator : Correlator
	{
		private readonly Lazy<string> _correlationId;

		public RequestHeadersCorrelator()
		{
			_correlationId = new Lazy<string>(GenerateCorrelationId);

			var logicalTime = HttpContext.Current?.Request.Headers[Constants.CorrelationCounterHeader];
			if (logicalTime == null)
			{
				return;
			}

			UpdateLogicalTime(int.Parse(logicalTime));
		}

		public override string GetCorrelationId()
		{
			return _correlationId.Value;
		}

		public override string Activity { get; set; }

		private string GenerateCorrelationId()
		{
			var id = (string)HttpContext.Current?.Items[Constants.CorrelationIdItem] ??
				   HttpContext.Current?.Request.Headers[Constants.CorrelationIdHeader] ??
				   Guid.NewGuid().ToString();

			if (HttpContext.Current != null)
			{
				HttpContext.Current.Items[Constants.CorrelationIdItem] = id;
			}

			return id;
		}
	}
}

The implementation is rather straightforward. The correlation ID and logical time are extracted from headers, or generated if they are absent. The activity is not set yet; it will be done later.

We also need to implement correlator for web jobs. This is pretty easy since it doesn’t need to bother with headers and always generates data on start:

using System;
using DomainCommons.Correlations;

namespace AzureCommons.Correlations
{
	public class MemoryBasedCorrelator : Correlator
	{
		private readonly string _correlationId = Guid.NewGuid().ToString();

		public override string GetCorrelationId()
		{
			return _correlationId;
		}
		public override string Activity { get; set; }
	}
}

Loggers

We have a similar case with loggers. Let’s start with the base class:

using System;
using System.Globalization;
using System.Threading;
using DomainCommons.Correlations;

namespace DomainCommons.Loggers
{
	public abstract class Logger : ILogger
	{
		private readonly string _loggerId;

		public Logger(ICorrelator correlator)
		{
			Correlator = correlator;
			_loggerId = Guid.NewGuid().ToString();
		}

		public void Log(LogLevel level, string message)
		{
			var segments = new object[]
			{
				Timestamp,
				ApplicationName,
				InstanceId,
				Thread.CurrentThread.ManagedThreadId,
				Correlator.GetCorrelationId(),
				level,
				Correlator.Activity,
				Correlator.GetLogicalTime(),
				_loggerId
			};
			LogWithNewLine($"\n[{string.Join("][", segments)}]\n{message}\n");
		}

		public ICorrelator Correlator { get; }
		protected virtual string Timestamp => DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture);
		protected abstract void LogWithNewLine(string message);
		protected abstract string ApplicationName { get; }
		protected abstract string InstanceId { get; }
	}
}

The method for logging a message adds the infrastructure details. The only interesting thing is why we have two IDs: correlation and logger. This is because Sitefinity internally might create multiple requests for each widget, which in fact are completely separate (each widget does its job on its own) but are part of one page (and so they should be correlated). You can treat this as a scope or something similar.

We also add fancy new lines before and after the message because it makes parsing logs easier. Since we want to push logs to Azure OMS, we need to parse them in a job handler (which will be shown in one of the next parts).
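For illustration, a single entry written by Log would look roughly like this (all values are made up): timestamp, application name, instance ID, managed thread ID, correlation ID, level, activity, logical time, logger ID, then the message on its own line.

```
[2017-12-16 09:00:00.123][Cms][6b6e1f2a][42][0f8fad5b-d9cb-469f-a165-70867728950e][Error][Checkout][17][7c9e6679-7425-40de-944b-e07fc1f90ae7]
Unhandled exception: System.InvalidOperationException ...
```

The surrounding blank lines and the bracketed prefix are exactly what the parsing job will key on later.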

Let’s go with logger for Azure:

using System;
using DomainCommons.Correlations;
using DomainCommons.Loggers;
using Microsoft.Cis.Eventing;

namespace AzureCommons.Loggers
{
	public abstract class AzureLogger : Logger
	{
		protected AzureLogger(string applicationName, ICorrelator correlator) : base(correlator)
		{
			ApplicationName = applicationName;
		}

		protected override string InstanceId => Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID");
		protected override string ApplicationName { get; }
		
		public void MissingListenerDllHack()
		{
			// Hack for missing MonAgentListener.dll
			var messageEvent = new MessageEvent();
		}
	}

}

We extract the node ID from environment variables set automatically by the Azure platform. We also have a hack for the missing MonAgentListener.dll file, which sometimes is not copied to the output directory. This library is required for the Azure trace listener and is part of the Azure SDK, so if your build server doesn’t have the SDK installed, you need to add the DLL manually to your solution and make sure it is copied to the bin directory.

Now, logger for web applications:

using System.Diagnostics;
using ConfigurationCommons.Configurations;
using DomainCommons.Correlations;

namespace AzureCommons.Loggers
{
	[RegisterManually]
	public class WebAppLogger : AzureLogger
	{
		public WebAppLogger(string applicationName, ICorrelator correlator) : base(applicationName, correlator)
		{
		}

		protected override void LogWithNewLine(string message)
		{
			Trace.WriteLine(message);
		}
	}
}

Web apps have no way to log to the Azure infrastructure other than through a trace listener, so we simply use the Trace class. Also, since I register lots of stuff by convention, I mark this logger as registered manually with an attribute. This is only a matter of your DI container and conventions.

The web job logger looks a bit different, since it should use the TextWriter injected by the framework:

using System.IO;
using ConfigurationCommons.Configurations;
using DomainCommons.Correlations;

namespace AzureCommons.Loggers
{
	[RegisterManually]
	public class WebJobLogger : AzureLogger
	{
		private readonly TextWriter _textWriter;

		public WebJobLogger(TextWriter textWriter, string applicationName, ICorrelator correlator) : base(applicationName, correlator)
		{
			_textWriter = textWriter;
		}

		protected override void LogWithNewLine(string message)
		{
			_textWriter.WriteLine(message);
		}
	}
}

And that’s all.

Summary

We have our loggers and correlators; we are ready to use them in production code. In the next part we will see how to create them automatically with various action filters.

Sitefinity Part 4 — Turning on Redis in Sitefinity in Azure https://blog.adamfurmanek.pl/2017/11/18/turning-on-redis-in-sitefinity-in-azure/ Sat, 18 Nov 2017 09:00:46 +0000

This is the fourth part of the Sitefinity series. For your convenience you can find other parts in the table of contents in Sitefinity Part 1 — Capturing logs

Last time we saw how to change the database connection string for Sitefinity, which can be useful if we need to resolve it at runtime. In the same manner we can enable Redis, which is required when running Sitefinity in Azure (and is not needed on a developer machine). First, add the following to AssemblyInfo.cs:

// Sitefinity version: 10.0.6411.0
// Override configuration files
[assembly: PreApplicationStartMethod(typeof(LoadBalancerConfiguration), "OverrideConnectionString")]

And now the code that modifies the SystemConfig.config file with Redis settings:

using System.IO;
using System.Web.Hosting;
using System.Xml.Linq;

namespace Cms
{
	public class LoadBalancerConfiguration
	{
		public static void OverrideConnectionString()
		{
			var configurationFilePath = HostingEnvironment.MapPath("~/App_Data/Sitefinity/Configuration/SystemConfig.config");

			// Resolve your Redis connection string here (e.g. from app settings or Key Vault)
			var redisConnectionString = "YOUR_CONNECTION_STRING";

			if (string.IsNullOrEmpty(redisConnectionString))
			{
				return;
			}

			var document = XDocument.Load(configurationFilePath);

			var systemConfig = document.Element("systemConfig");
			var loadBalancingConfig = systemConfig.Element("loadBalancingConfig");
			if (loadBalancingConfig == null)
			{
				loadBalancingConfig = new XElement("loadBalancingConfig");
				systemConfig.Add(loadBalancingConfig);
			}

			var redisSettings = loadBalancingConfig.Element("redisSettings");
			if (redisSettings == null)
			{
				redisSettings = new XElement("redisSettings");
				loadBalancingConfig.Add(redisSettings);
			}

			var connectionString = redisSettings.Attribute("ConnectionString");
			if (connectionString == null)
			{
				connectionString = new XAttribute("ConnectionString", "");
				redisSettings.Add(connectionString);
			}

			connectionString.Value = redisConnectionString;

			File.WriteAllText(configurationFilePath, document.ToString());
		}
	}
	}
}

Tested with Sitefinity version: 10.0.6411.0.

Sitefinity Part 3 — Dynamically changing database connection string in Sitefinity https://blog.adamfurmanek.pl/2017/11/11/dynamically-changing-database-connection-string-in-sitefinity/ Sat, 11 Nov 2017 09:00:48 +0000

This is the third part of the Sitefinity series. For your convenience you can find other parts in the table of contents in Sitefinity Part 1 — Capturing logs

Sitefinity stores its database connection string in App_Data\Sitefinity\Configuration\DataConfig.config, so you can easily modify the connection string there. But what if you want to resolve the connection string at runtime, e.g., read it from Azure Key Vault when the application starts?

First, you need to execute some code before the actual web app starts. Add the following to AssemblyInfo.cs:

// Override configuration files
[assembly: PreApplicationStartMethod(typeof(DatabaseConfiguration), "OverrideConnectionString")]

This will run your code before anything related to Sitefinity gets a chance to run, so you can override the connection string in the file:

using System.Web.Hosting;
using System.Xml.Linq;
using System.IO;

namespace Cms
{
    public class DatabaseConfiguration
    {
        public static void OverrideConnectionString()
        {
            var configurationFilePath = HostingEnvironment.MapPath("~/App_Data/Sitefinity/Configuration/DataConfig.config");

            var document = XDocument.Load(configurationFilePath);
            var element = document
                .Element("dataConfig")
                .Element("connectionStrings")
                .Element("add");
            element.Attribute("connectionString").Value = "YOUR_CONNECTION_STRING";
            element.Attribute("dbType").Value = "DB_TYPE_EG_MsSql";

            File.WriteAllText(configurationFilePath, document.ToString());
        }
    }
}

The file is modified before the application reads it, so the changes are visible without a restart.
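"YOUR_CONNECTION_STRING" is of course a placeholder; in practice you would resolve it at this point, for instance from an environment variable (the setting name below is made up) or from Key Vault:

```csharp
using System;

// Sketch: resolving the real connection string instead of hardcoding it.
var connectionString =
	Environment.GetEnvironmentVariable("SITEFINITY_DB_CONNECTION_STRING")
	?? throw new InvalidOperationException("Database connection string is not configured");
```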

Tested with Sitefinity version: 10.0.6411.0.

Sitefinity Part 2 — Dependency injection in Sitefinity https://blog.adamfurmanek.pl/2017/11/04/dependency-injection-in-sitefinity/ Sat, 04 Nov 2017 09:00:07 +0000

This is the second part of the Sitefinity series. For your convenience you can find other parts in the table of contents in Sitefinity Part 1 — Capturing logs

In order to use DI in Sitefinity you need to configure two mechanisms: one for Feather widgets and one for everything else. Let’s begin with widgets.

Feather widgets

We would like to have dependencies injected into widgets’ controllers (via constructors). Sitefinity uses Ninject to do that, so we need to plug our configuration into it.

First, we need to hook into the bootstrapping process. We add the following to Global.asax.cs:

using Telerik.Sitefinity.Abstractions;
using Telerik.Sitefinity.Data;

namespace Cms
{
	using System;

	public class Global : System.Web.HttpApplication
	{

		protected void Application_Start(object sender, EventArgs e)
		{
			Bootstrapper.Bootstrapped += Bootstrapper_Bootstrapped;
		}

		void Bootstrapper_Bootstrapped(object sender, EventArgs e)
		{
			NinjectControllerFactory.RegisterControllerFactory();
		}
	}
}

Sitefinity will raise this event during initialization. Now we need to configure Ninject:

using System;
using System.Web.Mvc;
using System.Web.Routing;
using Ninject;
using Telerik.Microsoft.Practices.Unity;
using Telerik.Sitefinity.Abstractions;
using Telerik.Sitefinity.Frontend.Mvc.Infrastructure.Controllers;
using Telerik.Sitefinity.Mvc;

namespace Cms
{
	public class NinjectControllerFactory : FrontendControllerFactory
	{
		private readonly IKernel _kernel = Telerik.Sitefinity.Frontend.FrontendModule.Current.DependencyResolver;

		protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
		{
			if (controllerType == null)
			{
				return null;
			}

			var controller = _kernel.Get(controllerType);

			return controller as IController;
		}

		private static void RegisterDependencies(IKernel dependencyResolver)
		{
			// Register your dependencies here:
			// dependencyResolver.Bind<IHaveASandwich>().To<QuarterPounderWithCheese>();
		}

		public static void RegisterControllerFactory()
		{
			RegisterDependencies(Telerik.Sitefinity.Frontend.FrontendModule.Current.DependencyResolver);
			ObjectFactory.Container.RegisterType<ISitefinityControllerFactory, NinjectControllerFactory>(new ContainerControlledLifetimeManager());
			var factory = ObjectFactory.Resolve<ISitefinityControllerFactory>();
			ControllerBuilder.Current.SetControllerFactory(factory);
		}
	}
}

Now our controllers will be created by the factory presented above.

Other services

If you need to create other stuff (e.g., a membership provider), you need to register dependencies in the Unity container (yes, another DI container). First, modify Global.asax.cs:

using Telerik.Sitefinity.Abstractions;
using Telerik.Sitefinity.Data;

namespace Cms
{
	using System;

	public class Global : System.Web.HttpApplication
	{

		protected void Application_Start(object sender, EventArgs e)
		{
			UnityRegistrations.RegisterAllDependencies();
		}
	}
}

And now simply register everything:

using Telerik.Microsoft.Practices.Unity;
using Telerik.Sitefinity.Abstractions;

namespace Cms
{ 
    public static class UnityRegistrations
    {
        public static void RegisterAllDependencies()
        {
            ObjectFactory.Initializing += (s, ev) => { RegisterDependencies(); };
        }

        private static void RegisterDependencies()
        {
            // Here register your dependencies
            // ObjectFactory.Container.RegisterType<IHaveASandwich, QuarterPounderWithCheese>();
        }
    }
}

That’s all.

Tested with Sitefinity version: 10.0.6411.0.

Sitefinity Part 1 — Capturing Sitefinity logs and pushing them to Azure Storage https://blog.adamfurmanek.pl/2017/10/28/capturing-sitefinity-logs-and-pushing-them-to-azure-storage/ Sat, 28 Oct 2017 08:00:35 +0000

This is the first part of the Sitefinity series. For your convenience you can find other parts using the links below (or by guessing the address):
Part 1 — Capturing logs
Part 2 — Dependency Injection
Part 3 — Changing connection string
Part 4 — Turning on Redis

Let’s assume that we have a Sitefinity instance properly configured to work in Azure (which includes using an Azure SQL database and an Azure Redis instance). Now there is a question: how do we capture logs and send them to Azure Storage?

By default Sitefinity logs everything to files in the App_Data\Sitefinity\Logs directory. We can easily use Azure logging facilities (which can be enabled in the Azure Portal under Diagnostic logs), but they require logging through the Trace class. In order to do that, we need to implement a custom listener and configure Sitefinity to use it. Let’s go.

Implementation

First, we need to register for the Sitefinity bootstrapping event which fires when the actual logging configuration is created. In Global.asax.cs add the following:

using Telerik.Sitefinity.Abstractions;
using Telerik.Sitefinity.Data;

namespace Cms
{
	using System;

	public class Global : System.Web.HttpApplication
	{

		protected void Application_Start(object sender, EventArgs e)
		{
			ObjectFactory.Initialized += ConfigInitialize;
		}

		private void ConfigInitialize(object s, ExecutedEventArgs args)
		{
			if (args.CommandName == "ConfigureLogging")
			{
				LoggingConfig.ReplaceBuiltInTraceListenersWithCustom(args);
			}
		}
	}
}

Sitefinity uses logging classes from Microsoft Enterprise Library (Entlib) to manage logs. We can extract the configuration at runtime and replace the built-in listeners with custom ones:

using System.Linq;
using Telerik.Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Telerik.Microsoft.Practices.EnterpriseLibrary.Logging.Configuration;
using Telerik.Sitefinity.Data;
using CustomTraceListenerData = Cms.Logging.CustomTraceListenerData;

namespace Cms
{
	public static class LoggingConfig
	{
		public static void ReplaceBuiltInTraceListenersWithCustom(ExecutedEventArgs args)
		{
			var traceListeners = GetSitefinityTraceListeners(args);
			var listenerNames = traceListeners.Select(t => t.Name).ToArray();

			foreach (var name in listenerNames)
			{
				traceListeners.Remove(name);

				var listenerAdapter = new CustomTraceListenerData(name);

				traceListeners.Add(listenerAdapter);
			}
		}

		private static TraceListenerDataCollection GetSitefinityTraceListeners(ExecutedEventArgs args)
		{
			var builder = args.Data as ConfigurationSourceBuilder;
			return ((LoggingSettings)builder.Get("loggingConfiguration")).TraceListeners;
		}
	}
}

We extract the listeners from Sitefinity internals. It is a named collection describing how to create concrete loggers. There are loggers for errors (which by default log to the Error.log file), debug, trace, etc. Since we would like to redirect all logs to Azure Storage, we need to remove all existing configurations and inject ours. To do that, we iterate over all loggers, remove them one by one, and create custom loggers with the same names.

The actual listener looks as follows:

using System;
using System.Diagnostics;
using System.Linq.Expressions;
using Telerik.Microsoft.Practices.EnterpriseLibrary.Logging.Configuration;

namespace Cms.Logging
{
	public class CustomTraceListenerData : TraceListenerData
	{
		public CustomTraceListenerData(string name)
			: base(
				name, typeof (CustomTraceListener),
				TraceOptions.Callstack | TraceOptions.DateTime | TraceOptions.ProcessId | TraceOptions.ThreadId |
				TraceOptions.Timestamp | TraceOptions.LogicalOperationStack, SourceLevels.All)
		{
			ListenerDataType = typeof (CustomTraceListener);
		}

		protected override Expression<Func<TraceListener>> GetCreationExpression()
		{
			return () => new CustomTraceListener();
		}
	}
}

using System.Diagnostics;

namespace Cms.Logging
{
	public class CustomTraceListener : TraceListener
	{
		public override void Write(string message)
		{
			Trace.Write(message);
		}

		public override void WriteLine(string message)
		{
			Trace.WriteLine(message);
		}
	}
}

In the configuration class we enable all trace options and all severity levels. The actual listener simply redirects messages to the Trace class.
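The listener above forwards only plain messages. Entlib typically delivers a full LogEntry through TraceData, so if you also want severity and timestamp in the forwarded output, a possible extension could look like this (a sketch only, assuming the Telerik-forked Entlib LogEntry type with its Message, Severity, and TimeStamp properties; VerboseTraceListener is a hypothetical name):

using System.Diagnostics;
using Telerik.Microsoft.Practices.EnterpriseLibrary.Logging;

namespace Cms.Logging
{
	public class VerboseTraceListener : CustomTraceListener
	{
		public override void TraceData(TraceEventCache eventCache, string source,
			TraceEventType eventType, int id, object data)
		{
			var entry = data as LogEntry;
			if (entry != null)
			{
				// Include severity and timestamp in the forwarded message
				Trace.WriteLine(string.Format("[{0}] {1:u} {2}",
					entry.Severity, entry.TimeStamp, entry.Message));
			}
			else
			{
				base.TraceData(eventCache, source, eventType, id, data);
			}
		}
	}
}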

Now we need to configure logging to Azure Storage. First, we enable it in the Azure Portal. Next, we need to add a trace listener for Azure. We add the following to web.config:

<system.diagnostics>
    <trace>
      <listeners>
        <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.8.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
          <filter type="" />
        </add>
        <add name="FileDiagnostics" type="System.Diagnostics.TextWriterTraceListener" initializeData="App_Data/Sitefinity/Logs/Log.txt" />
      </listeners>
    </trace>
  </system.diagnostics>

We add two listeners: one for Azure and another that logs to disk. The latter is useful when running the application locally; it redirects all logs (errors, SQL changes, etc.) to one file.

You might also need to add the following to web.config:

<compilation debug="true" targetFramework="4.7" numRecompilesBeforeAppRestart="2000">
      <assemblies>
        <add assembly="System.Runtime, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
      </assemblies>
</compilation>

Now you can also configure Log Analytics and have Sitefinity logs in OMS.

Tested with Sitefinity version: 10.0.6411.0.

]]>
https://blog.adamfurmanek.pl/2017/10/28/capturing-sitefinity-logs-and-pushing-them-to-azure-storage/feed/ 0