
Saturday, February 4, 2012

Choosing technologies for .NET project

Our latest research and development project was an online banking application. While choosing the building blocks of this application, we tried to pick state-of-the-art frameworks and technologies. This is not an easy task, since there are always several alternatives for each component. I have decided to create this post, which sums up the technologies available for the different parts of an application, and I will try to update it regularly to keep up with the changes.

Here is the structure of this post, according to which the technologies are grouped.

  • Data Access - ORM, data generation
  • Platform - Dependency Injection, Aspect Oriented Programming
  • Integration - SOAP/REST, messaging, distributed objects...
  • Testing - Unit testing and Mocking, Parametrized testing, Functional testing
  • Presentation layer
  • Security
  • Logging


Typical application


Our application was a classical 3-tier application with database, business and presentation layers.
The data is stored in SQL Server 2008. The data access layer is implemented with the Repository pattern on top of an ORM. Dependency Injection and Aspect Oriented Programming are used to put the application pieces together. The services are exposed using WCF and consumed by two types of client applications: mobile and web.

The technologies presented here are therefore the ones mostly used in this scenario. However, as said before, I would like to update the post any time I come across another technology, which might happen while working on different architectures.

Data Access

The most important part of the Data Access layer is the framework used for Object Relational Mapping (ORM). There are currently two major ORM frameworks in .NET: NHibernate and Entity Framework. Both provide similar ORM functionality (code-only approach, lazy loading, use of POCOs as persistence classes).

Entity Framework 4.0 has brought a lot of improvements over its previous version (EF 1.0), which did not provide the above-mentioned functionality, and it is now comparable to NHibernate. Crucial for an ORM framework in the .NET environment is the integration of LINQ (Language Integrated Query). Entity Framework was the first to offer this functionality, but the implementation in NHibernate followed shortly after.

NHibernate still has several advantages, among them better support for batch processing and the fact that, as an open source product, it can be customized. On the other hand, Entity Framework provides better tools integrated into Visual Studio.
One last thing which can justify the choice of NHibernate is the possibility of using Fluent NHibernate.

FluentNHibernate
NHibernate uses its XML-based HBM format to define the mappings between entities and POCOs. While the separation of code and configuration in XML can be seen as a nice approach, it gets complicated once the XML configuration files grow larger and once we start introducing changes into the POCOs. The XML is not checked at compilation, so potential errors are detected at run-time only and are generally hard to localize.
Fluent NHibernate allows us to define the mappings in strongly-typed C#, which practically eliminates these issues. If there is an error in the configuration, it will most likely be discovered during compilation. Fluent NHibernate currently provides almost full compatibility with HBM files, which means that whatever can be defined in HBM can also be defined fluently.
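To illustrate, here is a minimal Fluent NHibernate mapping; the Account entity and the Accounts table are hypothetical examples, not part of our project:

```csharp
using FluentNHibernate.Mapping;

public class Account
{
    public virtual int Id { get; set; }
    public virtual string Number { get; set; }
    public virtual decimal Balance { get; set; }
}

public class AccountMap : ClassMap<Account>
{
    public AccountMap()
    {
        // Each call below replaces what would otherwise live in an HBM XML file,
        // so a typo in a property name fails at compile time instead of run time.
        Id(x => x.Id);
        Map(x => x.Number);
        Map(x => x.Balance);
        Table("Accounts");
    }
}
```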

Data Generation
AutoPoco is a simple framework which allows generation of POCOs (Plain Old CLR Objects) with meaningful values. When building an enterprise application we often need to generate initial data for the database. This can of course be done using SQL scripts or in the imperative language we are using, but it consists of lots of repetitive code and for-loops in order to create a sufficient amount of data. AutoPoco provides an easy way to generate the starting data. It also provides several built-in sources for common properties stored in databases, such as phone numbers, birth dates, names and credit card numbers.
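A sketch of what this looks like; the User class is hypothetical and I am writing the AutoPoco API from memory, so details may differ between versions:

```csharp
using System.Collections.Generic;
using AutoPoco;
using AutoPoco.Engine;
using AutoPoco.DataSources;

public class User
{
    public string FirstName { get; set; }
    public string EMail { get; set; }
}

public static class TestData
{
    public static IEnumerable<User> Generate()
    {
        IGenerationSessionFactory factory = AutoPocoContainer.Configure(x =>
        {
            x.Conventions(c => c.UseDefaultConventions());
            x.AddFromAssemblyContainingType<User>();
            // Built-in sources supply realistic first names and e-mail addresses.
            x.Include<User>()
             .Setup(u => u.FirstName).Use<FirstNameSource>()
             .Setup(u => u.EMail).Use<EmailAddressSource>();
        });

        IGenerationSession session = factory.CreateSession();
        // 100 users with meaningful values instead of a hand-written for-loop
        return session.List<User>(100).Get();
    }
}
```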

Platform

There are two design patterns (or approaches) which are very often present among the several layers of enterprise applications: Dependency Injection and Aspect Oriented Programming.

Dependency Injection is used to assemble a complex system from existing blocks. There are several Dependency Injection containers available for the .NET framework: Spring.NET, Castle Windsor, StructureMap, Autofac, Ninject, Unity (by Microsoft), LinFu.

Aspect Oriented Programming allows developers to separate cross-cutting concerns from the application's building blocks. This is usually done by injecting code into an object's existing methods.
There are several ways to implement AOP, two of them being the most common: proxy-based AOP and IL-weaving-based AOP.

Proxy-based AOP is easily achieved by wrapping the targeted object in a proxy class. It is then easy to intercept calls to the target object in the proxy class and invoke the code which should be injected. It just so happens that Dependency Injection containers use proxy classes, and therefore most of them also offer AOP (Spring.NET, Castle Windsor).

IL weaving is a term for injecting additional IL instructions into the compiled assembly, typically as a post-compilation step.

There are two frameworks which provide AOP through IL weaving: PostSharp and LinFu. PostSharp has a commercial licence; however, at the time of writing this post (July 2011), there is also a 45-day free trial. LinFu is an open source project under the LGPL licence which covers both IoC and AOP.

I used to choose Spring.NET because of its maturity, the fact that it is well documented, works great with NHibernate and offers both AOP and Dependency Injection. One of the disadvantages of Spring.NET is the XML configuration, which, as always, can become too large to maintain. Other frameworks use C# to configure the AOP or Dependency Injection (PostSharp makes use of attributes, and frameworks such as Ninject or StructureMap use strongly typed classes to configure the dependency injection container).

I have, however, decided to use Ninject on my last project, since it seems to have a bit of momentum right now, and I will post the pros and cons here later.
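For comparison with Spring.NET's XML, a minimal Ninject configuration looks like this; the IAccountRepository / NHibernateAccountRepository pair is a hypothetical example of a data access dependency:

```csharp
using Ninject;
using Ninject.Modules;

public interface IAccountRepository { }
public class NHibernateAccountRepository : IAccountRepository { }

// A module is the strongly-typed equivalent of an XML <object> definition file.
public class DataModule : NinjectModule
{
    public override void Load()
    {
        Bind<IAccountRepository>().To<NHibernateAccountRepository>().InSingletonScope();
    }
}

public static class Program
{
    public static void Main()
    {
        var kernel = new StandardKernel(new DataModule());
        // The container resolves the concrete type; a missing binding
        // fails fast at resolution time rather than silently at run time.
        IAccountRepository repo = kernel.Get<IAccountRepository>();
    }
}
```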

Code Verification (Code Contracts)
Design by Contract is a software design approach in which developers define clear interfaces for each software component, specifying its exact behavior. The interfaces are defined by contracts and extend the possibilities of code verification and validation.
The term was first used by Bertrand Meyer, who made it part of his Eiffel programming language.

Code Contracts is a language-agnostic framework which enables the Design-by-Contract approach by allowing the programmer to define three types of conditions for each method:
Pre-conditions - state what form the arguments of the method must have.
Post-conditions - state what form the outputs of the method will have.
Invariants - conditions which will always hold during the execution of the method.

These conditions can be later verified by two types of checks:
Static checking - is done at compile time. At this point the compiler does not know what values will be passed as arguments to the methods, but from the execution tree it can determine which method calls might potentially be invoked with non-compliant parameters.
Runtime checking - the code contracts are compiled as conditions directly into the .NET byte-code. This saves the programmer from writing the checks manually inside the method bodies.

Note that Code Contracts are not a language feature. They are composed of a class library and checking tools which are available as plugins for Visual Studio.
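The three condition types can be sketched with the System.Diagnostics.Contracts library (.NET 4); the Account class is a hypothetical example, and note that the checks are only enforced when the contracts rewriter/checker tools are enabled:

```csharp
using System.Diagnostics.Contracts;

public class Account
{
    private decimal balance;

    [ContractInvariantMethod]
    private void Invariant()
    {
        // Invariant: must hold before and after every public method.
        Contract.Invariant(balance >= 0);
    }

    public decimal Withdraw(decimal amount)
    {
        // Pre-condition: constrains the arguments of the method.
        Contract.Requires(amount > 0 && amount <= balance);
        // Post-condition: constrains the value the method returns.
        Contract.Ensures(Contract.Result<decimal>() >= 0);

        balance -= amount;
        return balance;
    }
}
```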

Integration

Distributed applications need a way for their components to communicate. Remote Procedure Call (RPC) was the first technology used in distributed systems back in the 1970s. The choice here surely depends on the architecture of the application (client-server, publish-subscribe, ESB, and more).

WCF
A flexible platform which abstracts away the transport layer configuration (security, transport format, message patterns).
WCF options and choices:
Transport protocol: WCF can use HTTP, TCP or MSMQ.
Transport format: XML, JSON or binary.

One service can expose several Endpoints (URIs). Each Endpoint can be configured to use a different Binding, and Bindings can have different transport protocol and format options. The same service can thus be exposed using several protocols and formats. In our application we use this advantage and expose different endpoints for different clients.
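A sketch of such a multi-endpoint configuration; the MyApp.IBankService contract and the jsonBehavior endpoint behavior are hypothetical names assumed to be defined elsewhere in the web.config:

```xml
<service name="MyApp.BankService">
  <!-- SOAP/XML endpoint for the Silverlight and Java clients -->
  <endpoint address="soap" binding="basicHttpBinding"
            contract="MyApp.IBankService" />
  <!-- JSON endpoint for the mobile (REST) client -->
  <endpoint address="json" binding="webHttpBinding"
            behaviorConfiguration="jsonBehavior"
            contract="MyApp.IBankService" />
</service>
```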

Testing

Several types of tests can be used to confirm the correct behavior of the application: Unit Tests, Integration tests, smoke tests, functional tests (or acceptance tests).

Unit Testing
Mocking frameworks
When it comes to isolating unit tests, there are several mocking frameworks available: NMock, EasyMock, Moq, JustMock (commercial), TypeMock (commercial), RhinoMocks, NSubstitute, FakeItEasy and Moles.

In our application we have decided on RhinoMocks and Moles. Moles is used in connection with Pex - a test generation framework which will be described later.
Most of the mocking frameworks provide more or less the same functionality, so the decision is quite complicated. RhinoMocks has the following characteristics:
  • Free and Open Source
  • Easy to use
  • Active community
  • Compatible with Silverlight (existing port to Silverlight)
A possible disadvantage: three types of syntax, which might be confusing for beginners.
The current version is 3.6; version 4, which should break backwards compatibility, is in development, but unless I have missed something, there are no releases so far.
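A minimal example of the Arrange-Act-Assert syntax in RhinoMocks; the IUserService interface and the concrete values are hypothetical:

```csharp
using Rhino.Mocks;

public interface IUserService
{
    bool IsActive(string login);
}

public class UserServiceTest
{
    public void StubReturnsCannedValue()
    {
        // Arrange: generate a stub and record the expected behaviour.
        var service = MockRepository.GenerateStub<IUserService>();
        service.Stub(s => s.IsActive("john")).Return(true);

        // Act
        bool active = service.IsActive("john");

        // Assert: the canned value comes back without any real implementation.
        System.Diagnostics.Debug.Assert(active);
    }
}
```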


Pex & Moles - Parametrized Unit Testing
Pex & Moles are used to build unit tests for the back-end part. Pex is a tool which helps generate inputs for unit tests, while Moles enables the isolation of the tested code. In order for Pex to generate the inputs, the test cases have to be parametrized.

Instead of writing concrete test cases, the test method is just a wrapper which takes the same arguments as the tested method, performs the necessary set-up and then passes the arguments to the tested method. Pex analyses the execution tree of the tested method, suggests the parameters which should be passed to it and builds concrete test cases.

The aim of Pex is to obtain maximal code coverage. In order to achieve that, it uses an algebraic solver (Microsoft's Z3) to determine the values of the variables used in the method which will lead to the execution of each branch. It then varies the parameters to obtain these values.
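A parametrized test for Pex might look like this; the Calculator class is a hypothetical method under test, and the attributes come from Microsoft.Pex.Framework and MSTest:

```csharp
using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class Calculator
{
    public static int Divide(int a, int b)
    {
        if (b == 0) throw new System.ArgumentException("b must be non-zero");
        return a / b;
    }
}

[TestClass]
[PexClass(typeof(Calculator))]
public partial class CalculatorTest
{
    // Pex explores Divide's branches and generates concrete (a, b) pairs
    // as ordinary unit tests, including boundary cases such as b == 0.
    [PexMethod]
    public void Divide(int a, int b)
    {
        int result = Calculator.Divide(a, b);
        // Integer division identity: a == (a / b) * b + a % b
        Assert.AreEqual(a, result * b + a % b);
    }
}
```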

Moles is a stubbing framework. It allows you to isolate the parts of the code which you want to test from the other layers. There are basically two reasons to use Moles:
Moles works great with Pex. Because Pex explores the execution tree of your code, it also tries to enter all the mocking frameworks which you might use. This can be problematic, since Pex will generate inputs which cause exceptions inside those mocking frameworks. By contrast, Moles generates simple stubs of classes containing delegates for each method, which are completely customizable and transparent.
Moles allows you to stub static classes, including the ones of the .NET framework which are usually problematic to mock (typically DateTime, File, etc.).

As it says on the official web: "Moles allows you to replace any .NET method by a delegate". So before writing your unit test, you can ask Moles to generate the needed stubs for any assembly (yours or another) and then use these "moles" in your tests.
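The classic example is replacing DateTime.Now; the M-prefixed MDateTime type is generated by Moles from the System assembly, the Order class is hypothetical, and in practice the test must also run under the Moles host ([HostType("Moles")]):

```csharp
using System;
using System.Moles;

public class Order
{
    // Hard dependency on the system clock - normally untestable.
    public DateTime Created = DateTime.Now;
}

public class OrderTest
{
    public void CreatedDateIsTakenFromSystemClock()
    {
        // MDateTime.NowGet replaces the static DateTime.Now getter
        // with our own delegate for the duration of the test.
        MDateTime.NowGet = () => new DateTime(2011, 7, 8);

        var order = new Order();
        System.Diagnostics.Debug.Assert(order.Created == new DateTime(2011, 7, 8));
    }
}
```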

Presentation Layer

The presentation layer is quite a large topic with several choices: ASP.NET, ASP.NET MVC + JavaScript, pure HTML5 + JavaScript with some JS frameworks (jQuery, KnockoutJS), Silverlight - and all of these technologies can be combined.

Silverlight
Here is a list of characteristics, both strengths and limitations:
  • Intended for developing Rich Internet Applications.
  • Supports separation of the view and the logic using the MVVM pattern.
  • Possibility to use a declarative language (XAML) to design the user interface and an imperative language to define the application logic.
  • Data visualization support using the open source Silverlight Toolkit (charts, line series).
  • Re-usability of code on .NET compliant platforms.
  • Possibility to access audio and video devices on the client side.
  • Plug-in based technology. Requires the plug-in to run inside the browser. The plug-in is not available for all combinations of platform and browser. This lowers the availability of the developed application and also brings higher hardware requirements.
  • Standard web features such as navigation are missing.
  • Limited testability. Silverlight cannot be tested with traditional functional testing frameworks such as Selenium. On the other hand, when the MVVM pattern is applied, the ViewModels can be tested as simple classes using traditional unit testing technologies.

HTML + JavaScript
  • No plug-in needed, HTML5 is supported by the majority of current browsers.
  • Naturally comes with standard web features: navigation, bookmarking.
  • Developers have to handle the cross-browser compatibility issue.
  • Compared to C#, JavaScript is a dynamic language, not compiled before execution. This may be seen as both an advantage and a disadvantage.

Knockout.js seems to me a great way to use the MVVM pattern with JavaScript; I will be checking it out and writing about it later.

Logging

Logging is an essential part of each application. The following frameworks are available in .NET:

  • Log4Net - an easily configurable framework.
  • Logging in the MS Enterprise Library.
  • NLog - version 2.0 released in July 2011, including logging support for Windows Phone 7 and Silverlight - seems very nice, but I have never tried it.
  • The Objects Guy Logging Framework - a lightweight logging framework.
  • .NET built-in tracing - an alternative approach using the System.Diagnostics namespace, which enables output of the standard Trace and Debug Write methods to an XML file.
A good recapitulation of the logging options is available in this Stack Overflow thread.

Security

There is usually a need to handle user authentication in enterprise applications. When using ASP.NET, I have found that the standard Forms Authentication usually satisfies my needs. To handle OpenID authentication, DotNetOpenAuth is an excellent choice.

Forms Authentication
The Forms Authentication scheme works by issuing a token to the user the first time he authenticates. The user can be authenticated against a database or any other information source.

This token, in the form of a cookie, is added to the response which follows the authentication request. The client then attaches the cookie to each of its subsequent requests. Forms Authentication takes care of revoking the cookie (after the demanded time) as well as of checking the cookie in each request.

Forms Authentication works automatically with browser-based clients; when used from other clients, some additional work has to be done on the client side to add the authentication cookie to each request.

DotNetOpenAuth

I have previously used this library for two tasks: integrating OpenID authentication and creating an OAuth provider.

Integration of OpenID works hand in hand with Forms Authentication. The DotNetOpenAuth library provides a means to authenticate the user against any OpenID provider. Once the user is authenticated, the authentication cookie can be generated using Forms Authentication.
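The relying-party flow can be sketched roughly like this with DotNetOpenAuth; the OpenIdLogin class and method names are hypothetical wrappers, and error handling is omitted:

```csharp
using DotNetOpenAuth.OpenId.RelyingParty;
using System.Web.Security;

public class OpenIdLogin
{
    private static readonly OpenIdRelyingParty openid = new OpenIdRelyingParty();

    // Step 1: redirect the user to his OpenID provider.
    public void BeginLogin(string openIdIdentifier)
    {
        IAuthenticationRequest request = openid.CreateRequest(openIdIdentifier);
        request.RedirectToProvider();
    }

    // Step 2: on the return URL, check the provider's answer and hand over
    // to Forms Authentication by issuing the standard cookie.
    public void FinishLogin()
    {
        IAuthenticationResponse response = openid.GetResponse();
        if (response != null && response.Status == AuthenticationStatus.Authenticated)
        {
            FormsAuthentication.SetAuthCookie(response.ClaimedIdentifier.ToString(), false);
        }
    }
}
```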

Conclusion

When a new application is being developed, there are several decisions that have to be taken regarding the frameworks and technologies to be used. This article does not give direct answers to these questions, but rather lists the possible frameworks which should be taken into account.

New frameworks are being delivered by Microsoft and by the Open Source community, and it is hard to tell which technologies will hold on and which will be forgotten. I hope this overview can help to make the right decision. Any suggestions are welcome.

Friday, July 8, 2011

ASP.NET Forms Authentication and Java client

This post describes a situation I ran into recently. I have a Silverlight application which talks to traditional WCF services on the back-end. The services had so far been configured automatically - let's say Visual Studio took care of the web.config. The newest requirement for my application was to allow Java clients to consume these services.

The prerequisites for this post are some basic knowledge of WCF (bindings, services, endpoints) and some knowledge of Java (I am using Axis to generate the clients... for the first time).

To make it a bit more complicated: I was using Forms Authentication on the back-end side, since these services are hosted by IIS 7.

Here I want to show how to use Forms Authentication from a Java application, a mobile client or any other non-browser client.

The second part, which describes how to enable WCF services to be consumed by a Java client, is covered in my other post.

IIS 7 built-in Authentication Service

I was using the built-in authentication service in order to authenticate the client. It is just a basic service which offers methods such as Login, Logout, etc.
This service can be enabled on the IIS server using the following configuration:
<system.web.extensions>
  <scripting>
    <webServices>
      <authenticationService enabled="true" requireSSL="false"/>
    </webServices>
  </scripting>
</system.web.extensions>
And we also need to expose this service:
<service behaviorConfiguration="NeutralBehavior" name="System.Web.ApplicationServices.AuthenticationService">
    <endpoint address="" binding="basicHttpBinding" contract="System.Web.ApplicationServices.AuthenticationService" />
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
</service>
Now, that service works great from a Silverlight client, but I was not able to generate a Java client for it - I tried different versions of Axis and various settings, but it did not work for me.

So for the non-Silverlight clients I needed to write my own authentication service. That is actually pretty easy using the FormsAuthentication static class.
[OperationContract]
public void Login(String login, String password)
{
    // your way to authenticate the user against a DB or another source
    var user = UserService.AuthenticateUser(login, password);

    if (user != null)
    {
        FormsAuthentication.SetAuthCookie(login, true);
    }
}
After you check that the user's credentials are valid, you can just call the SetAuthCookie method. This method adds the authentication token to the response which goes back to the client. A browser then adds this token to any request which it sends to the server.
And here comes the problem: how to use this with non-browser based application?
Let me continue.

Services secured using PrincipalPermission

I use Forms Authentication because it allows me to secure all services just by adding the PrincipalPermission attribute to each service method. So my WCFUserService can look like this:
public class WCFUserService
{
  public WCFUserService()
  {
      Thread.CurrentPrincipal = HttpContext.Current.User;
  }

  [OperationContract]
  [PrincipalPermission(SecurityAction.Demand, Authenticated = true)]
  public Object GetSecuredData(int param)
  {
      return MyDB.GetData();
  }
}
In the constructor, the CurrentPrincipal is set to the current user of the ASP.NET application (again, we are hosting this service in IIS); the [PrincipalPermission] attribute then checks, even before the method is executed, whether the user is logged in.
And how is HttpContext.Current.User determined?
Simply by checking the authentication token which the browser adds to the request. IIS automatically validates this token and populates HttpContext.Current.User with the correct identity.

Adding one more authentication service for Java Clients

This is definitely not elegant, but it is the only way I was able to get it to work. Basically, when I call
FormsAuthentication.SetAuthCookie(login, true);
the cookie is added to the response and I have to extract it on the client (Java) side. Actually, I was not able to achieve that - I will describe the approach I took below, but I just did not get the cookie from the response. So I decided to build one more service which simply returns the authentication token (or cookie, if you will).
[OperationContract]
public String LoginCookie(String login, String password)
{
  var user = UserService.AuthenticateUser(login, password);
  if (user != null)
  {
      var cookie = FormsAuthentication.GetAuthCookie(login, true);
      return cookie.Value;
  }
  return null;
}
OK, that's it, we are done. We can almost switch to Java.

Accessing Authentication Service using the Axis generated client

Before we start, we need to generate the client. You can either use the built-in tool in Eclipse ("New -> Other -> Web Service Client") or the command-line WSDL2Java utility. In both cases you just have to enter the URL of the WSDL.
When the client is ready, you can see that there is quite a lot of code (about 10k lines) generated for you.
MyServiceLocator locator = new MyServiceLocator();
AuthService client = locator.getBasicHttpBinding_AuthService();
String cookie = client.LoginCookie("login","password");
That is quite simple: I am calling the method defined before, which gives me the authentication cookie. Remember that this "authentication service" stays open, so anybody can call its methods. Now that we have the cookie, we can use it to make calls to the protected services.
MyServiceLocator locator = new MyServiceLocator();
WCFUserService client = locator.getBasicHttpBinding_WCFUserService();
((Stub)client)._setProperty(Call.SESSION_MAINTAIN_PROPERTY,new Boolean(true));
((Stub)client)._setProperty(HTTPConstants.HEADER_COOKIE, ".ASPXAUTH=" + 
cookie);
Object data = client.GetSecuredData(myParam);
The generated client does not allow you to add cookies directly, but you can cast the client to org.apache.axis.client.Stub, which exposes the _setProperty method, and the static HTTPConstants class provides the names of the headers you can set.
Notice the ".ASPXAUTH=" part: that is the name of the cookie and it has to be there. It took me a while to find out in what exact form I should send the cookie; finally Fiddler came to help - I used the Silverlight client to see what exactly it was sending and just did the same.
What is a little bit sad is the fact that we had to create a special method for the Java client which returns the authentication token directly and not as a cookie.
I was thinking - it could not be that hard: generate a client and get the cookie. This way I could have a single authentication method used by both browser-based clients and Java clients. But I just did not manage to do that.

I will show an attempt which I did - but did not succeed.

Creating the client dynamically

The javax.xml.rpc (JAX-RPC) API provides classes which allow the creation of a web service client on the fly (without code generation). This has some advantages, especially that you can create a javax.xml.rpc.Service instance which allows the assignment of special handlers. These handlers are executed during the reception and sending of SOAP messages and can alter the content of a message, thus providing the possibility of some additional tuning.

Personally, I thought that I would be able to create my own handler to recover the authentication cookie sent the standard way. But I did not manage to get the cookie from the SOAP message. That is actually normal, because the cookie is not part of the SOAP message but of the HTTP message (which wraps the SOAP message). And that is the problem: I was not able to locate the cookie in the HTTP response message - does anyone know how to do that?

I will provide here a conception of my solution - maybe someone will be able to finalize and obtain the cookie from the response of the authentication service.
try {
  QName serviceName = new QName("http://mynamespace","AuthService");
  URL wsdlLocation = new URL("http://localhost:49830/WCFServices/WCFUserService.svc?wsdl");
  // Service
  ServiceFactory factory = ServiceFactory.newInstance();
  Service service =  factory.createService(wsdlLocation,serviceName);

  QName portName = new QName("http://localhost:49830/WCFServices/WCFUserService.svc?wsdl", "BasicHttpBinding_AuthService");

  // Add the handler to the handler chain of the port
  HandlerRegistry hr = service.getHandlerRegistry();
  List handlerChain = hr.getHandlerChain(portName);
  HandlerInfo hi = new HandlerInfo();
  hi.setHandlerClass(SimpleHandler.class);
  handlerChain.add(hi);

  QName operationName = new QName("http://localhost:49830/WCFServices/WCFUserService.svc?wsdl", "Login");
  Call call = service.createCall(portName,operationName);

  // call the operation
  Object resp = call.invoke(new java.lang.Object[] {"login","pass"});
}
To be able to call the web service dynamically, you need to specify the names of the service, the port and the operations; you can find these easily in the WSDL definition.
Here follows the definition of the SimpleHandler which is added to the handler chain:
public class SimpleHandler extends GenericHandler {
 
  HandlerInfo hi;
 
  public void init(HandlerInfo info) {
    hi = info;
  }

  public QName[] getHeaders() {
    return hi.getHeaders();
  }

  public boolean handleResponse(MessageContext context) {
    try {
     
     //Iterate over all properties - did not find the cookie there :(
     Iterator properties = context.getPropertyNames();
        while(properties.hasNext()){
         Object property = properties.next();
         System.out.println(property.toString());
        }
        
      //examine the response header - did not find the cookie there either :( 
      if(context.containsProperty("response")){
       Object response = context.getProperty("response");
       HttpResponse httpResponse = (HttpResponse)response;
       
       Header[] headers = httpResponse.getAllHeaders();
       for(Header header:headers){
        System.out.println(header.toString());
       }
      }
     
     //here is how to get the SOAP headers - they do not serve - we need pure HTTP response
      // get the soap header
      SOAPMessageContext smc = (SOAPMessageContext) context;
      SOAPMessage message = smc.getMessage();
      
    } catch (Exception e) {
      throw new JAXRPCException(e);
    }
    return true;
  }
  public boolean handleRequest(MessageContext context) { 
    return true;
  }
}


Alternative approach using WCF Inspectors

While looking into this problem, I found one alternative approach that you can use when dealing with security and WCF services.
The solution is basic:
  • Give up on FormsAuthentication
  • Define your own authentication tickets or just pass the login/pass combination on each request in the HTTP Header
  • Define a message inspector on the server which reads each message upon reception and checks for the authentication token or the credentials in the message header
When following this approach, what might come in handy is an easy way to generate and later verify the authentication ticket. FormsAuthentication can actually help you with this. Here is what happens when you call FormsAuthentication.GetAuthCookie:
FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, login, DateTime.Now, DateTime.Now.AddMinutes(30), false, login);
string encryptedTicket = FormsAuthentication.Encrypt(ticket);
HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket);
So you can create an inspector class which does the reverse of this process:
public class TestInspector : IDispatchMessageInspector
{
    public TestInspector()  { }
    
    public object AfterReceiveRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel, System.ServiceModel.InstanceContext instanceContext)
    {
        var httpRequest = (HttpRequestMessageProperty)request.Properties[HttpRequestMessageProperty.Name];
        var cookie = httpRequest.Headers[HttpRequestHeader.Authorization];
        if (cookie == null)
        {
            throw new SecurityException("Not authenticated!");
        }
        var ticket = FormsAuthentication.Decrypt(cookie);
        if (ticket.IsExpired)
        {
            throw new SecurityException("Ticket expired");
        }
        // no correlation state is needed
        return null;
    }

    public void BeforeSendReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
    {
        
    }
}

Securing the services using SSL

When we pass the authentication token over the wire, we want to be sure that no one can intercept the token and act in the name of the user against the services. To prevent this, we can use SSL to secure the whole communication between client and server.

The required WCF configuration is quite simple; we just have to alter the standard basicHttpBinding by adding the security mode.
<basicHttpBinding>
  <binding name="SecurityByTransport">
    <security mode="Transport">
      <transport clientCredentialType="None"/>
    </security>
  </binding>
</basicHttpBinding>

Then comes the infrastructure work:

  • Be sure to publish the service on your local IIS server (you cannot use the built-in Visual Studio server).
  • On the IIS server create a new certificate - for test purposes a self-signed one.
  • Configure a new binding for the application you have deployed, using the certificate you have created.
This should be enough. Now we need to go back to the Java client and regenerate it using Axis. When you run the client for the first time, you will get the following exception:
java client unable to find valid certification path to requested target
That is because the JVM maintains its own store of trusted certificates. A certificate signed by a trusted Certification Authority is accepted automatically. Because for testing you usually use a self-signed certificate, the JVM will not find it in its keystore, so it has to be added manually.

So: go back to the IIS 7 configuration, select the certificate in the list of certificates, and on the "Details" tab choose "Copy to File".
You can leave the predefined options and just save the ".CER" file wherever you want.

Now, to finish, you have to run the following command from the JAVA_HOME\bin directory:
keytool.exe -import -alias localhost -file C:\myCert.cer -keystore "c:\Program Files\Java\jre6\lib\security\cacerts"
  • localhost - stands for the web server which holds the certificate (your local IIS).
  • The cacerts file is the store of trusted certificates.
  • The default password is "changeit".

Summary

I tried to connect to secured WCF services hosted on an IIS server from a Java client. During the process I ran into some issues, but in the end I was able to connect securely to the services. The main steps are:
  • Don't use the IIS built-in Authentication Service.
  • Provide a service which returns the authentication cookie to the Java client.
  • Pass this cookie along with any request sent to the secured services.
In the end, I showed how to enable SSL on the WCF service and how to consume the service with a Java client.
And at last I presented an approach which could be taken to replace Forms Authentication with your own authentication scheme using WCF message inspectors.

Friday, October 15, 2010

RSA implementation using GMP library

Right now I am studying in Paris at one of the engineering schools here, and one of my last assignments in cryptography was to implement RSA in C. To achieve this, and to allow manipulation of big integers, I have used the GMP library, which is an open source library for arbitrary-precision arithmetic.

One part of the assignment was also an implementation of the Miller-Rabin primality test and of the right-to-left binary method for modular exponentiation. These two algorithms are already implemented in GMP, so if you just want to implement RSA you can use my source code and modify it to use the GMP functions instead of my implementations.
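For readers who do not want to dig into the C source, here is the right-to-left binary method sketched in C# with .NET's BigInteger (GMP exposes the same operation as mpz_powm); the example values are arbitrary:

```csharp
using System;
using System.Numerics;

public static class ModExp
{
    // Computes b^e mod m by scanning the exponent's bits from least
    // to most significant (right-to-left binary method).
    public static BigInteger PowMod(BigInteger b, BigInteger e, BigInteger m)
    {
        BigInteger result = 1;
        b %= m;
        while (e > 0)
        {
            // If the current lowest bit of the exponent is set,
            // multiply the result by the current power of the base.
            if (!e.IsEven)
                result = result * b % m;
            e >>= 1;
            b = b * b % m; // square the base for the next bit
        }
        return result;
    }

    public static void Main()
    {
        Console.WriteLine(PowMod(4, 13, 497)); // 4^13 mod 497 = 445
    }
}
```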

You can download the source code here.
If you speak French, you can also download my report, written in (poor) French.