Readify Developer Network – An “Agile” Agile Primer

I thought it would be nice to share this while I'm watching it; it's very interesting, especially the way the talk was organized. This is a live recording of a presentation given on 26-Feb-08 in Sydney as part of the Readify Developer Network. It is a presentation on agile methodologies (e.g. Scrum, XP) whose content was determined by the audience, covering questions such as upfront estimating, being agile in a non-agile company, scope management, and more.

Readify
Presenter: Richard Banks


Using PowerShell scripts for common SharePoint Server 2010 tasks

One of the great things introduced by SharePoint Server 2010 is PowerShell support. Previously, in SharePoint Server 2007, developers and SharePoint administrators used Stsadm, a command-line tool for SharePoint 2007. The benefits of using PowerShell over Stsadm are countless and include, but are not limited to, access to the powerful capabilities of PowerShell such as the file system and the registry; that's why Stsadm only remains in SharePoint 2010 for backward compatibility and is deprecated.

This is a sample PowerShell script that can be used to create a site collection, prompting the user for a URL, an owner, and a title:

Clear-Host #clear the screen
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction "SilentlyContinue" #load the SharePoint snap-in (-ErrorAction "SilentlyContinue" suppresses the error when the snap-in is already loaded, e.g. when running inside the PowerShell ISE)
Write-Host "Starting SharePoint SPSite Creator..." #output a starting string
$url = Read-Host "The URL of the SPSite to create?" #ask for user input
$owner = Read-Host "domain\user?" #ask for user input
$title = Read-Host "Site Title?" #ask for user input
Remove-SPSite -Identity $url -ErrorAction "SilentlyContinue" #remove the site if it already exists
$site = New-SPSite -Url $url -OwnerAlias $owner -Template "STS#1" -Name $title #create the site collection ("STS#1" is the Blank Site template)
Write-Host $site.RootWeb.Title " has been successfully created with url: " $site.Url #output a success string

You can simply write this using Notepad and save it with a ".ps1" extension and you are good to go; however, it's better to use the PowerShell ISE (PowerShell Integrated Scripting Environment) to get syntax highlighting, debugging, etc.


Convert Word documents to PDF and XPS documents in .NET

In a previous post, about two years ago, I talked about converting HTML documents to Microsoft Word documents using the Microsoft Office applications' COM-based object model, and it was a very nice and easy task.

But what about PDF? There have been tons of HTML-to-PDF converters out there for a long time; however, struggling with their feature lists, prices, and support plans is not fun at all. That's why, two years ago, I decided in a project I was working on to only export to Word and leave PDF generation for a later point.

However, things have changed now that Microsoft has added exporting to PDF and XPS documents to the feature list starting with Office 2010. Of course, we can use this in our applications by working with the Office applications' COM-based object model. This is a very handy method that converts a Word document to a PDF document:

public static void SaveAsPDF(string wordFilePath, string pdfFilePath)
{
    //requires: using System.IO; and using Microsoft.Office.Interop.Word;
    Application word = null;
    Document document = null;

    FileInfo srcFile = new FileInfo(wordFilePath);
    FileInfo destFile = new FileInfo(pdfFilePath);
    if (!srcFile.Exists)
    {
        throw new FileNotFoundException(wordFilePath + " doesn't exist.");
    }

    try
    {
        //start a hidden Word instance and open the source document
        word = new Application();
        word.Visible = false;
        document = word.Documents.Open(srcFile.FullName);
        document.Activate();

        //export the opened document as PDF
        document.ExportAsFixedFormat
            (
            destFile.FullName,
            Microsoft.Office.Interop.Word.WdExportFormat.wdExportFormatPDF
            );

        //close the document without saving changes and quit Word
        document.Close(false);
        word.Quit();
    }
    catch (Exception)
    {
        //try to clean up the hidden Word instance before rethrowing
        try
        {
            if (document != null) document.Close(false);
            if (word != null) word.Quit();
        }
        catch (Exception)
        {
            //nothing
        }
        throw;
    }
}
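
Calling it is straightforward; for example (the file paths below are only illustrative):

//hypothetical usage: convert an existing Word document to a PDF next to it
SaveAsPDF(@"C:\temp\report.docx", @"C:\temp\report.pdf");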

The only thing that we need to change if we want to export to XPS is this line:

Microsoft.Office.Interop.Word.WdExportFormat.wdExportFormatPDF //PDF
Microsoft.Office.Interop.Word.WdExportFormat.wdExportFormatXPS //XPS
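
If you need both output formats, one option is to let the caller choose the format. This is just a minimal sketch of that idea (the WordExporter class and SaveAs method are my own naming, not part of the Office object model), following the same flow as SaveAsPDF above:

using System.IO;
using Microsoft.Office.Interop.Word;

public static class WordExporter
{
    //hypothetical helper: same flow as SaveAsPDF above, but the caller picks the export format
    public static void SaveAs(string wordFilePath, string outputFilePath, WdExportFormat format)
    {
        if (!File.Exists(wordFilePath))
        {
            throw new FileNotFoundException(wordFilePath + " doesn't exist.");
        }

        Application word = null;
        Document document = null;
        try
        {
            word = new Application();
            word.Visible = false;
            document = word.Documents.Open(wordFilePath);

            //wdExportFormatPDF or wdExportFormatXPS
            document.ExportAsFixedFormat(outputFilePath, format);
        }
        finally
        {
            if (document != null) document.Close(false);
            if (word != null) word.Quit();
        }
    }
}

//usage (paths are illustrative):
//WordExporter.SaveAs(@"C:\temp\report.docx", @"C:\temp\report.xps", WdExportFormat.wdExportFormatXPS);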

Finally, one good thing I noticed but haven't tried out yet: you might not have to own and install Microsoft Office on your server to make use of this, since the Microsoft Office Primary Interop Assemblies are available for download for free. I would much appreciate it if someone tried this out and gave me their feedback.


How to render an ASP.NET control to an HTML string in code

Sometimes you will need to programmatically get the HTML markup of a server control or a user control that would normally be rendered to the HTTP response by the ASP.NET framework.

This can simply be done with something like this:

/// <summary>
/// Returns the generated HTML markup for a Control object
/// </summary>
private string RenderControl(Control control)
{
      StringBuilder sb = new StringBuilder();
      StringWriter sw = new StringWriter(sb);
      HtmlTextWriter writer = new HtmlTextWriter(sw);

      control.RenderControl(writer);
      return sb.ToString();
}

This is simple, but unfortunately not enough to help us unless you are already in the scope/life cycle of a System.Web.UI.Page and the control you are dealing with is already in the control tree of that page, which is not always the case.

One of the problems we will usually face, if not inside a page scope, is that the control we are dealing with may require being inside a runat="server" form. If this is the case, you will be facing an exception telling you that the control must be placed inside a form. You can fix that by creating a new Page and a new HtmlForm object and hooking things together like this:

//initialize a new page to host the control
Page page = new Page();

//create the runat="server" form that must host ASP.NET controls
HtmlForm form = new HtmlForm();
form.Name = "form1";
page.Controls.Add(form);
form.Controls.Add(control);
//call the RenderControl method to get the generated HTML
string html = RenderControl(control);

Then you will be facing the last exception: "RegisterForEventValidation can only be called during Render();". Actually, I couldn't fix this without disabling event validation on the newly created page, which in my case makes a lot of sense. You disable event validation on the page like this:

page.EnableEventValidation = false;

A complete solution

This is a complete solution that makes use of a Generic Handler that renders a .ascx user control as an HTML document:

using System;
using System.Web;
using System.Web.SessionState;
using System.Web.UI;
using System.Text;
using System.IO;
using System.Web.UI.HtmlControls;

namespace UserControlHandlerDemo.Web
{
    /// <summary>
    /// Renders a (.ascx) user control
    /// </summary>
    public class UserControlHandler : IHttpHandler, IRequiresSessionState
    {
        private const string BASE_DIRECTORY = "~/controls/";

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/html";

            //get name from GET or POST parameter
            string controlName = context.Server.UrlDecode(context.Request.Params["name"]);
            if (!string.IsNullOrEmpty(controlName))
            {
                try
                {
                    //define full virtual path
                    var fullPath = BASE_DIRECTORY + controlName;
                    
                    //initialize a new page to host the control
                    Page page = new Page();
                    //disable event validation (this is a workaround to handle the "RegisterForEventValidation can only be called during Render()" exception)
                    page.EnableEventValidation = false;
                    
                    //create the runat="server" form that must host ASP.NET controls
                    HtmlForm form = new HtmlForm();
                    form.Name = "form1";
                    page.Controls.Add(form);

                    //load the control and add it to the page's form
                    Control control = page.LoadControl(fullPath);
                    form.Controls.Add(control);

                    //call RenderControl method to get the generated HTML
                    string html = RenderControl(page);
                    
                    //output it to the response stream
                    context.Response.Write(html);
                }
                catch (Exception ex)
                {
                    //output the error message to the response stream
                    context.Response.Write("Error: " + ex.Message);
                }

                //end the response
                context.Response.End();
            }

        }

        /// <summary>
        /// Returns the generated HTML markup for a Control object
        /// </summary>
        private string RenderControl(Control control)
        {
            StringBuilder sb = new StringBuilder();
            StringWriter sw = new StringWriter(sb);
            HtmlTextWriter writer = new HtmlTextWriter(sw);

            control.RenderControl(writer);
            return sb.ToString();
        }

        public bool IsReusable
        {
            get
            {
                return false;
            }
        }
    
    }
}
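
Once the handler is registered (for example via an .ashx file or a web.config handler mapping, which I'm not showing here), it can be requested like any other URL. Here is a hypothetical smoke test, assuming the handler is reachable at "UserControlHandler.ashx" and that a control named "MyControl.ascx" exists under ~/controls/ (both names are illustrative):

using System;
using System.Net;

class HandlerSmokeTest
{
    static void Main()
    {
        //hypothetical addresses: adjust the host, handler path and control name to your setup
        string url = "http://localhost/UserControlHandler.ashx?name=" + Uri.EscapeDataString("MyControl.ascx");

        using (var client = new WebClient())
        {
            //the handler responds with the rendered HTML of the user control
            string html = client.DownloadString(url);
            Console.WriteLine(html);
        }
    }
}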

Posted in Software | Tagged , | 9 Comments

CSS: selecting an HTML element with a certain attribute

This isn't a new thing; it has actually been in the W3C specs since CSS 2.1, but I have seen a lot of people miss this very productive feature while styling.
You can target HTML elements that have certain attributes and attribute values like this:

E[foo] Matches any E element with the “foo” attribute set (whatever the value).
E[foo=”warning”] Matches any E element whose “foo” attribute value is exactly equal to “warning”.
E[foo~=”warning”] Matches any E element whose “foo” attribute value is a list of space-separated values, one of which is exactly equal to “warning”.
E[lang|=”en”] Matches any E element whose “lang” attribute has a hyphen-separated list of values beginning (from the left) with “en”.

This can save you in a lot of situations. For example, I had a situation where I needed to change the style of an ASP.NET Calendar control; when using an RTL (right-to-left) layout, the framework rendered the previous-month button with an ugly align="right" attribute (only for the previous-month button), not allowing me to define different CSS classes for the previous- and next-month buttons. So I did something like this, which did the trick:

.NextPrevMonth[align='right'] {
    text-align: left;
}

What are Graph Databases?

I came across this video that talks about graph databases: what they are, how they work, and what Twitter does with them. It looks interesting to me.


JSONP vs. JSON and Twitter Search API

The main concern of this post is the difference between the JSON and JSONP formats; I will discuss consuming the Twitter Search API as an example for clarification.

Let's start by opening this URL in a browser or creating a GET request for it using Fiddler:
http://search.twitter.com/search.json?q=egypt
Here we are providing "egypt" as the query to search for. The result is a JSON response that looks like this (I only kept the first result here):

{
	"completed_in": 0.109,
	"max_id": 218732531992895489,
	"max_id_str": "218732531992895489",
	"next_page": "?page=2&max_id=218732531992895489&q=egypt",
	"page": 1,
	"query": "egypt",
	"refresh_url": "?since_id=218732531992895489&q=egypt",
	"results": [{
		"created_at": "Fri, 29 Jun 2012 15:47:54 +0000",
		"from_user": "newgooglenews",
		"from_user_id": 389322092,
		"from_user_id_str": "389322092",
		"from_user_name": "New Google News",
		"geo": null,
		"id": 218732531992895489,
		"id_str": "218732531992895489",
		"iso_language_code": "en",
		"metadata": {
			"result_type": "recent"
		},
		"profile_image_url": "http:\/\/a0.twimg.com\/profile_images\/1682800149\/news_normal.png",
		"profile_image_url_https": "https:\/\/si0.twimg.com\/profile_images\/1682800149\/news_normal.png",
		"source": "&lt;a href=&quot;http:\/\/ibytes.net&quot; rel=&quot;nofollow&quot;&gt;NewGoogleNews&lt;\/a&gt;",
		"text": "Egypt President-elect Mohamed Mursi to speak in Cairo - BBC News http:\/\/t.co\/1a6UEqRH #World #news",
		"to_user": null,
		"to_user_id": 0,
		"to_user_id_str": "0",
		"to_user_name": null
	}]
}

Now let's request the JSONP response instead. Add a "callback" query string key/value to the URL: callback=OnSearchComplete ("callback" is a common parameter name used by most API providers, but you should check the API documentation first, and "OnSearchComplete" is just the name of a function that you will add to the page).
So the URL will be:
http://search.twitter.com/search.json?q=egypt&callback=OnSearchComplete

The resulting response will look like this:

OnSearchComplete(
{
	"completed_in": 0.109,
	"max_id": 218732531992895489,
	"max_id_str": "218732531992895489",
	"next_page": "?page=2&max_id=218732531992895489&q=egypt",
	"page": 1,
	"query": "egypt",
	"refresh_url": "?since_id=218732531992895489&q=egypt",
	"results": [{
		"created_at": "Fri, 29 Jun 2012 15:47:54 +0000",
		"from_user": "newgooglenews",
		"from_user_id": 389322092,
		"from_user_id_str": "389322092",
		"from_user_name": "New Google News",
		"geo": null,
		"id": 218732531992895489,
		"id_str": "218732531992895489",
		"iso_language_code": "en",
		"metadata": {
			"result_type": "recent"
		},
		"profile_image_url": "http:\/\/a0.twimg.com\/profile_images\/1682800149\/news_normal.png",
		"profile_image_url_https": "https:\/\/si0.twimg.com\/profile_images\/1682800149\/news_normal.png",
		"source": "&lt;a href=&quot;http:\/\/ibytes.net&quot; rel=&quot;nofollow&quot;&gt;NewGoogleNews&lt;\/a&gt;",
		"text": "Egypt President-elect Mohamed Mursi to speak in Cairo - BBC News http:\/\/t.co\/1a6UEqRH #World #news",
		"to_user": null,
		"to_user_id": 0,
		"to_user_id_str": "0",
		"to_user_name": null
	}]
}
);

So it's just like the first one, but the JSON object is passed to a function call, with OnSearchComplete as the name of the function.

This is the idea behind JSONP: we will have a function in the page with this signature:

function OnSearchComplete(tweets) {
    //body
}

And this function will be called as soon as the page loads this JavaScript file. Yes, this way the server is just exposing a simple JavaScript file that calls your callback function, passing the requested data as a JSON object. And we can load this file, like any other JavaScript file, by adding this to the page:

<script src="http://search.twitter.com/search.json?q=egypt&callback=OnSearchComplete" type="text/javascript"></script>

or of course something like this:

$("<script>")
                 .attr("language", "javascript")
                 .attr("type", "text/javascript")
                 .attr("src", "http://search.twitter.com/search.json?q=egypt&callback=OnSearchComplete")
                 .appendTo($("head"));

So what are the benefits of using JSONP?

  • No XMLHttpRequest object is used (no Ajax)
  • No same-origin policy limitations
  • No need for cross-domain policy files on servers

This is a simple but complete solution for Twitter searching:

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title></title>
    <script src="http://code.jquery.com/jquery-1.7.2.min.js" type="text/javascript"></script>
    <script language="javascript" type="text/javascript">

        $.twitter = function (query, callback, options) {
            
            if (query == undefined || query == null || query == "") {
                throw new Error("search query is not defined!");
            }

            if (callback == undefined || callback == null || callback == "") {
                throw new Error("callback function is not defined!");
            }

            var twitterSearchUrl = "http://search.twitter.com/search.json?q=" + query + "&callback=" + callback;
            if (options != undefined && options != null) {
                twitterSearchUrl += "&" + $.param(options);
            }

            $("<script>")
                    .attr("language", "javascript")
                    .attr("type", "text/javascript")
                    .attr("src", twitterSearchUrl)
                    .appendTo($("head"));

            var returnObject = {};
            returnObject.callback = callback;
            returnObject.LoadPage = function (pageString) {
                var twitterSearchPageUrl = "http://search.twitter.com/search.json" + pageString + "&callback=" + this.callback;
                $("<script>")
                    .attr("language", "javascript")
                    .attr("type", "text/javascript")
                    .attr("src", twitterSearchPageUrl)
                    .appendTo($("head"));
            };

            return returnObject;
        };

        var twitterSearcher;
        
        $(function () {
            $("#ButtonSubmit").click(function (event) {
                twitterSearcher = $.twitter($("#TextSearch").val(), "OnSearchComplete", { rpp: $("#SelectResultsPerPage").val(), result_type: "recent" });
            });
        });


        function OnSearchComplete(tweets) {
            if (tweets != undefined && tweets != null) {
                $("#TweetsList").html("");
                $("#PageCountContainer").show();
                $("#PageCount").text(tweets.page);
                $("#PrevPageLink").unbind("click");
                $("#NextPageLink").unbind("click");
                if (tweets.next_page != undefined && tweets.next_page != null) {
                    //twitterSearcher.LoadPage(tweets.next_page);
                    $("#NextPageLink").show();
                    $("#NextPageLink").click(function () {
                        
                        twitterSearcher.LoadPage(tweets.next_page);
                        return true;
                    });
                }
                else {
                    $("#NextPageLink").hide();
                }
                
                if (tweets.previous_page != undefined && tweets.previous_page != null) {
                    //twitterSearcher.LoadPage(tweets.next_page);
                    $("#PrevPageLink").show();
                    $("#PrevPageLink").click(function () {
                        
                        twitterSearcher.LoadPage(tweets.previous_page);
                        return true;
                    });
                }
                else {
                    $("#PrevPageLink").hide();
                }
                
                for (var i = 0; i < tweets.results.length; i++) {
                    $("#TweetsList").append($("<li>").html("<img src='" + tweets.results[i].profile_image_url + "' /> " + tweets.results[i].text));
                }
            }
        }

    </script>
</head>
<body>
    <input id="TextSearch" type="text" />
    <input id="ButtonSubmit" type="button" value="Search" />
    <select id="SelectResultsPerPage">
        <option value="10">10 Results</option>
        <option value="50">50 Results</option>
        <option value="70">70 Results</option>
        <option value="100">100 Results</option>
    </select>
    <ol id="TweetsList"></ol>
    <div>
        <a style="display:none" id="PrevPageLink" href="javascript:;">Prev. Page</a>
        <span style="display:none" id="PageCountContainer"> <span id="PageCount"></span> </span>
        <a style="display:none" id="NextPageLink" href="javascript:;">Next Page</a>
    </div>
</body>
</html>