Tuesday, June 21, 2005
Yet another to-do list
1. Write a code snippet manager with tagging/bookmarking ability
2. Build a CMS for bookstores that automagically updates inventory on Amazon, Alibris, and other popular venues from the CMS alone, while providing a web site/store.
3. Take that same CMS and customize it for HR/Benefits/Payroll, adding document management and tagging
4. Bookmark aggregator tool: integrate del.icio.us, private bookmarks, and additional bookmarking site tools.
All of these are pretty sizeable projects, if done right. First step is...START!
AJAX and ASP.NET
I've written a little code that shows how to AJAX-ify an aspx web form.
1. Write a JavaScript function that creates an xmlHttpRequest object
2. Write a JavaScript function that calls the function in #1 and passes it the URL to post to in the code-behind
3. Write your code-behind to respond to the request from #2 and return a string
4. Remember to assign the value of the returned string to a div or something in the HTML
I'll post my actual code once I get my laptop on my home network so I can copy and paste it. What is important here is to understand why AJAX, while nothing new, is key to making web apps behave more like client/server/desktop applications.
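Until then, here is a minimal sketch of the code-behind side (step 3). It assumes the JavaScript from step 2 posts back with a marker parameter so the page can tell an AJAX call apart from a normal request; the "ajaxCall" name and the returned string are placeholders of my own, not the code I'll be posting.

// Code-behind sketch: answer the xmlHttpRequest post with a bare string
// instead of rendering the whole page. "ajaxCall" is a made-up marker.
private void Page_Load(object sender, System.EventArgs e)
{
    if (Request.Params["ajaxCall"] != null)
    {
        Response.Clear(); // throw away the normal page output
        Response.Write("server time: " + DateTime.Now.ToString());
        Response.End();   // send just the string back
    }
}

On the client, the function from step 2 reads the xmlHttpRequest's responseText and assigns it to a div's innerHTML (step 4).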
Abstract systems
I read the thread at this link, particularly the first post, and I started thinking about what I really enjoy about programming: that every system we build, every abstract vision is a MACHINE. The machine IS the business. I can take any system, any business model, and build a reasonable facsimile in code.
I love that.
After doing this for so long, that is the way I think of the world around me: little systems that tie together to make up bigger systems. In and of themselves, the abstractions mean little. But if you build them right, they can accomplish things and impact "the big picture."
The big picture is usually something like "I don't have to do that manually anymore" or "now I can understand the outcomes of my actions and make predictions based on a model", or somesuch.
Saturday, June 11, 2005
Some thoughts on direction, purpose
I've discovered the open source world. I already knew it existed, but I didn't REALLY think of it as a serious platform. I have some ideas about document management, content management, tagonomies, etc., that need a place to rest, within a role-based security structure with skins, usability, you name it.
I looked at Drupal, Scoop, DotNetNuke, XAMPP, Ubuntu, SuSE, Red Hat...and I found I prefer the BSD license structure with the Microsoft toolset. How 'bout that? I went looking deep into the FOSS (free open source software) jungle and still came out wanting the productive options Microsoft offers. Sooooo.
I compared Scoop (a Perl-based, Linux-only CMS) with DotNetNuke. DNN is VB.NET, but I love the structure. Scoop is neat too, but its strict GPL licensing limits the opportunities.
The key to FOSS is the licensing. BSD rules. I can write my own attachable modules and sell them alongside a known open source product with a growing community, because developers can earn money supporting it. Can't have that with Scoop, because everything I sell MUST be open source. With DNN, I can close it or open it, as I choose. Having that option is perfect.
Anyway, I am considering the option of using the DotNetNuke marketing/name recognition or re-writing my own version of it in C#, partly as exercise/fun, partly as a way to build my own brand, partly as a way to port it to Linux via Mono.
I'll have to think about that last paragraph seriously. DNN is a huge code base.
Other things to consider: marketing Scoop cuts both ways, because it is a bitch to support and admin; that's a barrier to entry that doesn't exist for DNN. Scoop also has no real community. Then there is always the dark side of Microsoft to contend with. Each side has its issues. What to do, what to do. I have one year of code-writing to get done on this thing I want to build.
I'm betting I stay with MS$. ;0
Tuesday, May 24, 2005
execute stored procedures
I decided to use stored procedures in my last project (an HR/Benefits/Payroll-related web application). I wanted one method to execute any of my stored procedures, no matter what the list of parameters:
using System;
using System.Data;
using System.Collections;
using System.Data.SqlClient;
using System.Text;

// connectionString and errorMessage are fields on the containing class;
// dbcParams is my own name/value parameter class, and ErrorLog is my
// logging class (logging code I'll post about in the future).
public DataSet returnAnyStoredProc(string spName, ArrayList myParams)
{
    SqlDataAdapter da = new SqlDataAdapter();
    DataSet ds = new DataSet();
    StringBuilder testString = new StringBuilder();
    SqlConnection cn = new SqlConnection(connectionString);
    SqlCommand cmdSel = new SqlCommand(spName, cn);
    da.SelectCommand = cmdSel;
    cmdSel.CommandType = CommandType.StoredProcedure;
    try
    {
        cn.Open();
        if (myParams != null)
        {
            foreach (dbcParams q in myParams)
            {
                cmdSel.Parameters.Add("@" + q.ParamName, q.ParamValue);
                testString.Append(" ParamName=" + q.ParamName + " ParamVal=" + q.ParamValue.ToString());
            }
        }
        da.Fill(ds);
        ErrorLog.ErrorRoutine(false, " params = > " + testString.ToString());
    }
    catch (Exception e) // yeah, I know, lazy...this is just an example
    {
        ErrorLog.ErrorRoutine(false, e, e.ToString() + " " + spName + "=spName, params = > " + testString.ToString());
        errorMessage = e.ToString();
    }
    finally
    {
        cn.Dispose(); // Dispose closes the connection as well
    }
    return ds;
}
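Calling it looks something like this: a quick sketch, where the dbcParams constructor, the proc name, and the DataGrid are made-up examples:

ArrayList myParams = new ArrayList();
myParams.Add(new dbcParams("EmployeeId", 42)); // hypothetical name/value constructor
DataSet ds = returnAnyStoredProc("spGetEmployeeBenefits", myParams); // made-up proc name
employeeDG.DataSource = ds.Tables[0]; // bind the first returned table to a DataGrid
employeeDG.DataBind();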
Wednesday, March 23, 2005
Python.NET
I'm curious, so I've downloaded the SPE IDE and Python 2.4. My exercises in the XML world have led me down the Python path. It seems a lot of good string handling and parsing utilities are built into Perl (yuck) and Python, so I'm going to have to sate my curiosity now. :)
some Python.NET del.icio.us links
Using HTML Tidy in .NET to screen scrape and create RSS/XML
A little code using HTML Tidy and XPath to screen scrape links, and descriptions of those links, from a web page. This technique probably will not work for every web site, because HTML Tidy chokes on especially poor, standards-noncompliant pages, but it should work for most "plainer" pages with matching tags and clean div structure. I would like to give a shout-out to O'Reilly's XML.com pages and the wonderful XPath Explorer.
// Needs: System, System.Net, System.IO, System.Text, System.Xml,
// System.Collections, plus the HTML Tidy .NET wrapper for the Document class.
public void getSpurlLinks(string urlString, string styleSheet, string cachedFileName, string tagList)
{
    // Spurl.com has multiple result pages, so I loop through 10, arbitrarily,
    // to get a number of result links from their search
    int numberOfPages = 10;
    WebClient webClient = new WebClient();
    // a couple of file options for HTML Tidy
    string optFile = @"c:\foo.tidy";
    string errFile = @"c:\err.tidy";
    byte[] reqHTML;
    ArrayList theseLinks = new ArrayList();
    for (int s = 1; s <= numberOfPages; s++)
    {
        // build the page URL fresh each pass so the &page= parameters don't pile up
        string pageUrl = urlString + "&page=" + Convert.ToString(s);
        reqHTML = webClient.DownloadData(pageUrl);
        UTF8Encoding objUTF8 = new UTF8Encoding();
        string myString = objUTF8.GetString(reqHTML);
        // HTML Tidy uses the Document class for the following settings:
        Document thisHTML = new Document();
        thisHTML.LoadConfig(optFile);
        thisHTML.SetErrorFile(errFile);
        thisHTML.SetOptBool(TidyOptionId.TidyForceOutput, 1);
        thisHTML.ParseString(myString);
        thisHTML.CleanAndRepair();
        thisHTML.RunDiagnostics();
        string fixedDoc = thisHTML.SaveString();
        // I couldn't get HTML Tidy to put in the proper namespace designator, so
        // here is a little hack: renaming the default namespace declaration leaves
        // the elements in no namespace, which lets the prefix-free XPath below work.
        fixedDoc = fixedDoc.Replace("xmlns=", "xmlns:html=");
        StreamWriter sw = new StreamWriter(@"c:\xml\spurl.html", false);
        sw.Write(fixedDoc);
        sw.Flush();
        sw.Close();
        // These are the nodes I want from my newly created XHTML file.
        // The URL and the title both come from the same <a> element:
        // the URL from its href attribute, the title from its inner text.
        string hrefXPath = "/html/body/div[@class='results']/div[@class='spurlResLink']/a";
        string titleXPath = "/html/body/div[@class='results']/div[@class='spurlResLink']/a";
        string descXPath = "/html/body/div[@class='results']/div[@class='spurlResLink']/div[1]";
        XmlDocument myDoc = new XmlDocument();
        XmlTextReader myRdr = new XmlTextReader(@"c:\xml\spurl.html");
        myRdr.WhitespaceHandling = WhitespaceHandling.None;
        try
        {
            myDoc.Load(myRdr);
            // put the nodes into node collections for looping through
            XmlNodeList thisNodes = myDoc.SelectNodes(hrefXPath);
            XmlNodeList titleNodes = myDoc.SelectNodes(titleXPath);
            XmlNodeList descNodes = myDoc.SelectNodes(descXPath);
            int i = 0;
            foreach (XmlNode xn in thisNodes)
            {
                links l = new links(); // my own links class
                l.LinkUrl = xn.Attributes.GetNamedItem("href").Value;
                l.LinkId = i + 990; // just some arbitrary link id #
                if (titleNodes[i] == null || descNodes[i] == null)
                {
                    l.LinkTitle = "";
                    l.LinkDescription = "";
                }
                else
                {
                    l.LinkTitle = titleNodes[i].InnerText;
                    l.LinkDescription = descNodes[i].InnerText;
                }
                l.LinkTags = tagList;
                i++;
                theseLinks.Add(l);
            }
        }
        catch (Exception goner)
        {
            // a page too mangled even for Tidy; skip it and keep going
        }
        finally
        {
            myRdr.Close();
        }
    }
    // bind my link collection to my DataGrid for presentation
    spurlDG.DataSource = theseLinks;
    spurlDG.DataBind();
}
Monday, March 14, 2005
RDF to ASP.NET Control: Step 4
Now to bind the returned DataView to your DataList (this goes in your .aspx page's code-behind):
public void getMyDelLinks(string urlString, string transformFile)
{
DataView dv = RDFToDataView(urlString, transformFile);
try
{
myDelLinksDG.DataSource = dv;
myDelLinksDG.DataBind();
}
catch (Exception goner)
{
Response.Write(goner.ToString());
}
}
RDF to ASP.NET Control: Step 3
Create the code (this goes in your .aspx page's code-behind) that will grab the RDF feed, transform it, stream the result into a DataSet, and return it as a DataView. The variable urlString would be your RDF link, such as "http://del.icio.us/rss/tag/.NET", and stylesheet is the XSLT file name from Step 1:
public DataView RDFToDataView(string urlString, string stylesheet)
{
System.IO.Stream str = new System.IO.MemoryStream();
System.Xml.XPath.XPathDocument doc;
// Create the XslTransform.
System.Xml.Xsl.XslTransform xslt = new System.Xml.Xsl.XslTransform();
// Load the stylesheet that creates XML Output.
xslt.Load(System.Web.HttpContext.Current.Server.MapPath(stylesheet));
// Load the XML data file into an XPathDocument.
// if this is in the cache, grab it from there...
if (System.Web.HttpContext.Current.Cache["cached" + urlString] == null)
{
// No cache found.
doc = new System.Xml.XPath.XPathDocument(urlString);
// Add to cache.
System.Web.HttpContext.Current.Cache.Add("cached" + urlString, doc,
null, DateTime.Now.AddMinutes(60), TimeSpan.Zero,
System.Web.Caching.CacheItemPriority.High, null);
}
else
{
// Cache found.
doc = (System.Xml.XPath.XPathDocument)System.Web.HttpContext.Current.Cache
["cached" + urlString];
}
// Create an XmlWriter which will output to our stream.
System.Xml.XmlWriter xw = new System.Xml.XmlTextWriter(str,
System.Text.Encoding.UTF8);
// Transform the feed.
xslt.Transform(doc, null, xw, null);
// Flush the XmlWriter and set the position of the stream to 0
xw.Flush();
str.Position = 0;
// Create a dataset to bind to the control.
DataSet ds = new DataSet();
ds.ReadXml(str);
// Close the writer and thereby free the memory stream.
xw.Close();
DataView dv = null;
try
{
dv = ds.Tables[0].DefaultView;
}
catch (Exception nothing)
{
// the transform produced no table; fall through and return a null view
}
return dv;
}
RDF to ASP.NET Control: Step 2
Create the DataList control (that will be in your .aspx page):
<asp:DataList ID="myDelLinksDG" Runat="server">
  <ItemTemplate>
    <table width="100%" border="0">
      <tr>
        <td valign="top" align="left" width="60%">
          <a href='<%# DataBinder.Eval(Container.DataItem, "link") %>'><%# DataBinder.Eval(Container.DataItem, "title") %></a>
        </td>
      </tr>
      <tr>
        <td width="10"></td>
        <td><%# DataBinder.Eval(Container.DataItem, "description") %></td>
      </tr>
      <tr>
        <td></td>
        <td><%# DataBinder.Eval(Container.DataItem, "subject") %></td>
      </tr>
    </table>
  </ItemTemplate>
</asp:DataList>
RDF to ASP.NET Control: Step 1
I wanted to grab my del.icio.us links and create my own DataGrid or DataList control to put on my personal site, but I didn't want to have to download and save my del.icio.us file every time: I wanted the "live" version. To do that, you have to transform the RDF format that your del.icio.us links are served up in into a simpler RSS-style format, stream the result into a DataSet, and bind that DataSet to your web control (in my case, a DataList control). I gleaned most of this from Build an RSS DataList Control in ASP.NET.
RDF to RSS XSLT file:
<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet version="1.0" xmlns:rss="http://purl.org/rss/1.0/"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:dc="http://purl.org/dc/elements/1.1/">
<xsl:output omit-xml-declaration="yes" />
<xsl:template match="rss|/rdf:RDF">
<feed>
<xsl:apply-templates select="channel/item|/rdf:RDF/rss:item"/>
</feed>
</xsl:template>
<xsl:template match="channel/item|/rdf:RDF/rss:item">
<item>
<title>
<xsl:value-of select="title|rss:title"/>
</title>
<link>
<xsl:value-of select="link|rss:link"/>
</link>
<description>
<xsl:value-of select="description|rss:description"/>
</description>
<subject>
<xsl:value-of select="dc:subject|rss:subject"/>
</subject>
<pubdate>
<xsl:value-of select="dc:date|rss:pubdate"/> </a>
</pubdate>
</item>
</xsl:template>
</xsl:stylesheet>
Hacking .NET
I don't consider myself a true-blue hacker in the sense Paul Graham writes about (Great Hackers); I doubt I qualify, since I like working with C#. Don't get me wrong: languages such as Perl and Ruby are cool. I just don't have a lot of time on my hands to dink around with them. I know C#, so that's what I "hack" with.
I began a personal project to build a bookmark manager because I was a little ticked that the del.icio.us founder/programmer doesn't want to let users "privatize" links or link categories. So I built my own front end for del.icio.us, which has now turned into a full-grown project to use APIs for finding additional bookmarks. I figure, why stop with del.icio.us? Why not use the "semantic web" to search for the multitude of links I may need? So my bookmark manager finds related links through Google and Technorati, and right now I'm researching how to pull down HTML and transform it to XML for those sites that have great search tools but no API or RSS.
I'm learning a lot. As it turns out, this research has given me a great way to learn about applying XML/XSLT to a new module I'm building for a system at work. Instead of hitting the database for every update, I'm thinking of writing the changes to an XML file for that user, and when they are completely done, pushing that XML file's contents to the database in one shot. It seems like it will make for a LOT faster user experience.
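Here is a minimal sketch of that buffered-update idea, assuming a per-user scratch file and a SqlDataAdapter with its update commands already configured (the paths and names are hypothetical):

using System.Data;
using System.Data.SqlClient;

// Buffer each edit in the user's XML file instead of hitting the database.
// The DiffGram format preserves row states, so the adapter can replay
// inserts/updates/deletes later. The c:\pending folder is made up.
public void SaveChangeLocally(DataSet userChanges, string userName)
{
    userChanges.WriteXml(@"c:\pending\" + userName + ".xml", XmlWriteMode.DiffGram);
}

// When the user is completely done, read the buffered changes back
// and push them to the database in one shot.
public void CommitChanges(string userName, SqlDataAdapter adapter)
{
    DataSet pending = new DataSet();
    pending.ReadXml(@"c:\pending\" + userName + ".xml", XmlReadMode.DiffGram);
    adapter.Update(pending);
}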