Upcoming webinar: maximize video in the enterprise

David Lozzi

Microsoft SharePoint is a powerful solution that can manage content across organizations in a variety of ways. Recently, I had the opportunity, as part of a Slalom Consulting team, to work with RAMP and help create its enterprise product MediaCloud for SharePoint. We integrated RAMP’s impressive media solutions with SharePoint to provide a rich video experience inside SharePoint itself.

MediaCloud for SharePoint enables team members to upload a video directly into SharePoint, which then handles processing, querying, etc., using RAMP’s secure cloud-based storage and delivery. Once the video is ready, the user is notified and video playback is accessible through RAMP’s custom player. The video is searchable through SharePoint, including spoken words—meaning users can stay in SharePoint from start to finish. As a result, organizations can get the most from their video content through the SharePoint solution they already have in place. Read more of this post

Thinking About Data

“Though the mills of God grind slowly, yet they grind exceeding small…” Henry Wadsworth Longfellow translated this famous line from a German poem written by Friedrich von Logau in 1654.  Imagine if Longfellow worked as a data architect in today’s Information Technology industry.  Perhaps he would have written this now famous line as follows: “Though the databases of God grind slowly, yet they grind exceedingly small.”

This is often how I feel when I begin investigating a database to diagnose performance problems, or when I start documenting the schema and constructing ETL processes to populate a reporting database or data warehouse.  As data modelers, data architects, and database developers (all of whom I will collectively refer to as database people for the remainder of this article), we are taught to think about data relationally and dimensionally. Relationally, we normalize data for production OLTP databases, organizing it in such a way as to minimize redundancy and dependency.

Dimensionally, we design data warehouses to have facts and dimensions that are easily understandable and provide valuable decision support to various business entities.  High quality, reliable data that is easy to query and consume is the goal of these design patterns.  During my twenty-year career, however, I have discovered that well-designed database schemas and data models are the rare exception, not the rule.  And there are a couple of common themes underpinning the bad designs I encounter.

Poorly designed databases are often the result of designers who, instead of thinking about how data will flow through their databases and, more importantly, which people and internal business entities will want to consume that data, simply view databases as a convenient resting place for data.  This erroneous view frequently stems from a lack of formal training in database normalization, relational algebra, dimensional modeling, and data object modeling, skills that I believe are essential for anyone serious about enterprise-level database design.  The database schema is foundational to almost every business system, so it is imperative to involve skilled database people early in the design process.  Failure to do so may result in flawed database schemas that suffer from one or more of the following issues:

  • Lack of extensibility
  • Difficulty of maintenance
  • Lack of scalability
  • Difficulty of querying
  • A high degree of data anomalies that render the data unreliable
  • Performance problems

Having worked with a lot of strange database designs (I could probably write an entire book on the subject), I want to briefly mention some of the more commonly encountered database design errors.  I will classify these errors into two general groups: design errors in production OLTP databases, and design errors in databases intended for reporting.

Production OLTP databases

  1. A completely flattened, un-normalized schema. Databases designed this way will have all sorts of issues when placed under production load, such as performance and scalability problems. I often hear this line from developers: “Well, it worked fine in QA with a small amount of data, but as soon as we deployed to production and threw serious traffic at it, all sorts of performance problems emerged.”  Flattened schemas like this frequently lack declarative referential integrity, which leads to data anomalies. Getting reliable reporting from this environment is difficult at best.
  2. A highly normalized schema, possibly over-normalized for the context, but lacking a sufficient or useful payload.  Every part of the data model has been taken to its furthest normal form, and data integrity is well enforced through primary key/foreign key relationships. But getting useful data requires both excessive table joins and deriving or calculating needed values at query time. For example, I worked on the invoicing portion of an application (the invoicing code was written in Perl) in which customer product usage was calculated during the invoicing process.  The final usage totals were printed on the paper invoices, but not stored anywhere in the database.  Finance needed this information, both current and twelve months of history, and to get it I had to recreate large parts of the invoicing process in SQL, a herculean task.  When I asked the developers why customer usage totals were not stored in the database, they responded, “I guess we never thought anyone would want that data.”

Reporting databases intended to serve internal business entities

  1. Attempts to build internal reporting functions directly against production OLTP databases.  A discussion of OLTP optimization techniques vs. OLAP optimization techniques is beyond the scope of this article, but suffice it to say that running I/O-intensive reporting against a production OLTP system that is not optimized for reporting will cause tremendous performance problems.
  2. Building out a secondary physical database server and placing an exact copy of a production OLTP database on it as a “reporting instance” database.  Doing this will certainly remove any chance of overwhelming the production server.  But it will not provide a database optimized for reporting.
  3. Adding a few somewhat haphazard aggregation tables to the “reporting instance” database mentioned above.  This may temporarily reduce query times for reports relying on aggregated data, but it is not a long-term substitute for a properly designed dimensional reporting model.

Data models are often given short shrift because the original developers, being inexperienced with relational and dimensional databases, do not think correctly about data. This flawed thinking about data frequently results in database designs that perform poorly and contain unreliable data that is difficult to query.  I want to leave you with a specific example of this point by briefly relating a somewhat recent client experience of mine.

My client at the time had recently purchased a start-up company whose product was a complex ad serving application/engine.  The SQL Server databases foundational to the application were suffering severe performance problems which rippled through the entire system and resulted in a less-than-stellar customer experience.

At my client’s behest I executed a major system review and quickly ascertained two primary issues: a data architecture that limited scalability, and incorrect table indexing that was a direct result of that architecture. I kept their developers and program manager involved in my solution development process, and after a successful deployment, which solved their performance issues, the program manager made a key statement to me. She said, “Wow, I never understood the value and importance that a person with strong data architecture and DBA skills could bring to a project like this.  You can be certain that on all future projects of this magnitude I will insist on, and budget for, a person with your skill set to be involved at the outset to ensure we avoid these types of database issues.”

Every IT, Program and Project Manager would do well to heed her advice.  Consider spending some time with your recruiting department to find an experienced data architect with a successful track record at the enterprise level.  It will be time and money well-spent.

MCM: Elevating Our Technical Expertise

There are a few things that make Portland unique—our affinity for great food (often food that comes out of a 4’ by 6’ metal food cart), our identity as a lifestyle destination (where else can you ski in the morning and be at the beach in the afternoon?), and just over 1 million distinct individuals all with their own flair and personal style. At Slalom’s Portland office, we are celebrating yet another thing that makes us unique: we now have the privilege of working alongside Microsoft Certified Master Kyle Petersen!  Kyle is one of a select few in the US (there are approximately 30) to have earned this certification for Microsoft SharePoint 2010, and one of only around 80 in the world! Please join me in congratulating Kyle on this momentous achievement, and read more about the importance and value his MCM will bring to our clients and to Slalom in my Q&A with Kyle below:

What exactly is a “Microsoft Certified Master”?

The MCM Certification is the highest technical certification that Microsoft offers for some of its key technologies (e.g., Exchange, SQL, Lync, and SharePoint). What really differentiates this certification is its technical breadth and depth, and the requirement to truly demonstrate your technical mastery.

With many of the Microsoft certifications, there are lots of people who can buy exam guides, study, and pass the exams without ever having actually used the technology or skill. That is not possible in the MCM program, because you not only have to know the answers, but also understand the concepts and be able to demonstrate your expertise.

You can learn more about the certification here and here.

What must one achieve in order to be considered a Master?

Assuming you have 3 years of experience with SharePoint 2007 and SharePoint 2010, you will have to:

1. Pass the four basic SharePoint certification exams:

  • Exam 70-573: TS: Microsoft SharePoint 2010, Application Development
  • Exam 70-576: PRO: Designing and Developing Microsoft SharePoint 2010 Applications
  • Exam 70-667: TS: Microsoft SharePoint 2010, Configuring
  • Exam 70-668: PRO: SharePoint 2010, Administrator

2. Submit an application to the MCM program containing your resume and descriptions of the types of projects you have worked on.

3. Pass a phone screen to ensure you are technically ready to enter the program.

4. Complete the pre-reading list to ensure you have the basic fundamentals covered.

5. Complete 3 weeks of in-depth technical training.

6. Pass a 4-hour online knowledge exam.

7. Pass an 8-hour hands-on Qualification Lab that demonstrates your expertise.

Briefly describe your experience in the MCM bootcamp for the 3 weeks prior to the exam.

First off, I don’t think the term “bootcamp” is really applicable. That term has a connotation in the development community of paying to go off and get trained and coming out with a guaranteed certification.

The MCM training rotation is much broader and more in-depth. Classes typically run 10 hours a day of 400+ level content. While there are lots of PowerPoint slides (over 2,000), the real information is delivered between the bullet points, so you have to stay engaged in the process. Class instructors are a mixture of MCMs, Microsoft Product Team employees, and Microsoft MVPs. They are the best in their fields and help provide amazing context to the subjects.

The training also provides hands-on labs to help solidify the skills that were covered and to help us explore the capabilities of various SharePoint features. Completing these labs is critical to fully understanding the concepts, so the labs consumed every evening and weekend.

So a typical day was: get up and head into class. Class went from 8 a.m. to 6 p.m., with some nights going past 7 p.m. Breaks were brief, and you got a quick lunch at the Microsoft cafeteria. After class it was back to my apartment for dinner, and then time spent working on the labs. Then I would review the training materials and make notes that I could use to study for my knowledge and qualification exams. Try to get some rest, and then repeat. Weekends were a chance to get caught up on labs I had not completed or did not understand well enough yet.

The last 2 days are for the certification tests. First is a 4-hour knowledge exam that is extremely challenging.

The second day is an 8-hour hands-on qualification exam in which you must complete assigned tasks. You have to fully know the subjects covered, because there just is not enough time to be able to research an answer. That was the fastest 8 hours that ever slipped by, because I was so absorbed and focused on trying to get it all completed within the time limit.

The pass rate for these exams is less than 50%. In my rotation there were 14 of us and only 7 passed. You are allowed up to three tries to pass the exams, but there are substantial costs involved with each retake.

How many Masters are there? Why so few?

I think the number-one reason there are so few is because of the cost and time commitment required to complete the certification. While the actual training was only 3 weeks, I spent the prior 3 months going over the pre-reading list and working on labs and examples to be sure I understood the concepts. And for consultants, three weeks of unbillable time can really mess up your overall utilization rate.

The second reason is that it’s really hard. While there are a lot of amazing SharePoint developers, they don’t necessarily have the infrastructure experience to be able to set up a SharePoint farm. And there are a lot of great SharePoint administrators who don’t know how to write a custom web part.  A SharePoint MCM requires end-to-end and top-to-bottom knowledge of the SharePoint product.

You can find the list of all of the MCMs here. I believe there are about 80 MCMs for SharePoint 2010 worldwide and about 30 in the US—including Microsoft employees.

How will this certification ultimately benefit our clients?  

Slalom Portland has, for several years, had a very strong SharePoint team and has helped many clients in the Portland area use SharePoint to run their businesses better. In addition to providing our clients with a level of comfort that Slalom has the most qualified resources possible, the MCM program gives its members information about SharePoint that is not available to the public or even to Microsoft’s highest-level partners in other programs. MCMs are provided with this information earlier and in more depth than any other non-Microsoft group, and that enables Slalom to make better recommendations to our customers and be more efficient when troubleshooting issues. The MCM community also stays in close touch, jointly contributing solutions to the toughest SharePoint challenges out there, so any member can raise questions and have the others weigh in. Additionally, MCMs have unprecedented access to the Microsoft product team, which goes beyond even Slalom’s access as a nationally managed gold partner.

All of this enables Slalom to provide our clients with the best possible solutions, fully understanding the implications of design decisions. For example, over the past few months our clients have been asking us to design solutions in SharePoint 2010, sometimes highly customized, that will upgrade easily to SharePoint 2013 or to the cloud. The MCM program gives Slalom one more very powerful tool for making the best design choices and recommendations for our clients on their SharePoint roadmaps.

Why do it? 

When I first heard about the MCM program it was in the context of the 3 weeks of deep technical training. I thrive on the 400-level sessions at SharePoint conferences and thought that having access to 3 weeks of that level of training was an amazing opportunity.

Then I learned about the rest of the program—the prerequisites, the exams, the pass rate—and I was really intimidated and not sure I had the “right stuff.” Portland is a small market and we don’t often get the chance to work on large-scale enterprise solutions, so I felt I just didn’t have the exposure to the breadth of skills.

But ultimately the prize of the deep technical knowledge pushed me to take the chance and apply to the program. Getting training from the people responsible for the product features and the technical subject matter experts is such an amazing experience. It is not for everyone. It is hard, and it will test you. But passing the MCM means I can say “I know SharePoint.”

Congratulations, Kyle! We are proud to work alongside a true Master!


Tech Trends for 2013

Daniel Maycock is one of Slalom’s acknowledged thought leaders in the realm of new and emerging technology.

There were many significant technology advances during 2012 in a number of key areas, including the mainstream adoption of LTE, and Big Data and analytics dominating the enterprise IT agenda.

Companies went from adopting cloud platforms and services to leveraging those services and transforming their businesses.

  • Windows 8 has shown just how important Internet connectivity will be for computing in many capacities.
  • Every major IT vendor has focused to some extent on the convergence of mobile, cloud, analytics, and social, and on helping companies make IT a central part of every aspect of their business.
  • From Salesforce to Azure, cloud-based solutions are expected to grow even more in 2013.

As more and more companies begin waking up to this new reality, the question is not if adoption of key technologies such as cloud and mobile will take place, but how quickly and what can be done to make them work for the business as fast as possible. Furthermore, as these technologies are integrated deeper into the enterprise, it will be critical to keep in mind what other technologies will follow in their path. Read more of this post

Migrating Audience Targeted Information

Slalom Consultant Maarten Sundman

Slalom Consultant Maarten Sundman specializes in .NET, SharePoint, and Silverlight solutions and has experience in the Financial Services, Software, and Utilities and Energy sectors.

Sometimes you’ll encounter a scenario where you need to move a site from one environment to another and the site is using Audiences. Now, I’m personally a bit of a fan of Audiences for simple out-of-the-box targeting of information. However, they have one pretty major flaw: Audiences are fundamentally environment specific. There is no out-of-the-box method for remapping or moving audience-targeted information and having it still work properly on the other side. This is due to a number of reasons that I’m not going to go into in this blog post. However, here is a tool that can help with this.

This tool is pretty straightforward, with only two commands, export and import:

  • Export—Generates a file with a mapping of audiences to be used when updating the new environment.
  • Import—Based on your mapping file, updates the content in the new environment to use the new environment’s audiences.
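
Taken together with the usage text built into the tool below, a typical round trip might look something like the following; the site URLs and file name here are illustrative placeholders, not values the tool requires:

WithinSharePoint.MigrateAudiences -EXPORT http://source-farm/sites/intranet >> AudienceExport.csv
WithinSharePoint.MigrateAudiences -IMPORT http://target-farm/sites/intranet AudienceExport.csv

Each line of the mapping file is simply AudienceName,AudienceID, where the ID is the audience’s ID in the source environment.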


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Office.Server;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Navigation;
using Microsoft.Office.Server.WebControls.FieldTypes;
using Microsoft.Office.Server.UserProfiles;
using Microsoft.Office.Server.Audience;
using System.IO;

namespace WithinSharePoint.MigrateAudiences
{
    class Program
    {
        internal static List<Audience> Audiences = null;
        internal static Dictionary<string, Audience> RemappedAudiences = null;

        static void Main(string[] args)
        {
            try
            {
                if (args.Count() == 2)
                {
                    // Export mode: write AudienceName,AudienceID for every audience in the site collection.
                    SPSite site = new SPSite(args[1]);
                    foreach (Audience a in GetAudiences(site))
                    {
                        Console.WriteLine(a.AudienceName + "," + a.AudienceID.ToString());
                    }
                }
                if (args.Count() == 3)
                {
                    // Import mode: remap audience-targeted content in the target site collection.
                    SPSite site = new SPSite(args[1]);
                    SPWeb RootWeb = site.OpenWeb();
                    Audiences = GetAudiences(site);
                    RemappedAudiences = AudienceMap(args[2]);
                    ScanWeb(RootWeb);
                }
                else if ((args.Count() < 2) || (args.Count() > 3))
                {
                    WriteUsage();
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("ERROR:" + ex.Message);
                WriteUsage();
            }
        }

        internal static void WriteUsage()
        {
            Console.WriteLine("Usage: WithinSharePoint.MigrateAudiences -EXPORT [URL TO RETRIEVE AUDIENCES] >> AudienceExport.csv");
            Console.WriteLine("Usage: WithinSharePoint.MigrateAudiences -IMPORT [URL TO SCAN AND UPDATE] [FILEPATH]");
            Console.WriteLine("File is a text document with this format (Audience ID is the ID of the audience in the SOURCE environment, not the destination):");
            Console.WriteLine("AudienceName,AudienceID");
        }

        internal static void ScanWeb(SPWeb web)
        {
            web.AllowUnsafeUpdates = true;
            ScanLists(web.Lists);
            try
            {
                ScanNavigation(web.Navigation.GlobalNodes);
                ScanNavigation(web.Navigation.QuickLaunch);
            }
            catch (Exception ex)
            {
                Console.WriteLine("Error updating navigation. " + ex.Message);
                Console.WriteLine(web.Url);
            }
            // Recurse into all child webs.
            foreach (SPWeb child in web.Webs)
            {
                ScanWeb(child);
            }
            web.AllowUnsafeUpdates = false;
        }

        internal static void ScanLists(SPListCollection Lists)
        {
            foreach (SPList list in Lists)
            {
                if (list.Fields.ContainsField("Target_x0020_Audiences"))
                {
                    ScanItems(list.Items, "Target_x0020_Audiences");
                }
                else if (list.Fields.ContainsField("Audience"))
                {
                    ScanItems(list.Items, "Audience");
                }
            }
        }

        /// <summary>
        /// Scans and updates all audience targeted SharePoint navigation nodes with audiences from the new environment
        /// </summary>
        /// <param name="Nodes"></param>
        internal static void ScanNavigation(SPNavigationNodeCollection Nodes)
        {
            string value = "";
            string[] values;
            bool pendingupdate = false;
            Char[] splitter = new Char[] { ';' };
            SPNavigationNode node;
            for (int i = 0; i < Nodes.Count; i++)
            {
                node = Nodes[i];
                string newvalue = "";
                if (node.Properties.Contains("Audience"))
                {
                    value = node.Properties["Audience"].ToString();
                    value = value.Replace(',', ';');
                    values = value.Split(splitter, StringSplitOptions.RemoveEmptyEntries);
                    foreach (string val in values)
                    {
                        if (RemappedAudiences.ContainsKey(val))
                        {
                            // Update with new audiences
                            pendingupdate = true;
                            newvalue += RemappedAudiences[val].AudienceID + ",";
                        }
                        else
                        {
                            // This is to preserve existing unknown audiences
                            newvalue += val + ",";
                        }
                    }
                    if (pendingupdate)
                    {
                        node.Properties["Audience"] = newvalue;
                        node.Update();
                    }
                }
            }
        }

        /// <summary>
        /// Scans all items in an audience targeted list and updates them with the new environment's audiences
        /// </summary>
        /// <param name="items"></param>
        /// <param name="AudienceField"></param>
        internal static void ScanItems(SPListItemCollection items, string AudienceField)
        {
            Console.WriteLine("Scanning and updating list " + items.List.Title);
            bool ListUpdate = false;
            Char[] splitter = new Char[] { ';' };
            SPListItem item;
            for (int i = 0; i < items.Count; i++)
            {
                try
                {
                    item = items[i];
                    string value = "";
                    if (item[AudienceField] != null)
                    {
                        if (!String.IsNullOrEmpty(item[AudienceField].ToString()))
                        {
                            bool PendingUpdate = false;
                            string NewValue = "";
                            value = item[AudienceField].ToString();
                            if (value.Contains(","))
                            {
                                value = value.Replace(',', ';');
                            }
                            //Console.WriteLine(value);
                            string[] audiences = value.Split(splitter, StringSplitOptions.RemoveEmptyEntries);
                            foreach (string a in audiences)
                            {
                                //Console.WriteLine(a);
                                if (RemappedAudiences.ContainsKey(a))
                                {
                                    // Add remapped audience to update
                                    PendingUpdate = true;
                                    NewValue += RemappedAudiences[a].AudienceID + ",";
                                }
                                else
                                {
                                    // Keep unknown audiences in the item
                                    NewValue += a + ",";
                                }
                            }
                            if (PendingUpdate)
                            {
                                // Don't ask why SharePoint uses CSV for audience IDs and then appends ;;;; at the end
                                item[AudienceField] = NewValue + ";;;;";
                                ListUpdate = true;
                                item.UpdateOverwriteVersion();
                            }
                        }
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Error reading line item:" + ex.Message);
                    Console.WriteLine(items[i][AudienceField].ToString());
                    Console.WriteLine(ex.StackTrace);
                }
            }
            if (ListUpdate)
            {
                items.List.Update();
            }
        }

        /// <summary>
        /// Reads the contents of the audience export file to generate a mapping of equivalent audiences in the target environment
        /// </summary>
        /// <param name="FilePath"></param>
        /// <returns>
        /// Key: String - AudienceID of Source Environment
        /// Value: Audience - Audience in New/Target Environment
        /// </returns>
        internal static Dictionary<string, Audience> AudienceMap(string FilePath)
        {
            Dictionary<string, Audience> map = new Dictionary<string, Audience>();
            StreamReader reader = new StreamReader(FilePath);
            string input = null;
            while ((input = reader.ReadLine()) != null)
            {
                string[] line = input.Split(',');
                if (line.Count() == 2)
                {
                    // Match the source audience name against the audiences in the target environment.
                    var match = from a in Audiences where a.AudienceName == line[0] select a;
                    foreach (Audience m in match)
                    {
                        map.Add(line[1], m);
                    }
                }
            }
            return map;
        }

        /// <summary>
        /// Returns a list of all audiences from the target site collection
        /// </summary>
        /// <param name="site"></param>
        /// <returns></returns>
        internal static List<Audience> GetAudiences(SPSite site)
        {
            List<Audience> audiences = new List<Audience>();
            SPServiceContext context = SPServiceContext.GetContext(site);
            AudienceManager aMan = new AudienceManager(context);
            foreach (Audience a in aMan.Audiences)
            {
                audiences.Add(a);
            }
            return audiences;
        }
    }
}

Importing Profiles in SharePoint

Slalom Consultant Maarten Sundman

Slalom Consultant Maarten Sundman specializes in .NET, SharePoint, and Silverlight solutions and has experience in the Financial Services, Software, and Utilities and Energy sectors.

A common need when testing SharePoint solutions is having test accounts set up a certain way so they can do something, whether that is a test account for a particular country, language, etc. Normally it’s painful to set up these accounts, as they typically end up being configured by hand in the User Profile Service Application, so I’ve gone ahead and built a tool that reads all the attributes for a profile from a CSV file and updates the user profiles (code below). This also works for normal accounts if you’re trying to import in bulk via Excel or some other data source for a one-time import, which can happen during migrations from other systems.
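
For reference, an input file follows the format printed by the tool’s help text (USERNAME,CORP\Username,FIELDNAME,FIELDVALUE), and additional field name/value pairs can be appended to the same row. The rows below are purely illustrative; the accounts and values are made up, and the property names should be the internal names the tool prints when run with just the web application URL:

Jane Doe,CORP\jdoe,Department,Finance,SPS-Location,Portland
Test User 01,CORP\testuser01,PreferredName,Test User 01,Department,QA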


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using Microsoft.SharePoint;
using Microsoft.Office.Server;
using Microsoft.Office.Server.UserProfiles;
using System.Web;

namespace WithinSharePointUpdateUserProfile
{
    class Program
    {
        static Char[] Splitter = new Char[] { ',' };
        static UserProfileManager uMan = null;
        static Dictionary<UserProfile, Dictionary<string, string>> UsersToUpdate = new Dictionary<UserProfile, Dictionary<string, string>>();

        static void Main(string[] args)
        {
            if (args.Count() == 1)
            {
                // With only a URL, list the available profile properties.
                SPSite site = new SPSite(args[0]);
                DisplayProfileAttributes(SPServiceContext.GetContext(site));
            }
            else if (args.Count() == 2)
            {
                try
                {
                    SPSite site = new SPSite(args[0]);
                    uMan = UserManager(SPServiceContext.GetContext(site));
                    ReadFile(args[1]);
                    foreach (UserProfile K in UsersToUpdate.Keys)
                    {
                        UpdateUser(K, UsersToUpdate[K]);
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                    Help();
                }
            }
            else
            {
                Help();
            }
        }

        static void DisplayProfileAttributes(SPServiceContext Context)
        {
            UserProfileConfigManager config = new UserProfileConfigManager(Context);
            ProfilePropertyManager pMan = config.ProfilePropertyManager;
            CorePropertyManager cMan = pMan.GetCoreProperties();
            foreach (CoreProperty prop in cMan.PropertiesWithSection)
            {
                Console.WriteLine("Display Name : " + prop.DisplayName + " Internal Name : " + prop.Name);
            }
        }

        static void Help()
        {
            Console.WriteLine("Usage is: WithinSharePointUpdateUserProfile http://WEBAPPURL C:\\Path\\To\\File");
            Console.WriteLine("To display a list of profile fields: WithinSharePointUpdateUserProfile http://WEBAPPURL");
            Console.WriteLine("File Format is:");
            Console.WriteLine("USERNAME,CORP\\Username,FIELDNAME,FIELDVALUE");
        }

        static UserProfileManager UserManager(SPServiceContext Context)
        {
            return new UserProfileManager(Context, true);
        }

        static void ReadFile(string Path)
        {
            StreamReader reader = new StreamReader(Path);
            string input = null;
            UserProfile UP;
            Dictionary<string, string> User;
            while ((input = reader.ReadLine()) != null)
            {
                User = UpdatedAttributes(input, out UP);
                UsersToUpdate.Add(UP, User);
            }
        }

        static bool UpdateUser(UserProfile Profile, Dictionary<string, string> ProfileChanges)
        {
            Console.WriteLine("Updating user: " + Profile.DisplayName);
            try
            {
                foreach (string k in ProfileChanges.Keys)
                {
                    if (Profile[k] != null)
                    {
                        Profile[k].Value = ProfileChanges[k];
                        Profile[k].Privacy = Privacy.Public;
                    }
                }
                Profile.Commit();
            }
            catch (Exception ex)
            {
                Console.WriteLine("Error updating profile " + Profile.DisplayName);
                Console.WriteLine(ex.Message);
                return false;
            }
            return true;
        }

        static UserProfile GetProfile(string Username)
        {
            return uMan.GetUserProfile(Username);
        }

        static Dictionary<string, string> UpdatedAttributes(string row, out UserProfile Profile)
        {
            // Row format: USERNAME,CORP\Username,FIELDNAME,FIELDVALUE[,FIELDNAME,FIELDVALUE...]
            Dictionary<string, string> result = new Dictionary<string, string>();
            string[] split = row.Split(Splitter);
            Profile = GetProfile(split[1]);
            for (int i = 2; i < split.Count(); i++)
            {
                result.Add(split[i], split[i + 1]);
                i++;
            }
            return result;
        }
    }
}

SharePoint and Chrome

Slalom Consultant Maarten Sundman

Slalom Consultant Maarten Sundman specializes in .NET, SharePoint, and Silverlight solutions and has experience in the Financial Services, Software, and Utilities and Energy sectors.

A few months ago I posted an entry called SharePoint 2010 Scrolling detailing a method to get scrolling working on less-supported browsers in SharePoint 2010, along with some background on why the method works.

I’ve recently had reason to revisit Chrome and SharePoint compatibility. In the process, a different fix came out of it, one that is less heavy-handed. But more importantly, I found that the cause of the scrolling inconsistency in browsers like iOS Safari and Chrome is not one of the causes previously documented by other bloggers or myself. The cause is, in fact, a timing issue around the execution of a specific bit of very important onload JavaScript.

The bit that doesn’t execute (and causes a systemic issue, of which the scrolling weirdness is only one symptom) is: Read more of this post

Webinar: Accelerating SharePoint for Mobile Solutions on the AWS Cloud

Slalom Consultant Joel Forman

Slalom Consultant Joel Forman specializes in cloud computing and the Windows Azure Platform.

I wanted to take the opportunity to post about an exciting upcoming live webinar, entitled Accelerating SharePoint for Mobile Solutions on the AWS Cloud, that is being co-delivered by Slalom Consulting and Amazon Web Services.

On Wednesday, August 15th at 10:00 AM PST, we will bring several emerging topics together around mobility and the cloud. You will have the opportunity to learn more about how to make SharePoint applications available to your mobile users using the AWS cloud, directly from an AWS Solution Architect. Then, we will demonstrate how you can quickly and securely mobilize SharePoint content with our SharePoint Mobile Accelerator. The Accelerator is a framework that can target both on-premise and cloud SharePoint implementations, and allows for rapid development of custom iPhone & iPad applications to enable your growing mobile workforce while maintaining corporate security standards.

Here are the individuals that will be presenting during this live session: Read more of this post

Creating a Drop Down Navigation Menu in SharePoint Using JavaScript

Slalom Consulting—Dennis Jackson

Dennis Jackson is a Slalom Consultant and Architect in our Dallas Portals & Collaboration Practice. He has over 15 years’ experience using databases and the web to create corporate intranets, call center applications, content management platforms, e-commerce, and self-service sites for Fortune 500 clients.

The Case
The built-in navigation for SharePoint in general can take some getting used to. In 2010 it is much better, but there are occasions where a client may want something different from the way that SharePoint wants to do it. In this case the client was asking for a drop down menu where a user could pick from “any of the team sites” and “just go there.”

And if you were in a team site, then list the other ones.

But we have a preference in our practice for using out-of-the-box SharePoint first and avoiding customizations if possible. So the approach that we ended up going with was client-side JavaScript (JS), using the Content Editor Web Part to host the script.

The Plan
There are a few main parts of the JS that needed to be ironed out.

  • First, communicating with SharePoint to get a list of all the webs that were under this site.
  • Second, iterating over the results and appending them to a select drop down box with the needed data and labels.
  • Third, recognizing when the drop down changes and routing the user appropriately (a rough sketch of this approach appears below). Read more of this post
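
The full post walks through the actual script, but as a rough sketch of that general approach (not the post’s exact code), the SharePoint 2010 JavaScript client object model could be used along these lines. The select element’s ID and the decision to wire everything up in the same Content Editor Web Part are assumptions made for illustration:

<script type="text/javascript">
// Sketch only: populate a drop down with the sub-sites of the current site
// and navigate when the user picks one. Assumes a <select id="teamSiteNav">
// element placed in the same Content Editor Web Part.
function loadTeamSites() {
    var ctx = SP.ClientContext.get_current();
    var webs = ctx.get_web().get_webs();
    ctx.load(webs, 'Include(Title, ServerRelativeUrl)');
    ctx.executeQueryAsync(function () {
        var select = document.getElementById('teamSiteNav');
        var e = webs.getEnumerator();
        while (e.moveNext()) {
            var web = e.get_current();
            var option = document.createElement('option');
            option.text = web.get_title();
            option.value = web.get_serverRelativeUrl();
            select.appendChild(option);
        }
        // Route the user to the selected site when the drop down changes.
        select.onchange = function () {
            if (this.value) { window.location.href = this.value; }
        };
    }, function (sender, args) {
        // Leave the menu empty if the query fails.
    });
}
// Wait until sp.js has loaded so the client object model is available.
ExecuteOrDelayUntilScriptLoaded(loadTeamSites, 'sp.js');
</script>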

Accelerating SharePoint Mobile Development

Slalom Consulting—Jon Allegre

Jon Allegre is a Solution Architect for Slalom Consulting. Jon’s focus is on delivering solutions and strategies around mobility (iOS, Android, mobile HTML5), user experience, and alternative application hosting models (AWS, Azure).

Liberate Your Content

The amount of structured and unstructured content in the enterprise is ever increasing, and it is a challenge for organizations to manage and access this content. Numerous systems and tools are used by organizations to store, manage, and surface corporate information to users. Microsoft SharePoint is a popular choice for a majority of the world’s largest businesses, enabling these activities for tens of millions of users.

A second phenomenon occurring in today’s businesses is the mobilization of the workforce and the need to access this content from anywhere, on any device, whether connected to the Internet or offline. From the salesperson or field worker to the executive on the go, there is considerable need for employees to have access to the most up-to-date content that pertains to their role in the organization.

Furthermore, to make the information useful, it must be delivered in a meaningful way to the end user. In some cases, this may mean delivering the content to the end user in a rich, compelling experience rather than simply allowing them to browse to the location of the content and view it. Read more of this post
