The BCS hybrid scenario supports connecting only to an Open Data Protocol (OData) source. If your external data already exposes an OData service endpoint, you can skip the endpoint-creation portions of this procedure.
Using Visual Studio 2013, create an empty ASP.NET web application named NorthwindWeb, and follow these steps. This application will serve as the OData source.
Starting with the February 2015 CUs, all SharePoint updates will be delivered through Windows Update. In a blog post, Stefan Goßner indicates the changes will start with this month's CU.
What does this mean to you?
1. Should I configure my server for auto update?
No, it is always best practice to schedule your Windows Server updates. Change your Windows Update configuration to notify you when updates are ready to be installed, or schedule a manual check for updates. This way, you can coordinate updates across all the servers in your farm at the same time.
2. Should I run the Windows Update on all the servers?
Yes. If you run Windows Update on one server in the farm, then you will have to run it on all servers in the farm, so that all servers are at the same patch level.
3. Should I run the Configuration Wizard after the Windows Update?
Yes. You have to run the Configuration Wizard every time a Windows Update includes a SharePoint patch.
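As a sketch, assuming the SharePoint 2013 Management Shell is available on the server, you can check the farm build level after patching and run the Configuration Wizard unattended from the command line instead of the UI:

```powershell
# Verify the farm build number after installing updates
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
(Get-SPFarm).BuildVersion

# Run the Configuration Wizard unattended (build-to-build upgrade)
PSConfig.exe -cmd upgrade -inplace b2b -wait
```

Run the PSConfig step on every server in the farm, one server at a time.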
In a nutshell, you need to plan Windows Update for your SharePoint farms the same way you have done in the past. What has changed is that patches are now pushed down to your servers, and it is up to you to manage them according to Microsoft's recommended approach. Always apply your patches in your Development and Pre-Production environments before applying them to your Production farm. Make sure you do your homework in terms of testing and backups before applying any patches.
However, it is always recommended to apply the latest security updates to your environment as they contain important fixes to the platform.
SharePoint 2013 has made life much easier for developers to create a responsive website without much effort, and with no customization required. With the introduction of the Design Manager, you can easily brand your SharePoint publishing site from your HTML composite files. This can be done with either an on-premises SharePoint farm or SharePoint Online.
This post will walk you through how you can create your master page and brand your site in a few hours.
First, you need to create a Publishing Site Collection. Design Manager only works with a site collection.
Navigate to the site collection you have created, click on the Site Settings icon, then select Design Manager.
In the Design Manager page, you have a few options to choose from, but in this post we are going to focus on only three sections, which will allow us to brand a SharePoint site based on existing HTML5/CSS3 composites.
In the Quick Launch Navigation, click on Upload Design File.
This will show you a URL to which you need to map a network drive:
Go ahead and map your network drive to the URL; in my case it is http://sp2013/_catalogs/masterpage
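The drive mapping can also be done from a command prompt; a small sketch, assuming the WebClient service is running on your machine and using my site URL from above:

```powershell
# Map the master page catalog to a drive letter over WebDAV
net use Z: http://sp2013/_catalogs/masterpage
```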
The picture above shows you how your SharePoint Site Collection Catalogs list shows up in File Explorer.
Now, drag the folder that contains your HTML5, CSS3, JS, images, etc. into the \_catalogs folder. Note that you need to drag and drop the entire folder, not just the individual files. I suggest keeping them all under the same folder so you can better manage them within SharePoint.
As you can see, my HTML composites that I received from my designer are copied into my \_catalogs folder:
Next, go back to the Design Manager web page, and click on Edit Master Pages.
Click on the link that says Convert an HTML file to a SharePoint master page, and select the HTML file you just uploaded. This will convert your HTML file to a master page, add all the necessary controls, and fix the reference links to your JS, CSS, and image locations. If you navigate to your \_catalogs list, you should see that your master page has been created automatically.
You might have warnings and errors in the conversion status; click on the link that says warnings and errors. The page will show you where the errors are. SharePoint 2013 validates your HTML5 structure to make sure it is written properly. In my case, for example, the HTML was missing a space in one of the tags.
To fix it, go to File Explorer where you mapped the catalogs list and open the HTML file (NOT the master page) in any HTML editor. You do not need to use SharePoint Designer. I personally prefer Notepad++, but you can choose any other HTML editor such as Dreamweaver, Eclipse, etc.
When you have corrected all the errors in the HTML, the master page associated with your HTML file is ready to be used as your default master page. Before doing so, you may want to add some dynamic controls to your master page, such as top navigation, vertical navigation, or a search box, or you can add your own custom controls.
To do so, click on the Snippets link located on the master page preview. Note that this is the only place where you can find the Snippets link.
Clicking on each snippet will give you the code you need. Copy the code and paste it in the appropriate place in the HTML file. For example, you may want to replace your static HTML navigation with the Top Navigation snippet to make it dynamic.
When you are done, you may want to clean up your converted HTML file to remove all the static content and replace it with either nothing or placeholders for the Publishing HTML fields used in your page layouts.
When you are satisfied with your master page, you can publish it. Note that you need to publish your HTML page; this will automatically publish your master page.
At this time, you can start using your new master page as your default branding template.
Implementing SharePoint 2013 in a secure zone as an extranet application can be challenging if you are deploying your farm in a zone with many restrictions.
Recently, I deployed a large SharePoint 2013 farm in a DMZ zone for a regulated portal. Regulated data in my case meant the following restrictive rules in the network and on the servers in the farm:
Configuring SharePoint in this environment was not a straightforward exercise. After disabling some GPO policies to allow the creation of the IIS web applications, we had to map out the communication between all the servers so that the firewall ports could be opened, allowing each server in the farm to talk to the others.
To get a better understanding of the ports required in your farm, you can follow this TechNet article. It explains the details of each port and its use.
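Opening the inbound ports with Windows Firewall can be scripted. The snippet below is only an illustration using two of the well-known SharePoint service application ports (32843/32844), not the full list from the TechNet article:

```powershell
# Illustrative only: allow the SharePoint service application endpoints inbound
New-NetFirewallRule -DisplayName "SharePoint Service Applications" `
    -Direction Inbound -Protocol TCP -LocalPort 32843,32844 -Action Allow
```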
Configuring SharePoint was successful; everything worked: the portals were up and running, content was being populated, User Profile Service synchronization was working, and the Search Service Application was up and running.
However, I was faced with a very challenging issue when crawling content. Crawling the SharePoint content source always returned a "timeout" error in the logs. Resolving this issue took a lot of log monitoring, custom code to monitor the traffic, and long nights.
This means that the search crawler is sending an HTTP request to your portal but not receiving an answer back. Authentication is fine, security is OK, but there is no HTTP round trip back to the crawl server.
Here are my suggestions for resolving a Search Crawl timeout issue; one of them might resolve yours:
I suggest first looking into the firewall rules again. Nine times out of ten, it is the firewall doing funny things to block traffic between the servers. In my case, the security team was using Cisco Smart Care, an advanced firewall that does more than just inspect port rules. You will have to create exceptions for applications, because it detects SharePoint traffic and automatically blocks it if SharePoint is not listed as one of the trusted apps.
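One quick way to confirm whether the round trip works is to issue a request from the crawl server itself; a minimal sketch, assuming your portal URL is http://sp2013:

```powershell
# Run on the crawl server: does the request come back at all?
$response = Invoke-WebRequest -Uri "http://sp2013" -UseDefaultCredentials -TimeoutSec 60
$response.StatusCode
```

A hang or timeout here, while the same request succeeds from a web server, points at the network path rather than at SharePoint.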
The Distributed Cache Service (DCS) is a customized version of Windows Server AppFabric deployed with SharePoint 2013.
The Distributed Cache service provides caching functionality to features (not to be confused with site features) in SharePoint Server 2013. The Distributed Cache service is either required by or improves performance of the following features:
For more details about managing and deploying DCS, you can visit this TechNet article.
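To see the state of the cache cluster on a SharePoint server, you can use the AppFabric cmdlets that ship with it; a quick sketch:

```powershell
# Connect to the local cache cluster and list cache hosts and their status
Use-CacheCluster
Get-CacheHost
```

Each host should report a Service Status of UP.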
While implementing a SharePoint 2013 farm, I encountered some errors loading the newsfeed and a decrease in farm performance.
While examining the ULS logs, I immediately noticed dozens of DCS errors logged every 60 seconds. The error was:
Unexpected error occurred in method 'GetObject' , usage 'Distributed Logon Token Cache' – Exception 'Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode<ERRCA0018>:SubStatus<ES0001>:The request timed out
The Distributed Logon Token Cache stores the security token issued by a Secure Token Service for use by any web server in the server farm. Any web server that receives a request for resources can access the security token from the cache, authenticate the user, and provide access to the resources requested.
Tracing through the logs, I saw that when a user accesses a page, SharePoint attempts to authorize the user to ensure access can be granted. SharePoint stores the user's token in the user's browser session and in the DistributedCacheLogonTokenCache container. When SharePoint tried to retrieve the token from distributed cache, the connection would time out or be unavailable and the comparison would fail. Since it couldn't validate the presented token, SharePoint had no choice but to log the user out and redirect them to the sign-in page.
In general, the problem might cause failures or performance problems of the following:
After further research, I found out that, out of the box, AppFabric 1.1 contains a garbage collection bug that impacts SharePoint 2013 farms running the March 2013 CU.
Resolution
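The resolution commonly documented for this AppFabric garbage collection bug is to install a recent AppFabric 1.1 cumulative update and enable background garbage collection for the caching service. Treat the sketch below as a guide; the install path shown is the default and may differ on your servers:

```powershell
# On each cache host, after installing the latest AppFabric 1.1 CU:
# add <add key="backgroundGC" value="true" /> to the <appSettings> section of
# DistributedCacheService.exe.config (default location below), then restart
# the caching service so the change takes effect.
$config = "C:\Program Files\AppFabric 1.1 for Windows Server\DistributedCacheService.exe.config"
Restart-Service AppFabricCachingService
```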
In SharePoint 2013, all content can now be surfaced using search. The search-driven web parts have their own Query Builder user interface, which makes it very easy to select, filter, and display the data that you want. However, the Content Search Web Part is only available in SharePoint 2013 Enterprise edition. If you are using Enterprise CALs, then you should see the search-driven web parts in your web part gallery.
But this is not always the case if you have played around with licensing in the farm. SharePoint 2013 provides a new feature called SharePoint User License Enforcement (SPULE) that a lot of people may not be aware of. SPULE means that we can have a mix of different licenses in a single farm: Enterprise features can be made available to those who need them, and Standard features to others. This can save an organization a substantial amount in Client Access License costs.
If for some reason you ran the Enable-SPUserLicensing cmdlet, this will actually disable all your search-driven web parts. Note that by default, SPULE is not enabled.
To get an overview of SPULE in your farm, run this command: Get-SPUserLicensing. If True is returned, SPULE has been enabled on your farm.
What you need to do is disable SPULE, and the search-driven web parts will appear again. Run Disable-SPUserLicensing, and voila! Your web parts are back in the gallery!
Note: You can set SPULE based on different AD groups, and you can set it for different types of licenses. This TechNet article explains how you can manage the different license mappings in your farm.
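Putting the licensing cmdlets together, a sketch (the AD group name below is hypothetical):

```powershell
# Is license enforcement on?
Get-SPUserLicensing

# Turn enforcement off so the Enterprise (search-driven) web parts reappear
Disable-SPUserLicensing

# Or keep enforcement and map a (hypothetical) AD group to a Standard license
New-SPUserLicenseMapping -SecurityGroup "SP-Standard-Users" -License Standard |
    Add-SPUserLicenseMapping
Enable-SPUserLicensing
```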
Many customers are excited about the new features that SharePoint 2013 brings to the table. Organizations small and large, whatever the size of their SharePoint implementation, hesitate to upgrade for many reasons, but they want to take advantage of some of the new features of 2013.
In the first section of this article, I am going to show how you can create a SharePoint 2013 Search Service Application using PowerShell. This list of commands allows you to name your own database, instead of ending up with a GUID-based database name for search.
The architecture and design of search in SharePoint 2013 have changed a bit. There are more components and more flexibility for a highly available search farm, allowing the farm to index more than 100 million items.
There are several steps involved in the creation of a Search Service Application and defining the Search Topology. The steps are:
Instead of using Central Admin, I will be showing PowerShell commands to create SSA:
# Define the variables
$SSADB = "SharePoint_Demo_SearchAdmin"
$SSAName = "Search Service Application"
$SVCAcct = "mcm\sp_search"
$SSI = get-spenterprisesearchserviceinstance -local
#1. Start the search services for SSI
Start-SPEnterpriseSearchServiceInstance -Identity $SSI
#2. Create the Application Pool
$AppPool = New-SPServiceApplicationPool -name $SSAName"-AppPool" -account $SVCAcct
#3. Create the search application and set it to a variable
$SearchApp = New-SPEnterpriseSearchServiceApplication -Name $SSAName -applicationpool $AppPool -databaseserver SQL2012 -databasename $SSADB
#4. Create search service application proxy
$SSAProxy = New-SPEnterpriseSearchServiceApplicationProxy -name $SSAName" Application Proxy" -Uri $SearchApp.Uri.AbsoluteURI
#5. Provision Search Admin Component
Set-SPEnterpriseSearchAdministrationComponent -searchapplication $SearchApp -searchserviceinstance $SSI
#6. Create the topology
$Topology = New-SPEnterpriseSearchTopology -SearchApplication $SearchApp
#7. Assign server(s) to the topology
$hostApp1 = Get-SPEnterpriseSearchServiceInstance -Identity "SPWFE"
New-SPEnterpriseSearchAdminComponent -SearchTopology $Topology -SearchServiceInstance $hostApp1
New-SPEnterpriseSearchCrawlComponent -SearchTopology $Topology -SearchServiceInstance $hostApp1
New-SPEnterpriseSearchContentProcessingComponent -SearchTopology $Topology -SearchServiceInstance $hostApp1
New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchTopology $Topology -SearchServiceInstance $hostApp1
New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $Topology -SearchServiceInstance $hostApp1
New-SPEnterpriseSearchIndexComponent -SearchTopology $Topology -SearchServiceInstance $hostApp1 -IndexPartition 0
#8. Activate the topology
$Topology | Set-SPEnterpriseSearchTopology
Once the SSA is created, we will need to clone the topology to be able to extend it to other servers in the farm. In this script, we will replicate all the search components onto two servers in the farm, also creating two index partitions. Here are the steps:
#1. Get the search service instances on the servers to extend the topology to:
$hostApp1 = Get-SPEnterpriseSearchServiceInstance -Identity "AppSearch1"
$hostApp2 = Get-SPEnterpriseSearchServiceInstance -Identity "AppSearch2"
#2. Start the search service instances:
Start-SPEnterpriseSearchServiceInstance -Identity $hostApp1
Start-SPEnterpriseSearchServiceInstance -Identity $hostApp2
#3. Keep running this command until the Status is Online:
Get-SPEnterpriseSearchServiceInstance -Identity $hostApp1
Get-SPEnterpriseSearchServiceInstance -Identity $hostApp2
#4. Once the status is online, you can proceed with the following commands:
$ssa = Get-SPEnterpriseSearchServiceApplication
$active = Get-SPEnterpriseSearchTopology -SearchApplication $ssa -Active
$newTopology = New-SPEnterpriseSearchTopology -SearchApplication $ssa
#Assign components to the hosts
New-SPEnterpriseSearchAdminComponent -SearchTopology $newTopology -SearchServiceInstance $hostApp1
New-SPEnterpriseSearchCrawlComponent -SearchTopology $newTopology -SearchServiceInstance $hostApp1
New-SPEnterpriseSearchContentProcessingComponent -SearchTopology $newTopology -SearchServiceInstance $hostApp1
New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchTopology $newTopology -SearchServiceInstance $hostApp1
New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $newTopology -SearchServiceInstance $hostApp1
New-SPEnterpriseSearchIndexComponent -SearchTopology $newTopology -SearchServiceInstance $hostApp1 -IndexPartition 0
New-SPEnterpriseSearchAdminComponent -SearchTopology $newTopology -SearchServiceInstance $hostApp2
New-SPEnterpriseSearchCrawlComponent -SearchTopology $newTopology -SearchServiceInstance $hostApp2
New-SPEnterpriseSearchContentProcessingComponent -SearchTopology $newTopology -SearchServiceInstance $hostApp2
New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchTopology $newTopology -SearchServiceInstance $hostApp2
New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $newTopology -SearchServiceInstance $hostApp2
#Below creates a second index partition on host 2. If you instead want to replicate
#index partition 0 to the second server, use -IndexPartition 0 here instead.
New-SPEnterpriseSearchIndexComponent -SearchTopology $newTopology -SearchServiceInstance $hostApp2 -IndexPartition 1
#5. Activate the topology:
Set-SPEnterpriseSearchTopology -Identity $newTopology
The above scenario creates a search topology across a two-server farm. For a larger search topology, you can simply add more hosts to the topology and select which components to run on them.
I recently ran into some issues applying KBs on a SharePoint 2013 farm. My farm consists of 6 servers:
As usual, I installed all the KBs on all the servers first, starting with the application servers, then I ran the SharePoint Configuration Wizard on each server. The wizard completed successfully on the two application servers, but failed on the rest.
The error I was getting was:
“Error: Some farm products and patches were not detected on this or other servers. If products or patches are missing locally, you must quit this program and install the required products and patches on this server before starting this wizard. If products or patches are missing on your servers, you must install the required products and patches on the specific servers, and you may then click the Refresh button to perform the status check again.”
I tried rebooting the servers and running psconfig -cmd installcheck -noinstallcheck, but this did not help; I was getting the same error through the script. The farm thought that the servers had not received the required KBs.
After further investigation, it turns out that the application registry on the servers needed to get refreshed. I ran the following PS command (as administrator):
Get-SPProduct -local
This forces a refresh of the installed-products information on the server. You will need to run the command on all of the affected servers before you can run the Configuration Wizard again.