Search This Blog

Thursday, March 4, 2010

Iframe Alternative

On my company's corporate web site, we display required third-party reporting content that is hosted on other domains. This content is secured and requires authentication over SSL. To display it on our site, we used an IFRAME.

Our problem started when that other domain deployed a new version of their site with a rich UI built on Ajax and Dojo script. After the upgrade, our pages began throwing odd JavaScript errors: "Permission denied" in Firefox and "Access is denied" in IE. We knew the problem lay with the other site's objects, and the bigger issue was that we could not change anything on their side, as the site belongs to a different merchant on a different domain. We concluded that we needed an alternative to the IFRAME that could still host third-party content on our site.

After some googling, we found the solution below.

For the container (just like an IFRAME container), we used a DIV tag and made it run at the server in our ASPX page:

<div id="ReportContainer" runat="server" />

For the source content we used an <object> tag, which fetches the dynamic content from the third-party secure site. To render that dynamic content inside our DIV, we used the following line of code in the code-behind file (.cs):

ReportContainer.InnerHtml = "<object type='text/html' style='width: 720px;height: 600px;' data='" + gsURL + "'></object>";

After adding these two lines of code we were able to render the third-party content on our site, but it only works in IE. We are still working on getting it to run in the full variety of browsers.
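For completeness, here is a minimal sketch of how the two pieces fit together in the code-behind. The page class name and the ReportUrl config key are assumptions for illustration; gsURL can come from wherever your application keeps the report URL.

using System;
using System.Configuration;

public partial class Report : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Assumed: the secure third-party report URL is kept in web.config
        string gsURL = ConfigurationManager.AppSettings["ReportUrl"];

        // Render an <object> tag inside the server-side DIV instead of an IFRAME
        ReportContainer.InnerHtml = "<object type='text/html' " +
            "style='width: 720px;height: 600px;' data='" + gsURL + "'></object>";
    }
}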

Please share with us if you have any other good alternative to the IFRAME.

Kind Regards and Happy Coding

Tuesday, January 12, 2010

Performance comparison of WCF with other well-known distributed communication technologies

Hello friends,

Please go through the link below; it compares the performance of WCF with other well-known distributed communication technologies.

http://msdn.microsoft.com/en-us/library/bb310550.aspx

Please don't forget to read the conclusion section.

Many Thanks

Performance TIPS

Hello friends,

Here are some performance TIPS. Please start using them in your applications.

Performance
The more the code is separated into different assemblies, the slower it becomes. The reason for this slowdown is simple: it takes longer for a method in one assembly to call a method in another assembly than it would if the method were in the same assembly. Some time is needed to refer to the other assembly and to read, find, and execute the required code.

Scalability
Scalable means being able to handle an increased load in the future.

Re-usability
If application code can be re-used within itself, or by some other external application, then not only do we save development and maintenance costs, but we also avoid code replication and make our code componentized.

Loose-Coupling
The UI interacts with business logic (BL) classes, which in turn call DAL methods. So if a DAL method breaks, the UI may not break as easily as it would if it were wired directly to the data access code, because we have made the layers loosely coupled by bringing in a third layer (the BL) between them.

Plug and Play
Example: the application should work with MS SQL Server as well as with Oracle or any other database. So we need to make our DAL code capable of switching between databases. To achieve this, we create a separate DAL assembly for each database type, and we load a specific DAL assembly at runtime based on a key value in a config file, as sketched below. This makes our application Plug and Play.
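A minimal sketch of such a switch, assuming a shared interface IDataAccessLayer that every DAL assembly implements and an appSettings key DalTypeName; both names are hypothetical:

using System;
using System.Configuration;

// Hypothetical contract implemented by each database-specific DAL assembly
public interface IDataAccessLayer
{
    void Save(string data);
}

public static class DalFactory
{
    public static IDataAccessLayer Create()
    {
        // e.g. "MyApp.SqlServerDal.SqlDal, MyApp.SqlServerDal"
        //  or  "MyApp.OracleDal.OracleDal, MyApp.OracleDal"
        string typeName = ConfigurationManager.AppSettings["DalTypeName"];

        // Load the configured assembly and instantiate the DAL type at runtime
        Type dalType = Type.GetType(typeName, true); // true = throw if not found
        return (IDataAccessLayer)Activator.CreateInstance(dalType);
    }
}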

Communication between tiers via Data Transfer Objects (DTOs)
DTOs are simple, flexible objects with no defined methods and only public members. They act as carriers of data to and from the layers. Some people use strongly typed datasets for the same purpose. The reason we need DTOs is that passing business objects through layers is cumbersome: business objects are quite "heavy" and carry a lot of extra information that is usually not needed outside the layers. So instead of passing business objects, we create lightweight DTOs and make them serializable.
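For example, a hypothetical CustomerDTO (field names assumed) would be nothing more than:

using System;

// A lightweight, serializable carrier of data between layers:
// no methods, only public members
[Serializable]
public class CustomerDTO
{
    public int CustomerId;
    public string FirstName;
    public string LastName;
    public string EmailAddress;
}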

Lazy Loading
Using the lazy loading pattern, we can defer loading all of the properties of an object until they are really needed. Let me explain with an example. In a simple customer data system, consider a form that shows the list of all customers in a grid. On this form, only the Customer ID, first name, and last name are shown, along with an edit and a delete button. The customer's address, email address, password, and other fields are shown only when someone edits an existing customer or adds a new one, which is done through another form. So when we load a list of Customer objects for the Customer List form, we don't need to fetch all the fields from the database; we only need the Customer ID, first name, and last name, which makes the application more performance-efficient (by getting only the data required). When we are on the Edit Customer form, we need to fetch all of the details. This could be done with two methods in the DAL, one for a partial fetch and another for a complete fetch, but that approach is cumbersome, and we cannot always write two methods for each entity like this. So we follow the lazy loading design pattern and use an enum like LoadStatus in our code, which can have several statuses, as sketched below.
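A minimal sketch of the idea; the enum values and fields are assumptions:

// Tracks how much of the entity has actually been fetched
public enum LoadStatus
{
    Partial,  // only CustomerId, FirstName, LastName (enough for the grid)
    Complete  // every field (needed by the Edit Customer form)
}

public class Customer
{
    public int CustomerId;
    public string FirstName;
    public string LastName;
    public string Address;       // populated only on a Complete load
    public string EmailAddress;  // populated only on a Complete load

    // A single DAL fetch method can branch on the requested status
    // instead of needing two separate methods per entity
    public LoadStatus Status = LoadStatus.Partial;
}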

Use Collection objects instead of List for data communication
Instead of returning List<T>, we should return Collection<T> from DAL methods, because List<T> is meant for internal use only, not for public APIs. Our DAL is built in an API-like fashion; to keep that API extendable we use Collection<T>, which is extendable, unlike List<T>, where we cannot override any member. For example, we can override the protected SetItem method in Collection<T> to get notified when the collection changes (such as when a new item is added). Besides this, List<T> carries a lot of extra functionality that is useful internally but not as a return type for an API.
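For instance, a sketch of the SetItem override mentioned above, reusing the Customer class from the lazy loading example:

using System.Collections.ObjectModel;

public class CustomerCollection : Collection<Customer>
{
    // Called whenever an element is replaced; Collection<T> also exposes
    // InsertItem, RemoveItem and ClearItems, none of which List<T> lets us override.
    protected override void SetItem(int index, Customer item)
    {
        base.SetItem(index, item);
        // e.g. raise a "collection changed" notification here
    }
}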

Call the database through a data reader
Note the use of data readers for better performance. Data readers are read-only, forward-only pointers to the database, so they are much lighter and faster than data sets and data adapters (which in turn use data readers to map data). It is always better to use data readers for filling custom entities: we get a fine level of control over the data access process, in addition to the performance advantage of the data reader.
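A sketch of a DAL method filling custom entities through a data reader; the connection string handling and the table/column names are placeholders:

using System.Collections.ObjectModel;
using System.Data.SqlClient;

public static class CustomerDal
{
    public static Collection<Customer> GetCustomers(string connectionString)
    {
        Collection<Customer> customers = new Collection<Customer>();
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT CustomerId, FirstName, LastName FROM Customers", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Forward-only, read-only traversal: one row in memory at a time
                while (reader.Read())
                {
                    Customer c = new Customer();
                    c.CustomerId = reader.GetInt32(0);
                    c.FirstName = reader.GetString(1);
                    c.LastName = reader.GetString(2);
                    customers.Add(c);
                }
            }
        }
        return customers;
    }
}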

Paging Data
When you have to retrieve a large amount of data, it is a good idea to consider using data paging techniques to avoid scalability problems. Generally, follow the simple rule of not retrieving any more data than you require at any one time. For example, if you have to display 1,000 rows of data in a grid with 20 rows per page, implement data retrieval logic that retrieves 20 rows at a time. Data paging techniques help to reduce the size of data sets, and to avoid expensive and unnecessary heap allocations that are not reclaimed until the process is recycled.
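As an illustration, a hedged sketch of one paging approach, assuming SQL Server 2005 or later for ROW_NUMBER() (on SQL Server 2000 a TOP-based technique would be needed instead); the table and column names are placeholders:

using System.Data.SqlClient;

public static class CustomerPaging
{
    // Builds a command that returns only the requested window of rows
    // (pageIndex is zero-based), not the whole table.
    public static SqlCommand BuildPageCommand(SqlConnection conn,
                                              int pageIndex, int pageSize)
    {
        SqlCommand cmd = new SqlCommand(@"
            SELECT CustomerId, FirstName, LastName
            FROM (SELECT CustomerId, FirstName, LastName,
                         ROW_NUMBER() OVER (ORDER BY LastName) AS RowNum
                  FROM Customers) AS Numbered
            WHERE RowNum BETWEEN @First AND @Last;", conn);
        cmd.Parameters.AddWithValue("@First", pageIndex * pageSize + 1);
        cmd.Parameters.AddWithValue("@Last", (pageIndex + 1) * pageSize);
        return cmd;
    }
}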

Many Thanks,

Courtesy of Vivek Thakur, from his book “ASP.NET 3.5 Application Architecture and Design”


Take care when you design country code table

Hello Friends,
When you design a country code table, please note that:
1) one calling code can serve two different countries, like +1 for the US and Canada, and
2) a single country/territory can have multiple codes.
Below is a link to the list of country calling codes (ISD codes) for reference:
http://en.wikipedia.org/wiki/List_of_country_calling_codes
Take care when you design or develop any system that deals with these.
Many Thanks,

SQL Server 2000 Best Practices

Courtesy of: VinodKumar

Reference: http://extremeexperts.com/SQL/Articles/BestPractices.aspx

Hello friends,

In many sessions and user groups I've been asked for the best, or rather should I say better, practices of coding while using SQL Server 2000. The views listed below are mine, and I'm sure most SQL Server gurus would not argue otherwise. I just thought of adding sections on coding practices, and here I am... You can also consider these as guidelines for development in SQL Server. Hope you get good mileage out of this article.

1. Normalize your tables

There are two common excuses for not normalizing databases: performance and pure laziness. You'll pay for the second one sooner or later. As for performance: don't optimize what isn't slow, and more often than not the denormalized design ends up slower anyway. DBMSs were designed to be used with normalized databases, and SQL Server is no exception, so design with normalization in mind.

2. Avoid using cursors

Use cursors wisely. Cursors are fundamentally evil. They force the database engine to repeatedly fetch rows, negotiate blocking, manage locks, and transmit results. They consume network bandwidth as the results are transmitted back to the client, where they consume RAM, disk space, and screen real estate. Consider the resources consumed by each cursor you build and multiply this demand by the number of simultaneous users. Smaller is better. And good DBAs, most of the time, know what they are doing. But, if you are reading this, you are not a DBA, right?

Having said this, the next question is: if I were to use cursors, then what? Well, here are my two cents on cursor usage. Use the appropriate cursor for the job at hand.

· Don't use scrollable cursors unless required

· Use read-only cursors if you do not intend to update. This covers perhaps 90% of the situations.

· Prefer forward-only cursors when you do use cursors.

· Don’t forget to close and deallocate the cursors used.

· Try to reduce the number of columns and records fetched in a cursor

3. Index Columns

Create indexes on columns that are going to be highly selective. Indexes are vital to efficient data access; however, there is a cost associated with creating and maintaining an index structure. For every insert, update, and delete, each index must be updated. In a data warehouse this is acceptable, but in a transactional database you should weigh the cost of maintaining an index on tables that incur heavy changes. The bottom line is to use effective indexes judiciously. On analytical databases, use as many indexes as necessary to read the data quickly and efficiently.

A classic example: do NOT index a column like "Gender". It has a selectivity of about 50%, so if your table has 10 million records, you can rest assured that using this index may still mean traversing half the rows. Maintaining such indexes can slow your performance.

4. Use transactions

Use transactions judiciously; they will save you when things go wrong. After working with data for some time, you'll discover unexpected situations that make your stored procedure crash. See that the transaction starts as late as possible and ends as early as possible; this reduces how long resources stay locked. In short, keep transactions as short as possible, as sketched below.
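A client-side sketch of the same principle with ADO.NET (method and parameter names are placeholders): do the reads and computations first, and wrap only the writes in the transaction.

using System.Data.SqlClient;

public static class OrderWriter
{
    public static void SaveOrder(SqlConnection conn, string orderSql, string detailSql)
    {
        // Start the transaction as late as possible...
        SqlTransaction tx = conn.BeginTransaction();
        try
        {
            new SqlCommand(orderSql, conn, tx).ExecuteNonQuery();
            new SqlCommand(detailSql, conn, tx).ExecuteNonQuery();
            // ...and end it as early as possible
            tx.Commit();
        }
        catch
        {
            tx.Rollback();
            throw;
        }
    }
}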

5. Analyze deadlocks

Always access your tables in the same order. When working with stored procedures and transactions, you may discover this problem soon enough; any SQL programmer / database analyst has come across it. If the order changes, there will be a cyclic wait for resources to be released, and users will experience a permanent hang in the application. Deadlocks can be tricky to find if the lock sequence is not carefully designed. To summarize: a deadlock occurs when two users hold locks on separate objects and each tries to lock the other's objects. SQL Server automatically detects and breaks the deadlock; the terminated transaction is rolled back automatically and error code 1205 is issued.

6. GOTO Usage

Avoid using the infamous GOTO. This is a time-proven means of adding disorder to program flow. There are some cases where intelligent use of GOTO is preferable to dogmatically refusing to use it. On the other hand, unintelligent use of GOTO is a quick ticket to unreadable code.

7. Increase timeouts

When querying a database, the default timeout is often low, like 30 seconds. Remember that report queries may run longer than this, especially as your database grows, so increase this value to an acceptable one.

8. Avoid NULLable columns

NULLable columns consume an extra byte per row and add overhead when querying data. When possible, normalize your table and separate the nullable columns into their own table; the result is more flexible and faster, and reduces the number of NULLable columns. I'm not saying that NULLs are evil incarnate; I believe they can simplify coding when "missing data" is part of your business rules.

9. TEXT datatype

Avoid the TEXT datatype unless you are storing really large data. It is not flexible to query, is slow, and wastes a lot of space if used incorrectly. Often a VARCHAR will handle your data better. You can also look at the "text in row" table option in SQL Server 2000. But I would still stick to the first statement: avoid TEXT in the first place.

10. SELECT * Usage

It's very difficult to get out of this habit, but believe me, it is essential: please do NOT use this syntax. Always qualify the full list of columns. Selecting all columns increases network traffic, requires more buffers and processing, and can prove error-prone if the table or view definition changes.

11. Temporary tables usage

Avoid temporary tables unless strictly necessary. More often than not, a subquery can substitute for a temporary table. In SQL Server 2000, there are alternatives like the TABLE variable datatype, which can provide in-memory solutions for small tables inside stored procedures. To recollect some of the advantages of table variables:

· A table variable behaves like a local variable. It has a well-defined scope, which is the function, stored procedure, or batch in which it is declared. Within its scope, a table variable may be used like a regular table.

· However, a table variable may not be used in the following statements: INSERT INTO table_variable EXEC stored_procedure, and SELECT select_list INTO table_variable.

· Table variables are cleaned up automatically at the end of the function, stored procedure, or batch in which they are defined.

· Table variables used in stored procedures result in fewer recompilations of the stored procedures than their temporary table counterparts.

· Transactions involving table variables last only for the duration of an update on the table variable. Thus, table variables require less locking and logging resources

12. Using UDF

UDFs can replace stored procedures, but be careful in their usage: sometimes UDFs can take a toll on your application's performance. Also, UDF calls have to be prefixed with the owner's name. This is not a drawback but a requirement. I favor the use of SPs over UDFs.

13. Multiple User Scenario

Sometimes two users will edit the same record at the same time. On write-back, the last writer wins and some of the updates are lost. It is easy to detect this situation: create a timestamp column and check it before you write, as sketched below. Code for these practical situations and test your application for these scenarios.
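A sketch of the check, assuming the table carries a timestamp (rowversion) column named RowVersion; all names are placeholders:

using System.Data.SqlClient;

public static class CustomerUpdater
{
    // The UPDATE only matches if nobody changed the row since we read it;
    // zero rows affected means the other writer won and we must re-read.
    public static bool UpdateFirstName(SqlConnection conn, int customerId,
                                       string firstName, byte[] originalRowVersion)
    {
        SqlCommand cmd = new SqlCommand(@"
            UPDATE Customers
            SET FirstName = @FirstName
            WHERE CustomerId = @Id AND RowVersion = @RowVersion;", conn);
        cmd.Parameters.AddWithValue("@FirstName", firstName);
        cmd.Parameters.AddWithValue("@Id", customerId);
        cmd.Parameters.AddWithValue("@RowVersion", originalRowVersion);
        return cmd.ExecuteNonQuery() == 1;
    }
}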

14. Use SCOPE_IDENTITY

Don't do SELECT MAX(ID) FROM MasterTable when inserting into a details table. This is a common mistake, and it fails when concurrent users are inserting data at the same instant. Use SCOPE_IDENTITY or IDENT_CURRENT. My choice would be SCOPE_IDENTITY, as it gives you the identity value from the current scope.
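A sketch in ADO.NET terms; MasterTable comes from the tip above, while the Name column is a placeholder:

using System.Data.SqlClient;

public static class MasterDal
{
    // SCOPE_IDENTITY() returns the identity generated by *our* INSERT,
    // unaffected by concurrent inserts from other sessions.
    public static int InsertMaster(SqlConnection conn, string name)
    {
        SqlCommand cmd = new SqlCommand(
            "INSERT INTO MasterTable (Name) VALUES (@Name); " +
            "SELECT CAST(SCOPE_IDENTITY() AS INT);", conn);
        cmd.Parameters.AddWithValue("@Name", name);
        return (int)cmd.ExecuteScalar();
    }
}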

15. Analyze Query Plans

The SQL Server Query Analyzer is a powerful tool, and surely your friend; through it you will learn a lot about how SQL Server works and how query and index design affect performance. Examine the execution plan shown in the execution plan window for potential bottlenecks.

16. Parameterized queries

Parameterize all your queries using sp_executesql. This helps the optimizer cache the execution plans and reuse them when the same query is requested a second time, saving the time required to parse, compile, and place the execution plan. Avoid ad-hoc dynamic SQL as much as possible.
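From client code, a parameterized SqlCommand gets this for free: ADO.NET sends a parameterized text command through sp_executesql, so the plan can be cached and reused. A sketch (table and column names are placeholders):

using System.Data.SqlClient;

public static class ProductDal
{
    public static object GetPrice(SqlConnection conn, int productId)
    {
        // Sent to the server as sp_executesql with @Id as a typed parameter,
        // so the same cached plan serves every product id.
        SqlCommand cmd = new SqlCommand(
            "SELECT Price FROM Products WHERE ProductId = @Id;", conn);
        cmd.Parameters.AddWithValue("@Id", productId);
        return cmd.ExecuteScalar();
    }
}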

17. Keep Procedures Small

Keep SPs small in size and scope. Two users invoking the same stored procedure simultaneously will cause the procedure to create two query plans in cache. It is much more efficient to have a stored procedure call other ones than to have one large procedure.

18. Bulk INSERT

Use DTS or the BCP utility and you'll have both a flexible and fast solution. Avoid using INSERT statements for bulk loading; they are not efficient and are not designed for that purpose.

19. Using JOINS

Make sure that there are n-1 join criteria when there are n tables, and that ALL tables included in the statement are joined. Include only tables that:

· have columns in the SELECT clause,

· have columns referenced in the WHERE clause, or

· allow two otherwise unrelated tables to be joined together.

20. Trap Errors

Make sure that the @@ERROR global variable is checked after every statement that causes an update to the database (INSERT, UPDATE, DELETE). Make sure that rollbacks (if appropriate) are performed prior to inserting rows into an exception table.

21. Small Result Set

Retrieving needlessly large result sets (for example, thousands of rows) for browsing on the client adds CPU and network I/O load, makes the application less capable of remote use, and limits multi-user scalability. It is better to design the application to prompt the user for sufficient input so that the submitted queries generate modest result sets.

22. Negative Arguments

Minimize the use of not equal operations, <> or !=. SQL Server has to scan a table or index to find all values to see if they are not equal to the value given in the expression. Try rephrasing the expression using ranges:

WHERE KeyColumn < 'TestValue' OR KeyColumn > 'TestValue'

23. Date Assumption

To prevent issues with the interpretation of centuries in dates, do not specify years using two digits. Assuming date formats is one of the first places an application breaks, so avoid making this assumption.

24. SP_ Name

Do NOT start the name of a stored procedure with SP_. All system stored procedures follow this convention, so a perfectly valid procedure today may clash with the name of a system procedure bundled with a future service pack or security patch.

25. Apply the latest Security Packs / Service Packs

Even though this point applies to network and database administrators, it is always better to keep up to date on software patches. With the "Slammer" worm and many others still out there, staying current is one of the best practices. Consider this strongly.

26. Using Count(*)

The only 100 percent accurate way to check the number of rows in a table is a COUNT(*) operation. That statement can consume significant resources if your tables are very big, because scanning a large table or index incurs a lot of I/O. Avoid these types of queries as much as possible, and use short-circuiting methods such as EXISTS where you can. Here is another way to find the total number of rows in a table: SQL Server Books Online (BOL) documents the structure of sysindexes, and the value of sysindexes.indid is always 0 for a table without a clustered index and 1 for a clustered index, so every user table has exactly one entry with an indid of 0 or 1.

SELECT object_name(id) ,rowcnt
FROM sysindexes
WHERE indid IN (1,0) AND OBJECTPROPERTY(id, 'IsUserTable') = 1

27. Ownership Chaining

Try using this feature (available from SQL Server 2000 SP3) for permission management within a single database. Avoid using it to manage permissions across databases.

28. SQL Injection

Security has been a prime concern for everyone. Hence validate all the incoming parameters at all levels of the application. Limit the scope of possible damage by permitting only minimally privileged accounts to send user input to the server. Adding to it, run SQL Server itself with the least necessary privileges.

29. Fill-factor

The 'fill factor' option specifies how full SQL Server will make each index page. When there is no free space on an index page to insert a new row, SQL Server creates a new index page and transfers some rows from the old page to the new one; this operation is called a page split. You can reduce the number of page splits by setting an appropriate fill factor to reserve free space on each index page. The fill factor is a value from 1 through 100 that specifies the percentage to which each index page should be filled. The default value of 0 is treated like 100, the difference being that with FILLFACTOR = 0, SQL Server leaves some space within the upper level of the index tree. The fill factor percentage is used only at the time the index is created. If the table contains read-only data (or data that changes very rarely), you can set the fill factor to 100; when the table's data is modified very often, you can decrease it to, say, 70 percent. Having explained page splits in detail, I would warn you against overdoing this setting, because more free space means that SQL Server has to traverse more pages to get the same amount of data. Try to strike a balance and arrive at an appropriate value.

30. Start-up Procedures

Verify all start-up stored procedures for safety reasons.

31. Analyze Blocking

More often than not, an implementer's nightmare is a blocking process. Blocking occurs when a process must wait for another process to complete, because the resources it needs are exclusively held by the other process. A blocked process resumes once the other process releases the resources. Sometimes blocking becomes cyclic and the system comes to a standstill. The only solution is to analyze your indexing strategy and table design. Consider these points strongly.

32. Avoid Un-necessary Indexes

Avoid creating unnecessary indexes on a table thinking they will improve performance. Understand that creating and maintaining indexes is overhead you incur, and it surely reduces throughput for the whole application. You can run a simple test on a large table and see for yourself how multiple indexes on the same column decrease performance.

33. Consider Indexed Views

Sometimes we need a view to be indexed; this feature is bundled with SQL Server 2000. The result set of an indexed view is persisted in the database and indexed for fast access. Because indexed views depend on base tables, you should create them WITH SCHEMABINDING to prevent table or column modifications that would invalidate the view. Using indexed views can take a lot of load off the base tables, but it increases the maintenance cost.

34. WITH SORT_IN_TEMPDB Option

Consider using this option when you create an index and when tempdb is on a different set of disks than the user database. This is more of a tuning recommendation. Using this option can reduce the time it takes to create an index, but increases the amount of disk space used to create an index. Time is precious, disk is cheaper.

35. Reduce Number of Columns

Try to reduce the number of columns in a table. The fewer columns a table has, the less space it uses, since more rows fit on a single data page and less I/O overhead is required to access the table's data. This should be considered strongly by applications that talk across different machines: the more unwanted data passed around, the more network latency is observed.

Alternative of Cursor in SQL

Hello Friends,

We know how useful cursors are in SQL Server, and we also know their drawbacks: they consume lots of resources like RAM, disk space, and network bandwidth. Don't think too much; here is the answer to your question about alternatives to cursors in SQL. Please go through the link below, written by Mr. Vinodkumar.

http://extremeexperts.com/SQL/Articles/IterateTSQLResult.aspx


Nice tool to script individual objects in SQL Server 2005

Hello Friends,

Please find below a tool to script individual objects in SQL Server 2005.

http://www.sqlteam.com/publish/scriptio/

Happy Coding!!!

Script to generate CREATE INDEX scripts for all indexes in a table

Hi Friends,

In most cases we design the database first and create the indexes afterwards. We had a need to provide scripts for all indexes in our tables. Following is a script that generates CREATE INDEX statements for all indexes except primary keys.

-- Get all existing indexes, but NOT the primary keys
DECLARE cIX CURSOR FOR
    SELECT OBJECT_NAME(SI.Object_ID), SI.Object_ID, SI.Name, SI.Index_ID
    FROM Sys.Indexes SI
    LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS TC
        ON SI.Name = TC.CONSTRAINT_NAME
        AND OBJECT_NAME(SI.Object_ID) = TC.TABLE_NAME
    WHERE TC.CONSTRAINT_NAME IS NULL
        AND OBJECTPROPERTY(SI.Object_ID, 'IsUserTable') = 1
    ORDER BY OBJECT_NAME(SI.Object_ID), SI.Index_ID

DECLARE @IxTable SYSNAME
DECLARE @IxTableID INT
DECLARE @IxName SYSNAME
DECLARE @IxID INT

-- Loop through all indexes
OPEN cIX
FETCH NEXT FROM cIX INTO @IxTable, @IxTableID, @IxName, @IxID
WHILE (@@FETCH_STATUS = 0)
BEGIN
    DECLARE @IXSQL NVARCHAR(4000)
    SET @IXSQL = 'CREATE '

    -- Check if the index is unique
    IF (INDEXPROPERTY(@IxTableID, @IxName, 'IsUnique') = 1)
        SET @IXSQL = @IXSQL + 'UNIQUE '

    -- Check if the index is clustered
    IF (INDEXPROPERTY(@IxTableID, @IxName, 'IsClustered') = 1)
        SET @IXSQL = @IXSQL + 'CLUSTERED '

    SET @IXSQL = @IXSQL + 'INDEX ' + @IxName + ' ON ' + @IxTable + '('

    -- Get all columns of the index
    DECLARE cIxColumn CURSOR FOR
        SELECT SC.Name
        FROM Sys.Index_Columns IC
        JOIN Sys.Columns SC
            ON IC.Object_ID = SC.Object_ID
            AND IC.Column_ID = SC.Column_ID
        WHERE IC.Object_ID = @IxTableID AND Index_ID = @IxID
        ORDER BY IC.Index_Column_ID

    DECLARE @IxColumn SYSNAME
    DECLARE @IxFirstColumn BIT
    SET @IxFirstColumn = 1

    -- Loop through all columns of the index and append them to the CREATE statement
    OPEN cIxColumn
    FETCH NEXT FROM cIxColumn INTO @IxColumn
    WHILE (@@FETCH_STATUS = 0)
    BEGIN
        IF (@IxFirstColumn = 1)
            SET @IxFirstColumn = 0
        ELSE
            SET @IXSQL = @IXSQL + ', '

        SET @IXSQL = @IXSQL + @IxColumn
        FETCH NEXT FROM cIxColumn INTO @IxColumn
    END
    CLOSE cIxColumn
    DEALLOCATE cIxColumn

    SET @IXSQL = @IXSQL + ')'

    -- Print out the CREATE statement for the index
    PRINT @IXSQL

    FETCH NEXT FROM cIX INTO @IxTable, @IxTableID, @IxName, @IxID
END
CLOSE cIX
DEALLOCATE cIX

jQuery-based Tree - Free

Hello All,

Please follow the link below for the jQuery-based tree:

http://www.jstree.com/

Features at a glance:

  • Various data sources - HTML, JSON, XML
  • Supports AJAX loading
  • Drag & drop support
  • Highly configurable
  • Theme support + included themes
  • Numerous callbacks to attach to
  • Optional keyboard navigation
  • Maintain the same tree in many languages
  • Inline editing
  • Open/close optional animation
  • Define node types and fine tune them
  • Configurable multitree drag & drop
  • Optional multiple select
  • Search function
  • Supports plugins & datastores
  • Optional state saving using cookies

Currently supported browsers are:

  • Internet Explorer 6+
  • Mozilla Firefox 2+
  • Safari 3+
  • Opera 9+
  • Google Chrome

Reporting: RDLC object data source problem and its temporary workaround.

Hi All,

Here is an RDLC object data source problem and its temporary workaround.
Sometimes the RDLC object data source works and sometimes it does not, for example when you are editing your RDLC, or after you close Visual Studio entirely and open it up again.

Microsoft's people say:
"Both Reporting Services and Visual Studio are currently investigating this problem. At this point we don't have a better workaround to offer you."

But below is a general workaround for this problem.
It works for us because we use a data layer which is stored as a reference in our ASP.NET bin folder in our project:
1. Right click on the bin folder and choose Add Reference
2. Choose 'Browse' then browse to the bin folder of your site and choose an existing dll
3. This will create a .refresh file underneath the selected dll
4. Check your Website data sources, choose refresh (will likely still be empty or incorrect)
5. Right click on the .refresh file and choose Delete
6. (optionally) Close/open a .aspx file from the root of your web project. This often causes a refresh.
7. Switch back to your report file; if your data sources are not showing, choose refresh again. If they are still not showing, try step 6 once more.
8. If they are still not showing, phone Microsoft tech support and spend your life on hold.

Steps for implementing full-text search and Directory Search


Steps for implementing full-text search:

1). Open SQL Server Enterprise Manager.

2). Open the desired server and database (in my case, sq10 → Intechnology2).

3). Click on the Tables object.

4). Find the table named tbl_content and right-click on it.

5). Find "Full-Text Index Table" and click "Define Full-Text Indexing on a Table" in the context menu.

6). A popup wizard will appear; click Next.

7). Verify that the unique index "Pk_Content" is selected and click Next.

8). Select the checkboxes for "sBody" and "sSubject" in the "Available columns" list and click Next.

9). Enter "FullTextSearch" as the name for the new catalog in the Name text box and click Next.

10). No schedule for the moment; just press Next.

11). Press Finish.

12). Click OK on the new popup.

13). Switch to Full-Text Catalogs for the current database (in our case, Intechnology2).

14). Select the catalog "FullTextSearch", right-click on it, and choose "Start Full Population".


For Directory Search:

1). Go to Run, type "compmgmt.msc", and click OK.

2). Click on "Services and Applications" and find "Indexing Service". (You need rights for the Indexing Service; if you can't see it, please contact your administrator.)

3). Right-click "Indexing Service" → New → Catalog, give your catalog a name, select a location on the hard drive for the catalog's index, click "OK", and click "OK" on the message.

4). Select your recently added catalog, click on Directories, right-click it, and select New → Directory from the context menu.

5). Browse to the path of the documents to index, e.g. "D:\Maulik\Projects\Intechnology 2.0\Documents".

6). Verify that the "Include in Index?" radio button is set to "Yes" and click OK.

7). Right-click "Indexing Service" and click "Stop".

8). Right-click "Indexing Service" and click "Start".

9). Select your recently added catalog, right-click it, and choose Properties from the context menu.

10). In the Properties popup, select the third tab, "Generation".

11). Uncheck "Inherit above settings from service".

12). Check the remaining two checkboxes, "Index files with unknown extensions" and "Generate abstracts", and click "OK".

13). Select your recently added catalog, open its properties, find "Characterization", and double-click on it.

14). In the popup window, check the "Cached" checkbox, change the size from 20 to 100, and click OK. Click OK on the message.

15). Right-click "Indexing Service" and click "Stop".

16). Right-click "Indexing Service" and click "Start".

17). Select your recently added catalog, click on Directories, and select your browsed document path in the detail pane.

18). Right-click on it and select All Tasks → Rescan (Full). Say "Yes" to the message.

19). Verify your catalog by selecting "Query the Catalog" and searching for the desired text.

Full-text search management after implementing it in your database

Hello friends,

Below is a question, and a very good reply, about full-text search management after implementing it in your database. Please take a look.

QUESTION:

Hi All,

How do I refresh my full-text indexes, which I created using unique indexes? I can see two options, "Start Full Population" and "Start Incremental Population", on right-clicking the table's full-text index; on clicking either one, it reports successful.

Can anyone please let me know the standard way to refresh full-text indexes periodically for the best performance?

What enterprise settings are required to increase performance?

Thanks in advance.

ANSWER:

When you start a population of a full-text catalog, it reports success, but that only means the population has started successfully. The population itself can take a while, depending on the size of the columns, the number of rows, etc.

Depending on the size of the index and how long a full population takes, you may want to run regular incremental populations, which only update the records that have been added or changed since the last full population. Population is quite a resource-intensive process, so be aware of its effect on your application. Also remember you'll need to add a timestamp field to any tables on which you want to run incremental populations.

If you need real-time updates, look into enabling change tracking with background update, which will keep your index in sync with the data. Otherwise, new or changed rows won't be reflected in your full-text index until you run another population.

As with any index, the index becomes fragmented as data gets added and removed, so I'd schedule a periodic rebuild of the catalog (how frequently is up to you).

The first query may have been slow because the population was still in progress. Look at FULLTEXTCATALOGPROPERTY to check the size of the index and the current population status.

Have a look at the following link, which has some great tips on full-text searching:

http://technet.microsoft.com/hi-in/library/cc917695(en-us).aspx

Happy Knowledge Sharing,

App_Code Folder in ASP .NET 3.5

In Visual Studio Professional 2008, you can still create the App_Code folder by right-clicking on the Web Project and selecting Add, then Add Folder. Rename the new folder to App_Code.

Contrary to some recommendations on the Web, which say you should not use the App_Code folder because you cannot place common Web UI code or classes in it, you can do so by setting the Build Action of each class to Compile. If this step is not done, the classes defined in this folder will not be visible to your other code, which explains why people recommended against using the App_Code folder in VS 2008.

Suppose you have a class called WebCommon.cs in this folder. All you have to do is right-click on the file and select Properties; a properties window will appear. Set Build Action to Compile. Voila! The class can be accessed exactly as it would be in VS 2005.

Courtesy: http://startclass0830.blogspot.com/2009/03/appcode-folder-in-asp-net-35.html


Save your time: produce a number of classes automatically

Hello Friends,

Here is a nice link to a tool that can automatically produce a number of the classes we use in our daily application work, saving us a lot of time. I am glad to share it with you all; please use it wisely.

http://www.eggheadcafe.com/articles/adonet_source_code_generator.asp

LINQ tool

Hi All,

If you are working with LINQ, please find below a tool that may be useful while developing with LINQ.

Below is its description:

Description
Visual LINQ Query Builder is an add-in to Visual Studio 2008 Designer that helps you visually build LINQ to SQL queries. Functionally it provides the same experience as, for instance the Microsoft Access Query Builder, but in the LINQ domain. The entire UI of this add-in uses Windows Presentation Foundation. The goal of this tool is to help users become more familiar with the LINQ syntax. The tool may also demonstrate to users how to create their own Visual Studio 2008 add-in using Windows Presentation Foundation.

This academic project was developed by two students during an internship at Microsoft France, under the supervision of one of our Developer Evangelists. The project was in full collaboration with the STB International CPE team in Redmond.

Here is a quick presentation: http://blogs.msdn.com/mitsu/archive/2008/04/02/visual-linq-query-builder-for-linq-to-sql-vlinq.aspx
See it in French here: http://blogs.msdn.com/mitsufu/archive/2008/04/02/visual-linq-query-builder-pour-linq-to-sql-vlinq.aspx

Thanks,

string.Format functionality

Hello friends,

Following is a very good explanation of string.Format functionality. Please go through it; it will really help.

"I see stuff like {0,-8:G2} passed in as a format string. What exactly does that do?" -- Very Confused String Formatter

The above format can be translated into this:

"{[,][:]}"

argument index: This represents which argument goes into the string.

String.Format("first = {0};second = {1}", "apple", "orange");

String.Format("first = {1};second = {0}", "apple", "orange");

gives the following strings:

"first = apple;second = orange"

"first = orange;second = apple"

alignment (optional): This represents the minimum length of the string.

With positive values, the string argument will be right-justified, and if the string is not long enough, it will be padded with spaces on the left.

With negative values, the string argument will be left-justified, and if the string is not long enough, it will be padded with spaces on the right.

If this value is not specified, we default to the length of the string argument.

String.Format("{0,-10}", "apple"); //"apple "

String.Format("{0,10}", "apple"); //" apple"

format string (optional): This represents the format code.

The standard numeric format specifiers (e.g. C, G, etc.), the date/time format specifiers, the enumeration format specifiers, and the custom numeric format specifiers (e.g. 0, #, etc.) are each documented in the MSDN Library.

Custom formatting is kinda hard to understand. The best way I know to explain something is via code:

int pos = 10;

int neg = -10;

int bigpos = 123456;

int bigneg = -123456;

int zero = 0;

string strInt = "120ab";

String.Format("{0:00000}", pos); //"00010"

String.Format("{0:00000}", neg); //"-00010"

String.Format("{0:00000}", bigpos); //"123456"

String.Format("{0:00000}", bigneg); //"-123456"

String.Format("{0:00000}", zero); //"00000"

String.Format("{0:00000}", strInt); //"120ab"

String.Format("{0:#####}", pos); //"10"

String.Format("{0:#####}", neg); //"-10"

String.Format("{0:#####}", bigpos); //"123456"

String.Format("{0:#####}", bigneg); //"-123456"

String.Format("{0:#####}", zero); //""

String.Format("{0:#####}", strInt); //"120ab"

While playing around with this, I made an interesting observation:

String.Format("{0:X00000}", pos); //"A"

String.Format("{0:X00000}", neg); //"FFFFFFF6"

String.Format("{0:X#####}", pos); //"X10"

String.Format("{0:X#####}", neg); //"-X10"

The "0" specifier works well with other numeric specifier, but the "#" doesn't. Umm... I think the "Custom Numeric Format String" probably deserve a whole post of it's own. Since this is only the "101" post, I'll move on to the next argument in the format string.

zeros (optional): It actually has a different meaning depending on which numeric specifier you use.

int neg = -10;

int pos = 10;

// C or c (Currency): It represent how many decimal place of zeros to show.

String.Format("{0:C4}", pos); //"$10.0000"

String.Format("{0:C4}", neg); //"($10.0000)"

// D or d (Decimal): It represent leading zeros

String.Format("{0:D4}", pos); //"0010"

String.Format("{0:D4}", neg); //"-0010"

// E or e (Exponential): It represent how many decimal places of zeros to show.

String.Format("{0:E4}", pos); //"1.0000E+001"

String.Format("{0:E4}", neg); //"-1.0000E+001"

// F or f (Fixed-point): It represent how many decimal places of zeros to show.

String.Format("{0:F4}", pos); //"10.0000"

String.Format("{0:F4}", neg); //"-10.0000"

// G or g (General): This does nothing

String.Format("{0:G4}", pos); //"10"

String.Format("{0:G4}", neg); //"-10"

// N or n (Number): It represent how many decimal places of zeros to show.

String.Format("{0:N4}", pos); //"10.0000"

String.Format("{0:N4}", neg); //"-10.0000"

// P or p (Percent): It represent how many decimal places of zeros to show.

String.Format("{0:P4}", pos); //"1,000.0000%"

String.Format("{0:P4}", neg); //"-1,000.0000%"

// R or r (Round-Trip): This is invalid, FormatException is thrown.

String.Format("{0:R4}", pos); //FormatException thrown

String.Format("{0:R4}", neg); //FormatException thrown

// X or x (Hex): It represent leading zeros

String.Format("{0:X4}", pos); //"000A"

String.Format("{0:X4}", neg); //"FFFFFFF6"

// nothing: This is invalid, no exception is thrown.

String.Format("{0:4}", pos)); //"4"

String.Format("{0:4}", neg)); //"-4"

In summary, there are four types of behaviour when using this specifier:

Leading Zeros: D, X

Trailing Zeros: C, E, F, N, P

Nothing: G

Invalid: R

Now that we've gone through the valid specifiers: you can actually use these in more than just String.Format(). For example, with Byte.ToString():

Byte b = 10;

b.ToString("D4"); //"0010"

b.ToString("X4"); //"000A"

Reference : http://blogs.msdn.com/kathykam/archive/2006/03/29/564426.aspx