SQL Interview Questions and Answers
SQL Interview Questions and Answers for beginners and experts. List of frequently asked SQL Interview Questions with answers by Besant Technologies.
We hope these SQL interview questions and answers are useful and will help you get the best job in the industry. These SQL interview questions and answers were prepared by SQL professionals based on MNC companies' expectations. Stay tuned; we will update new SQL interview questions with answers frequently. If you want practical SQL training, please go through this SQL Training in Chennai.
Besant Technologies supports students by providing SQL interview questions and answers for job placement purposes. SQL is a leading, important skill at present because there are many job openings and high salaries for SQL and related jobs. We also provide SQL online training for students around the world through the GangBoard medium. These top SQL interview questions and answers were prepared by our institute's experienced trainers.
Best SQL Interview Questions and Answers
Here is the list of the most frequently asked SQL interview questions and answers in technical interviews. These SQL questions and answers are suitable for both freshers and experienced professionals at any level. The questions are aimed at intermediate to somewhat advanced SQL professionals, but even if you are just a beginner or fresher, you should be able to understand the answers and explanations given here.
In this post, you will get the most important and top SQL Interview Questions and Answers, which will be very helpful and useful to those who are preparing for jobs.
SQL is a domain-specific language used for managing and maintaining data in a database system. Besant Technologies provides the best SQL training in Chennai. Our trainers have extensive experience working as developers, designers, and debuggers in top companies, and they have received many awards for excellence in work as well as in teaching and training. They prepared these SQL interview questions and answers through thorough analysis and research, and by consolidating them with other companies' SQL interview questions and answers.
Here are the best and top SQL interview questions and answers.
- Using rowid
SQL > delete from emp where rowid not in (select max (rowid) from emp group by empno);
This technique can be applied to almost all scenarios. The GROUP BY operation should be on
the columns that identify the duplicates.
- using self-join
SQL > delete from emp e1 where rowid not in (select max (rowid) from emp e2 where e1.empno = e2.empno );
- Using row_number()
SQL > delete from emp where rowid in (select rid from ( select rowid rid, row_number() over(partition by empno order by empno) rn from emp )where rn > 1);
This is another efficient way to delete duplicates
- Using dense_rank ( )
SQL > delete from emp where rowid in (select rid from (select rowid rid, dense_rank() over(partition by empno order by rowid) rn from emp ) where rn > 1 );
Here you can use both rank() and dense_rank(),
since both will give unique records when ordered by rowid.
- Using group by
SQL > delete from emp where (empno, empname, salary) in ( select max(empno), empname, salary from emp group by empname, salary );
This technique is only applicable in a few scenarios.
Materialized views are also a logical view of our data, driven by a select query, but the result of the query is stored in a table on disk, and the definition of the query is also stored in the database. In terms of performance, a materialized view is better than a normal view because its data is stored in a table that can be indexed, so joins are faster; the join is performed at materialized view refresh time, so there is no need to fire the join statement every time, as is the case with a view.
In a view, the query result is not stored on disk or in the database, whereas a materialized view stores the query result in a table on disk. When we create a view on a table, the rowid of the view is the same as in the original table, but in a materialized view the rowid is different. With a view we always get the latest data, but with a materialized view we need to refresh the view to get the latest data. The performance of a view is lower than that of a materialized view. A view is only a logical view of the table, with no separate copy, whereas a materialized view is a physically separate copy of the table. A materialized view needs an extra trigger or some automatic refresh method to keep it up to date; this is not required for views.
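For example, a minimal Oracle-style sketch of creating and refreshing a materialized view (the emp/dept tables and the refresh options here are illustrative, not a prescribed setup):
CREATE MATERIALIZED VIEW emp_dept_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS SELECT d.deptno, d.dname, COUNT(*) emp_count
FROM emp e, dept d
WHERE e.deptno = d.deptno
GROUP BY d.deptno, d.dname;
-- re-run the stored query and replace the stored result set
EXEC DBMS_MVIEW.REFRESH('EMP_DEPT_MV');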
-too_many_rows
-zero_divide
-no_data_found
-invalid_cursor
-dup_val_on_index
SELECT *
FROM EMP
WHERE ROWNUM <= 10
MINUS
SELECT *
FROM EMP
WHERE ROWNUM < 10;
DELETE
FROM EMP
WHERE ENAME= (SELECT ENAME
FROM EMP
WHERE ROWNUM<=10
MINUS SELECT ENAME
FROM EMP
WHERE ROWNUM<10);
Schema objects are the structures of a database's data. Tables, views, sequences, synonyms, and indexes are schema objects.
Data is a collection of information or a collection of attributes. For example, 'Ravi' is data; the information about Ravi is height, weight, education, age, etc.
A table is a collection of data stored in a database. Table data is stored in rows and columns.
A view is not an original table; it is called a virtual table. It is based on an original table.
There are three types of views:
- Simple view
- Complex view
- Inline view
An index provides direct access to rows and is used to increase the performance of data retrieval. An index can be created on one or more columns of a table.
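For example (the column choices here are illustrative):
CREATE INDEX idx_emp_ename ON emp (ename);          -- single-column index
CREATE INDEX idx_emp_dept_job ON emp (deptno, job); -- composite index on two columns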
UNIQUE constraints allow NULL values. PRIMARY KEY constraints cannot contain NULLs.
TRUNCATE removes the entire table's data and cannot be rolled back. DELETE can remove a single row or multiple rows, and deleted records can be rolled back or committed.
A join retrieves data from related columns or rows of multiple tables.
- Cross Join
- Inner join
- equi join
- non equi join
- Outer join
- left outer join
- right outer join
- full outer join
- Self join
A subquery is a query embedded within another query; it returns values that are used by the main query.
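A minimal sketch, assuming the emp table used elsewhere in these answers: the inner query runs first and the main query consumes its value.
SELECT ename, sal
FROM emp
WHERE sal > (SELECT AVG(sal) FROM emp); -- the subquery supplies the average to the outer query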
INSTR(str1, str2 [,starting,[howmuch]])
INSTR returns the position of a character or substring within a string.
SUBSTR(stri1, starting, howmuch)
SUBSTR returns a string of one or more characters taken from a larger string.
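For example (the literals are chosen purely for illustration):
SELECT INSTR('BESANT', 'S') FROM dual;   -- returns 3, the position of 'S'
SELECT SUBSTR('BESANT', 1, 3) FROM dual; -- returns 'BES', 3 characters from position 1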
UNION returns all unique rows.
UNION ALL returns all rows, including all duplicates.
INTERSECT returns the distinct rows selected by both queries.
MINUS returns all unique rows selected by the first query but not by the second.
ROWID is a pseudocolumn attached to each row of a table. When a table is created, the database assigns a rowid to each row.
Primary key is a column or a combination of columns of a table which can be used to uniquely identify a row of the table. PRIMARY KEY cannot be null.
UNIQUE KEY is a column or a combination of columns of a table, which can be used to identify a row of the table uniquely. UNIQUE KEY can be null.
FOREIGN KEY is a column or a combination of column which is the primary key or unique key of the referenced table. FOREIGN KEY can be null.
Database maintains referential integrity by automatically removing dependent foreign key values if a referenced primary or unique key value is removed.
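A minimal sketch of the three keys together, using hypothetical dept/emp tables; the ON DELETE CASCADE clause shows one way the database can maintain referential integrity automatically:
CREATE TABLE dept (
  deptno NUMBER PRIMARY KEY,   -- cannot be NULL, uniquely identifies a row
  dname  VARCHAR2(30) UNIQUE   -- must be unique but may be NULL
);
CREATE TABLE emp (
  empno  NUMBER PRIMARY KEY,
  ename  VARCHAR2(30),
  deptno NUMBER REFERENCES dept (deptno) ON DELETE CASCADE -- foreign key
);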
- 1
- 22
- 0
- NULL
- >=
- <=
- in
Answer: Intersect
A database is a collection of data representing some aspect of the real world.
select decode(mod(2004,4), 0, 'leap year', 'not a leap year') from dual;
Date is used to store date and time values including month, day, year, century, hours, minutes and seconds. It fails to provide granularity and order of execution when finding difference between 2 instances (events) having a difference of less than a second between them. Time Stamp datatype stores everything that Date stores and additionally stores fractional seconds.
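A small sketch of the difference (the table and column names are illustrative):
CREATE TABLE event_log (d DATE, ts TIMESTAMP(6));
INSERT INTO event_log VALUES (SYSDATE, SYSTIMESTAMP);
-- DATE loses the sub-second detail; TIMESTAMP keeps fractional seconds
SELECT TO_CHAR(d, 'DD-MON-YYYY HH24:MI:SS') d,
       TO_CHAR(ts, 'DD-MON-YYYY HH24:MI:SS.FF6') ts
FROM event_log;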
A maximum of 254 in a single table.
An index should be created when a table is queried for a small fraction of its rows and the fastest retrieval is needed.
A foreign key is a key that refers to the primary key of another table. A reference key is the primary key of the table referred to by another table.
SELECT NAME FROM SYS.V_$DATAFILE;
SELECT NAME FROM SYS.V_$CONTROLFILE;
SELECT MEMBER FROM SYS.V_$LOGFILE;
SQL*Loader is a loader utility used for moving data from external files into the Oracle database in bulk. It is used for high-performance data loads.
SQL*Loader, External Tables
Subqueries, or nested queries, are used to bring back a set of rows to be used by the parent query. Depending on how the subquery is written, it can be executed once for the parent query or it can be executed once for each row returned by the parent query. If the subquery is executed for each row of the parent, this is called a correlated subquery.
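A minimal sketch of a correlated subquery against the emp table: the inner query references e1.deptno from the outer row, so it is re-evaluated for each row the parent query examines.
SELECT e1.ename, e1.sal, e1.deptno
FROM emp e1
WHERE e1.sal > (SELECT AVG(e2.sal)
                FROM emp e2
                WHERE e2.deptno = e1.deptno); -- correlated on the outer row's deptno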
SQL>select deptno, count(deptno) from emp group by deptno having
count(*)>3;
The output should contain only those jobs with more than three employees.
SQL>select job,count(empno) from emp group by job having count(job)>3
SQL>select ename from emp where job='CLERK' and sal>(select min(sal)
from emp
where job='SALESMAN');
SQL>select ename from emp where deptno=10 and sal>any(select sal from
emp where deptno not in (10));
- Rename t1 to t1_base;
Create view t1 <column list with new name> as select * from t1_base;
- create table t2 <column list with new name> as select * from t1;
Drop table t1;
Rename t2 to t1;
- SELECT DISTINCT SAL FROM EMP1 WHERE SAL NOT IN (SELECT SAL FROM EMP1 WHERE SAL < ANY (SELECT SAL FROM EMP1))
- SELECT SAL FROM EMP WHERE SAL >= ALL (SELECT SAL FROM EMP)
SELECT COLUMN_NAME, NULLABLE "NULL", CONCAT(DATA_TYPE, DATA_LENGTH) TYPE FROM USER_TAB_COLUMNS
WHERE TABLE_NAME = 'DEPT';
SELECT EMPNO, LPAD(' ', 6*(LEVEL - 1)) || ENAME "EMPLOYEE NAME" FROM EMP START
WITH ENAME='KING' CONNECT BY PRIOR EMPNO = MGR;
Entity–Relationship diagram
Relationships could be (a) one to one,
(b) one to many (crow's foot style),
(c) many to many.
A rule defined on a column (or set of columns) in one table that allows the insert or update of a row only if the value for the column or set of columns matches a value in a column of a related table. It also specifies the type of data manipulation allowed on referenced data and the action to be performed on dependent data as a result of any action on referenced data.
UPDATE and DELETE Restrict – A referential integrity rule that disallows the update or deletion of referenced data.
DELETE CASCADE – when a referenced row is deleted, all associated dependent rows are deleted.
If a foreign key references a parent key of the same table, it is called a self-referential integrity constraint.
The condition must be a Boolean expression evaluated using the values in the row being inserted or updated, and cannot contain subqueries, sequences, the SYSDATE, UID, USER, or USERENV SQL functions, or the pseudocolumns LEVEL or ROWNUM.
A. SELECT ROUND(enroll_date, ‘DAY’) FROM student;
B. SELECT ROUND(enroll_date, ‘YEAR’) FROM student;
C. SELECT ROUND(enroll_date, ‘MONTH’) FROM student;
D. SELECT ROUND(TO_CHAR(enroll_date, ‘YYYY’)) FROM student;
A. The statement will achieve the desired results.
B. The statement will execute, but will NOT enable the PRIMARY KEY constraint.
C. The statement will execute, but will NOT verify that values in the ID column do NOT violate the constraint.
D. The statement will return a syntax error.
A synonym is another name for a table, used to reference it through multiple links in a database. A view is created over one or many tables, with virtual columns and with conditions.
SQL>select ename,sal from emp where sal>(select sal from emp where ename='JONES') and sal>(select sal from emp where ename='SCOTT');
A. The locations, departments, and employees tables are empty.
B. The departments table has one row. The locations and the employees tables are empty.
C. The locations table has one row. The departments and the employees tables are empty.
D. The locations table and the departments table both have one row. The employees table is empty.
Answer(s): D
Auto close is a database property: if it is turned ON, the SQL Server database engine will automatically close the user database when it is not being used. When a new user tries to access the database, the database comes back online.
When Auto Create Stats is enabled, the Query Optimizer creates statistics on individual columns used in a predicate if these statistics are not already available. These statistics are necessary to generate the query plan. They are created on columns that do not have a histogram in an existing statistics object. The name of the auto-created statistics includes the column name and the object ID in hexadecimal format. This increases database performance instead of waiting for a maintenance plan or the manual creation of statistics.
Auto Update Stats updates the statistics whenever they go stale, which in turn leads the optimizer to select the best execution plan and increases query performance.
SQLOS is a thin layer that enables communication between the SQL Server database engine and the operating system.
The main objectives of SQLOS:
Scheduler and IO completion. The SQLOS is responsible for scheduling threads for CPU consumption. Most threads in SQL Server are run in cooperative mode, which means the thread is responsible for yielding so that other threads can obtain CPU time. Most IO is asynchronous. The SQLOS is responsible for signalling threads when IO is completed.
Synchronization primitives: SQL server is a multi-threaded application, so SQLOS is responsible for managing thread synchronizations.
Memory management: Different components within SQL Server, example plan cache, CLR, lock manager etc request memory from the SQLOS. Therefore, the SQLOS can control how much memory a component within SQL Server is consuming.
Deadlock detection and management of the same.
Exception handling framework.
Hosting services for external components such as CLR and MDAC. SQL Server will run threads that are associated with external components in pre-emptive mode. Pre-emptive mode allows the SQLOS to prevent runaway threads (threads that will not yield and allow other threads to get CPU execution). The SQLOS can also keep track of the memory these external components consume; for example, for CLR, the SQLOS can invoke garbage collection if the CLR process is taking up too much memory.
Limiting the memory of SQL Server means assigning the maximum and minimum memory to SQL Server. Depending on the memory allocated to the OS, we can assign/allocate memory to SQL Server. As per the recommendation, we need to assign approximately 80% of the total memory to SQL Server.
Maximum memory is the amount of memory a SQL Server instance can pull from the physical memory allocation at the OS level whenever SQL Server requires it. Depending on its memory utilization, SQL Server demands the memory from the OS.
Minimum memory is the amount of memory SQL Server needs to start; generally the SQL Server database engine requires only about 20 MB of memory to start the services. But once minimum memory is allocated in GB, SQL Server will not release memory below that value once it has reached it.
The SQL Server setup.exe file is corrupted.
Hardware and software are not compatible.
MSI and MSP files are missing.
There are not enough permissions to perform write operations on the installation path.
The Agent signing certificate is not created.
The SQL service account does not have enough permissions or has not been configured properly.
We need to read details.txt.
SQL Server error logs.
Event Viewer at the OS level.
The Temp folder.
ACID (an acronym for Atomicity, Consistency, Isolation, Durability) is a concept that database professionals generally look for when evaluating databases and application architectures.
Atomicity is an all-or-none proposition: all statements in a transaction are treated as one unit, so either all of them take effect or none do.
Consistency guarantees that a transaction never leaves the database in a half-finished state: committed transactions are preserved, and incomplete transactions are rolled back.
Isolation keeps transactions separated from each other until they are finished; how strictly depends on the isolation level the transaction runs under.
Durability guarantees that the database keeps track of pending changes in such a way that the server can recover from an abnormal termination; the recovery phase is calculated from the last checkpoint.
A transaction consists of a single command or a group of commands that execute as a package. Transactions allow you to combine multiple operations into a single unit of work. If a failure occurs at one point in the transaction, all of the updates can be rolled back to their pre-transaction state.
A transaction must conform to the ACID properties (atomicity, consistency, isolation, and durability) in order to guarantee data consistency. Most relational database systems, such as Microsoft SQL Server, support transactions by providing locking, logging, and transaction management facilities whenever a client application performs an update, insert, or delete operation.
Concurrency is the ability of multiple users to access data at the same time. When the number of simultaneous operations that the database engine can support is large, the database concurrency is increased. In Microsoft SQL Server Compact, concurrency control is achieved by using locks to help protect data. The locks control how multiple users can access and change shared data at the same time without conflicting with each other.
Optimistic concurrency: Optimistic concurrency control works on the assumption that resource conflicts between multiple users are unlikely, and it permits transactions to execute without locking any resources. The resources are checked only when transactions are trying to change data. This determines whether any conflict has occurred (for example, by checking a version number). If a conflict occurs, the application must read the data and try the change again. Optimistic concurrency control is not provided with the product, but you can build it into your application manually by tracking database access.
Pessimistic concurrency: Pessimistic concurrency control locks resources as needed, for the duration of a transaction. SQL Server Compact supports pessimistic concurrency control that locks resources as needed for the duration of a transaction.
Uncommitted Read: Uncommitted Read, or dirty read, lets a transaction read any data currently on a data page, whether or not that data has been committed. For example, although another user might have a transaction in progress that has updated data, and that transaction is holding exclusive locks on the data, your transaction can read the data anyway, and possibly take further actions based on the values you read. The other user might then decide to roll back his or her transaction, so logically, those changes never occurred. Although this scenario isn’t desirable, with Uncommitted Read you won’t get stuck waiting for a lock, nor will your reads acquire share locks that might affect others.
Committed Read: Committed Read is SQL Server’s default isolation level. It ensures that an operation will never read data another application has changed but not yet committed. Because you can never read uncommitted data, if a transaction running with Committed Read isolation revisits data, that data might have changed, or new rows might appear that meet the criteria of the original query. Rows that appear in this way are called phantoms.
Repeatable Read: If you want the read operations to be repeatable, choose the third repeatable Read. The Repeatable Read isolation level adds to the properties of Committed Read by ensuring that if a transaction revisits data or if a query is reissued, the data will not have changed. In other words, issuing the same query twice within a transaction won’t pick up any changes to data values that another user’s transaction has made. No other user can modify the data that your transaction visits as long as you have not yet committed or rolled back your transaction.
Serializable: The Serializable isolation level ensures that if a query is reissued, no data will have changed and no new rows will appear in the interim. In other words, you won’t see phantoms if the same query is issued twice within a transaction. Rerun the example from the Repeatable Reads section, inserting a row with a col1 value of 35. But this time, set your isolation level to SERIALIZABLE. The second connection will block when you try to do the INSERT, and the first connection will read exactly the same rows each time.
SNAPSHOT: Specifies that data read by any statement in a transaction will be the transaction ally consistent version of the data that existed at the start of the transaction. The transaction can only recognize data modifications that were committed before the start of the transaction. Data modifications made by other transactions after the start of the current transaction are not visible to statements executing in the current transaction. The effect is as if the statements in a transaction get a snapshot of the committed data as it existed at the start of the transactions.
No, it is not possible to restore a database to a point in time in simple recovery mode,
because in simple recovery mode we do not have transaction log backups.
TUF file: the Transaction Undo File. It is generated only when you have configured log shipping with the standby option, since in standby log shipping the secondary database is available to users. The TUF file keeps the pending transactions from the log file that came from the primary, so that when the next log backup arrives from the primary they can be synchronized on the secondary.
WRK: this extension is given to a file while it is being copied from the primary backup location to the secondary; once the copy process completes, the file is renamed with the .trn extension.
Yes, we can configure multiple secondaries in log shipping, as log shipping is a database-level configuration. We just need the appropriate permissions on the network path where the log backups are generated.
RTO stands for Recovery Time Objective: in simple terms, you can think of RTO as a measure of how much downtime is acceptable, or how quickly the data must be made accessible again. RTO is often talked about in terms of the number of nines of desirable uptime or accessibility for the data/database/system.
RPO stands for Recovery Point Objective: again, in simple terms, you can think of RPO as a measure of how much data or work it is acceptable to lose. It is relatively easy to achieve very minimal or even zero data/work loss using backups, but depending on the amount of damage the database suffered when the disaster hit, recovering might take a lot of time. For instance, if an entire database is destroyed then, depending on the architecture of the database and the backups that exist, it may take a significant amount of time to recover the database up to the point of the disaster. Most RPOs are defined as the amount of time for which work may be lost.
RTO and RPO together are typically defined as part of the SLA.
Generally, we schedule a full backup every weekend, followed by a differential backup every day after business hours (say 10 PM), followed by a log backup every hour.
If the database is highly critical, we schedule a full backup every day after business hours (say 10 PM), followed by a log backup every hour or every 30 minutes.
Depending on the recovery model and the transaction volume, the log backup is configured in the range of 15 to 60 minutes.
Fill factor is the value that determines the percentage of space on each leaf-level page to be filled with data. In SQL Server, the smallest unit of storage is a page, which is 8 KB in size. Every page can store one or more rows, depending on the size of the row. The default value of the fill factor is 100, which is the same as the value 0. The default fill factor (100 or 0) allows SQL Server to fill the leaf-level pages of an index with the maximum number of rows it can fit; there is no or very little empty space left on the page when the fill factor is 100.
Fill factor plays a significant role in database performance by minimizing the effect of page splits, which in turn leads to optimal performance at the IO level.
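A hedged T-SQL sketch (the table and index names here are hypothetical):
-- leave 20% free space in each leaf-level page of this index
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID) WITH (FILLFACTOR = 80);
-- or change the server-wide default fill factor
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'fill factor (%)', 80; RECONFIGURE;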
Yes, we can configure mirroring between two different SQL Server versions.
But this is not recommended, as it does not support backward compatibility during a mirroring failover.
Lock escalation is the process of converting many fine-grained locks (such as row or page locks) into table locks. Microsoft SQL Server dynamically determines when to perform lock escalation. When making this decision, SQL Server takes into account the number of locks that are held on a particular scan, the number of locks that are held by the whole transaction, and the memory that is being used for locks in the system as a whole.
Wait types represent the discrete steps in query processing where a query waits for resources as the instance completes the request.
To get information about wait types, we can use the DMVs below:
sys.dm_os_wait_stats
sys.dm_os_waiting_tasks
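For example, a quick sketch that lists the top waits accumulated since the instance started:
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC; -- highest cumulative wait time first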
Page latch – these are waits that occur when a worker needs to wait for a page to become available; this typically occurs when the page is already in memory.
Page IO latch – these are waits that occur when a worker needs to wait for a page due to physical I/O, such as when a page must be brought into the buffer pool for reading or writing and SQL Server needs to retrieve it from disk or mount points.
Virtual network name for the SQL Server, virtual IP address for SQL Server, IP addresses for the public network and private network (also referred to as the heartbeat) for each node in the failover cluster, shared drives for SQL Server data and log files, the quorum disk, and the MSDTC disk.
We can't read the mirror database, as it is always in restoring mode, so read and write operations cannot be performed on it.
To read a mirror database, we need to create a database snapshot on it.
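A minimal sketch of creating such a snapshot on the mirror (the database name, logical file name, and path are all hypothetical):
CREATE DATABASE SalesDB_Snap
ON (NAME = SalesDB_Data, FILENAME = 'D:\Snaps\SalesDB_Snap.ss') -- sparse snapshot file
AS SNAPSHOT OF SalesDB;
-- read-only queries can now run against SalesDB_Snap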
Yes, we can migrate a 500 GB database (or a database of any size) without downtime, or with minimal downtime of a few minutes,
by configuring a high-availability feature such as log shipping or mirroring.
Yes, we can add a new data file to the primary database in log shipping.
If the primary and secondary servers have the same disk configuration, i.e. the data file locations on both servers are exactly the same, there will be no effect on log shipping behaviour; if not, the changes made on the primary will not be reflected on the secondary.
No, it is not possible, because a blank database takes its template from the model database, and the size of the model database is 16 MB (8 MB for data and 8 MB for log). So we must provide a size either equal to or larger than the model database.
sp_replcounters is a system stored procedure that returns information about the transaction rate, latency, and first and last log sequence number (LSN) for each publication on a server. It is run on the publishing server. Running this stored procedure on a server that is acting as the distributor or subscribing to publications from another server will not return any data.
sys.dm_repl_articles – Contains information about each article being published. It returns data from the database being published and returns a row for each object being published in each article.
sys.dm_repl_schemas – Contains information about each table and column being published. It returns data from the database being published and returns one row for each column in each object being published
sys.dm_repl_traninfo – Contains information about each transaction in a transactional replication.
In this case, we need to pull out the dependency report first. Then we need to see which factors the SQL Server service depends on and which factor is currently causing the issue while the SQL service is trying to come online.
For example, the SQL Server service may be dependent on shared disks, IP addresses, and storage, and the dependency may be either OR or AND.
No, it is supported only in the full recovery model.
No, we cannot take a T-log backup of a newly created database. Until we take a full backup of the database, regardless of its recovery model, the database behaves as if it were configured in the simple recovery model.
No, it's not possible.
No, but we can allow SQL server to automatically create statistics on read only secondary replicas.
There are two options to configure a secondary replica for running a read workload. The first option, 'Read-intent-only', is used to provide a directive to the AlwaysOn secondary replica to accept connections that have the property ApplicationIntent=ReadOnly set. The word 'intent' is important here, as no application check is made to guarantee that there are no DDL/DML operations in the application connecting with 'ReadOnly'; the assumption is that the customer will only connect read workloads.
Yes. If a corrupt page is detected, SQL Server will attempt to repair the page by getting it from another replica.
SQL Server 2014's biggest improvement is that the replica's databases stay visible when the primary drops offline, as long as the underlying cluster is still up and running. If we have one primary and four secondary replicas and we lose just the primary, the secondaries are still online servicing read-only queries. (You may have difficulty connecting to them unless you are using the secondary's name, but that is another story.) Back in SQL 2012, when the primary dropped offline, all of the secondaries' copies immediately dropped offline, breaking all read-only reporting queries.
When SQL Server runs on a computer with more than one processor or CPU, it detects the best degree of parallelism, that is the number of processors employed to run a single statement, for each query that has a parallel execution plan. You can use the max degree of parallelism option to limit the number of processors to use for parallel plan execution and to prevent run-away queries from impacting SQL Server performance by using all available CPUs.
The default value for MAXDOP is 0 (zero) and can be set or viewed using sp_configure. A value of 0 means that SQL Server will use all processors if a query runs in parallel.
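For example (the value 4 and the table name are illustrative):
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4; RECONFIGURE;
-- MAXDOP can also be capped for a single query
SELECT COUNT(*) FROM dbo.BigTable OPTION (MAXDOP 1); -- dbo.BigTable is hypothetical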
It tells us why the database log file isn't clearing out. By reading the column, we get to know what action needs to be performed on the log file in order to truncate it.
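For example:
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'SalesDB'; -- 'SalesDB' is a hypothetical database name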
No.
The purpose of a checkpoint is to bring the pages in the data files up to date with what's in the transaction log. When a checkpoint ends, there's a guarantee that as of the LSN of the LOP_BEGIN_CKPT log record, all changes from log records before that point are persisted in the data files on disk. There's no guarantee about logged changes after that point, only before it. In other words, all the log records before the LSN of the LOP_BEGIN_CKPT log record are no longer required for crash recovery, unless there's a long-running transaction that started before that LSN.
The answer is that log clearing/truncation of a VLF containing an LOP_BEGIN_CKPT log record cannot happen until another complete checkpoint has occurred. In other words, a complete checkpoint has to occur since the last log clearing/truncation before the next one can happen. If a checkpoint hasn't occurred, the log_reuse_wait_desc for the database in sys.databases will return CHECKPOINT. It's not common to see this occur, but you might see it if there's a very long-running checkpoint (e.g. a very large update on a system with a slow I/O subsystem, so the flushing of data file pages takes a long time) and very frequent log backups, so two log backups occur over the time taken by a single checkpoint operation. It could also happen if you've messed with the sp_configure recovery interval and set it higher than the default.
No.
When snapshot isolation is enabled, deletes from a heap are ghosted as part of the overall versioning process, which can lead to some interesting side effects. A versioned record has an extra 14 bytes tagged on the end, so a heap record that suddenly becomes versioned is 14 bytes longer, which may mean it doesn't fit on the page any longer. This could lead to it being moved, resulting in a forwarding/forwarded record pair, just because the record was deleted! Now, the page has to be full for this to happen, and the Storage Engine will take steps to avoid this happening for rows less than 32 bytes long.
All the pages and extents are locked. The table doesn't show them as allocated any more, but because they're exclusively locked, the allocation subsystem can't really deallocate them until the locks are dropped (when the transaction commits).
They can't be reused until they're really deallocated. If a transaction rollback happens, the pages are just marked as allocated again.
Use the DMV sys.dm_tran_locks.
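A small sketch of reading that DMV (the database filter is illustrative):
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('tempdb'); -- locks currently held in tempdb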
If there was a clean shut down, SQL Server marks it in database boot page, and so knows that it does not need to run recovery, during restart, and therefore does not need the original transaction log. In such cases, we can attach the database without the log, and SQL Server will simply create a new log.
However, if there is no mark indicating a clean shutdown, and this is never guaranteed, then SQL Server must run recovery during restart, which it can't do without the original log, and which is the reason why you've ended up with a RECOVERY_PENDING database. It simply means there wasn't a clean shutdown and SQL Server can't start the recovery process because you whacked the log.
The recovery process uses the contents of the log to re-establish transactional consistency. During recovery SQL Server will redo the effects of any transaction that was not included in the last CHECKPOINT to run before the database went offline, and undo the effects of any uncommitted transaction that was included in the last CHECKPOINT.
It could be a problem, yes. Generally, it depends on the overall size of the log. It's common, and not really an issue, to see a very large transaction log with several hundred VLFs, but several thousand is a concern regardless of overall log size.
Always try to shrink the log file, which will help us in reducing the VLF numbers.
The TRUNCATE command is not a non-logged operation. In fact, it is fully logged. With very few exceptions, namely one or two operations in tempdb such as operations on the version store, there is no such thing as a non-logged operation in SQL Server. Operations such as TRUNCATE TABLE and DROP TABLE are fully logged, but SQL Server uses an efficient, deferred de-allocation mechanism that means the commands seem to be instantaneous, regardless of the size of the table. This fact, coupled with some misleading terminology, including in several Microsoft articles, lead to the mistaken belief that these operations are not logged at all.
When operating a database in the FULL or BULK_LOGGED recovery model, all log records must remain in the log, as part of the active log, until they have been captured in a log backup. Otherwise, the log backups could not guarantee to capture the complete, unbroken chain of LSNs that are required for database restore and recovery.
In these recovery models, only a log backup will cause SQL Server to run the "log clearing" process, also referred to as log truncation. During log truncation, SQL Server will mark as inactive any VLFs that contain no part of the active log, i.e. VLFs that contain no log records that SQL Server still requires for recovery, for log backup, or for any other process.
SQL Server will run log truncation after every log backup and, each time, any inactive VLFs that result will be available for reuse to store new log records, overwriting the existing log records that are no longer required. Therefore, when SQL Server reaches the end of the log file, it can wrap around and reuse the inactive VLFs at the start.
However, it's possible that there will be periods where successive log truncations produce zero inactive VLFs, because there are other factors that can prevent SQL Server reusing space in the log. If there are no inactive VLFs available, then SQL Server has no other choice but to grow the log, adding more VLFs.
This is likely what is happening in this case; some other process is forcing SQL Server to retain log records in the active log, and so successive executions of the log truncation process aren't producing any more inactive VLFs. Attempts to shrink the log will have no effect in this situation, as there is no free space to remove! DBCC SHRINKFILE can only physically remove unused or inactive VLFs that are at the end of the log file.
We should have as many TempDB data files as CPU processors, or if the CPU count is greater than 8, at least 8 data files for TempDB, and the data files must all be the same size.
Also, always place TempDB on a separate disk for better performance.
A checkpoint is only done for tempdb when the tempdb log file reaches 70% full; this is to prevent the tempdb log from growing if at all possible (note that a long-running transaction can still essentially hold the log hostage and prevent it from clearing).
This does not happen for tempdb. Tempdb is not recovered in the event of a crash, and so there is no need to force dirty tempdb pages to disk, except in the case where the lazy writer process (part of the buffer pool) has to make space for pages from other databases. Of course, when you issue a *manual* CHECKPOINT, all the dirty pages are flushed, but for automatic checkpoints they're not.
The reason that checkpoint exists is to limit the duration of the "redo" phase of crash recovery, where log records are replayed on disk pages whose updated page image hadn't been written to disk after the transactions committed. Automatic checkpoints are done in databases to do this.
Tempdb, however, isn't recovered after a crash; it's recreated. This means that the time for recovery of tempdb isn't an issue, so there's no need for automatic checkpoints; the only trigger for an automatic checkpoint in tempdb is its log file reaching 70% full. For this reason, on busy systems, it's likely that the user databases are being checkpointed far more often than tempdb, and on a system where tempdb is used heavily too, there will likely be far more dirty pages from tempdb in the buffer pool at any time.
The full backup has to include all the transaction log from the begin LSN of the oldest active transaction at the time the data read portion of the backup ends, up to the LSN at which the data read portion ends. If that begin LSN is later in time than the LSN of the checkpoint that the backup performs initially, the log is included from the checkpoint's LSN instead.
This is what helps in point-in-time recovery.
The issue happened due to CDC (change data capture). Check the is_cdc_enabled flag in sys.databases.
CDC uses the replication log scanning mechanism to harvest changes from the database, either piggybacking on replication's Log Reader Agent job or creating its own capture job if replication isn't configured. If CDC is configured but the capture job isn't running, the log_reuse_wait_desc will show as REPLICATION, as the log manager doesn't have any way to know why the replication log scanner is configured, just that it is, and that it hasn't run.
SELECT DISTINCT COL_NAME FROM TABLE_NAME;
A view works like a saved SQL query that selects data based on a condition. So instead of executing a big query and frequently writing the same query again, we can use a view.
Select * from view_name
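Putting the two together, a minimal sketch (the view definition is illustrative):
CREATE VIEW high_paid_emps AS
SELECT emp_id, emp_name, salary FROM employee WHERE salary > 50000;
-- the big condition now lives in the view; reuse it with a short query
SELECT * FROM high_paid_emps;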
Indexes are applied on the key columns of a table that are frequently used to fetch records from the database; an index helps the database search for the selected records and return results fast.
Select * from employee where city='Bangalore';
The REVOKE command is used to take back privileges that were granted.
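For example (the user name and privileges are illustrative):
GRANT SELECT, INSERT ON employee TO ravi;  -- give privileges
REVOKE INSERT ON employee FROM ravi;       -- take the INSERT privilege back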
Below is the SQL command to create an empty copy of a table:
CREATE TABLE EMPLOYEE_DETAILS AS SELECT * FROM EMPLOYEES_INFO WHERE 1=2
Using DROP along with the ALTER TABLE command, a column can be dropped. Below is the query for the same.
ALTER TABLE TAB_NAME DROP COLUMN COL_NAME
Such a stored procedure is used to perform some activity when a threshold is reached; it is invoked based on configuration, which saves writing code to perform the operation externally.
An alias can be given to a table or column. It simply defines an alternative name by which the table/column can be referred, which helps shorten queries and, in joins, identify the same column coming from different tables. Below is an example.
SELECT EMP.EMPLOYEE_ID, DEPT.EMPLOYEE_ID FROM EMPLOYEE EMP, DEPARTMENT DEPT
Truncate – This deletes all the records from a table.
Drop – This deletes/removes the table from the database.
Below is the sql to create database
CREATE DATABASE DB_NAME;
Below is the sql to drop database
DROP DATABASE DB_NAME;
CREATE TABLE NEW_TABLE_NAME LIKE EXISTING_TABLE_NAME;
Select * from employee where salary >50000;
UNION is used to combine the result sets of two or more tables/queries, and it returns only distinct values.
SELECT col1,col2 FROM TABLE_1
UNION
SELECT col1,col2 FROM TABLE_2;
UNION ALL is used to combine the result sets of multiple tables/queries, and it allows duplicate values.
SELECT col1,col2 FROM TABLE_1
UNION ALL
SELECT col1,col2 FROM TABLE_2;
LIKE is an operator used to filter data based on a pattern. It is mostly used to perform a search when the exact word is not remembered. SELECT * FROM EMPLOYEE WHERE EMP_NAME LIKE 'Sac%';
MIN()- This function returns the minimum value of a column on which applied.
MAX()- This function returns the maximum value of a column on which applied.
SELECT MIN(SALARY) AS LOWEST_SALARY FROM EMPLOYEE_DETAILS;
SELECT MAX(SALARY) AS HIGHEST_SALARY FROM EMPLOYEE_DETAILS;
BETWEEN is an operator used to select values within a given range. Below is an example.
SELECT * FROM EMPLOYEE WHERE EMP_ID BETWEEN 1011 AND 2021;
The UPDATE statement can modify several columns at once; below is an example.
UPDATE EMPLOYEE_DETAILS SET EMP_NAME = ‘Bansal Ashok’, CITY= ‘KOTA’ WHERE EMP_ID = 1;
Joins are used to combine rows from two or more tables. Below is the list of join categories.
- Full join
- Left Join
- Right join
- Inner join
- Self Join
LEFT JOIN gives the entire set of records from the left table and the matching records from the joined table.
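A minimal sketch, assuming hypothetical employee/department tables linked by dept_id:
SELECT e.emp_name, d.dept_name
FROM employee e
LEFT JOIN department d ON e.dept_id = d.dept_id; -- all employees, matched departments or NULL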
To get the count, we use the COUNT(*) function; below is the query for the same.
SELECT COUNT(*) FROM EMPLOYEES_DETAILS;
The LIKE operator can be used in a WHERE clause to get the records matching a substring:
SELECT EMP_ID, EMP_NAME FROM EMPLOYEE_DETAILS WHERE EMP_NAME LIKE '%Agar';
LIMIT is used in a SELECT statement to restrict the number of rows returned; below is the SQL for the same.
SELECT * FROM EMP_DETAILS LIMIT 5;
The DISTINCT keyword filters the records by excluding duplicate values of the selected column(s) in a table.
When the data has to be filtered based on some condition, that is achieved using the WHERE clause.
The AND operator is applied between multiple conditions used to select records; only if all the conditions are satisfied does it produce output.
NULL refers to no value; it does not refer to 0 either. It is defined as data not being available.
The NOT operator is applied to a condition when you need the data that does not match it, so it excludes the rows that satisfy the condition.
The OR operator works across two or more conditions on the selected data; it gives results even if only one condition is satisfied.
ORDER BY is the statement used to sort the produced result in SQL. When the result is generated and we want it produced in a sorted format, ORDER BY is the statement to be used. ORDER BY works with both ascending and descending order.
When data needs to be inserted into an existing table, INSERT INTO can be used; it can also append data from one table to another.
Below is the query for the same.
UPDATE table_name SET col1 = val1, col2 = val2 WHERE col3 = val3;
DELETE works on the records of a table. Records can be deleted based on conditions written with a WHERE clause, and the selected data is removed from the table.
A join is performed to get records from multiple tables; RIGHT JOIN selects the complete set of records from the right table and the matching records from the joined table.
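A minimal sketch, with the same hypothetical employee/department tables as above:
SELECT e.emp_name, d.dept_name
FROM employee e
RIGHT JOIN department d ON e.dept_id = d.dept_id; -- all departments, matched employees or NULL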
This is used to limit the number of records returned from a table, and it works with ORDER BY to return them in sorted order.
MIN(),MAX() can be used to get the same.
SUM(), AVG() & COUNT() – these are the functions most used in SQL to perform calculations.
The IN operator helps to match multiple values passed as a condition, and IN can take a subquery as well.
Joins help to get records from multiple tables based on conditions; they can be categorized into inner join, full outer join, left join, right join, and self join.
INNER JOIN selects the data from both tables where the reference key values match in both tables.
FULL OUTER JOIN selects all the records from both tables; where a value has no match, it puts NULL in its place.
SELF JOIN is performed on a single table, matching rows of the table against other rows of the same table on a key.
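A minimal self-join sketch, assuming a hypothetical manager_id column that points back at emp_id in the same table:
SELECT w.emp_name AS employee, m.emp_name AS manager
FROM employee w
JOIN employee m ON w.manager_id = m.emp_id; -- the table is joined to itself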
Select 'Life is all about the next step'
Select 5, 10, 15
Select 100 + 55
Use DATABASE_NAME
Below are the two conditional clauses:
Where
Having
Both are conditional clauses; WHERE is applied before the records are retrieved (before grouping), whereas HAVING is applied after the records are retrieved and grouped.
GROUP BY is used to categorize/classify the records, while ORDER BY is used to sort data in ascending or descending order.
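A single query can show both clauses together with WHERE and HAVING (the table and thresholds are illustrative):
SELECT dept_id, COUNT(*) AS emp_count
FROM employee
WHERE salary > 10000     -- WHERE filters rows before grouping
GROUP BY dept_id         -- GROUP BY classifies the remaining rows
HAVING COUNT(*) > 3      -- HAVING filters the groups
ORDER BY emp_count DESC; -- ORDER BY sorts the final result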
A procedure runs a set of SQL statements together; it can accept input and produce a result. Whenever that set of SQL statements has to execute, we call the procedure, and internally it executes the SQL statements.
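A minimal MySQL-style sketch of such a procedure (the name and body are illustrative):
DELIMITER //
CREATE PROCEDURE get_emps_by_city(IN p_city VARCHAR(50))
BEGIN
  SELECT emp_id, emp_name FROM employee WHERE city = p_city; -- the stored set of statements
END //
DELIMITER ;
CALL get_emps_by_city('Bangalore'); -- pass input, get the result set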
PL-SQL Interview Questions and Answers
PL/SQL is a procedural language that has both interactive SQL and procedural programming language constructs such as iteration, conditional branching, etc.
PL/SQL uses the block as its basic structure, and anonymous blocks or nested blocks can be used in PL/SQL.
A cursor is a named private SQL area from which information can be accessed. Cursors are required to process rows individually for queries returning multiple rows.
There are two types of cursors
i. Implicit cursors
ii. Explicit cursors
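A minimal explicit-cursor sketch against the emp table (purely illustrative):
DECLARE
  CURSOR c_emp IS SELECT empno, ename FROM emp; -- named private SQL area
  v_empno emp.empno%TYPE;
  v_ename emp.ename%TYPE;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp INTO v_empno, v_ename; -- process one row at a time
    EXIT WHEN c_emp%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(v_empno || ' ' || v_ename);
  END LOOP;
  CLOSE c_emp;
END;
/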
%FOUND
Errors during runtime processing, due to hardware or network failures or application logic errors, are exceptions.
This exception is raised when the program tries to open an already opened cursor. The cursor must be closed before it can be opened again.
Currently the maximum parsed size of a PL/SQL block is 64K and the maximum code size is 100K.
Precompile, Extensible, Modular, Reusable, Maintainable
No, stand-alone programs cannot be overloaded. However, packaged subprograms can be overloaded within the same package.
A database is an organized collection of data, arranged for quicker and easier access, storage, and manipulation. It can also be defined as a set of tables, schemas, views, and other objects in the database.
A data warehouse refers to a central repository of data consolidated from various data sources. These data are consolidated, transformed, and made available for mining and online processing.
A table is a database object that stores data in the form of columns and rows, with each value held in a field.
A field in a database is a part of a table that stores a specific piece of data for a record.
A set of related data in a table is called a record; it is also called a row of data.
A column is a vertical object in a table containing all of the information associated with a particular field in the table.
The Database Management System (DBMS) is a set of programs that allows a user to store, retrieve, update, and delete data in a database.
There are two types of DBMS:
- Relational Database Management System (RDBMS)
- Non-Relational Database Management System
RDBMS stands for Relational Database Management System. An RDBMS is a database management system (DBMS) based on the relational model. Data in a relational database can be accessed using SQL.
Oracle, MySQL, Microsoft SQL Server, PostgreSQL, Sybase, Microsoft Access, MongoDB, DB2, etc.
SQL stands for Structured Query Language. It is a standard of the American National Standards Institute (ANSI). It allows databases to be created and manipulated. Some of the actions we can do using SQL are creating tables and stored procedures (SPs), and querying, retrieving, inserting, and updating data.
DDL (Data Definition Language) commands are used to describe or alter the layout of the database.
CREATE: to create databases and database objects
ALTER: to alter existing database objects
DROP: to remove objects and databases
DML (Data Manipulation Language) commands are used to manage the data in the database.
SELECT: to select specific data from a database
INSERT: to insert new records into a table
UPDATE: to modify existing records
DCL (Data Control Language) commands are used to create roles, grant permissions, and control access to database objects.
GRANT: to give a user access
DENY: to deny permissions to users
REVOKE: to remove user access
TCL (Transaction Control Language) commands are used to control the changes made by DML statements.
COMMIT: to write and store the changes in the database
ROLLBACK: to restore the database to the last committed state
To speed up the performance of queries, an index is used. It makes the retrieval of data from the table faster. An index can be created on one column or a group of columns.
There are three kinds of indexes:
Unique Index: unique indexes help preserve data integrity by ensuring that no two rows of data in a table have identical key values. A unique index can be added automatically when a primary key is defined; it ensures that the values in the index key are unique.
Clustered Index: a clustered index determines the physical order of the rows in the table and sorts them based on the key values. A table can have only one clustered index.
Non-Clustered Index: a non-clustered index does not alter the physical order of the table; it maintains a separate structure that points back to the rows, and a table can have many non-clustered indexes.
The differences between clustered and non-clustered indexes in SQL follow from this: a clustered index is used for quick data retrieval and is faster, a table can have only one of them, and it changes the way records are stored by sorting the rows on the clustered index columns; a non-clustered index is somewhat slower, does not change the physical storage order, and a table can have more than one.
SQL constraints are the set of rules that impose restrictions as data is inserted, deleted, or modified in database tables.
One of the benefits provided by views is that views don't occupy storage space of their own.
A database relationship is the link between tables in a database. Several kinds of database relationships exist, namely:
- One-to-One
- One-to-Many
- Many-to-One
- Self-referencing
A database query is a request for data or information from a database table or combination of tables. A database query may be either a select query or an action query.
A subquery is a SQL query within a larger query. It is a subset of the SELECT statement whose return values are used by the main query.
Two types of subquery exist:
- Correlated: a correlated subquery uses values from the outer query to complete itself. Because it depends on the outer query, it is executed once for each row processed by the outer query. It is often referred to as a synchronized subquery.
- Non-Correlated: a non-correlated subquery is independent of the outer query; it runs on its own and its result is fed to the main query.
Data integrity specifies the accuracy and consistency of the data stored in a database. It also defines integrity constraints, which impose business rules on the data when it is entered into an application or database.
It’s one of the important questions for the Oracle DBA Interview.
The Auto Increment feature lets the database generate a unique number automatically when a new record is inserted into a table. You may use the auto-increment keyword whenever a primary key is needed.
MySQL uses the AUTO_INCREMENT keyword, and SQL Server uses the IDENTITY keyword.
Ans. A temp table is a temporary storage structure used to hold data temporarily.
An SQL clause restricts the result set returned by a query; a clause helps filter rows from the record collection. Examples of SQL clauses are HAVING and WHERE.
In SQL Server, each column in a database table has a name and a data type. When creating a SQL table, we need to decide which data type to store in each and every column of the table.
A stored procedure is a group of SQL statements created to perform a specific task and stored in the database. The stored procedure accepts and processes input parameters and can return a single value, such as a number or text value, or a result set.