Artificial Intelligence Interview Questions and Answers
Artificial Intelligence interview questions and answers for beginners and experts. This is a list of frequently asked Artificial Intelligence interview questions with answers by Besant Technologies. We hope these Artificial Intelligence interview questions and answers are useful and will help you get the best job in the industry. These Artificial Intelligence interview questions and answers are prepared by Artificial Intelligence professionals based on the expectations of MNC companies. Stay tuned; we will update new Artificial Intelligence interview questions with answers frequently. If you want practical Artificial Intelligence training, please go through our Artificial Intelligence Training in Chennai and Artificial Intelligence Training in Bangalore.
Best Artificial Intelligence Interview Questions and Answers
Besant Technologies supports students by providing Artificial Intelligence interview questions and answers for job placements and job purposes. Artificial Intelligence is a leading course at present because of the growing number of job openings and the high salaries paid for Artificial Intelligence and related roles. We also provide Artificial Intelligence online training for students around the world through the GangBoard medium. These top Artificial Intelligence interview questions and answers were prepared by our institute's experienced trainers.
Artificial Intelligence interview questions and answers for the job placements
Here is the list of the most frequently asked Artificial Intelligence interview questions and answers in technical interviews. These questions and answers are suitable for both freshers and experienced professionals at any level. The questions target intermediate to somewhat advanced Artificial Intelligence professionals, but even if you are just a beginner or fresher, you should be able to understand the answers and explanations given here.
>>import turtle
>>bob = turtle.Turtle()
>>def arc(t, r, angle):
>>    circumference = 2 * 3.14 * r
>>    arc_length = circumference * (angle / 360)
>>    n = 50
>>    for i in range(n):
>>        t.fd(arc_length / n)
>>        t.lt(angle / n)
>>arc(bob, 100, 60)
>>def petal(t, r, angle):
>>    for i in range(2):
>>        arc(t, r, angle)
>>        t.lt(180 - angle)
>>def flower(t, n, r, angle):
>>    for i in range(n):
>>        petal(t, r, angle)
>>        t.lt(360 / n)
>>flower(bob, 7, 100, 60)
>>import turtle
>>bob = turtle.Turtle()
>>def circle(t, r):
>>    circumference = 2 * 3.14 * r
>>    n = 50
>>    for i in range(n):
>>        t.fd(circumference / n)
>>        t.lt(360 / n)
>>circle(bob, 100)
>>def square(t, length):
>>    for i in range(4):
>>        t.fd(length)
>>        t.lt(90)
>>square(bob, 100)
- Chudithar – 5,000 For Daughter
- Make-up-kit – 10,000 For Daughter
- Movie ticket – 1,500 For All
- Shirt & Jeans – 1,000 For Son
- Saree – 6,000 For Mother
- Groceries – 4,000 For Mother
- Traveling Expense – 10,000 For All
- Food & Beverages – 3,000 For All
Answer:
>>Father = 30000
>>Mother = 20000
>>Son = 16000
>>Daughter = 10000
>>Total = Father + Mother + Son + Daughter
>>Daughter_expenditure = 15000
>>Son_expenditure = 1000
>>Mother_expenditure = 10000
>>Combined_expenditure = 14500
>>if Daughter_expenditure > Daughter:
>>    Dexp = Daughter_expenditure - Daughter      # amount spent over the daughter's budget
>>    print("Daughter expenditure %d" % Dexp)
>>else:
>>    Dexp = Daughter - Daughter_expenditure      # amount left in the daughter's budget
>>    print("Daughter expenditure %d" % Dexp)
>>if Son_expenditure > Son:
>>    Sexp = Son_expenditure - Son
>>    print("Son expenditure %d" % Sexp)
>>else:
>>    Sexp = Son - Son_expenditure
>>    print("Son expenditure %d" % Sexp)
>>if Mother_expenditure > Mother:
>>    Mexp = Mother_expenditure - Mother
>>    print("Mother expenditure %d" % Mexp)
>>else:
>>    Mexp = Mother - Mother_expenditure
>>    print("Mother expenditure %d" % Mexp)
>>if Combined_expenditure > Total:
>>    Texp = Combined_expenditure - Total
>>    print("Combined expenditure %d" % Texp)
>>else:
>>    Texp = Total - Combined_expenditure
>>    print("Combined expenditure %d" % Texp)
print = 77 is valid. In Python 3, print is a built-in function name rather than a reserved word, so the interpreter allows it to be used as a variable; once it has been assigned a value, the name print refers to that value and can no longer be used to call the built-in print function in the following statements.
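A short illustration of this behaviour in the Python 3 interpreter (the rebinding and deletion below are only for demonstration):
>>print = 77
>>print            # the name now refers to the integer 77, so it cannot be called
>>del print        # deleting the binding restores the built-in print function
>>print("hello")   # works again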
- Mention what type of library function is used for classification technique and why?
- Explain the training methodology.
A) Scikit-learn is used for the classification technique in the above neural network because scikit-learn is well suited to classifying a limited set of data.
B) The training methodology is as follows (a rough scikit-learn sketch is given after the list):
- Loading the dataset
- Accessing the target and distribution values
- Splitting the data into training and testing set
- Clustering the data
- Evaluation of the model.
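As a rough sketch of this methodology with scikit-learn (the dataset, classifier and parameter values below are only illustrative assumptions, not the exact setup from the question):
>>from sklearn.datasets import load_iris
>>from sklearn.model_selection import train_test_split
>>from sklearn.neighbors import KNeighborsClassifier
>>from sklearn.metrics import accuracy_score
>>data = load_iris()                      # load the dataset
>>X, y = data.data, data.target           # access the features and target values
>>X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
>>model = KNeighborsClassifier(n_neighbors=3)
>>model.fit(X_train, y_train)             # train on the training set
>>print(accuracy_score(y_test, model.predict(X_test)))   # evaluate the model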
Debug the code:
>>import tensorflow as tf
>>hello = tf.constant('Hello, TensorFlow!')
>>sess = tf.session()
>>print(sess.run(hello))
The line sess = tf.session() is wrong; the correct line is sess = tf.Session(). Python is case-sensitive, and the tensorflow module has no attribute named session, so this line raises an error when the code is run.
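For reference, the corrected version under the TensorFlow 1.x API that this question assumes (newer TensorFlow 2.x releases no longer expose tf.Session):
>>import tensorflow as tf
>>hello = tf.constant('Hello, TensorFlow!')
>>sess = tf.Session()        # note the capital S
>>print(sess.run(hello))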
A 64-bit system is the minimum requirement for installing TensorFlow on Windows, as 32-bit builds are not provided.
A convolutional neural network is preferred over other types of neural networks for image processing because it enhances the multi-layered perceptron (MLP) by inserting convolutional layers. These layers allow the network to be trained on local, pixel-level patterns layer by layer.
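A minimal sketch of such a network in Keras, assuming 28x28 grayscale images and 10 classes (purely illustrative values, not a setup given in the question):
>>from tensorflow.keras import layers, models
>>model = models.Sequential([
>>    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),  # convolutional layer
>>    layers.MaxPooling2D((2, 2)),                                            # downsample the feature maps
>>    layers.Flatten(),
>>    layers.Dense(64, activation='relu'),                                    # MLP on top of the learned features
>>    layers.Dense(10, activation='softmax')
>>])
>>model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])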
| Machine Learning | Artificial Intelligence |
| --- | --- |
| Machine learning is a subset of AI. | Artificial intelligence mimics human intelligence; machine learning is one way of achieving it. |
| All machine learning belongs to AI. | Not all AI belongs to machine learning. |
The two different categories of a deep learning AMI are Conda AMI and Base AMI.
When we terminate an instance in AWS, we can no longer use that particular instance, and we need to create a new instance to perform any further operation.
The following libraries and tools are already available:
- MXNet
- TensorFlow
- Keras with TensorFlow as the default backend
- Keras with MXNet as the default backend
- Caffe
- CNTK
- Theano
- PyTorch
- NVIDIA
- CUDA
- cuDNN
- Open the PuTTY terminal.
- Give the host name as "ubuntu@PublicDNS": if you are using an Ubuntu server, use ubuntu as the user name, and in place of PublicDNS use the public DNS shown while the instance is running.
- Assign the port number as 22.
- Click the Auth option in the left-hand pane of PuTTY, below SSH.
- Click the Browse button on the right and select the private key file for your instance (the file with the .ppk extension).
- Click the Open button below; a new pop-up window asks whether you trust the host. Click Yes, and the server session opens.
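On Linux or macOS the same connection can be made in one step with the OpenSSH client, for example ssh -i my-key.pem ubuntu@PublicDNS, where my-key.pem is a placeholder for the downloaded private key file (the .ppk format is only needed by PuTTY).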
MXNet is useful for its programming flexibility, portability and scalability. MXNet supports multiple languages such as C++, Python, R, Julia, Perl, etc., so it eliminates the need to learn a new language. It also has fast training capabilities.
In general, the boundary line for classifying any two objects is found from the equation y = mx + c, where y is the output, x is the input, m is the slope and c is the constant (intercept).
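A small sketch showing how such a boundary line can be recovered from a fitted linear classifier in scikit-learn (the toy data below is an illustrative assumption):
>>import numpy as np
>>from sklearn.linear_model import LogisticRegression
>>X = np.array([[1, 1], [2, 1], [2, 3], [3, 4]])   # two features per point
>>y = np.array([0, 0, 1, 1])                       # two classes
>>clf = LogisticRegression().fit(X, y)
>>w1, w2 = clf.coef_[0]                            # learned weights for the two features
>>c0 = clf.intercept_[0]
>>m = -w1 / w2                                     # slope of the boundary line
>>c = -c0 / w2                                     # constant term
>>print("boundary: y = %.2f*x + %.2f" % (m, c))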
Classical AI:
It is concerned with deductive reasoning: given a set of constraints, it deduces a conclusion.
Weak AI:
It simply holds that some features resembling human intelligence can be incorporated into a computer to make it a more useful tool.
Production Rule:
It is a rule that comprises a set of rules and a sequence of steps to apply them.
Search Method:
Depth-first search is the method that takes the least memory of the search processes.
A* Algorithm Search Methodology: The A* algorithm is based on best-first search; it provides optimization and a quick choice of path while retaining the characteristics of best-first search.
Heuristic Function:
It is a function that ranks the alternatives in search algorithms at each branching step, based on the available information, to decide which branch to follow (a small A* sketch using such a heuristic is given below).
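A minimal sketch of A* search in Python over a toy graph, assuming a dictionary-based graph and a simple heuristic table (both are illustrative assumptions):
>>import heapq
>>def a_star(graph, h, start, goal):
>>    # frontier holds tuples of (f = g + h, g, node, path so far)
>>    frontier = [(h[start], 0, start, [start])]
>>    visited = set()
>>    while frontier:
>>        f, g, node, path = heapq.heappop(frontier)
>>        if node == goal:
>>            return path, g
>>        if node in visited:
>>            continue
>>        visited.add(node)
>>        for neighbour, cost in graph[node]:
>>            if neighbour not in visited:
>>                heapq.heappush(frontier, (g + cost + h[neighbour], g + cost, neighbour, path + [neighbour]))
>>    return None, float('inf')
>>graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2), ('D', 5)], 'C': [('D', 1)], 'D': []}
>>h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}   # heuristic: estimated cost to the goal D
>>print(a_star(graph, h, 'A', 'D'))      # expected: (['A', 'B', 'C', 'D'], 4)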
Neural Networks In AI:
In general this is a biological term, but in artificial intelligence it is an emulation of a biological neural system, which receives data, processes the data and gives output based on an algorithm and empirical data.
- Strong AI:
- It claims that a computer can be made to think on a level equal to humans; this is what strong AI asserts.
- Statistical AI:
- It is concerned with inductive thought: given a set of patterns, it induces the trend.
Game Playing Problem Methodology:
A heuristic approach is the best way to proceed for the game playing problem, as it uses techniques based on intelligent guesswork. For example, in a chess game between a human and a computer, the computer proceeds with brute-force computation, looking at hundreds of thousands of positions.
- Natural Key:
- It is a data element that is stored within a construct and is utilized as the primary key.
- Compound & Artifical Key:
- If no single data element uniquely identifies occurrences within a construct, then integrating multiple elements to create a unique identifier for the construct is called a compound key.
- If there is no obvious key, either stand-alone or compound, the last resort is to simply create a key by assigning a number to each record or occurrence; this is called an artificial key.
- Agent:
- Anything that perceives its environment through sensors and acts upon that environment through effectors is called an agent (e.g. robots, programs, humans, HCI/HMI systems).
- Partial Order or Planning:
- Instead of searching over possible situations, it involves searching over the space of possible plans. The idea is to construct a plan piece by piece.
Ways to Construct a Plan:
- First, add an action (an operator).
- Second, add ordering constraints between the operators.
Property not desirable:
- Attachment is the property that is not considered desirable in a logical rule-based system in artificial intelligence.
- Generality:
- It is a measure of the ease with which the method can be adapted to different domains of application.
- Top-Down Parser:
- It begins by hypothesizing a sentence and successively predicting lower-level constituents until individual pre-terminal symbols are written.
- FOPL:
- It is nothing but first-order predicate logic, abbreviated as FOPL.
- Working Methodology:
- It needs a language to express assertions about a certain world.
- It needs an inference system, a deductive apparatus whereby we may draw conclusions from such assertions.
- It needs a semantics based on set theory.
- Frames:
- They are a variant of semantic networks, which are one of the popular ways of presenting non-procedural knowledge in an expert system.
- A frame is an artificial data structure used to divide knowledge into substructures by representing stereotyped situations.
- Scripts:
- They are similar to frames, except that the values that fill the slots must be ordered.
- Scripts are used in natural language understanding systems to organize a knowledge base in terms of the situations that the system should understand.
- Build Methodology of Bayesian Network:
- The creation of a Bayesian network captures the consequence relationship between a node and its predecessors.
- Each node can also be conditionally independent of its remaining predecessors.
- Build Methodology of Bayes Model:
- Three terms are required to build a Bayes model.
- They are one conditional probability and two unconditional probabilities (see the small worked example below).
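A small worked example of these three terms, with hypothetical numbers: given the conditional probability P(B|A) and the two unconditional probabilities P(A) and P(B), Bayes' rule gives P(A|B) = P(B|A) * P(A) / P(B).
>>p_b_given_a = 0.9   # conditional probability P(B|A)
>>p_a = 0.2           # unconditional probability P(A)
>>p_b = 0.3           # unconditional probability P(B)
>>p_a_given_b = p_b_given_a * p_a / p_b
>>print(p_a_given_b)  # 0.6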
The following literals are currently used in the inductive learning methodology:
- Predicates
- Equality & In-Equality
- Arithmetic Literals
- HMM:
- It is a ubiquitous tool for modelling time-series data or the sequential behaviour of a system.
- It is used in almost all current speech-based systems.
- Its state can be expressed as a single discrete random variable (a small sketch of the forward algorithm for such a model is given after this list).
- Signal Flow Used:
- The acoustic signal flow is used in speech recognition to identify the sequence of words.
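A small sketch of the HMM forward algorithm using NumPy, with made-up transition, emission and initial probabilities (all values are illustrative assumptions):
>>import numpy as np
>>A = np.array([[0.7, 0.3], [0.4, 0.6]])   # state transition probabilities
>>B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities per state
>>pi = np.array([0.5, 0.5])                # initial state distribution
>>obs = [0, 1, 0]                          # an observed symbol sequence
>>alpha = pi * B[:, obs[0]]                # initialise with the first observation
>>for o in obs[1:]:
>>    alpha = (alpha @ A) * B[:, o]        # propagate and weight by the emission
>>print(alpha.sum())                       # likelihood of the observed sequence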
Ans. Bagging. Dropout can be seen as bagging: at each training step it creates a different network which is trained with backpropagation. It is like an ensemble of many networks, each trained with a single sample.
Ans. Because of the activation functions. Yes, the activation functions help make the learned function piecewise linear, which in turn helps in representing any complex function.
- Low learning rate
- High regularization
- Stuck at local Minima
Low learning rate: gradient descent approaches the minima very slowly.
High regularization: this bounds the parameter values to very low values, so the model's complexity is greatly reduced.
Stuck at a local minima: when the network is stuck at a local minima, it requires more iterations or a change in the learning rate to get out of it.
Ans. To standardize the data before sending it to the next layer. It reduces the impact of the previous layers by keeping the mean and variance constant, makes the layers independent of each other, and makes convergence faster.
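In Keras this is simply an extra layer inserted between the existing layers; a minimal sketch (the layer sizes are arbitrary assumptions):
>>from tensorflow.keras import layers, models
>>model = models.Sequential([
>>    layers.Dense(64, input_shape=(20,)),
>>    layers.BatchNormalization(),   # standardize activations before the next layer
>>    layers.Activation('relu'),
>>    layers.Dense(1, activation='sigmoid')
>>])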
Ans. Machine translation, Sentiment Analysis, Question and Answer system
Machine translation: sequence-to-sequence models are used for this.
Sentiment analysis: classification techniques on text using neural networks.
Question and answer systems: again a sequence-to-sequence model.
Ans. 1. No similarities are captured; 2. A very high number of dimensions to compute. One-hot vectors are very sparse vectors which are orthogonal to each other, and each vector has the same dimensionality as the total number of different words in the corpus. So the representation is high-dimensional and captures no similarities.
Ans. A neural network trained with one hidden layer gives the lookup table. First train a neural network model with one hidden layer to predict the context words; after training, the weight matrix learnt by the hidden layer is used for representing the words.
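A tiny NumPy illustration of both points: the one-hot vector is sparse and high-dimensional, and multiplying it by the hidden-layer weight matrix simply selects one row, which is why that matrix acts as the lookup table (the matrix values here are random stand-ins for learned weights):
>>import numpy as np
>>vocab_size, embed_dim = 5, 3
>>W = np.random.rand(vocab_size, embed_dim)   # hidden-layer weights learned during training
>>one_hot = np.zeros(vocab_size)
>>one_hot[2] = 1                              # one-hot vector for word index 2
>>print(one_hot @ W)                          # identical to W[2], the word's dense embedding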
Ans. When the dropout rate (the probability of keeping a unit) is very high, regularization will be very low. Dropout constrains the network from adapting too closely to the training data, which helps avoid overfitting.
Ans. In BPTT the error is calculated for each timestep, and for each weight the gradients from all timesteps are summed together. The weights are updated after the network is rolled back up.
Ans. Gradient clipping: the gradient is capped at a threshold. Gradient clipping chops the gradients, or restricts them to a threshold value, to prevent the gradients from getting too large.
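A minimal NumPy sketch of clipping a gradient vector by its norm (the threshold value is arbitrary):
>>import numpy as np
>>def clip_by_norm(grad, threshold):
>>    norm = np.linalg.norm(grad)
>>    if norm > threshold:
>>        grad = grad * (threshold / norm)   # rescale so the norm equals the threshold
>>    return grad
>>print(clip_by_norm(np.array([3.0, 4.0]), 1.0))   # [0.6, 0.8]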
Ans. A statistical average of the output of the convolution layer, which is easy to compute in the further steps. Pooling reduces the spatial size of the representation, which reduces the number of parameters and the amount of computation in the network. The pooling layer operates on each feature map independently.
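A tiny NumPy sketch of 2x2 pooling on a toy 4x4 feature map, showing both the average variant described above and the common max variant (the input values are arbitrary):
>>import numpy as np
>>fmap = np.arange(16.0).reshape(4, 4)    # a toy 4x4 feature map
>>blocks = fmap.reshape(2, 2, 2, 2)       # split into non-overlapping 2x2 blocks
>>print(blocks.max(axis=(1, 3)))          # max pooling -> 2x2 output
>>print(blocks.mean(axis=(1, 3)))         # average pooling -> 2x2 output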
Ans. TRUE. An RNN neuron can be thought of as a sequence of neurons over an infinite number of time steps.
Ans. Horizontal flipping, rescaling, zooming. Deep learning models require a great deal of data to train; they are very data hungry. To take advantage of training the models with objects seen at various angles, we use these data augmentation techniques.
Ans. FALSE. It is a hyperparameter, so changing it can increase or decrease performance. We initially randomly initialize the weights for these kernels, and they learn the correct weights by backpropagation. Using more kernels therefore takes more computation time and occupies more resources.
Ans. Due to the vanishing gradient. The vanishing gradient problem depends on the choice of activation function. Activation functions such as sigmoid or tanh usually 'squash' their input into a very small output range in a very non-linear fashion.
Ans. Stemming and Lemmatization.
Stemming is the process that cuts off the ends of words in the hope of deriving the root word most of the time. In simple terms, it just removes the affixes.
Lemmatization uses a vocabulary and morphological analysis of words and, most of the time, reduces them to the correct root word, e.g. 'better' maps to 'good'. The root words are called lemmas.
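A short NLTK sketch of the difference (this assumes the WordNet data has already been downloaded via nltk.download('wordnet')):
>>from nltk.stem import PorterStemmer, WordNetLemmatizer
>>stemmer = PorterStemmer()
>>lemmatizer = WordNetLemmatizer()
>>print(stemmer.stem('studies'))                 # 'studi' - the affix is simply cut off
>>print(lemmatizer.lemmatize('studies'))         # 'study' - a real dictionary word
>>print(lemmatizer.lemmatize('better', pos='a')) # 'good' - morphological analysis of the adjective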
Ans. Frequency counts, Vector Notation, POS, Dependency grammar
Ans. HMMs are generative models; they model the joint distribution P(y, x) and therefore also model the distribution of the data P(x). These computations might take longer than directly computing the conditional probability.
CRFs are discriminative models which model the conditional probability P(y|x). As such, they do not require P(x) to be modelled. This results in faster performance, as fewer parameters need to be learned.
Ans. Encoder network on input and Decoder network on output.
The encoder is a pre-trained classification network, such as VGG/ResNet, followed by a decoder network.
The decoder projects the lower-resolution features learnt by the encoder onto the higher-resolution pixel space to obtain a dense classification.
Ans. NO, RNNs and LSTMs can have them. This is because the hidden state information is also passed to the consecutive layers in RNN and LSTM.
a) Composition
b) Mutation
c) Cross-over
d) Both Mutation & Cross-over
Answer: d
Explanation: New states are generated by mutation and by crossover, which combines a pair of states from the population.
a) To solve real-world problems
b) To solve artificial problems
c) To explain various sorts of intelligence
d) To extract scientific causes
Answer: c
a) reading
b) writing
c) speaking
d) seeing
Answer: d
a) natural language interfaces
b) natural language front ends
c) text understanding systems
d) all of the mentioned
Answer: d
a) Depth-first search algorithm
b) Breadth-first search algorithm
c) Hill-climbing search algorithm
d) All of the mentioned
Answer: a
Explanation: It is depth-first search algorithm because its space requirements are linear in the size of the proof.
a) 1
b) 2
c) 3
d) 4
Answer: c
Explanation: The three required terms are a conditional probability and two unconditional probability.
a) Sound model
b) Model
c) Language model
d) All of the mentioned
Answer: c
Explanation: Because it contains the group of words which can help to specify the prior probability of each utterance.
a) 1.25+sqrt (1.44)
b) (1.25+sqrt (1.44))
c) (+1.25 sqrt (1.44)
d) All of the mentioned
Answer: c
a) True
b) False
Answer: b
Explanation: Utility values are always same and opposite.
Branches:
– Expert System
– Pattern Recognition
– Swarm Intelligence
– Data Mining
– Genetic Algorithm
– Neural Networks
– Statistical AI
– Fuzzy Logic
Game Theory:
– An AI system uses game theory when a requirement involves more than one participant. The relation between games and AI therefore has two parts:
- Participant Design
- Mechanism Design
Conference Information:
– A webpage that lets you search for upcoming or previous conferences in the AI discipline, maintained by Georg Thimm. This is what we call AI-based conference information.
Relational Knowledge:
– A knowledge representation scheme in which facts are represented as a set of relations. For example, knowledge about a player can be represented using a relation called player which consists of three fields:
- Player Name
- Height
- Weight
Inheritable Knowledge:
– A knowledge representation scheme in which knowledge is represented in the form of objects, their attributes and the corresponding values of the attributes.
– The relation between objects is defined using an isa property.
– For example, in a game, if the two entities Amateur Male and Person are represented as objects, then the relation between the two is that an Amateur Male isa Person.
NLP:
– Natural Language Processing is commonly abbreviated NLP.
– It is nothing but the processing and, to some extent, understanding of natural language.
– That is, it applies computational linguistics by reading scenarios expressed in natural, human-recognizable language.
Supervised Learning:
– It is one of the machine learning processes.
– It works with outputs that are fed back to the computer, so the software learns from them and produces more accurate results the next time.
– The machine receives initial training to start from.
Unsupervised Learning:
– A different methodology of machine learning.
– In contrast, unsupervised machine learning means the computer learns without training data to base its learning on.
– In "Computing Machinery and Intelligence", it is predicted that computers would be able to pass the Turing test at a reasonably sophisticated level within a particular timeframe.
– The average interrogator would not be able to identify the computer correctly more than 70 per cent of the time after five minutes of conversation.
Semantic Analysis:
– Semantic analysis helps to extract the meaning from a group of sentences; that is why semantics is helpful in AI.
Ans. It is a single layer feed-forward neural network
Ans. The error is transmitted back through the network to allow the weights to be adjusted; this is how the system learns.
Ans. True, the main purpose of using an activation function is to obtain a non-linear decision surface.
Ans. The number will be 4.
Ans. True
Ans. False
Ans. True
Ans. True
Ans. True
Ans. False
Ans. False
Ans. Sigmoid Layer in forget gate.
Ans. Sigmoid in the write gate.
Ans. Sigmoid layer in the read gate.
Ans. RNN can be used when more contexts are needed.
Ans.
SynthTraining: 950,000
SynthValid: 50,000
RealValid: 5000
RealTest: 5000
Ans. Softmax
Ans. True
Ans. True
Ans. False
Ans. False
Ans. Linear
Ans. min {4x + 3y + (2/3) z}
Ans. Assignment problem
Ans. 16
Ans. The number of constraints in the dual depends on the number of decision variables in the primal.
Ans. Overfitting is a problem in Neural Networks.
Ans. False
Ans. Bagging
Ans. In Deep Learning the backpropagation uses weights that are randomly generated.
Ans. Rectified Linear Unit
Ans. A unit which is not updated during training by any of its neighbours (a dead unit).
Ans. Batch normalization is helpful because it normalizes (changes) all the input before sending it to the next layer.
Ans. True
Ans. True
Ans. True
Ans. It is a distribution with degree of freedom = 6.
Ans. Symmetrical
Ans. 50
Ans. It is always greater than one.
Ans. 0.1
Ans. Right tailed
Ans. 5 numerator and 114 denominator degrees of freedom.
Ans. x < 7.5
Ans. Var(X) + Var(Y)
Ans. Population mean, population standard deviation, sample mean, sample standard deviation.
Ans. False
Ans. To check the normality of the errors.
Ans. False
Besant Technologies does its best and gives the best to all students across all courses. We care deeply about giving our students detailed knowledge, helping them master the subject and get placed in the best companies. We prepare AI interview questions and answers for our students for placements and also for exam-related questions. Here are a few sample AI interview questions and answers. So, the first step is to join the Artificial Intelligence training in Chennai by Besant Technologies, or to get online coaching through GangBoard.