Wednesday, 31 May 2017

Movie/TV Series Rating Using IMDB

Hi friends,

In this post, we'll see a Python script that takes a movie or TV series name as input and fetches its IMDB rating and summary from the IMDB website. The script uses the Beautiful Soup library in Python to extract the rating and summary from the page. Here is the script to achieve the task.

import requests
from bs4 import BeautifulSoup

# Fetch the title of the movie/TV series from its IMDB page
def getTitle(url):
    html = requests.get(url)
    soup = BeautifulSoup(html.text, 'lxml')
    for title in soup.findAll('div', {'class': 'title_wrapper'}):
        return title.find('h1').text.rstrip()

# Fetch and print the IMDB rating and summary of the movie/TV series
def getInfo(movieUrl):
    html = requests.get(movieUrl)
    soup = BeautifulSoup(html.text, 'lxml')
    for div in soup.findAll('div', {'class': 'ratingValue'}):
        print('IMDB rating of \'' + userInput + '\' is: ', div.text)
        print()

    for div in soup.findAll('div', {'class': 'summary_text'}):
        print('Summary of \'' + userInput + '\' : ')
        print(div.text.lstrip())

userInput = input('Enter Movie/TV series name : ')
print()
# Search IMDB for the entered name
url = 'http://www.imdb.com/find?ref_=nv_sr_fn&q=' + userInput + '&s=all'

html = requests.get(url)
soup = BeautifulSoup(html.text, 'lxml')

# Pick the first search result and build its full URL
movieUrl = None
for td in soup.findAll('td', {'class': 'result_text'}):
    link = td.find('a')['href']
    movieUrl = 'http://www.imdb.com' + link
    break

if movieUrl:
    name = getTitle(movieUrl)
    getInfo(movieUrl)
else:
    print('No results found for \'' + userInput + '\'')
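One small caveat: the user's input is pasted straight into the search URL above, so titles with spaces or special characters may not always form a clean query. If that causes problems, the query can be URL-encoded first with the standard library; a minimal sketch (to be placed before building the search URL):

from urllib.parse import quote_plus

# encode the user input so spaces and special characters are safe in a query string
query = quote_plus(userInput)   # e.g. "The Dark Knight" -> "The+Dark+Knight"
url = 'http://www.imdb.com/find?ref_=nv_sr_fn&q=' + query + '&s=all'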

GATE - Relations 1

Hi friends,

This is a post in the series of posts labeled GATE. I created short handwritten notes during my GATE preparation to revise various topics. The posts under the GATE label summarize formulae, equations, properties and other important points from various Computer Science subjects into point-wise, easy-to-remember handwritten notes.

Relations 1 Notes

Note: The notes appear in better quality when downloaded or opened in a new tab.



GATE - Relations 2

Hi friends,

This is a post in the series of posts labeled GATE. I created short handwritten notes during my GATE preparation to revise various topics. The posts under the GATE label summarize formulae, equations, properties and other important points from various Computer Science subjects into point-wise, easy-to-remember handwritten notes.


Relations 2 Notes

Note: The notes appear in better quality when downloaded or opened in a new tab.



Live Cricket Scores Using Python Script

Hi friends,

In this post, we'll see a Python script to get the live score of a particular cricket match using the ESPNcricinfo site. The script reads ESPNcricinfo's live-scores feed and lists all the matches currently in progress. The user can then select a specific match from the list, and the script shows the live score of the selected match, refreshing it every 20 seconds (the interval can be changed as needed). So, here is the script to see the live scores of a cricket match.

import requests
from bs4 import BeautifulSoup
from time import sleep
import sys

print('Live Cricket Matches:')
print('=====================')
url = "http://static.cricinfo.com/rss/livescores.xml"
r = requests.get(url)
soup = BeautifulSoup(r.text, 'lxml')

# list all the live matches from the RSS feed
i = 1
for item in soup.findAll('item'):
    print(str(i) + '. ' + item.find('description').text)
    i = i + 1

# collect the link of each live match
links = []
for link in soup.findAll('item'):
    links.append(link.find('guid').text)

print('Enter match number or enter 0 to exit:')
while True:
    try:
        userInput = int(input())
    except ValueError:
        print('Invalid input. Try Again!')
        continue
    if userInput < 0 or userInput > len(links):
        print('Invalid input. Try Again!')
        continue
    elif userInput == 0:
        sys.exit()
    else:
        break

# fetch and print the latest score of the selected match every 20 seconds
matchUrl = links[userInput - 1]
while True:
    try:
        r = requests.get(matchUrl)
        r.raise_for_status()
    except Exception:
        print('Connection Failure. Try again!')
        sleep(20)
        continue
    soup = BeautifulSoup(r.text, 'lxml')
    score = soup.findAll('title')
    print(score[0].text + '\n')
    sleep(20)
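The score loop above runs until the process is killed. If you'd like Ctrl+C to stop it cleanly instead of printing a traceback, the loop body can be wrapped in a KeyboardInterrupt handler; a rough sketch, reusing the imports and variables from the script above:

try:
    while True:
        # ...fetch and print the score exactly as in the loop above...
        sleep(20)
except KeyboardInterrupt:
    print('Stopped following the match.')
    sys.exit()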


Tuesday, 30 May 2017

GATE - Compilers

Hi friends,

This is a post in the series of posts labeled GATE. I created short handwritten notes during my GATE preparation to revise various topics. The posts under the GATE label summarize formulae, equations, properties and other important points from various Computer Science subjects into point-wise, easy-to-remember handwritten notes.

Compilers Notes

Note: The notes appear in better quality when downloaded or opened in a new tab.


GATE - Computer Networks

Hi friends,

This is a post in the series of posts labeled GATE. I created short handwritten notes during my GATE preparation to revise various topics. The posts under the GATE label summarize formulae, equations, properties and other important points from various Computer Science subjects into point-wise, easy-to-remember handwritten notes.

Computer Networks Notes

Note: The notes appear in better quality when downloaded or opened in a new tab.



Python script to detect changes in a webpage

Hi friends,

In this post, we'll see a Python script to detect changes (if any) in a webpage at a given URL. With results around the corner, anxious students tend to refresh the results page every few seconds to see whether the results have been declared. This post lets them sit back and have a Python script do the watching: the script keeps checking the results URL and plays a beep sound if the page changes. The same script can also be used to watch e-commerce websites like Flipkart, Amazon, etc. during sale periods for great offers.

from urllib.request import urlopen
import time
import winsound

url = "http://upresults.nic.in/"  # results URL
oldPage = urlopen(url).read()
updatedPage = oldPage
counter = 0
flag = True

# while there are no new changes to the page
while oldPage == updatedPage:
    if flag:
        time.sleep(10)  # refresh every 10 seconds
    try:
        updatedPage = urlopen(url).read()
    except IOError:
        print("Error in reading url")
        flag = False
        continue
    counter = counter + 1
    print(str(counter) + " times refreshed")
    flag = True

# play a beep sound once the page has changed
winsound.Beep(300, 2000)
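Note that winsound is available only on Windows. On Linux or macOS, a simple (if less reliable) alternative is to ring the terminal bell instead; a minimal sketch:

# alternative to winsound on Linux/macOS: ring the terminal bell
# (this may be silent if the terminal has the bell disabled)
import sys
sys.stdout.write('\a')
sys.stdout.flush()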

Monday, 29 May 2017

Python script to list your most visited websites

Hi friends,

In this post, I'll share with you a Python script that determines the top 10 websites you visit using the Chrome browser. The script given below performs four main tasks, implemented using separate helper functions:

  • Finding the location of the Chrome history database
  • Executing a query on the database to get the visited URLs and associated information
  • Counting the domain visits from the query results
  • Plotting the ten most frequently visited domains

import sqlite3
import os
import operator
from collections import OrderedDict
import pylab as plt

# Function to extract the domain name from a URL
def parseUrl(url):
    try:
        urlComponents = url.split('//')
        afterScheme = urlComponents[1].split('/', 1)
        domainName = afterScheme[0].replace("www.", "")
        return domainName
    except IndexError:
        print("Error in URL")

# Function to return the Chrome history database location
def getHistoryFile():
    # user's history database path (Chrome on Windows), e.g. C:\Users\<user>\AppData\Local\Google\Chrome\User Data\Default
    filePath = os.path.join(os.path.expanduser('~'), 'AppData', 'Local', 'Google', 'Chrome', 'User Data', 'Default')
    historyFile = os.path.join(filePath, 'History')
    return historyFile

# Function to query the history database for the visited URLs
def queryHistoryFile(historyFile):
    c = sqlite3.connect(historyFile)
    cursor = c.cursor()
    query = "SELECT urls.url, urls.visit_count FROM urls, visits WHERE urls.id = visits.url;"
    cursor.execute(query)
    results = cursor.fetchall()
    return results

# Function to count the visits per domain and plot the top ten as a bar chart
def plotResults(results):
    sitesCount = {}
    # count the number of recorded visits for each domain
    for url, count in results:
        url = parseUrl(url)
        if url in sitesCount:
            sitesCount[url] += 1
        else:
            sitesCount[url] = 1

    # sort the domains by visit count in descending order
    sortedCount = OrderedDict(sorted(sitesCount.items(), key=operator.itemgetter(1), reverse=True))
    # extract the top 10
    index = list(range(1, 11))
    count = list(sortedCount.values())[:10]
    xLabels = list(sortedCount.keys())[:10]

    # plot the results (vertical labels so the domain names don't overlap)
    plt.bar(index, count, align='center')
    plt.xticks(index, xLabels, rotation='vertical')
    plt.show()

# code execution starts here
historyFile = getHistoryFile()
queryResults = queryHistoryFile(historyFile)
plotResults(queryResults)

Note: The script requires the Chrome Browser to be closed during execution.
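If you'd rather not close Chrome, a common workaround is to copy the History database to a temporary location and query the copy instead. A sketch building on the script above (os is already imported there; whether the copy succeeds while Chrome holds the file open can vary by platform and Chrome version):

import shutil
import tempfile

# copy the (possibly locked) History database and query the copy instead
def getHistoryCopy(historyFile):
    tempCopy = os.path.join(tempfile.gettempdir(), 'chrome_history_copy')
    shutil.copy2(historyFile, tempCopy)
    return tempCopy

# usage: queryResults = queryHistoryFile(getHistoryCopy(getHistoryFile()))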

Download Udemy Videos Using Terminal

Hi friends,

In this post, we'll see how we can download the videos of the Udemy courses you have enrolled in. Although Udemy does not offer an option to download its videos, there is a Python library called udemy-dl that allows us to download them. Below, we'll see how to set it up.


Prerequisite: You need to have Python installed in your system to use udemy-dl.

You can install the udemy-dl library using the following command:
sudo pip install udemy-dl

If you don't have the pip installer for Python packages, you can install it with the following command:
sudo apt-get install python-pip

Note: You don't need sudo in a Windows environment.
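Before moving on, you can check that the package was installed correctly using pip itself:
pip show udemy-dl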
Once the udemy-dl library is successfully installed, you can start downloading the videos using the following command:
udemy-dl <COURSE_URL>

Next, it will ask for the username and password of your Udemy account. Once you enter the details, your videos will start downloading.

Sunday, 28 May 2017

Download YouTube Videos Using Terminal

Hi friends,

There is a cool Python library named youtube-dl that allows users to download their favorite YouTube videos using just the video URL. In this post, I'll show you how to install and use youtube-dl on your system.

For Windows users:

You need to download the exe file from here and place it in any location on your PATH except for %SYSTEMROOT%\System32 (e.g. do not put it in C:\Windows\System32).

For Linux users:

You need pip in order to install the youtube-dl library. With pip available, you can install youtube-dl by typing the following in your terminal:

pip install youtube-dl

Once the installation is complete, you can start downloading your favorite YouTube videos using the following command from the command prompt/terminal:

youtube-dl "<your_youtube_url>"

You can check out additional parameters that can be given with the command here.
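For example, two handy options are format listing/selection and an output filename template (the URL below is just a placeholder, as before):

youtube-dl -F "<your_youtube_url>"
youtube-dl -f best -o "%(title)s.%(ext)s" "<your_youtube_url>"

The first command lists the formats available for the video; the second downloads the best available format and names the file after the video title.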

Wednesday, 24 May 2017

Remove Nth Node from List End

Given a linked list, remove the nth node from the end of the list and return its head.
For example,
Given linked list: 1->2->3->4->5, and n = 2.
After removing the second node from the end, the linked list becomes 1->2->3->5.

Note:
* If n is greater than the size of the list, remove the first node of the list.
 
Try doing it using constant additional space.
Approach: First advance one pointer n nodes from the head. Then move a second pointer from the head in step with the first; when the first pointer reaches the end, the second pointer is at the nth node from the end, and the predecessor tracked along the way lets us unlink it easily. This takes a single extra pass and constant additional space.


ListNode* Solution::removeNthFromEnd(ListNode* head, int n) {
    if(head == NULL){
        return NULL;
    }
    ListNode *t1, *t2, *prev;
    int count = 0;
    t1 = head;
    t2 = head;
    while(t1 && count < n){
        count++;
        t1 = t1->next;
    }
    //if already reached the end, then head needs to be moved forward
    if(t1 == NULL){
        head = head->next;
        return head;
    }
    //else move till we reach end in the first pointer
    while(t1){
        prev = t2;
        t2 = t2->next;
        t1 = t1->next;
    }
    //update the previous pointer
    prev->next = t2->next;
    return head;
}