[Tutor] script guidelines

2017-10-03 Thread renukesh nk
Requirement:
I have a directory that contains multiple subdirectories, and each
subdirectory has multiple text and log files. My script fetches the required
lines from all the subdirectories and stores them in one text file.

But I want it to store a separate text file for each subdirectory after
fetching the contents. Can anyone please tell me where to edit my script?

My script is currently dumping everything into one file instead of a separate
file for each directory.


import fnmatch
import glob
import os
import shutil
import zipfile
from os import walk
from os.path import join

root_src_dir = r'E:\New folder'
root_dst_dir = r'E:\destination'

# Mirror the source tree into the destination tree, copying every file.
for src_dir, dirs, files in os.walk(root_src_dir):
    dst_dir = src_dir.replace(root_src_dir, root_dst_dir, 1)
    if not os.path.exists(dst_dir):
        os.makedirs(dst_dir)
    for file_ in files:
        src_file = os.path.join(src_dir, file_)
        dst_file = os.path.join(dst_dir, file_)
        if os.path.exists(dst_file):
            os.remove(dst_file)
        shutil.copy(src_file, dst_dir)

# Extract every zip archive under the destination tree into a folder
# named after the archive.
rootPath = r'E:\destination'
pattern = '*.zip'
for root, dirs, files in os.walk(rootPath):
    for filename in fnmatch.filter(files, pattern):
        print(os.path.join(root, filename))
        zipfile.ZipFile(os.path.join(root, filename)).extractall(
            os.path.join(root, os.path.splitext(filename)[0]))

# Create an empty "<name>ANA.txt" file next to each archive.
os.chdir(rootPath)
for file in glob.glob(pattern):
    f = open(file.rsplit('.', 1)[0] + 'ANA.txt', 'w')
    f.close()

## here folder output
mypath = r'E:\destination'
newpath = os.path.expanduser(r'E:\destination')
filenam = '1.txt'

# Collect the full path of every file under the destination tree.
path1 = []
for dirpath, dirnames, filenames in walk(mypath):
    for f in filenames:
        path1.append(join(dirpath, f))
print(path1)

# A single output file for the whole tree -- this is why every matching
# line ends up in the same place.
newf = open(os.path.join(newpath, filenam), 'w+')

# Copy any line that contains one of these markers into the output file.
markers = ('ERROR', 'error')

for f in path1:
    openfile = open(f, 'r')
    for line in openfile:
        if any(marker in line for marker in markers):
            newf.write(line)
    openfile.close()

newf.close()
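
To get one output file per subdirectory instead of a single 1.txt, open a new
output file inside each directory that os.walk() yields and write only that
directory's matching lines into it. Below is a minimal sketch of how the last
part of the script could look, assuming the same E:\destination layout and the
same ERROR/error markers; the output name per_dir_errors.txt and the .txt/.log
filter are made-up choices, not part of the original script.

import os

mypath = r'E:\destination'
markers = ('ERROR', 'error')

for dirpath, dirnames, filenames in os.walk(mypath):
    matching = []
    for name in filenames:
        # Only look at text-like files and skip a previous output file.
        if name == 'per_dir_errors.txt' or not name.endswith(('.txt', '.log')):
            continue
        with open(os.path.join(dirpath, name), 'r', errors='ignore') as src:
            matching.extend(line for line in src
                            if any(marker in line for marker in markers))
    if matching:
        # One output file per subdirectory, stored inside that subdirectory.
        with open(os.path.join(dirpath, 'per_dir_errors.txt'), 'w') as out:
            out.writelines(matching)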


Re: [Tutor] script guidelines

2017-10-06 Thread renukesh nk
I am currently using PyCharm with the Python 3.6 interpreter.
I am getting the error below; what might be the reason for this?

UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 159:
character maps to <undefined>
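
The 'charmap' error means open() fell back to Windows' default cp1252 codec on
a file that is not cp1252 text, most likely one of the binary files copied into
E:\destination (such as the .zip archives) or a UTF-8 log. A minimal sketch of
a more defensive read, assuming the logs are roughly UTF-8 and that replacing
undecodable bytes is acceptable; the helper name error_lines and the .txt/.log
filter are illustrative only.

import os

def error_lines(path, markers=('ERROR', 'error')):
    # Explicit encoding avoids the Windows 'charmap' default; errors='replace'
    # substitutes bytes that are not valid UTF-8 instead of raising.
    with open(path, 'r', encoding='utf-8', errors='replace') as src:
        return [line for line in src if any(m in line for m in markers)]

# Only look at text-like files, so binary files such as the extracted
# .zip archives are never decoded at all.
for dirpath, dirnames, filenames in os.walk(r'E:\destination'):
    for name in filenames:
        if name.endswith(('.txt', '.log')):
            for line in error_lines(os.path.join(dirpath, name)):
                print(line, end='')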


On Tue, Oct 3, 2017 at 2:18 PM, renukesh nk  wrote:

> Requirement:
> I have a directory that contains multiple subdirectories, and each
> subdirectory has multiple text and log files. My script fetches the required
> lines from all the subdirectories and stores them in one text file.
>
> But I want it to store a separate text file for each subdirectory after
> fetching the contents. Can anyone please tell me where to edit my script?
>
> My script is currently dumping everything into one file instead of a
> separate file for each directory.
>
>


[Tutor] script guidance

2017-10-23 Thread renukesh nk
I want to download zip files from a website. My script first lists all the
URL links in a text file and then fetches each URL and tries to download the
zip files.


But I am getting the error below:
Running script..
https://sagamusix.dehttps://sagamusix.de/other/Saga%20Musix%20-%20Colors%20of%20Synth1%20v1.0.zip
/n
https://sagamusix.dehttps://sagamusix.de/sample_collection/bass.zip
/n
https://sagamusix.dehttps://sagamusix.de/sample_collection/bass_drums.zip
/n
https://sagamusix.dehttps://sagamusix.de/sample_collection/drums.zip
/n
https://sagamusix.dehttps://sagamusix.de/sample_collection/fx.zip
/n
https://sagamusix.dehttps://sagamusix.de/sample_collection/pads_strings.zip
/n
https://sagamusix.dehttps://sagamusix.de/sample_collection/powerchords.zip
/n
https://sagamusix.dehttps://sagamusix.de/sample_collection/synths.zip
/n
https://sagamusix.dehttps://sagamusix.de/sample_collection/tr-808.zip
/n
https://sagamusix.dehttps://sagamusix.de/sample_collection/tr-909.zip
/n
Saga%20Musix%20-%20Colors%20of%20Synth1%20v1.0.zip

Trying to reach https://sagamusix.dehttps://sagamusix.de/other/Saga%20Musix%20-%20Colors%20of%20Synth1%20v1.0.zip

We failed to reach a server.https://sagamusix.dehttps://sagamusix.de/other/Saga%20Musix%20-%20Colors%20of%20Synth1%20v1.0.zip

Reason:  [Errno 11001] getaddrinfo failed
bass.zip

Please help me fix this so that I can download all the zip files.

code:

import urllib2
from urllib2 import Request, urlopen, URLError
#import urllib
import os
from bs4 import BeautifulSoup
# import socket
# socket.getaddrinfo('localhost', 8080)

# Create a new directory to put the files into:
# get the current working directory and create a directory named "test" in it.
cwd = os.getcwd()
newdir = cwd + "\\test"
print "The current working directory is " + cwd
os.mkdir(newdir)
print "Created new directory " + newdir
newfile = open('zipfiles.txt', 'w')
print newfile


print "Running script.. "
# Base URL of the site and the page that lists the samples.
url = "https://sagamusix.de"
page = urllib2.urlopen('https://sagamusix.de/en/samples/').read()

# File extension to be looked for.
extension = ".zip"

# Use BeautifulSoup to parse the page.
soup = BeautifulSoup(page, "html5lib")
soup.prettify()

# Find all the links on the page that end in .zip and record them.
for anchor in soup.findAll('a', href=True):
    links = url + anchor['href']
    if links.endswith(extension):
        newfile.write(links + '\n')
newfile.close()

# Read back what is saved in zipfiles.txt and show it to the user.
# This is done to create persistent data.
newfile = open('zipfiles.txt', 'r')
for line in newfile:
    print line + '/n'
newfile.close()

# Read through the lines in the text file and download the zip files.
# Handle exceptions and print them to the console.
with open('zipfiles.txt', 'r') as url:
    for line in url:
        line = line.strip()
        if '/' in line:
            print line.rsplit('/', 1)[1]

        try:
            ziplink = line
            # Strip the leading part of the URL to get the name of the file.
            zipfile = line[24:]
            # Remove the last four characters to drop the ".zip" extension.
            zipfile2 = zipfile[:-4]
            print "Trying to reach " + ziplink
            response = urllib2.urlopen(ziplink)
        except URLError as e:
            print 'We failed to reach a server.' + ziplink
            if hasattr(e, 'reason'):
                print 'Reason: ', e.reason
                continue
            elif hasattr(e, 'code'):
                print 'The server could not fulfill the request.'
                print 'Error code: ', e.code
                continue
        else:
            zipcontent = response.read()
            completeName = os.path.join(newdir, zipfile2 + ".zip")
            # Binary mode: zip archives are not text.
            with open(completeName, 'wb') as f:
                print "downloading.. " + zipfile
                f.write(zipcontent)

print "Script completed"


[Tutor] CSV row and column width automation

2018-01-03 Thread renukesh nk
Hi,

Is there any way to automatically set the column and row width in a CSV
file through a Python script?
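
A plain CSV file is just comma-separated text, so it carries no column or row
width information; widths only exist in spreadsheet formats such as .xlsx. If
converting the CSV to .xlsx is an option, a rough auto-fit can be done with
openpyxl, as in the sketch below; openpyxl is assumed to be installed, and
data.csv / data.xlsx are placeholder names.

import csv
from openpyxl import Workbook
from openpyxl.utils import get_column_letter

wb = Workbook()
ws = wb.active

# Copy the CSV rows into a worksheet ("data.csv" is a placeholder name).
with open('data.csv', newline='') as f:
    for row in csv.reader(f):
        ws.append(row)

# Rough auto-fit: size each column from the longest value it contains.
for col in range(1, ws.max_column + 1):
    letter = get_column_letter(col)
    longest = max(len(str(cell.value or '')) for cell in ws[letter])
    ws.column_dimensions[letter].width = longest + 2

wb.save('data.xlsx')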


Re: [Tutor] CSV row and column width automation

2018-01-19 Thread renukesh nk
Hi,

Does the Web2py framework support Python version 3.6?

On Wed, Jan 3, 2018 at 11:01 AM, renukesh nk  wrote:

> Hi,
>
> Is there any way to automatically set the column and row width in a CSV
> file through a Python script?
>
>
>


[Tutor] fix overwriting issue

2018-02-06 Thread renukesh nk
Hi,

I am facing an issue while writing files to a folder: the files get
overwritten if they have the same file names. Can anyone help me fix this?

thanks
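
One common way to avoid the overwrites is to check whether the target name
already exists and append a counter until it does not. A minimal sketch of
that idea; the helper name unique_path and the example folder and file names
are made up.

import os

def unique_path(folder, filename):
    # Append _1, _2, ... to the base name until the path is unused.
    base, ext = os.path.splitext(filename)
    candidate = os.path.join(folder, filename)
    counter = 1
    while os.path.exists(candidate):
        candidate = os.path.join(folder, '%s_%d%s' % (base, counter, ext))
        counter += 1
    return candidate

# Example with made-up names: a second call will not clobber the first file.
target = unique_path(r'E:\destination', 'report.txt')
with open(target, 'w') as out:
    out.write('some content\n')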