Posted by an anonymous user on 2021-11-08 · Python · License: see README

# spam-bot-3000

A Python command-line (CLI) bot for automating research and promotion on popular social media platforms (reddit, twitter, facebook, [TODO: instagram]). With a single command, scrape social media sites using custom queries and/or promote to all relevant results.

Please use with discretion: i.e. choose your input arguments wisely, otherwise your bot could find itself, along with any associated accounts, banned from platforms very quickly. The bot has some built-in anti-spam-filter avoidance features to help you remain undetected; however, no amount of avoidance can hide blatantly abusive use of this tool.

## Features

### reddit

- scrape subreddit(s) for lists of keywords, dump results in local file (red_scrape_dump.txt)
- separate keyword lists for AND, OR, NOT search operations (red_subkey_pairs.json)
- search new, hot, or rising categories
- reply to posts in red_scrape_dump.txt with random promotion from red_promos.txt
- ignore posts by marking them in dump file with "-" prefix
- praw.errors.HTTPException handling
- write all activity to log (log.txt)

### twitter

- maintain separate jobs for different promotion projects
- update user status
- unfollow users who don't reciprocate your follows
- scan twitter for list of custom queries, dump results in local file (twit_scrape_dump.txt)
- scan continuously or in overwatch mode
- optional bypassing of proprietary twitter APIs and their inherent limitations
- promotion abilities
  - tweepy API
    - follow original posters
    - favorite relevant tweets
    - direct message relevant tweets
    - reply to relevant tweets with random promotional tweet from file (twit_promos.txt)
  - Selenium GUI browser
    - favorite, follow, reply to scraped results while bypassing API limits
- ignore tweets by marking them in dump file with "-" prefix
- script for new keyword and hashtag research by gleaning scraped results
- script for filtering out irrelevant keywords, hashtags, screen names
- script for automating scraping, filtering, and spamming only the most relevant results
- relatively graceful exception handling
- write all activity to log (log.txt)

### facebook

- zero reliance on proprietary facebook APIs and their inherent limitations
- Selenium GUI browser agent
- scrape public and private user profiles for keywords using AND, OR, NOT operators
  - note: access to private data requires login to an authorized account with associated access
- scrape public and private group feeds for keywords using AND, OR, NOT operators

## Dependencies

Install any dependencies you don't have already; errors will show up if you're missing others.

Install pip3:

```
sudo apt install python3-pip
```

Install dependencies:

```
pip3 install --user tweepy bs4 praw selenium
```

## reddit initial setup

- update 'praw.ini' with your reddit app credentials
  - how to register a new reddit app
- replace example promotions (red_promos.txt) with your own
- replace example subreddits and keywords (red_subkey_pairs.json) with your own
  - you'll have to follow the existing JSON format
    - keywords_and: all keywords in this list must be present for a positive matching result
    - keywords_or: at least one keyword in this list must be present for a positive match
    - keywords_not: none of these keywords can be present in a positive match
    - any of the three lists may be omitted by leaving it empty - e.g. "keywords_not": []
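The AND/OR/NOT semantics above boil down to a single predicate per post. A minimal sketch (the function name and signature are illustrative, not the bot's actual code):

```python
def keywords_match(text, keywords_and=(), keywords_or=(), keywords_not=()):
    """Apply the AND/OR/NOT keyword rules to one post or tweet.

    An empty (or omitted) list skips that constraint, matching the
    '"keywords_not": []' convention in red_subkey_pairs.json.
    """
    t = text.lower()
    if any(k.lower() in t for k in keywords_not):  # NOT: none may appear
        return False
    if keywords_and and not all(k.lower() in t for k in keywords_and):  # AND: all must appear
        return False
    if keywords_or and not any(k.lower() in t for k in keywords_or):  # OR: at least one must appear
        return False
    return True
```

With the example lists from red_subkey_pairs.json, "Best todo list apps?" matches, while "my Spotify playlist" is rejected by the NOT list.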

<praw.ini>

```
...
[bot1]
client_id=Y4PJOclpDQy3xZ
client_secret=UkGLTe6oqsMk5nHCJTHLrwgvHpr
password=pni9ubeht4wd50gk
username=fakebot1
user_agent=fakebot 0.1
```
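PRAW resolves these credentials by site name, e.g. `praw.Reddit("bot1")` picks up the `[bot1]` section of praw.ini. A stdlib sketch of how that section parses (using the placeholder credentials above):

```python
import configparser

# placeholder praw.ini contents from the example above
PRAW_INI = """\
[bot1]
client_id=Y4PJOclpDQy3xZ
client_secret=UkGLTe6oqsMk5nHCJTHLrwgvHpr
password=pni9ubeht4wd50gk
username=fakebot1
user_agent=fakebot 0.1
"""

config = configparser.ConfigParser()
config.read_string(PRAW_INI)
bot = config["bot1"]  # same section lookup PRAW performs for praw.Reddit("bot1")
```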

<red_subkey_pairs.json>

```json
{"sub_key_pairs": [{
    "subreddits": "androidapps",
    "keywords_and": ["list", "?"],
    "keywords_or": ["todo", "app", "android"],
    "keywords_not": ["playlist", "listen"]
}]}
```

## reddit usage

```
usage: spam-bot-3000.py reddit [-h] [-s N] [-n | -H | -r] [-p]

optional arguments:
  -h, --help        show this help message and exit
  -s N, --scrape N  scrape subreddits in subreddits.txt for keywords in
                    red_keywords.txt; N = number of posts to scrape
  -n, --new         scrape new posts
  -H, --hot         scrape hot posts
  -r, --rising      scrape rising posts
  -p, --promote     promote to posts in red_scrape_dump.txt not marked with a
                    "-" prefix
```

## twitter initial setup

- create a new directory to store new job data in (e.g. studfinder_example/)
- create a new 'credentials.txt' file in the job directory to store your twitter app's credentials
  - a good guide for how to get twitter credentials
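Since any of the three keyword lists may be omitted from red_subkey_pairs.json, a loader should default missing lists to empty. A minimal sketch (hypothetical helper, not the repo's code):

```python
import json

def load_sub_key_pairs(raw):
    """Parse red_subkey_pairs.json text, defaulting omitted keyword lists to []."""
    pairs = []
    for p in json.loads(raw)["sub_key_pairs"]:
        pairs.append({
            "subreddits": p["subreddits"],
            "keywords_and": p.get("keywords_and", []),
            "keywords_or": p.get("keywords_or", []),
            "keywords_not": p.get("keywords_not", []),
        })
    return pairs
```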

<credentials.txt>

```
your_consumer_key
your_consumer_secret
your_access_token
your_access_token_secret
your_twitter_username
your_twitter_password
```

- create a new 'twit_promos.txt' in the job directory to store your job's promotions to spam
  - individual tweets on separate lines
  - each line must be <= 140 characters long
- create a new 'twit_queries.txt' in the job directory to store your job's queries to scrape twitter for
  - individual queries on separate lines
  - guide to constructing twitter queries
- create a new 'twit_scrape_dump.txt' file to store your job's returned scrape results

## twitter usage

```
usage: spam-bot-3000.py twitter [-h] [-j JOB_DIR] [-t] [-u UNF] [-s] [-c] [-e]
                                [-b] [-f] [-p] [-d]

optional arguments:
  -h, --help            show this help message and exit
  -j JOB_DIR, --job JOB_DIR
                        choose job to run by specifying job's relative
                        directory
  -t, --tweet-status    update status with random promo from twit_promos.txt
  -u UNF, --unfollow UNF
                        unfollow users who aren't following you back,
                        UNF = number to unfollow

query:
  -s, --scrape          scrape for tweets matching queries in twit_queries.txt
  -c, --continuous      scrape continuously - suppress prompt to continue
                        after 50 results per query
  -e, --english         return only tweets written in English

spam -> browser:
  -b, --browser         favorite, follow, reply to all scraped results and
                        thwart api limits by mimicking human in browser!

spam -> tweepy api:
  -f, --follow          follow original tweeters in twit_scrape_dump.txt
  -p, --promote         favorite tweets and reply to tweeters in
                        twit_scrape_dump.txt with random promo from
                        twit_promos.txt
  -d, --direct-message  direct message tweeters in twit_scrape_dump.txt with
                        random promo from twit_promos.txt
```

## twitter example workflows

- continuous mode: `-cspf`
  - scrape and promote to all tweets matching queries
- overwatch mode: `-s`
  - scrape first
  - manually edit twit_scrape_dump.txt:
    - add '-' to the beginning of a line to ignore it
    - leave a line unaltered to promote to it
  - `-pf` - then promote to the remaining tweets in twit_scrape_dump.txt
- glean common keywords, hashtags, screen names from scrape dumps
  - `bash gleen_keywords_from_twit_scrape.bash`
  - input file: twit_scrape_dump.txt
  - output file: gleened_keywords.txt
  - results ordered by most occurrences first
- filter out keywords/hashtags from scrape dump
  - manually edit gleened_keywords.txt by removing all relevant results
  - `filter_out_strings_from_twit_scrape.bash`
  - keywords input file: gleened_keywords.txt
  - input file: twit_scrape_dump.txt
  - output file: twit_scrp_dmp_filtd.txt
- browser mode: `-b`
  - thwart api limits by promoting to scraped results directly in the Firefox browser
  - add username and password to lines 5 and 6 of credentials.txt respectively
- automatic scrape, filter, spam: `auto_spam.bash`
  - automatically scrape twitter for queries, filter out results to ignore, and spam the remaining results
- specify job: `-j studfinder_example/`
  - specify which job directory to execute
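The gleaning step amounts to counting tokens across un-ignored scrape lines and sorting by frequency. A Python re-implementation of the idea behind gleen_keywords_from_twit_scrape.bash (illustrative, not the shipped script):

```python
from collections import Counter

def glean_keywords(scrape_lines):
    """Rank words/hashtags from scraped tweets, most frequent first.

    Lines prefixed with "-" are skipped, following the dump-file
    ignore convention used throughout the bot.
    """
    counts = Counter()
    for line in scrape_lines:
        if line.startswith("-"):
            continue
        counts.update(word.lower() for word in line.split())
    return [word for word, _ in counts.most_common()]
```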

Note: if you don't want to maintain individual jobs in separate directories, you may create single credentials, queries, promos, and scrape dump files in the main working directory.

## facebook initial setup

- create a new client folder in 'facebook/clients/YOUR_CLIENT'
- create a new 'jobs.json' file to store your client's job information in the following format:

<jobs.json>

```json
{"client_data": {
    "name": "",
    "email": "",
    "fb_login": "",
    "fb_password": "",
    "jobs": [
        {"type": "groups",
         "urls": ["", ""],
         "keywords_and": ["", ""],
         "keywords_or": ["", ""],
         "keywords_not": ["", ""]},
        {"type": "users",
         "urls": [],
         "keywords_and": [],
         "keywords_or": [],
         "keywords_not": []}
    ]
}}
```

## facebook usage

- scrape user and group feed urls for keywords:

```
facebook-scraper.py clients/YOUR_CLIENT/
```

- results are output to 'clients/YOUR_CLIENT/results.txt'

## TODO

- Flesh out an additional suite of promotion and interaction tools for the facebook platform
- Organize platforms and their associated data and tools into their own folders and python scripts
- Future updates will include modules for scraping and promoting to Instagram.
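A consumer of jobs.json mainly needs to separate the two job types ("groups" vs. "users") before scraping. A minimal loader sketch (hypothetical helper; the repo's facebook-scraper.py may structure this differently):

```python
import json

def load_client_jobs(raw):
    """Read a client's jobs.json text and group jobs by their "type" field."""
    client = json.loads(raw)["client_data"]
    grouped = {"groups": [], "users": []}
    for job in client["jobs"]:
        grouped[job["type"]].append(job)
    return client, grouped
```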