Checking if a File Exists in Python: Complete Guide
Table of Contents
1. [Introduction](#introduction)
2. [Methods Overview](#methods-overview)
3. [Using os.path Module](#using-ospath-module)
4. [Using pathlib Module](#using-pathlib-module)
5. [Using try-except Blocks](#using-try-except-blocks)
6. [Performance Comparison](#performance-comparison)
7. [Best Practices](#best-practices)
8. [Advanced Techniques](#advanced-techniques)
9. [Common Pitfalls](#common-pitfalls)
10. [Practical Examples](#practical-examples)
Introduction
Checking if a file exists is one of the most fundamental operations in file handling with Python. Before performing operations like reading, writing, or modifying files, it's crucial to verify their existence to prevent runtime errors and ensure robust application behavior.
Python provides multiple approaches to check file existence, each with its own advantages and use cases. Understanding these methods helps developers choose the most appropriate technique based on their specific requirements, performance considerations, and coding style preferences.
Methods Overview
| Method | Module | Python Version | Performance | Use Case |
|--------|--------|----------------|-------------|----------|
| os.path.exists() | os.path | 2.x, 3.x | Fast | General file/directory checking |
| os.path.isfile() | os.path | 2.x, 3.x | Fast | Specifically for files only |
| os.path.isdir() | os.path | 2.x, 3.x | Fast | Specifically for directories only |
| pathlib.Path.exists() | pathlib | 3.4+ | Fast | Modern object-oriented approach |
| pathlib.Path.is_file() | pathlib | 3.4+ | Fast | Modern approach for files only |
| pathlib.Path.is_dir() | pathlib | 3.4+ | Fast | Modern approach for directories only |
| try-except | Built-in | 2.x, 3.x | Variable | When you need to handle the file immediately |
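Before diving into each section, here is a minimal sketch of the three families from the table applied to the same path; the file name `report.csv` is purely illustrative and not part of any later example:

```python
import os
from pathlib import Path

target = "report.csv"  # hypothetical path used only for illustration

# os.path: classic functional style
print(os.path.isfile(target))

# pathlib: object-oriented style (Python 3.4+)
print(Path(target).is_file())

# try-except (EAFP): attempt the operation and handle failure
try:
    with open(target) as f:
        print("opened successfully")
except FileNotFoundError:
    print("file not found")
```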
Using os.path Module
The os.path module has been the traditional way to handle file path operations in Python. It provides several functions to check file existence with different levels of specificity.
os.path.exists()
The os.path.exists() function checks whether a path exists, regardless of whether it's a file or directory.
```python
import os

# Basic usage
file_path = "example.txt"
if os.path.exists(file_path):
    print(f"Path {file_path} exists")
else:
    print(f"Path {file_path} does not exist")

# Checking multiple files
files_to_check = ["config.json", "data.csv", "logs.txt"]
for file_path in files_to_check:
    status = "exists" if os.path.exists(file_path) else "does not exist"
    print(f"{file_path}: {status}")
```

Key Notes:
- Returns True if the path exists (file or directory)
- Returns False if the path does not exist
- Does not distinguish between files and directories (see the short sketch below)
- Handles both absolute and relative paths
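To illustrate that os.path.exists() does not distinguish files from directories, here is a minimal, self-contained sketch; it uses a temporary directory created by the standard tempfile module purely for demonstration:

```python
import os
import tempfile

# Create a scratch directory that only exists for the duration of the example
with tempfile.TemporaryDirectory() as tmp_dir:
    print(os.path.exists(tmp_dir))   # True  - the path exists...
    print(os.path.isfile(tmp_dir))   # False - ...but it is not a regular file
    print(os.path.isdir(tmp_dir))    # True  - it is a directory
```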
os.path.isfile()
The os.path.isfile() function specifically checks if a path exists and is a regular file.
```python
import os

def check_file_specifically(file_path):
    if os.path.isfile(file_path):
        return f"{file_path} is a file"
    elif os.path.exists(file_path):
        return f"{file_path} exists but is not a file"
    else:
        return f"{file_path} does not exist"

# Examples
test_paths = ["document.txt", "my_folder", "nonexistent.pdf"]
for path in test_paths:
    print(check_file_specifically(path))
```

Key Notes:
- Returns True only if the path exists and is a regular file
- Returns False for directories, even if they exist
- More specific than os.path.exists()
- Useful when you need to ensure you're working with files only
os.path.isdir()
The os.path.isdir() function checks if a path exists and is a directory.
```python
import os

def analyze_path(path):
    results = {
        'exists': os.path.exists(path),
        'is_file': os.path.isfile(path),
        'is_directory': os.path.isdir(path)
    }
    return results

# Comprehensive path analysis
paths_to_analyze = [".", "..", "sample.txt", "/usr/bin", "nonexistent"]
for path in paths_to_analyze:
    analysis = analyze_path(path)
    print(f"Path: {path}")
    for key, value in analysis.items():
        print(f"  {key}: {value}")
    print()
```

Advanced os.path Usage
```python
import os
from datetime import datetime

def detailed_file_check(file_path):
    """
    Perform comprehensive file existence check with additional information
    """
    info = {
        'path': file_path,
        'exists': False,
        'is_file': False,
        'is_directory': False,
        'size': None,
        'modified_time': None,
        'absolute_path': os.path.abspath(file_path)
    }

    if os.path.exists(file_path):
        info['exists'] = True
        info['is_file'] = os.path.isfile(file_path)
        info['is_directory'] = os.path.isdir(file_path)
        try:
            stat_info = os.stat(file_path)
            info['size'] = stat_info.st_size
            info['modified_time'] = datetime.fromtimestamp(stat_info.st_mtime)
        except OSError as e:
            info['error'] = str(e)

    return info

# Usage example
file_info = detailed_file_check("example.txt")
for key, value in file_info.items():
    print(f"{key}: {value}")
```

Using pathlib Module
The pathlib module, introduced in Python 3.4, provides an object-oriented approach to handling filesystem paths. It's considered more modern and pythonic than the os.path module.
pathlib.Path.exists()
```python
from pathlib import Path

# Basic usage
file_path = Path("example.txt")
if file_path.exists():
    print(f"File {file_path} exists")
else:
    print(f"File {file_path} does not exist")

# Working with different path types
paths = [
    Path("local_file.txt"),
    Path("/absolute/path/file.txt"),
    Path("../relative/path/file.txt"),
    Path.home() / "Documents" / "important.pdf"
]

for path in paths:
    status = "exists" if path.exists() else "missing"
    print(f"{path}: {status}")
```
pathlib.Path.is_file() and is_dir()
```python
from pathlib import Path

def categorize_path(path_string):
    """
    Categorize a path using pathlib methods
    """
    path = Path(path_string)
    if path.is_file():
        return "file"
    elif path.is_dir():
        return "directory"
    elif path.exists():
        return "other"
    else:
        return "nonexistent"

# Test different path types
test_paths = [".", "..", "script.py", "/home/user", "missing_file.txt"]

for path_str in test_paths:
    category = categorize_path(path_str)
    print(f"{path_str}: {category}")
```
Advanced pathlib Operations
```python
from pathlib import Path
import stat

class FileChecker:
    """
    A comprehensive file checking class using pathlib
    """
    def __init__(self, path_string):
        self.path = Path(path_string)

    def exists(self):
        return self.path.exists()

    def is_file(self):
        return self.path.is_file()

    def is_directory(self):
        return self.path.is_dir()

    def get_info(self):
        """
        Get comprehensive information about the path
        """
        info = {
            'path': str(self.path),
            'absolute_path': str(self.path.absolute()),
            'exists': self.path.exists()
        }

        if self.path.exists():
            info.update({
                'is_file': self.path.is_file(),
                'is_directory': self.path.is_dir(),
                'parent': str(self.path.parent),
                'name': self.path.name,
                'suffix': self.path.suffix,
                'stem': self.path.stem
            })

            if self.path.is_file():
                stat_info = self.path.stat()
                info.update({
                    'size_bytes': stat_info.st_size,
                    'size_kb': round(stat_info.st_size / 1024, 2),
                    'modified_time': stat_info.st_mtime,
                    'is_readable': bool(stat_info.st_mode & stat.S_IRUSR),
                    'is_writable': bool(stat_info.st_mode & stat.S_IWUSR)
                })

        return info

# Usage example
checker = FileChecker("example.txt")
file_info = checker.get_info()

print("File Information:")
print("-" * 40)
for key, value in file_info.items():
    print(f"{key:15}: {value}")
```
pathlib vs os.path Comparison
```python
import os
from pathlib import Path
import time

def benchmark_file_checking(file_path, iterations=10000):
    """
    Compare performance between os.path and pathlib methods
    """
    # os.path benchmark
    start_time = time.time()
    for _ in range(iterations):
        os.path.exists(file_path)
    os_path_time = time.time() - start_time

    # pathlib benchmark
    path_obj = Path(file_path)
    start_time = time.time()
    for _ in range(iterations):
        path_obj.exists()
    pathlib_time = time.time() - start_time

    return {
        'os.path.exists()': os_path_time,
        'pathlib.exists()': pathlib_time,
        'difference': abs(os_path_time - pathlib_time),
        'faster': 'os.path' if os_path_time < pathlib_time else 'pathlib'
    }

# Run benchmark
results = benchmark_file_checking("test_file.txt")
print("Performance Comparison:")
for key, value in results.items():
    print(f"{key}: {value}")
```

Using try-except Blocks
The try-except approach involves attempting to open or access a file and catching exceptions if the file doesn't exist. This method follows the "Easier to Ask for Forgiveness than Permission" (EAFP) principle.
Basic try-except Implementation
```python
def check_file_with_exception(file_path):
    """
    Check if file exists using try-except approach
    """
    try:
        with open(file_path, 'r') as file:
            return True
    except FileNotFoundError:
        return False
    except PermissionError:
        print(f"File {file_path} exists but permission denied")
        return True
    except Exception as e:
        print(f"Unexpected error: {e}")
        return False

# Usage examples
test_files = ["existing_file.txt", "nonexistent_file.txt", "/root/protected_file.txt"]
for file_path in test_files:
    exists = check_file_with_exception(file_path)
    print(f"{file_path}: {'exists' if exists else 'does not exist'}")
```

Advanced Exception Handling
```python
import errno

def robust_file_check(file_path, mode='r'):
    """
    Robust file existence check with detailed exception handling
    """
    try:
        with open(file_path, mode) as file:
            return {
                'exists': True,
                'accessible': True,
                'error': None
            }
    except FileNotFoundError:
        return {
            'exists': False,
            'accessible': False,
            'error': 'File not found'
        }
    except PermissionError:
        return {
            'exists': True,
            'accessible': False,
            'error': 'Permission denied'
        }
    except IsADirectoryError:
        return {
            'exists': True,
            'accessible': False,
            'error': 'Path is a directory'
        }
    except OSError as e:
        error_messages = {
            errno.ENOENT: 'No such file or directory',
            errno.EACCES: 'Permission denied',
            errno.EISDIR: 'Is a directory',
            errno.EMFILE: 'Too many open files',
            errno.ENFILE: 'File table overflow'
        }
        return {
            'exists': e.errno != errno.ENOENT,
            'accessible': False,
            'error': error_messages.get(e.errno, f'OS Error: {e}')
        }

# Comprehensive testing
test_scenarios = [
    ("existing_file.txt", 'r'),
    ("nonexistent_file.txt", 'r'),
    ("write_protected_file.txt", 'w'),
    ("directory_path", 'r')
]

for file_path, mode in test_scenarios:
    result = robust_file_check(file_path, mode)
    print(f"File: {file_path} (mode: {mode})")
    for key, value in result.items():
        print(f"  {key}: {value}")
    print()
```
Context Manager Approach
```python
class FileExistenceChecker:
    """
    Context manager for file existence checking
    """
    def __init__(self, file_path, mode='r'):
        self.file_path = file_path
        self.mode = mode
        self.file_handle = None
        self.exists = False
        self.error = None

    def __enter__(self):
        try:
            self.file_handle = open(self.file_path, self.mode)
            self.exists = True
            return self
        except Exception as e:
            self.exists = False
            self.error = e
            return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if self.file_handle:
            self.file_handle.close()

# Usage example
with FileExistenceChecker("test_file.txt") as checker:
    if checker.exists:
        print("File exists and is accessible")
        # Perform file operations here
    else:
        print(f"File check failed: {checker.error}")
```

Performance Comparison
Understanding the performance characteristics of different file existence checking methods helps in making informed decisions for performance-critical applications.
Comprehensive Benchmark
```python
import os
from pathlib import Path
import time
import statistics

def comprehensive_benchmark(file_path, iterations=10000):
    """
    Comprehensive performance benchmark of all methods
    """
    methods = {}

    # os.path.exists()
    start = time.perf_counter()
    for _ in range(iterations):
        os.path.exists(file_path)
    methods['os.path.exists()'] = time.perf_counter() - start

    # os.path.isfile()
    start = time.perf_counter()
    for _ in range(iterations):
        os.path.isfile(file_path)
    methods['os.path.isfile()'] = time.perf_counter() - start

    # pathlib.Path.exists()
    path_obj = Path(file_path)
    start = time.perf_counter()
    for _ in range(iterations):
        path_obj.exists()
    methods['pathlib.Path.exists()'] = time.perf_counter() - start

    # pathlib.Path.is_file()
    start = time.perf_counter()
    for _ in range(iterations):
        path_obj.is_file()
    methods['pathlib.Path.is_file()'] = time.perf_counter() - start

    # try-except approach
    start = time.perf_counter()
    for _ in range(iterations):
        try:
            with open(file_path, 'r'):
                pass
        except OSError:
            pass
    methods['try-except'] = time.perf_counter() - start

    return methods

# Run multiple benchmarks for statistical accuracy
def statistical_benchmark(file_path, runs=5, iterations=10000):
    """
    Run multiple benchmarks and provide statistical analysis
    """
    all_results = {}

    for run in range(runs):
        results = comprehensive_benchmark(file_path, iterations)
        for method, time_taken in results.items():
            if method not in all_results:
                all_results[method] = []
            all_results[method].append(time_taken)

    # Calculate statistics
    stats = {}
    for method, times in all_results.items():
        stats[method] = {
            'mean': statistics.mean(times),
            'median': statistics.median(times),
            'stdev': statistics.stdev(times) if len(times) > 1 else 0,
            'min': min(times),
            'max': max(times)
        }

    return stats

# Create a test file
test_file = "benchmark_test.txt"
with open(test_file, 'w') as f:
    f.write("test content")

# Run statistical benchmark
benchmark_results = statistical_benchmark(test_file)

print("Performance Benchmark Results (seconds)")
print("=" * 60)
print(f"{'Method':<25} {'Mean':<10} {'Median':<10} {'StdDev':<10}")
print("-" * 60)
for method, stats in benchmark_results.items():
    print(f"{method:<25} {stats['mean']:<10.6f} {stats['median']:<10.6f} {stats['stdev']:<10.6f}")

# Clean up
os.remove(test_file)
```

Performance Comparison Table
| Method | Typical Performance | Memory Usage | Best Use Case |
|--------|-------------------|--------------|---------------|
| os.path.exists() | Fastest | Low | General existence check |
| os.path.isfile() | Fast | Low | File-specific checks |
| pathlib.Path.exists() | Fast | Medium | Modern code, object-oriented |
| pathlib.Path.is_file() | Fast | Medium | Modern file-specific checks |
| try-except | Slowest | High | When immediate file access needed |
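These rankings are only indicative; actual timings depend on the operating system, the filesystem, and whether the path is in the OS cache. A quick way to measure on your own machine is the standard timeit module; a minimal sketch (the path "example.txt" is illustrative):

```python
import timeit

# Time 10,000 existence checks for each approach on the same path
print(timeit.timeit("os.path.exists('example.txt')",
                    setup="import os", number=10_000))
print(timeit.timeit("Path('example.txt').exists()",
                    setup="from pathlib import Path", number=10_000))
```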
Best Practices
Choosing the Right Method
```python
from pathlib import Path
import os

class FileExistenceStrategy:
    """
    Strategy pattern for choosing appropriate file existence checking method
    """

    @staticmethod
    def for_modern_python(file_path):
        """
        Best practice for Python 3.4+
        """
        path = Path(file_path)
        return path.exists() and path.is_file()

    @staticmethod
    def for_legacy_compatibility(file_path):
        """
        Best practice for Python 2.x compatibility
        """
        return os.path.isfile(file_path)

    @staticmethod
    def for_immediate_use(file_path, mode='r'):
        """
        Best practice when you need to use the file immediately.
        The caller is responsible for closing the returned handle.
        """
        try:
            f = open(file_path, mode)
            return True, f
        except FileNotFoundError:
            return False, None

    @staticmethod
    def for_performance_critical(file_path):
        """
        Best practice for performance-critical applications
        """
        return os.path.exists(file_path)

# Usage examples
file_path = "example.txt"

# Modern Python approach
if FileExistenceStrategy.for_modern_python(file_path):
    print("File exists (modern approach)")

# Legacy compatible approach
if FileExistenceStrategy.for_legacy_compatibility(file_path):
    print("File exists (legacy compatible)")

# Immediate use approach
exists, file_handle = FileExistenceStrategy.for_immediate_use(file_path)
if exists:
    print("File exists and is ready for use")
    # Use file_handle here, then close it
    file_handle.close()
```

Error Handling Best Practices
```python
import logging
from pathlib import Path
from typing import Union, Tuple, Optional

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def safe_file_check(file_path: Union[str, Path], log_errors: bool = True) -> Tuple[bool, Optional[str]]:
    """
    Safe file existence check with proper error handling and logging

    Args:
        file_path: Path to the file to check
        log_errors: Whether to log errors

    Returns:
        Tuple of (exists: bool, error_message: Optional[str])
    """
    try:
        path = Path(file_path) if isinstance(file_path, str) else file_path

        # Validate input
        if not isinstance(path, Path):
            error_msg = f"Invalid path type: {type(file_path)}"
            if log_errors:
                logger.error(error_msg)
            return False, error_msg

        # Check existence
        exists = path.exists() and path.is_file()

        if log_errors and not exists:
            logger.info(f"File does not exist: {path}")

        return exists, None

    except PermissionError as e:
        error_msg = f"Permission denied accessing {file_path}: {e}"
        if log_errors:
            logger.error(error_msg)
        return False, error_msg
    except OSError as e:
        error_msg = f"OS error checking {file_path}: {e}"
        if log_errors:
            logger.error(error_msg)
        return False, error_msg
    except Exception as e:
        error_msg = f"Unexpected error checking {file_path}: {e}"
        if log_errors:
            logger.error(error_msg)
        return False, error_msg

# Usage example with proper error handling
files_to_check = [
    "existing_file.txt",
    "/root/protected_file.txt",
    "nonexistent_file.txt",
    123  # Invalid input type
]

for file_path in files_to_check:
    exists, error = safe_file_check(file_path)
    if exists:
        print(f"✓ {file_path} exists")
    else:
        print(f"✗ {file_path} - {error or 'does not exist'}")
```
Advanced Techniques
Batch File Existence Checking
```python
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor, as_completed
import time
from typing import List, Dict, Union

def check_single_file(file_path: Union[str, Path]) -> Dict[str, Union[str, bool]]:
    """
    Check single file existence with detailed information
    """
    path = Path(file_path)
    return {
        'path': str(path),
        'exists': path.exists(),
        'is_file': path.is_file(),
        'is_dir': path.is_dir(),
        'size': path.stat().st_size if path.exists() else None
    }

def batch_file_check_sequential(file_paths: List[Union[str, Path]]) -> Dict[str, Dict]:
    """
    Check multiple files sequentially
    """
    results = {}
    start_time = time.time()

    for file_path in file_paths:
        results[str(file_path)] = check_single_file(file_path)

    results['_metadata'] = {
        'method': 'sequential',
        'total_time': time.time() - start_time,
        'files_checked': len(file_paths)
    }
    return results

def batch_file_check_parallel(file_paths: List[Union[str, Path]], max_workers: int = 4) -> Dict[str, Dict]:
    """
    Check multiple files in parallel using ThreadPoolExecutor
    """
    results = {}
    start_time = time.time()

    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # Submit all tasks
        future_to_path = {
            executor.submit(check_single_file, path): path
            for path in file_paths
        }

        # Collect results as they complete
        for future in as_completed(future_to_path):
            path = future_to_path[future]
            try:
                result = future.result()
                results[str(path)] = result
            except Exception as e:
                results[str(path)] = {
                    'path': str(path),
                    'error': str(e),
                    'exists': False
                }

    results['_metadata'] = {
        'method': 'parallel',
        'total_time': time.time() - start_time,
        'files_checked': len(file_paths),
        'max_workers': max_workers
    }
    return results

# Example usage
test_files = [
    "file1.txt",
    "file2.txt",
    "directory1",
    "nonexistent.txt",
    "/etc/passwd",  # System file
    Path("example.py")
]

# Sequential check
sequential_results = batch_file_check_sequential(test_files)
print("Sequential Results:")
print(f"Time taken: {sequential_results['_metadata']['total_time']:.4f} seconds")

# Parallel check
parallel_results = batch_file_check_parallel(test_files)
print("\nParallel Results:")
print(f"Time taken: {parallel_results['_metadata']['total_time']:.4f} seconds")

# Compare results
print(f"\nSpeedup: {sequential_results['_metadata']['total_time'] / parallel_results['_metadata']['total_time']:.2f}x")
```

File Watching and Existence Monitoring
```python
import time
import threading
from pathlib import Path
from typing import Callable, Optional, Union
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FileEvent:
    """
    Data class for file events
    """
    path: Path
    event_type: str  # 'created', 'deleted', 'exists', 'not_exists'
    timestamp: datetime

class FileExistenceMonitor:
    """
    Monitor file existence changes over time
    """
    def __init__(self, file_path: Union[str, Path],
                 check_interval: float = 1.0,
                 callback: Optional[Callable[[FileEvent], None]] = None):
        self.path = Path(file_path)
        self.check_interval = check_interval
        self.callback = callback or self._default_callback
        self.running = False
        self.thread = None
        self.last_state = None

    def _default_callback(self, event: FileEvent):
        """
        Default callback that prints events
        """
        print(f"[{event.timestamp}] {event.path}: {event.event_type}")

    def _monitor_loop(self):
        """
        Main monitoring loop
        """
        while self.running:
            current_exists = self.path.exists()
            current_time = datetime.now()

            if self.last_state is None:
                # First check
                event_type = 'exists' if current_exists else 'not_exists'
                event = FileEvent(self.path, event_type, current_time)
                self.callback(event)
            elif self.last_state != current_exists:
                # State changed
                event_type = 'created' if current_exists else 'deleted'
                event = FileEvent(self.path, event_type, current_time)
                self.callback(event)

            self.last_state = current_exists
            time.sleep(self.check_interval)

    def start(self):
        """
        Start monitoring
        """
        if not self.running:
            self.running = True
            self.thread = threading.Thread(target=self._monitor_loop)
            self.thread.daemon = True
            self.thread.start()

    def stop(self):
        """
        Stop monitoring
        """
        self.running = False
        if self.thread:
            self.thread.join()

# Example usage
def custom_file_callback(event: FileEvent):
    """
    Custom callback for file events
    """
    status_messages = {
        'created': f"🟢 File created: {event.path}",
        'deleted': f"🔴 File deleted: {event.path}",
        'exists': f"✅ File exists: {event.path}",
        'not_exists': f"❌ File does not exist: {event.path}"
    }
    print(f"[{event.timestamp.strftime('%H:%M:%S')}] {status_messages[event.event_type]}")

# Start monitoring
monitor = FileExistenceMonitor("watched_file.txt", check_interval=0.5, callback=custom_file_callback)
monitor.start()

# Simulate file operations
print("Starting file monitoring...")
time.sleep(2)

# Create file
with open("watched_file.txt", "w") as f:
    f.write("Hello, World!")
time.sleep(2)

# Delete file
Path("watched_file.txt").unlink()
time.sleep(2)

# Stop monitoring
monitor.stop()
print("Monitoring stopped")
```

Common Pitfalls
Race Conditions
```python
import os
import time
import threading
from pathlib import Path

def demonstrate_race_condition():
    """
    Demonstrate potential race condition in file existence checking
    """
    file_path = "race_condition_test.txt"

    def create_and_delete_file():
        """
        Continuously create and delete file
        """
        for i in range(100):
            # Create file
            with open(file_path, 'w') as f:
                f.write(f"iteration {i}")
            time.sleep(0.001)
            # Delete file
            try:
                os.remove(file_path)
            except FileNotFoundError:
                pass
            time.sleep(0.001)

    def check_and_use_file():
        """
        Check file existence and try to use it (potential race condition)
        """
        race_conditions_detected = 0
        for i in range(100):
            # PROBLEMATIC: check-then-use pattern
            if os.path.exists(file_path):
                try:
                    with open(file_path, 'r') as f:
                        content = f.read()
                except FileNotFoundError:
                    # Race condition detected!
                    race_conditions_detected += 1
                    print(f"Race condition detected at iteration {i}")
            time.sleep(0.002)
        return race_conditions_detected

    # Start file manipulation thread
    manipulation_thread = threading.Thread(target=create_and_delete_file)
    manipulation_thread.start()

    # Check for race conditions
    race_conditions = check_and_use_file()

    # Wait for manipulation thread to finish
    manipulation_thread.join()

    # Clean up
    try:
        os.remove(file_path)
    except FileNotFoundError:
        pass

    print(f"Total race conditions detected: {race_conditions}")

def safe_file_operation(file_path: str, operation: str = 'read'):
    """
    Safe file operation that avoids race conditions
    """
    try:
        if operation == 'read':
            with open(file_path, 'r') as f:
                return f.read()
        elif operation == 'write':
            with open(file_path, 'w') as f:
                f.write("Safe content")
                return True
    except FileNotFoundError:
        print(f"File {file_path} not found during {operation} operation")
        return None
    except PermissionError:
        print(f"Permission denied for {file_path} during {operation}")
        return None

# Demonstrate race condition
print("Demonstrating race conditions:")
demonstrate_race_condition()

# Show safe alternative
print("\nSafe file operation example:")
result = safe_file_operation("nonexistent_file.txt")
print(f"Result: {result}")
```
Platform-Specific Issues
```python
import os
import platform
from pathlib import Path

def demonstrate_platform_issues():
    """
    Demonstrate platform-specific file existence checking issues
    """
    current_platform = platform.system()
    print(f"Current platform: {current_platform}")

    # Case sensitivity issues
    test_cases = [
        ("TestFile.txt", "testfile.txt"),
        ("UPPERCASE.TXT", "uppercase.txt"),
        ("MixedCase.Py", "mixedcase.py")
    ]

    print("\nCase sensitivity test:")
    for original, lowercase in test_cases:
        # Create file with original name
        with open(original, 'w') as f:
            f.write("test content")

        original_exists = os.path.exists(original)
        lowercase_exists = os.path.exists(lowercase)

        print(f"File '{original}' exists: {original_exists}")
        print(f"File '{lowercase}' exists: {lowercase_exists}")
        print(f"Case sensitive filesystem: {original_exists != lowercase_exists}")

        # Clean up
        try:
            os.remove(original)
        except FileNotFoundError:
            pass
        print()

def cross_platform_file_check(file_path: str):
    """
    Cross-platform safe file existence check
    """
    # Normalize path separators
    normalized_path = os.path.normpath(file_path)

    # Use pathlib for better cross-platform support
    path = Path(normalized_path)

    return {
        'exists': path.exists(),
        'is_file': path.is_file(),
        'absolute_path': str(path.absolute()),
        'normalized_path': normalized_path,
        'platform': platform.system()
    }

# Test cross-platform function
test_paths = [
    "test/file.txt",        # Unix-style
    "test\\file.txt",       # Windows-style
    "./relative/path.txt",  # Relative path
    "/absolute/path.txt"    # Absolute Unix path
]

print("Cross-platform file checking:")
for path in test_paths:
    result = cross_platform_file_check(path)
    print(f"Path: {path}")
    for key, value in result.items():
        print(f"  {key}: {value}")
    print()
```
Practical Examples
Configuration File Checker
```python
import json
import os
from pathlib import Path
from typing import Dict, List, Optional, Any

class ConfigurationManager:
    """
    Practical example: Configuration file management with existence checking
    """
    def __init__(self, config_paths: List[str]):
        # expanduser() so entries such as "~/.myapp/config.json" resolve correctly
        self.config_paths = [Path(path).expanduser() for path in config_paths]
        self.config_data = {}
        self.loaded_config_path = None

    def find_config_file(self) -> Optional[Path]:
        """
        Find the first existing configuration file from the list
        """
        for config_path in self.config_paths:
            if config_path.exists() and config_path.is_file():
                return config_path
        return None

    def load_config(self) -> bool:
        """
        Load configuration from the first available file
        """
        config_file = self.find_config_file()

        if not config_file:
            print("No configuration file found in any of these locations:")
            for path in self.config_paths:
                print(f"  - {path}")
            return False

        try:
            with open(config_file, 'r') as f:
                self.config_data = json.load(f)
            self.loaded_config_path = config_file
            print(f"Configuration loaded from: {config_file}")
            return True
        except json.JSONDecodeError as e:
            print(f"Invalid JSON in config file {config_file}: {e}")
            return False
        except Exception as e:
            print(f"Error loading config file {config_file}: {e}")
            return False

    def get_config_value(self, key: str, default: Any = None) -> Any:
        """
        Get configuration value with default fallback
        """
        return self.config_data.get(key, default)

    def create_default_config(self, path: Optional[Path] = None) -> bool:
        """
        Create default configuration file
        """
        default_config = {
            "app_name": "MyApplication",
            "version": "1.0.0",
            "debug": False,
            "database": {
                "host": "localhost",
                "port": 5432,
                "name": "myapp"
            }
        }

        config_path = path or self.config_paths[0]

        try:
            # Ensure directory exists
            config_path.parent.mkdir(parents=True, exist_ok=True)

            with open(config_path, 'w') as f:
                json.dump(default_config, f, indent=2)

            print(f"Default configuration created at: {config_path}")
            return True
        except Exception as e:
            print(f"Error creating default config: {e}")
            return False

# Usage example
config_manager = ConfigurationManager([
    "config.json",
    "~/.myapp/config.json",
    "/etc/myapp/config.json"
])

# Try to load existing configuration
if not config_manager.load_config():
    print("Creating default configuration...")
    config_manager.create_default_config()
    config_manager.load_config()

# Use configuration
app_name = config_manager.get_config_value("app_name", "Unknown App")
debug_mode = config_manager.get_config_value("debug", False)

print(f"Application: {app_name}")
print(f"Debug mode: {debug_mode}")
```
Log File Rotation System
```python
import os
import time
from pathlib import Path
from datetime import datetime
from typing import Optional

class LogFileManager:
    """
    Practical example: Log file management with existence checking and rotation
    """
    def __init__(self, log_file: str, max_size_mb: float = 10.0, max_files: int = 5):
        self.log_file = Path(log_file)
        self.max_size_bytes = max_size_mb * 1024 * 1024
        self.max_files = max_files

        # Ensure log directory exists
        self.log_file.parent.mkdir(parents=True, exist_ok=True)

    def should_rotate(self) -> bool:
        """
        Check if log file should be rotated based on size
        """
        if not self.log_file.exists():
            return False
        try:
            return self.log_file.stat().st_size > self.max_size_bytes
        except OSError:
            return False

    def rotate_logs(self) -> bool:
        """
        Rotate log files (log.txt -> log.1.txt -> log.2.txt, etc.)
        """
        if not self.log_file.exists():
            return True

        try:
            # Remove oldest log file if it exists
            oldest_log = self.log_file.with_suffix(f'.{self.max_files}.txt')
            if oldest_log.exists():
                oldest_log.unlink()

            # Shift existing log files
            for i in range(self.max_files - 1, 0, -1):
                current_log = self.log_file.with_suffix(f'.{i}.txt')
                next_log = self.log_file.with_suffix(f'.{i + 1}.txt')
                if current_log.exists():
                    current_log.rename(next_log)

            # Move current log to .1
            backup_log = self.log_file.with_suffix('.1.txt')
            self.log_file.rename(backup_log)

            print(f"Log rotated: {self.log_file} -> {backup_log}")
            return True
        except Exception as e:
            print(f"Error rotating logs: {e}")
            return False

    def write_log(self, message: str, level: str = "INFO") -> bool:
        """
        Write log message with automatic rotation
        """
        # Check if rotation is needed
        if self.should_rotate():
            self.rotate_logs()

        try:
            timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
            log_entry = f"[{timestamp}] {level}: {message}\n"
            with open(self.log_file, 'a') as f:
                f.write(log_entry)
            return True
        except Exception as e:
            print(f"Error writing to log: {e}")
            return False

    def get_log_info(self) -> dict:
        """
        Get information about log files
        """
        info = {
            'current_log': {
                'path': str(self.log_file),
                'exists': self.log_file.exists(),
                'size_bytes': 0,
                'size_mb': 0.0
            },
            'backup_logs': []
        }

        # Current log info
        if self.log_file.exists():
            size_bytes = self.log_file.stat().st_size
            info['current_log']['size_bytes'] = size_bytes
            info['current_log']['size_mb'] = round(size_bytes / (1024 * 1024), 2)

        # Backup logs info
        for i in range(1, self.max_files + 1):
            backup_log = self.log_file.with_suffix(f'.{i}.txt')
            if backup_log.exists():
                size_bytes = backup_log.stat().st_size
                info['backup_logs'].append({
                    'path': str(backup_log),
                    'size_bytes': size_bytes,
                    'size_mb': round(size_bytes / (1024 * 1024), 2)
                })

        return info

# Usage example
log_manager = LogFileManager("logs/application.log", max_size_mb=0.001, max_files=3)  # Small size for demo

# Write some log entries
for i in range(100):
    log_manager.write_log(f"This is log entry number {i + 1}", "INFO")
    if i % 20 == 0:
        log_manager.write_log(f"Checkpoint reached at entry {i + 1}", "DEBUG")

# Display log information
log_info = log_manager.get_log_info()
print("Log File Information:")
print(f"Current log: {log_info['current_log']['path']}")
print(f"  Exists: {log_info['current_log']['exists']}")
print(f"  Size: {log_info['current_log']['size_mb']} MB")

print("\nBackup logs:")
for backup in log_info['backup_logs']:
    print(f"  {backup['path']}: {backup['size_mb']} MB")
```
This guide has covered the main approaches to checking file existence in Python, from the basic os.path and pathlib methods to try-except handling, performance considerations, and advanced techniques. The examples demonstrate real-world scenarios and best practices for robust file handling in Python applications.